Robots.txt



(See en:robots.txt protocol for general info...)


In your robots.txt file, you would be wise to deny access to the script directory, and hence to diffs, old revisions, contributions lists, and so on, which spiders could otherwise hit hard enough to severely raise the load on the server.

If you're not using URL rewriting, this can be difficult to do cleanly. With a setup like Wikipedia's, where plain pages are reached via /wiki/Some_title and everything else goes through /w/wiki.phtml?title=Some_title&someoption=blah, it's easy:

 User-agent: *
 Disallow: /w/

Be careful, though! If you accidentally drop the trailing slash:

 Disallow: /w

you'll block the /wiki directory as well, since Disallow rules match by URL prefix, and search engines will drop your wiki.
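
If you're not sure what a rule covers, you can check it against a standard robots.txt parser. Here's a quick sketch using Python's urllib.robotparser; the hostname and paths are placeholders:

 # Quick check of what a Disallow rule actually covers, using Python's
 # standard-library robots.txt parser. Hostname and paths are examples only.
 from urllib.robotparser import RobotFileParser
 
 rp = RobotFileParser()
 rp.parse([
     "User-agent: *",
     "Disallow: /w",   # note: no trailing slash
 ])
 
 # Disallow matches by URL prefix, so "/w" also covers "/wiki/...":
 print(rp.can_fetch("*", "http://example.org/w/wiki.phtml?title=Foo"))  # False
 print(rp.can_fetch("*", "http://example.org/wiki/Main_Page"))          # False -- oops
 print(rp.can_fetch("*", "http://example.org/index.html"))              # True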

Problems

Unfortunately, there are three big problems with robots.txt:

Rate control

You can only specify which paths a bot is allowed to spider, not how fast it may request them. Even allowing just the plain page area can be a huge burden when a single spider requests two or three pages per second.

Some bots support a custom extension for this: Inktomi's crawler responds to a "Crawl-delay" line, which specifies the minimum delay in seconds between hits. (Their default is 15 seconds.)
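
For example, to ask Inktomi's crawler (which identifies itself as "Slurp") to leave at least 30 seconds between hits, you would add a section along these lines:

 User-agent: Slurp
 Crawl-delay: 30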

Bots that don't behave well by default could be forced into line with some sort of request throttling.
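
As a rough illustration of the idea only (a sketch, not anything MediaWiki actually does), a per-IP throttle can be as simple as rejecting requests that arrive too close together:

 # Rough per-IP throttle sketch. The interval and surrounding plumbing
 # are made up for illustration.
 import time
 
 MIN_INTERVAL = 1.0   # seconds a client must wait between requests
 last_seen = {}       # IP address -> time of last accepted request
 
 def allow_request(ip):
     """Return True to serve the request, False to send back an error page."""
     now = time.time()
     previous = last_seen.get(ip)
     if previous is not None and now - previous < MIN_INTERVAL:
         return False
     last_seen[ip] = now
     return True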

Don't index vs don't spider

Most search engine spiders take a match on a robots.txt 'Disallow' entry to mean that they should not return that URL in search results. Google is a rare exception; its behavior is technically within the spec but very annoying: it will still index such URLs and may return them in search results, although it can show nothing but the bare URL, with no page content or title.

This means that sometimes "edit" URLs will turn up in Google results, which is very VERY annoying.

The only way to keep a URL out of Google's index is to let Google slurp the page and see a robots meta tag specifying "noindex". With our current system, this would be difficult to special-case.
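
That is, each page we want kept out would have to be served with something like this in its HTML head:

 <meta name="robots" content="noindex">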

Evil bots

Sometimes a custom-written bot isn't very smart or is outright malicious and doesn't obey robots.txt at all (or obeys the path restrictions but spiders very fast, bogging down the site). It may be necessary to block specific user-agent strings or individual IPs of offenders.
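
With Apache this can be done using mod_setenvif together with the standard access controls; a sketch, with the bot name, address, and directory as placeholders:

 # Tag requests from a particular user-agent string, then deny them
 # along with a known-bad address. All names here are placeholders.
 SetEnvIfNoCase User-Agent "NastyBot" bad_bot
 <Directory "/var/www/wiki">
     Order Allow,Deny
     Allow from all
     Deny from env=bad_bot
     Deny from 192.0.2.1
 </Directory>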

Consider also request throttling, as sketched under Rate control above.