Robots Exclusion Protocol: Revisiting 1994!

Do you recall the Robots Exclusion Protocol from the early 1990s?

In simple terms, the protocol lets a website owner use a robots.txt file
to give web robots (crawlers) instructions about which parts of the site they may access.

Well, let’s revisit those days.

Back in 1994, crawlers were overwhelming servers.

Webmaster Martijn Koster decided to fix this issue.

On July 3, 1994, he proposed a protocol to control
which URLs crawlers may access on a website.

In the proposal, he described the method as steering robots away from certain areas
of a Web server’s URL space by providing a simple text file on the server.

Hence, sites with large archives, CGI scripts that generate
massive URL subtrees, or temporary information could
benefit from this method.
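
To make that concrete, such a site might publish a robots.txt file along these lines (the paths are hypothetical, and the syntax follows Koster’s original Disallow-based format):

    # robots.txt served from the site root, e.g. http://example.com/robots.txt
    User-agent: *          # applies to every robot
    Disallow: /archive/    # large archive not worth crawling
    Disallow: /cgi-bin/    # CGI scripts with a massive URL subtree
    Disallow: /tmp/        # temporary information

A compliant crawler fetches /robots.txt first and then skips any URL whose path begins with one of the Disallow-ed prefixes.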

Google tweeted about this today, revisiting the 1994 announcement.

Robots Exclusion Protocol

For all the SEO and digital marketing folks out there,
there is no need to sell the importance of the robots.txt protocol.

You can have granular control over what crawlers may access,
whether that is a single URL, a file type, or the entire website.
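
For instance, a single robots.txt can mix all three levels of granularity (the paths and bot name below are made up, and the * wildcard and $ anchor used in the file-type rule are later extensions, now part of RFC 9309, rather than the original 1994 protocol):

    # Keep one specific bot out of the entire website
    User-agent: BadBot
    Disallow: /

    # For all other crawlers: block one URL and one file type
    User-agent: *
    Disallow: /private/report.html
    Disallow: /*.pdf$

Most major crawlers, including Googlebot, honor rules written this way, but it is worth checking each crawler’s documentation for the exact pattern syntax it supports.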