Started in 1996, the Archive uses web-crawling bot programs to make copies of publicly accessible sites. The copies are then available for research purposes via a search tool called the Wayback Machine.
The site has so far accumulated 40 billion pages (about 1 petabyte, or 1 million gigabytes, of data) and is growing at a rate of 20 terabytes per month. The Archive includes millions of pages from adult websites.
At the center of the current dispute is Philadelphia-based Healthcare Advocates, a company that recently lost a trade secrets lawsuit after attorneys for the defendant produced archived copies showing that the information in question had been publicly available on a 1999 version of the company’s site.
The pages, Healthcare Advocates claims, were protected against unauthorized indexing and viewing by a robots.txt file, which is supposed to tell web crawlers which pages are not to be stored. The company says the Archive infringed its copyrights by not doing enough to block access to the pages.
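For reference, a robots.txt file is just a plain-text list of crawler names and off-limits paths, served from a site’s root. A hypothetical example of the kind of exclusion at issue, using “ia_archiver,” the user-agent string the Archive’s crawler has historically honored (the site name and paths here are illustrative, not taken from the case):

```
# Hypothetical https://www.example.com/robots.txt
# Ask the Internet Archive's crawler to skip the entire site.
User-agent: ia_archiver
Disallow: /

# Ask all other crawlers to skip the members area only.
User-agent: *
Disallow: /members/
```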
In its suit, filed in U.S. District Court in Philadelphia, Healthcare Advocates said a representative of the Archive brushed off charges of wrongdoing, saying the problem was probably caused by a glitch related to the robots.txt files and therefore was not the Archive’s concern.
Danny Sullivan of Search Engine Watch said he believes the Archive representative was right, adding that, while any outcome in the case is possible, he would be surprised if a judge did not dismiss it summarily.
“Robots.txt is a voluntary opt-out option. It has no legal bearing,” Sullivan said.
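To make that point concrete: honoring robots.txt happens entirely on the crawler’s side. A minimal sketch in Python, using only the standard library’s urllib.robotparser and the hypothetical file above, shows the check a compliant crawler performs before fetching a page:

```python
import urllib.robotparser

# The hypothetical robots.txt from above, inlined so the sketch is self-contained.
ROBOTS_TXT = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow: /members/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks before fetching; nothing enforces the answer.
print(parser.can_fetch("ia_archiver", "https://www.example.com/"))           # False: asked to stay out entirely
print(parser.can_fetch("SomeOtherBot", "https://www.example.com/members/"))  # False: members area excluded
print(parser.can_fetch("SomeOtherBot", "https://www.example.com/"))          # True: everything else allowed
```

The file is a request, not a lock: a crawler that never runs this check can still download every page, because nothing on the server side enforces the exclusions.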
If the court sides with the Archive, as Sullivan predicts, the decision could have far-reaching implications for adult webmasters who rely on nonbinding opt-out provisions of robots.txt to prevent search engines from copying and distributing their intellectual property.
Apparently, relying on robots.txt is not as reliable as many might think. Attorneys for the defendant in the initial Healthcare Advocates case were able to access at least 92 pages that had supposedly been protected by robots.txt files.
And once a service such as the Archive stores a page, webmasters may not have the right to make it disappear at a later date, for example, if they lack 2257 records for the models on the page.