Introduction to the Allegations
AI search engine Perplexity is using stealth bots and other tactics to evade websites’ no-crawl directives, network security and optimization service Cloudflare said Monday, an allegation that, if true, means the company violated Internet norms that have been in place for more than three decades. In a blog post, Cloudflare researchers said the company had received complaints from customers who had disallowed Perplexity’s scraping bots through settings in their sites’ robots.txt files and through Web application firewall rules that blocked the declared Perplexity crawlers.
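The customer complaints center on robots.txt directives of the kind below. As an illustration, a minimal robots.txt disallowing Perplexity's crawlers might look like this (PerplexityBot and Perplexity-User are the user-agent tokens Perplexity publicly documents; which tokens a site blocks is the site operator's choice):

```
# Served at https://example.com/robots.txt, the root of the site
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /
```

A `Disallow: /` rule under a user-agent group asks that crawler to stay off the entire site; the protocol is advisory, which is why firewall rules are often layered on top of it.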
The Investigation
Despite those steps, Cloudflare said, Perplexity continued to access the sites’ content. The researchers said they then tested the behavior for themselves and found that when Perplexity’s known crawlers were blocked by robots.txt rules or firewall rules, Perplexity fell back to a stealth bot that employed a range of tactics to mask its activity. This undeclared crawler used multiple IP addresses not listed in Perplexity’s official IP range, rotating through them in response to restrictive robots.txt policies and Cloudflare blocks.
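The detection logic Cloudflare describes rests on a simple idea: a request should only be trusted as a declared crawler if it both presents the crawler’s user-agent string and originates from the operator’s published IP range. A minimal sketch of that check, using hypothetical IP ranges (the RFC 5737 documentation blocks stand in for whatever ranges a crawler operator actually publishes):

```python
import ipaddress

# Hypothetical published ranges for a declared crawler. Real operators,
# Perplexity included, publish their actual ranges separately; these
# RFC 5737 documentation blocks are placeholders for illustration.
DECLARED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_declared_crawler(user_agent: str, source_ip: str) -> bool:
    """Trust a request as the declared crawler only if it presents the
    crawler's user-agent token AND originates from a published IP range."""
    if "PerplexityBot" not in user_agent:
        return False
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in DECLARED_RANGES)

print(is_declared_crawler("Mozilla/5.0 PerplexityBot/1.0", "192.0.2.10"))   # True
print(is_declared_crawler("Mozilla/5.0 PerplexityBot/1.0", "203.0.113.5"))  # False
```

A crawler that rotates through undeclared IPs, as alleged here, fails the second half of this check even when it spoofs a browser or crawler user agent, which is why IP-range verification is a common complement to user-agent filtering.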
Extent of the Issue
The issue is widespread: the researchers observed requests arriving from different ASNs in an apparent attempt to further evade website blocks, across tens of thousands of domains and millions of requests per day. The researchers also provided a diagram illustrating the flow of the technique they allege Perplexity used.
Background on Internet Norms
If true, the evasion flouts Internet norms in place for more than three decades. In 1994, engineer Martijn Koster proposed the Robots Exclusion Protocol, which provided a machine-readable format for informing crawlers they weren’t permitted on a given site. Sites that wanted to keep their content from being crawled placed a simple robots.txt file at the root of their site. The standard, which has been widely observed and endorsed ever since, formally became an Internet Engineering Task Force standard in 2022.
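A well-behaved crawler consults robots.txt before fetching a page. As a sketch of what honoring the protocol looks like in practice, Python’s standard-library `urllib.robotparser` can evaluate a site’s rules (the rules here are an inline example, not any real site’s file):

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content; a real crawler would fetch this from
# https://example.com/robots.txt before requesting any other URL.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks can_fetch() and skips disallowed URLs.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Note the asymmetry the standard builds in: a bot that is not named (and with no `User-agent: *` fallback) is allowed by default, so the protocol only constrains crawlers that identify themselves honestly, which is exactly what the stealth-bot allegation is about.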
Conclusion
The allegations against Perplexity highlight the importance of respecting Internet norms and protocols. Using stealth bots to evade no-crawl directives undermines the trust and cooperation that are essential to the smooth functioning of the Internet. It is crucial for companies like Perplexity to respect the rules and protocols established to keep the Internet a safe and open platform for all users.
FAQs
Q: What is Perplexity accused of doing?
A: Perplexity is accused of using stealth bots and other tactics to evade websites’ no-crawl directives, which, if true, would violate long-standing Internet norms.
Q: What is the Robots Exclusion Protocol?
A: The Robots Exclusion Protocol is a standard that allows websites to inform crawlers that they are not permitted to access certain content.
Q: How widespread is the issue?
A: The issue is observed across tens of thousands of domains and millions of requests per day.
Q: What are the implications of Perplexity’s actions?
A: If the allegations hold, such behavior undermines the trust and cooperation that are essential to the smooth functioning of the Internet.