
The BBC is threatening legal action against US-based AI company Perplexity for allegedly using its content without authorisation. The BBC claims that Perplexity’s chatbot is reproducing its content "verbatim" and has demanded that the company cease using the material, delete any copies and propose financial compensation for its previous use.
This is the first time the BBC, a major global news organisation, has taken legal action of this kind against an AI company. In its letter to Perplexity's CEO Aravind Srinivas, the BBC stated that the use of its content violated UK copyright law and breached its terms of use. The BBC also referenced a study highlighting that Perplexity, along with other AI chatbots, inaccurately summarised news stories, including BBC content, and failed to meet BBC Editorial Guidelines for impartial and accurate reporting. This, the BBC argued, damages its reputation and undermines audience trust, especially among UK licence fee payers.
The use of existing material by AI bots without permission has raised wider concerns, as many generative AI models rely on vast amounts of data gathered through web scraping, in which bots automatically extract information from websites. The Professional Publishers Association ("PPA") has expressed concern that AI platforms are violating UK copyright law and harming the publishing industry. It has also criticised the practice of bots scraping content without compensation, which it says threatens the £4.4 billion UK publishing sector.
In response to the BBC's claims, Perplexity denied that its crawlers ignore the "robots.txt" file, a convention that websites use to tell bots which pages they should not scrape. The company further clarified that it does not use website content to pre-train its AI models. Perplexity's AI chatbot describes itself as an "answer engine" that synthesises information from trusted sources, while advising users to verify the accuracy of its responses.
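For readers unfamiliar with the mechanism at issue, a robots.txt file is a plain-text file placed at a site's root that names crawlers and lists paths they are asked not to fetch. A minimal illustrative example is below; the user-agent string "PerplexityBot" is the name Perplexity has published for its crawler, but the directives shown here are a hypothetical sketch, and compliance is voluntary on the bot's part.

```
# Hypothetical robots.txt, served at https://example.com/robots.txt
# Ask Perplexity's crawler to stay off the entire site
User-agent: PerplexityBot
Disallow: /

# All other crawlers may access everything
User-agent: *
Disallow:
```

Because the protocol is purely advisory, a crawler that ignores these directives faces no technical barrier, which is why disputes like this one turn on whether companies honour the file rather than on whether sites have published it.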