Introduction to AI Search Risks
Over half of us now use AI to search the web, yet the stubbornly low data accuracy of common tools creates new business risks. While generative AI (GenAI) offers undeniable efficiency gains, a new investigation highlights a disparity between user trust and technical accuracy that poses specific risks to corporate compliance, legal standing, and financial planning.
The Adoption of AI Tools
For the C-suite, the adoption of these tools represents a classic ‘shadow IT’ challenge. According to a survey of 4,189 UK adults conducted in September 2025, around a third of users believe AI is already more important to them than standard web searching. If employees trust these tools for personal queries, they are almost certainly employing them for business research.
The Accuracy Gap
The investigation, conducted by Which?, suggests that unverified reliance on these platforms could be costly. Around half of AI users report trusting the information they receive to a ‘reasonable’ or ‘great’ extent. Yet, judging by the detail of the responses the AI models actually provided, that trust is often misplaced.
How the Tools Performed
The study tested six major tools – ChatGPT, Google Gemini (both standard and ‘AI Overviews’), Microsoft Copilot, Meta AI, and Perplexity – across 40 common questions spanning finance, law, and consumer rights. Perplexity achieved the highest total score at 71 percent, closely followed by Google Gemini AI Overviews at 70 percent. At the other end, Meta AI scored the lowest at 55 percent. ChatGPT, despite its widespread adoption, received a total score of 64 percent, making it the second-lowest performer among the tools tested.
Business Risks
However, the investigation revealed that all of these AI tools frequently misread information or provided incomplete advice that could pose serious business risks. For financial officers and legal departments, the nature of these errors is particularly concerning. When asked how to invest a £25,000 annual ISA allowance – a deliberate error, since the statutory limit is £20,000 – both ChatGPT and Copilot failed to spot the mistake. Instead of correcting the figure, they offered advice that risked breaching HMRC rules.
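To make the failure concrete, here is a minimal sketch of the guardrail the tools lacked: checking a user-supplied figure against the statutory allowance before building any advice around it. The constant and function names are illustrative rather than drawn from the investigation, and the £20,000 limit reflects the current UK annual ISA allowance.

```python
# Illustrative guardrail: validate a user-supplied figure against the
# statutory ISA limit before generating any advice around it.
ISA_ANNUAL_LIMIT_GBP = 20_000  # UK annual ISA allowance (current tax year)

def check_isa_amount(amount_gbp: float) -> str:
    """Flag amounts that exceed the statutory allowance instead of
    silently building advice around the wrong figure."""
    if amount_gbp > ISA_ANNUAL_LIMIT_GBP:
        return (f"£{amount_gbp:,.0f} exceeds the £{ISA_ANNUAL_LIMIT_GBP:,} "
                "annual ISA allowance - correct the figure before advising.")
    return f"£{amount_gbp:,.0f} is within the annual ISA allowance."

# The deliberate error from the study's prompt:
print(check_isa_amount(25_000))
```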
Source Transparency Issues
A primary concern for enterprise data governance is the lineage of information. AI search tools bear a particular responsibility to be transparent about where their answers come from, yet the investigation found they frequently cited sources that were vague, non-existent, or of dubious accuracy, such as old forum threads. This opacity can lead to poor decisions and financial inefficiency.
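One basic lineage control a business could automate is testing whether an answer's cited URLs resolve at all before the content is circulated. Below is a minimal sketch using the requests library; the citation list is hypothetical input, and a live page says nothing about quality – an old forum thread resolves just as readily as official guidance, so human review of the content remains essential.

```python
import requests

def live_citations(urls: list[str], timeout: float = 5.0) -> dict[str, bool]:
    """Return whether each cited URL responds at all.

    Existence is the weakest possible check: a reachable page can still
    be an outdated forum thread, so content review remains essential.
    """
    results: dict[str, bool] = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False  # unreachable or malformed: treat as non-existent
    return results

# Hypothetical citations extracted from an AI answer:
print(live_citations(["https://www.gov.uk/individual-savings-accounts"]))
```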
Mitigating AI Business Risk
For business leaders, the path forward is not to ban AI tools – a move that often backfires by driving usage further into the shadows – but to implement robust governance frameworks that ensure the accuracy of their output when they are used for web search:
- Enforce specificity in prompts: The investigation notes that AI is still learning to interpret prompts. Corporate training should emphasise that vague queries yield risky data.
- Mandate source verification: Trusting a single output is operationally unsound. Employees must demand to see sources and check them manually.
- Operationalise the “second opinion”: At this stage of technical maturity, GenAI outputs should be viewed as just one opinion among many. For complex issues involving finance, law, or medical data, AI lacks the ability to fully comprehend nuance; a sketch of a simple cross-check follows this list.
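One way to operationalise that principle is sketched below: route the same query to two independent models and flag any disagreement for human review. The model callables here are hypothetical stand-ins for whatever provider SDKs an organisation actually uses, and the comparison is deliberately crude – a production version would need semantic matching on top of source checks.

```python
from typing import Callable

def second_opinion(prompt: str,
                   model_a: Callable[[str], str],
                   model_b: Callable[[str], str]) -> dict:
    """Query two independent models and flag disagreement for human review.

    Normalised string equality is a deliberately crude comparison;
    real answers would need semantic matching.
    """
    answer_a, answer_b = model_a(prompt), model_b(prompt)
    agree = answer_a.strip().lower() == answer_b.strip().lower()
    return {
        "answer_a": answer_a,
        "answer_b": answer_b,
        "needs_human_review": not agree,  # escalate whenever the models diverge
    }

# Stub models standing in for real provider SDK calls:
result = second_opinion(
    "What is the current annual UK ISA allowance?",
    lambda q: "£20,000",
    lambda q: "£25,000",  # a divergent answer triggers escalation
)
print(result["needs_human_review"])  # True
```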
Conclusion
AI tools are evolving and their web-search accuracy is gradually improving, but, as the investigation concludes, relying on them too heavily right now could prove costly. For the enterprise, the difference between an efficiency gain and a compliance failure lies in the verification process.
FAQs
Q: What is the main risk of using AI tools for web search?
A: The main risk is the low data accuracy of common tools, which can lead to failures in corporate compliance, legal standing, and financial planning.
Q: How can businesses mitigate AI business risk?
A: Businesses can mitigate AI business risk by implementing robust governance frameworks, enforcing specificity in prompts, mandating source verification, and operationalising the “second opinion”.
Q: What is the importance of source transparency in AI search tools?
A: Source transparency is crucial for enterprise data governance, as AI search tools often cite sources that are vague, non-existent, or have dubious accuracy, leading to financial inefficiency.
Q: What is the role of human verification in AI outputs?
A: Human verification is essential to ensure the accuracy of AI outputs, especially for complex issues involving finance, law, or medical data, where AI lacks the ability to fully comprehend nuance.