Introduction to AI Benchmarks
A new academic review suggests that AI benchmarks are flawed, potentially leading enterprises to make high-stakes decisions based on “misleading” data. Enterprise leaders are committing eight- and nine-figure budgets to generative AI programmes, and these procurement and development decisions often rely on public leaderboards and benchmarks to compare model capabilities.
The Problem with AI Benchmarks
A large-scale study, ‘Measuring what Matters: Construct Validity in Large Language Model Benchmarks,’ analysed 445 separate LLM benchmarks from leading AI conferences. A team of 29 expert reviewers found that “almost all articles have weaknesses in at least one area,” undermining the claims they make about model performance. For CTOs and Chief Data Officers, this finding strikes at the heart of AI governance and investment strategy. If a benchmark claiming to measure ‘safety’ or ‘robustness’ does not actually capture those qualities, an organisation could deploy a model that exposes it to serious financial and reputational risk.
The Construct Validity Problem
The researchers focused on a core scientific principle known as construct validity. In simple terms, this is the degree to which a test measures the abstract concept it claims to be measuring. For example, while ‘intelligence’ cannot be measured directly, tests are created to serve as measurable proxies. The paper notes that if a benchmark has low construct validity, “then a high score may be irrelevant or even misleading”. This problem is widespread in AI evaluation. The study found that key concepts are often “poorly defined or operationalised”. This can lead to “poorly supported scientific claims, misdirected research, and policy implications that are not grounded in robust evidence”.
Where the Enterprise AI Benchmarks are Failing
The review identified systemic failings across the board, from how benchmarks are designed to how their results are reported.
- Vague or contested definitions: You cannot measure what you cannot define. The study found that even when definitions for a phenomenon were provided, 47.8 percent were “contested,” addressing concepts with “many possible definitions or no clear definition at all”.
- Lack of statistical rigour: Perhaps most alarming for data-driven organisations, the review found that only 16 percent of the 445 benchmarks used uncertainty estimates or statistical tests to compare model results (a sketch of what such a test can look like follows this list).
- Data contamination and memorisation: Many benchmarks, especially those for reasoning (like the widely used GSM8K), are undermined when their questions and answers appear in the model’s pre-training data.
- Unrepresentative datasets: The study found that 27 percent of benchmarks used “convenience sampling,” such as reusing data from existing benchmarks or human exams. This data is often not representative of the real-world phenomenon.
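To make the statistical-rigour point concrete, the sketch below shows one way to attach uncertainty estimates to a benchmark comparison using a paired bootstrap. The per-item scores, model names, and sample size are hypothetical placeholders rather than figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-item results: 1 = correct, 0 = incorrect, one entry per benchmark item.
model_a = rng.binomial(1, 0.78, size=500)   # placeholder scores for "model A"
model_b = rng.binomial(1, 0.74, size=500)   # placeholder scores for "model B"

def paired_bootstrap_gap(a, b, n_resamples=10_000):
    """Bootstrap the accuracy gap between two models scored on the same items."""
    n = len(a)
    idx = rng.integers(0, n, size=(n_resamples, n))   # resample item indices with replacement
    gaps = a[idx].mean(axis=1) - b[idx].mean(axis=1)  # accuracy difference per resample
    low, high = np.percentile(gaps, [2.5, 97.5])      # 95% confidence interval
    return gaps.mean(), (low, high)

gap, (low, high) = paired_bootstrap_gap(model_a, model_b)
print(f"Accuracy gap A-B: {gap:+.3f}, 95% CI [{low:+.3f}, {high:+.3f}]")
```

If the resulting confidence interval spans zero, the gap between two models on a leaderboard may simply be noise rather than a real difference in capability.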
From Public Metrics to Internal Validation
For enterprise leaders, the study serves as a strong warning: public AI benchmarks are not a substitute for internal and domain-specific evaluation. A high score on a public leaderboard is not a guarantee of fitness for a specific business purpose. Isabella Grandi, Director for Data Strategy & Governance at NTT DATA UK&I, commented: “A single benchmark might not be the right way to capture the complexity of AI systems, and expecting it to do so risks reducing progress to a numbers game rather than a measure of real-world responsibility. What matters most is consistent evaluation against clear principles that ensure technology serves people as well as progress.”
Recommendations for Enterprise Leaders
The paper’s eight recommendations provide a practical checklist for any enterprise looking to build its own internal AI benchmarks and evaluations, in line with the principles-based approach Grandi describes.
- Define your phenomenon: Before testing models, organisations must first create a “precise and operational definition for the phenomenon being measured”.
- Build a representative dataset: The most valuable benchmark is one built from your own data.
- Conduct error analysis: Go beyond the final score and examine where, and why, a model fails (see the sketch after this list).
- Justify validity: Finally, teams must “justify the relevance of the benchmark for the phenomenon with real-world applications”.
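As a companion to the error-analysis recommendation, here is a minimal sketch of breaking a single aggregate score into per-category failure rates. The category names and results are invented for illustration; in practice they would come from your own evaluation runs.

```python
import pandas as pd

# Hypothetical evaluation results: one row per benchmark item, labelled by task category.
results = pd.DataFrame([
    {"category": "refund_policy",   "correct": True},
    {"category": "refund_policy",   "correct": False},
    {"category": "contract_review", "correct": False},
    {"category": "contract_review", "correct": False},
    {"category": "product_faq",     "correct": True},
    {"category": "product_faq",     "correct": True},
])

overall = results["correct"].mean()
by_category = (
    results.groupby("category")["correct"]
    .agg(accuracy="mean", n="count")   # accuracy and item count per category
    .sort_values("accuracy")           # weakest categories first
)

print(f"Overall accuracy: {overall:.2f}")
print(by_category)
```

A respectable overall score can hide a category where the model fails almost every time, which is exactly the blind spot the error-analysis step is meant to expose.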
Conclusion
The race to deploy generative AI is pushing organisations to move faster than their governance frameworks can adapt. This study shows that the very tools used to measure progress are often flawed. The only reliable path forward is to stop trusting generic AI benchmarks and start “measuring what matters” for your own enterprise.
FAQs
- Q: What is the problem with AI benchmarks?
A: AI benchmarks are often flawed, which can lead to misleading data and poor decision-making.
- Q: What is construct validity?
A: Construct validity refers to the degree to which a test measures the abstract concept it claims to be measuring.
- Q: Why are enterprise AI benchmarks failing?
A: Enterprise AI benchmarks are failing due to vague or contested definitions, lack of statistical rigour, data contamination and memorisation, and unrepresentative datasets.
- Q: What can enterprise leaders do to improve AI evaluation?
A: Enterprise leaders can build their own internal AI benchmarks and evaluations, aligning with a principles-based approach, and follow the recommendations provided in the paper.