Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v6.0 benchmark suite. This release includes several important advances that ensure the benchmark suite tests ...
AI models code simple games but struggle to play them ...
Google researchers have proposed TurboQuant, a method for compressing the key-value caches that large language models rely on ...
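The teaser only names the problem TurboQuant targets; for readers unfamiliar with KV-cache compression, here is a minimal sketch of the general idea in NumPy: per-channel int8 quantization of a cached key/value tensor. This is a generic baseline for illustration, not TurboQuant's actual algorithm, and the function names and tensor shapes are assumptions.

```python
import numpy as np

def quantize_kv(cache: np.ndarray):
    """Per-channel symmetric int8 quantization of a KV-cache tensor.

    cache: float32 array of shape (seq_len, num_heads, head_dim).
    Returns the int8 tensor plus per-channel scales for dequantization.
    """
    # One scale per head_dim channel, chosen so the max magnitude maps to 127.
    scales = np.abs(cache).max(axis=(0, 1), keepdims=True) / 127.0
    scales = np.maximum(scales, 1e-8)  # guard against all-zero channels
    q = np.clip(np.round(cache / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize_kv(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales

# Toy cache: 128 tokens, 8 heads, 64-dim heads.
kv = np.random.randn(128, 8, 64).astype(np.float32)
q, s = quantize_kv(kv)
recon = dequantize_kv(q, s)
print(f"memory: {kv.nbytes} -> {q.nbytes} bytes")       # 4x smaller
print(f"max abs error: {np.abs(kv - recon).max():.4f}")
```

Even this naive scheme cuts cache memory 4x at some accuracy cost; methods like TurboQuant aim at better distortion for the same budget.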
As large language models (LLMs) continue to improve at coding, the benchmarks used to evaluate their performance are steadily becoming less useful. That's because, although many LLMs have similar high ...
This study introduces MathEval, a comprehensive benchmarking framework designed to systematically evaluate the mathematical reasoning capabilities of large language models (LLMs). Addressing key ...
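The abstract describes what MathEval measures but not how such a harness is wired; below is a minimal, generic sketch of the core loop most math benchmarks share: grade a model's free-form answer against ground truth by exact match after normalization. Everything here (`normalize`, `evaluate`, the dataset shape) is an assumption for illustration, not MathEval's actual API.

```python
import re

def normalize(answer: str) -> str:
    """Canonicalize a math answer: strip whitespace and extract the final number
    if the model wrapped it in explanation text."""
    answer = answer.strip().rstrip(".")
    nums = re.findall(r"-?\d+(?:\.\d+)?", answer)
    return nums[-1] if nums else answer.lower()

def evaluate(model_fn, dataset) -> float:
    """Score a model (a callable: question -> answer string) by exact match."""
    correct = sum(
        normalize(model_fn(item["question"])) == normalize(item["answer"])
        for item in dataset
    )
    return correct / len(dataset)

dataset = [
    {"question": "What is 17 * 24?", "answer": "408"},
    {"question": "What is 2**10?", "answer": "1024"},
]
print(evaluate(lambda q: "The answer is 408.", dataset))  # 0.5
```

Frameworks like MathEval layer datasets, prompting protocols, and more robust answer matching on top of this basic loop.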
OpenAI Group PBC and Mistral AI SAS today introduced new artificial intelligence models optimized for cost-sensitive use ...
A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models. Researchers from some of the ...