Why Stats Perform Switched to Groq: Intelligent Sports Insights, 7-10X Faster Inference

If you’ve ever heard a commentator drop a stat like, “That’s his 8th shot on target this match,” there’s a good chance it came from Stats Perform. For decades, its Opta brand has been the authority in sports data, powering how games are measured, understood, and shared with the world.

Behind the scenes, Stats Perform runs on 7.2 petabytes of proprietary sports data and AI models embedded in 200+ software modules. That’s like 360 billion pages of text. That scale fuels an entire sports ecosystem, from broadcasters and leagues to media outlets, apps, and the global sports betting industry. The biggest names in sports rely on Stats Perform to engage fans, grow audiences, and, ultimately, win. And that’s why it’s recognized as the leader in sports AI.

“We don’t just collect data,” explains Christian Marko, Chief Innovation Officer at Stats Perform. “We’ve defined the language of sport. From KPIs to the way leagues and broadcasters describe performance, a lot of it came from us.”

That reputation is grounded in credibility, with Stats Perform’s data widely regarded as the “ground truth” benchmark. With accuracy levels that consistently sit at the very top of the field, the company doesn’t just capture the game; it sets the global standard for how the game is understood.

But as sports evolve, so do the demands on the data.

From research to real-world impact

Over the past decade, Stats Perform has invested heavily in AI research, obtaining more than 665 patent assets and an extensive library of models tailored to sports. “My job is to take these models out of the lab and turn them into real-world products,” explains Christian. To make it happen, Stats Perform needed infrastructure that could deliver performance, flexibility, and trust, at scale.

Sports data must be delivered in real time, with hyper-accurate insights and speed measured in milliseconds. While Stats Perform previously relied on other providers to serve open-source models, the latency and cost structures were limiting. Scaling AI inference workloads was expensive and rigid, and hardware-heavy alternatives carried high costs and slower time-to-market.

“For us, one second is too much,” Christian explains. “I need inference in real-time, and our old approach simply wasn’t going to work.” So Christian set out to define the right strategy for scaling AI, and the infrastructure to make it happen.

The ROI of smart investment

While many companies talk about cost savings with AI, Christian’s perspective is different. “I wasn’t hunting for cost savings. I was hunting for where to invest,” he shares. “I had an option to invest millions in hardware—or do something more strategic. Why should we try to run open source models ourselves when others do it better? Groq was the perfect fit: powerful, trustworthy, and with a pricing model that actually makes sense.”

Stats Perform strategically chose GroqCloud as its main inference partner for standard open-source models rather than purchasing extensive hardware, citing speed, cost efficiency, and the support needed to move quickly and confidently.

A partnership built on speed and trust

The decision wasn’t about cost-cutting; it was about speed, flexibility, and partnership. “Choosing Groq was about getting to market faster and smarter. We get the performance and results we need to accelerate time-to-market. It felt like a win-win, like a true partnership,” Christian says.

Beyond speed and scale, another factor sealed the deal with Groq: trust. “For us, data is an IP asset,” Christian explains. “We can’t risk it being used to train someone else’s models. We value that Groq provides inference services without training on our data. That level of trust and performance gave us the confidence to scale quickly.”

That assurance, combined with Groq’s ability to keep up with the latest open-source models, made it clear to Christian that this was the right partnership.

Performance that changes the game

Stats Perform is already seeing measurable benefits. “The average inference speed with Groq is 7–10x faster than anything else we tested,” Christian says. “Even running models locally on expensive hardware, Groq still wins on overall performance.”

That performance isn’t just about bragging rights. It’s the difference between delivering a stat in time for a live broadcast or missing the moment.

“The biggest impact has been speed to market,” Christian shares. “Groq allows us to do things much faster—and at very high quality—that we couldn’t do elsewhere.”

Groq is also enabling Stats Perform to scale. Many new AI microservices are planned for development this year, with Groq playing a central role. That includes a newly launched internal “ChatGPT-like” system, powering AI-driven workflows for employees across the business.

“We’ve got over 60 generative AI initiatives running right now,” Christian says. “This is less than 1% of what’s coming.”

Advice for others

What would Christian tell other companies considering Groq? “It’s simple: you won’t get this speed for this price anywhere else,” Christian says. “Sure, there are competitors who can be faster, but the costs are insane. With Groq, the balance of speed, quality, and pricing is unbeatable. Test it for yourself. After a week or two, you’ll see that it’s a no-brainer.”

Build Fast

Seamlessly integrate Groq starting with just a few lines of code
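As a minimal sketch of what those few lines can look like, here is a chat-completion call against an open-source model on GroqCloud using the official `groq` Python SDK. The model id, system prompt, and question are illustrative assumptions, not part of the Stats Perform deployment; check the current GroqCloud model catalog before using.

```python
import os

# Hedged sketch: one chat-completion request to GroqCloud via the
# official `groq` Python SDK (pip install groq). The model id below
# is an example, not a guaranteed current offering.

def build_request(question: str) -> dict:
    """Assemble a chat-completion payload for the Groq SDK."""
    return {
        "model": "llama-3.3-70b-versatile",  # illustrative model id
        "messages": [
            {"role": "system", "content": "You are a concise sports-stats assistant."},
            {"role": "user", "content": question},
        ],
    }

# The network call only runs when an API key is configured.
if os.environ.get("GROQ_API_KEY"):
    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment
    resp = client.chat.completions.create(
        **build_request("Summarize what 'shots on target' measures in football.")
    )
    print(resp.choices[0].message.content)
```

The payload is deliberately split into a plain dict so the same request shape can be reused with any OpenAI-compatible client.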