Resources: News
Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today.
For general press inquiries, reach out to our PR team.
Groq® Is Still Faster
MOUNTAIN VIEW, CA, March 18, 2024 – Groq®, a generative AI solutions company, responds to the NVIDIA GTC keynote: “Still faster.”
Groq® Acquires Definitive Intelligence to Launch GroqCloud™
Definitive Intelligence Co-founder and CEO Sunny Madra to Lead New GroqCloud Business Unit and Launch New Developer Playground. MOUNTAIN VIEW, CA, March 1, 2024.
Groq® LPU™ Inference Engine Leads in First Independent LLM Benchmark
ArtificialAnalysis.ai Adjusts Chart Axes to Accommodate Groq Performance Levels. MOUNTAIN VIEW, CA, February 13, 2024 – Groq®, a generative AI solutions company.
Groq® Opens API Access to Real-time Inference, the Magic Behind Instant Responses from Generative AI Products
Customer and Partner aiXplain Implements Game-changing Groq Technology to Bring the World’s Fastest AI Language Processing for Consumer Electronics to Market. LAS VEGAS, CES® 2024.
Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B
The Groq Language Processing Unit™ system is the AI-assistance technology poised to deliver real-time, low-lag experiences for users through its inference performance.
Groq to Feature World’s Fastest GenAI Inference Performance for Foundational LLMs at Supercomputing ’23 on Its LPU™ Systems
The Groq team will showcase a demo of the world’s best low-latency performance for Large Language Models (LLMs) running on a Language Processing Unit™ (LPU™) system.
Argonne Deploys New Groq System to ALCF AI Testbed, Providing AI Accelerator Access to Researchers Globally
Groq, an artificial intelligence (AI) solutions company, and the US Department of Energy’s (DOE) Argonne National Laboratory announced today that Groq hardware is now available to researchers through the ALCF AI Testbed.
Groq to Showcase World’s Fastest Large Language Model Performance, Powered by Its LPU™ System, at the Global Emerging Technology Summit in Washington, DC
Groq, an AI solutions company, announced today a record-breaking AI processing demo, powered by the ultra-low-latency performance of its LPU™ system, to be delivered at the Global Emerging Technology Summit in Washington, DC.
Groq Smashes LLM Performance Record Again Using an LPU™ System With No Response From GPU Companies
Groq, an artificial intelligence (AI) solutions provider, today announced it has more than doubled its inference performance on the Large Language Model (LLM) Llama-2 70B.
Groq Selects Samsung Foundry to Bring Next-gen LPU™ to the AI Acceleration Market
Groq, an artificial intelligence (AI) inference systems innovator, today announced it has contracted with Samsung’s growing Foundry business to be its next-gen silicon partner.