Resources: News
Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today.
For general press inquiries, reach out to our PR team.
Groq® and Carahsoft Co-host First GroqDay for Public Sector Leaders Focused on AI Inference Solutions for the Government
Alexis Bonnell, Karen Evans, and Jacqueline Tame Will Speak About Embracing AI Technology for Mission Efficiency and Will Explore Government Use Cases MOUNTAIN VIEW,
Groq® and Earth Wind & Power to Build AI Compute Center for Europe in Norway that May Rival Tech Giant Scale
Leader in Real-time AI Inference on Track to Deliver 50% of the World’s Inference Compute Capacity via GroqCloud™ by End of 2025 MOUNTAIN VIEW, Calif.,
Demand for Real-time AI Inference from Groq® Accelerates Week Over Week
70,000 Developers in the Playground on GroqCloud™ and 19,000 New Applications Running on the LPU™ Inference Engine MOUNTAIN VIEW, CA, April 2, 2024 – Groq®, a
Groq® Is Still Faster
MOUNTAIN VIEW, CA, March 18, 2024 – Groq®, a generative AI solutions company, responds to NVIDIA GTC keynote: “Still faster.” About Groq Groq® is a generative
Groq® Acquires Definitive Intelligence to Launch GroqCloud™
Definitive Intelligence Co-founder and CEO Sunny Madra to Lead New GroqCloud Business Unit and Launch New Developer Playground MOUNTAIN VIEW, CA, March 1, 2024 – Groq®,
Groq® LPU™ Inference Engine Leads in First Independent LLM Benchmark
ArtificialAnalysis.ai Adjusts Chart Axes to Accommodate Groq Performance Levels MOUNTAIN VIEW, CA, February 13, 2024 – Groq®, a generative AI solutions company, is the clear
Groq® Opens API Access to Real-time Inference, the Magic Behind Instant Responses from Generative AI Products
Customer and Partner aiXplain Implements Game-changing Groq Technology to Bring the World’s Fastest AI Language Processing for Consumer Electronics to Market LAS VEGAS, CES® 2024,
Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B
The Groq Language Processing Unit™ system is the AI-assistance enablement technology poised to deliver real-time, low-lag experiences for users through its inference performance.
Groq to Feature World’s Fastest GenAI Inference Performance for Foundational LLMs at Supercomputing ’23 on Its LPU™ Systems
Groq and its team will showcase a demo of the world’s best low-latency performance for Large Language Models (LLMs) running on a Language
Argonne Deploys New Groq System to ALCF AI Testbed, Providing AI Accelerator Access to Researchers Globally
Groq, an artificial intelligence (AI) solutions company, and the US Department of Energy’s (DOE) Argonne National Laboratory announced today that Groq hardware is now available