Groq® Is Still Faster

Written by:
Groq

MOUNTAIN VIEW, CA, March 18, 2024 – Groq®, a generative AI solutions company, responds to the NVIDIA GTC keynote: “Still faster.”

About Groq

Groq® is a generative AI solutions company and the creator of the LPU™ Inference Engine, the fastest language processing accelerator on the market. It is architected from the ground up to deliver low-latency, energy-efficient, and repeatable inference performance at scale. Customers rely on the LPU Inference Engine as an end-to-end solution for running Large Language Models (LLMs) and other generative AI applications at 10x the speed. Groq Systems powered by the LPU Inference Engine are available for purchase. Customers can also leverage the LPU Inference Engine for experimentation and production-ready applications via an API in GroqCloud™ by purchasing Tokens-as-a-Service. Jonathan Ross, inventor of the Google Tensor Processing Unit (TPU), founded Groq to preserve human agency while building the AI economy. Experience Groq speed for yourself at groq.com.

Media Contact for Groq

Allyson Scott
