About
FAQ
What does Groq do?
We are an AI-born, software-first platform made to run real-time AI solutions at scale.
- AI-born: We are designed and developed from the ground up for high AI performance at scale.
- Software-first platform: We put software, and the needs of software developers, at the center of our architecture.
- Real-time AI solutions: We are optimized for GenAI language-based solutions where speed matters.
- At scale: We can scale seamlessly, with practically no impact on performance or accuracy.
Built on the GroqChip™ accelerator, the industry’s first Language Processing Unit™ inference engine, our unique and pioneering architecture transforms the pace, predictability, performance, and precision of AI solutions for language-based applications.
We offer superior performance for applications like LLMs, delivering real-time outcomes, higher throughput, greater precision, and more efficient scalability.
We accelerate the pace of AI workload development through rapid, kernel-less compilation, improving time to market, reducing resource requirements, and keeping up with the pace of innovation.
We deliver predictability, providing accurate data on workload performance and costs at compile time so that developers can optimize software design with full understanding of how it will run when deployed.
We enable higher precision, as lower latency allows software techniques to deliver better predictions and improved business results in real time.
We envision an AI solutions ecosystem that moves at the pace of software, unconstrained by the slow pace and high costs of big chip makers’ hardware development cycles. Partner with us today to build the AI solutions ecosystem of tomorrow.
Why does Groq exist?
Our inception coincided with the arrival of the AI revolution and its far greater opportunity for impact: the potential to soon become a dominant global economic activity.
After designing the first TPU at Google, founder and CEO Jonathan Ross was concerned about the barrier to entry for others, so he set out to create the opportunity for anyone to join the AI economy.
At Groq, we’ve developed a new and innovative technology that industry luminaries now recognize as revolutionary. Our initial customers span finance, industrial automation, cybersecurity, and scientific research for leading government labs. Solving problems across such a wide range of markets is rare, even among established players. Our deterministic, single-core streaming architecture lays the foundation for the Groq compiler’s unique ability to predict exactly the performance and compute time of any given workload. The result is uncompromised low latency and performance, delivering real-time AI and HPC.
What does “Groq” mean?
What is an LPU™ inference engine?
An LPU™ inference engine, with LPU standing for Language Processing Unit™, is a new type of processing unit system invented by Groq to handle computationally intensive applications with a sequential component, such as Large Language Models (LLMs). The LPU inference engine is designed to overcome the two bottlenecks for LLMs: the amount of compute and memory bandwidth. An LPU inference engine has as much compute as a Graphics Processor (GPU) or more, which is much more than a Central Processor (CPU), and it reduces the time required to calculate each word, allowing sequences of text to be generated much faster. This, alongside the elimination of external memory bandwidth bottlenecks, enables the LPU inference engine to deliver orders of magnitude better performance on LLMs than a Graphics Processor.
What is the definition of an LPU™ inference engine?
An LPU™ inference engine has the following characteristics:
- Exceptional sequential performance
- Single core architecture
- Synchronous networking that is maintained even for large scale deployments
- Ability to auto-compile >50B parameter LLMs
- Instant memory access
- High accuracy that is maintained even at lower precision levels
What is the latest performance by Groq on the LPU™ inference engine?
Recently Groq published performance results of over 300 tokens per second per user on Llama 2 70B running on an LPU™ system. Read the full press release here.
How is an LPU™ inference engine different from a GPU or Graphics Processor?
An LPU™ inference engine is optimized to perform the large number of calculations an LLM requires 10-100x as quickly as a Graphics Processor can. To generate the 100th word or token in a sequence, you must first generate the 99th. On a Graphics Processor this forces a trade-off: either sacrifice model size, quality, and usefulness, since waiting for a Graphics Processor to render text on larger models is a lot like using a dial-up modem, or increase batch size, which improves cost and power efficiency but slows text generation down.
An LPU inference engine can generate sequences of text much more quickly, even on very large language models, meaning that you no longer have to fall back on smaller models to generate text nearly instantly. And because an LPU system performs naturally well on language tasks, there is no cost or power penalty for running models at a usable responsiveness. This improves user experience, lowers cost, and saves power.
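To make the sequential dependency described above concrete, here is a minimal, framework-agnostic sketch of autoregressive decoding; the `model` and `tokenizer` objects are hypothetical placeholders, not a Groq API:

```python
# Minimal sketch of autoregressive decoding: each token depends on all the
# tokens before it, so end-to-end latency scales with per-token latency.
# `model` and `tokenizer` are hypothetical placeholders, not a Groq API.

def generate(model, tokenizer, prompt: str, max_new_tokens: int = 100) -> str:
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        # The Nth token cannot be computed until tokens 1..N-1 exist,
        # so lowering per-token latency directly shortens total time.
        next_token = model.predict_next(tokens)
        tokens.append(next_token)
        if next_token == tokenizer.eos_token_id:
            break
    return tokenizer.decode(tokens)
```

At the roughly 300 tokens per second per user cited below, 100 new tokens arrive in about a third of a second; at 30 tokens per second they take more than three seconds, which is why per-token latency dominates the end-to-end experience.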
Does Groq focus on inference or training?
Currently, Groq is specifically focused on inference. While the GroqChip™ accelerator is powerful enough for training, it is designed and optimized for real-time scaled inference applications.
What frameworks does Groq currently support?
Groq supports TensorFlow, PyTorch, and any other framework whose models can be exported to ONNX, which the Groq compiler supports. Groq also provides a low-level programming API in the GroqWare™ developer suite.
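As one illustration of the ONNX path, here is a minimal sketch of exporting a toy PyTorch model to ONNX using only standard PyTorch APIs; the model and file name are placeholders, not Groq-specific tooling:

```python
import torch
import torch.nn as nn

# A toy model used only to illustrate the export step.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# An example input defines the shapes traced into the ONNX graph.
example_input = torch.randn(1, 128)
torch.onnx.export(model, example_input, "model.onnx", opset_version=13)
```

The resulting `model.onnx` file is the framework-neutral artifact that, per the answer above, the Groq compiler can consume.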
How can I get access to Groq hardware and software?
Please reach out to the Groq team at [email protected].
What classes of models does Groq support?
Groq currently supports a broad range of models and workloads, including LLMs, FFTs, MatMuls, CNNs, Transformers, LSTMs, GNNs, FinTech workloads, and more. Furthermore, Groq is constantly broadening workload support and improving performance. Groq has also published a handful of proof points on its GitHub page (github.com/groq/groqflow), covering computer vision, natural language processing, and speech.
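For readers who want to try the GroqFlow repository referenced above, the usage pattern looks roughly like the sketch below; the `groqit()` entry point and its exact signature are an assumption based on that repository’s examples rather than a verified API reference:

```python
import torch
from groqflow import groqit  # assumed import path from github.com/groq/groqflow

# A toy model and example inputs; real workloads would substitute their own.
model = torch.nn.Linear(64, 8)
inputs = {"input": torch.randn(1, 64)}

gmodel = groqit(model, inputs)  # build the model for Groq hardware (assumed call)
outputs = gmodel(**inputs)      # run inference with the built model (assumed call)
```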
I heard Groq is partnering with BittWare, can you provide some more details?
Yes, Groq and BittWare have partnered to enable additional customers with GroqCard™ accelerator-based solutions as well as custom server form factors. Learn more here.
What positions is Groq currently hiring for?
Groq has open positions across a number of roles. Visit groq.com/careers to learn more.