FAQ

What does Groq do?

We are an AI-born, software-first platform, made to run real-time AI solutions at scale.  

  • AI-born: We are designed and developed from the ground up for high AI performance at scale.
  • Software-first platform: We put software, and the needs of software developers, at the center of our architecture.
  • Real-time AI solutions: We are optimized for GenAI language-based solutions where speed matters.
  • At scale: We can scale seamlessly, with practically no impact on performance or accuracy.  

Built on the GroqChip™ accelerator, the industry’s first Language Processing Unit™ inference engine, our pioneering architecture transforms the pace, predictability, performance, and precision of AI solutions for language-based applications.

We offer superior performance for applications like LLMs, delivering real-time outcomes, higher throughput, greater precision, and more efficient scalability.

We accelerate the pace of AI workload development through rapid, kernel-less compilation, improving time to market, reducing resource requirements, and keeping up with the pace of innovation.  

We deliver predictability, providing accurate data on workload performance and costs at compile time so that developers can optimize software design with full understanding of how it will run when deployed.

We enable higher precision, as lower latency allows for software techniques to deliver better predictions and improved business results in real-time.  

We envision an AI solutions ecosystem that moves at the pace of software, unconstrained by the slow pace and high costs of big chip makers’ hardware development cycles. Partner with us today to build the AI solutions ecosystem of tomorrow.  

Our inception coincided with the arrival of the AI revolution and its far larger opportunity for impact – the potential for AI to soon become a dominant global economic activity.

After designing the first TPU at Google, founder and CEO Jonathan Ross was concerned about the barrier to entry for others, so he wanted to create the opportunity for anyone to join in the AI economy.

At Groq, we’ve developed a new and innovative technology, which is now recognized by industry luminaries as revolutionary. Our initial customers span finance, industrial automation, cybersecurity, and scientific research at leading government labs. Solving problems across such a wide range of markets is rare, even among established players. Our deterministic, single-core streaming architecture lays the foundation for the Groq compiler’s unique ability to predict exactly the performance and compute time for any given workload. The result is uncompromised low latency and performance, delivering real-time AI and HPC.

Groq is our company’s trademarked brand name. It originates from the word “grok,” coined in Robert Heinlein’s 1961 novel Stranger in a Strange Land. “To Grok: understand something intuitively or by empathy.” – Robert Heinlein, Stranger in a Strange Land

An LPU™ inference engine, with LPU standing for Language Processing Unit™, is a new type of processing unit system invented by Groq to handle computationally intensive applications with a sequential component, such as Large Language Models (LLMs). The LPU inference engine is designed to overcome the two bottlenecks for LLMs: the amount of compute and memory bandwidth. An LPU inference engine has as much compute as, or more than, a Graphics Processor (GPU), which in turn has far more than a Central Processor (CPU), and it reduces the time required to calculate each word, allowing sequences of text to be generated much faster. This, together with the elimination of external memory bandwidth bottlenecks, enables an LPU inference engine to deliver orders of magnitude better performance on LLMs than a Graphics Processor.
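
As a rough illustration of the memory bandwidth bottleneck described above, consider how much data must move for every generated token. The sketch below uses hypothetical numbers (a 70B-parameter model in FP16 and an assumed 2 TB/s of external memory bandwidth), not Groq benchmarks, to show why bandwidth alone puts a floor on per-token latency.

  # Illustrative back-of-the-envelope estimate, not a Groq or vendor benchmark.
  params = 70e9                 # model parameters (e.g., a 70B-parameter LLM)
  bytes_per_param = 2           # FP16 weights
  weight_bytes = params * bytes_per_param

  bandwidth = 2e12              # assumed external memory bandwidth in bytes/s (~2 TB/s)

  # Each new token must stream (essentially) all weights through the processor,
  # so the memory system alone limits how fast tokens can be produced.
  min_seconds_per_token = weight_bytes / bandwidth
  max_tokens_per_second = 1 / min_seconds_per_token

  print(f"weights: {weight_bytes / 1e9:.0f} GB")
  print(f"per-token latency floor: {min_seconds_per_token * 1e3:.0f} ms")
  print(f"tokens/s per user ceiling: {max_tokens_per_second:.1f}")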

An LPU™ inference engine has the following characteristics:

  1. Exceptional sequential performance 
  2. Single core architecture
  3. Synchronous networking that is maintained even for large scale deployments
  4. Ability to auto-compile >50B parameter LLMs
  5. Instant memory access  
  6. High accuracy that is maintained even at lower precision levels

Recently Groq published performance results of over 300 tokens per second per user on Llama 2 70B running on an LPU™ system. Read the full press release here.
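
To put that figure in context, here is some simple arithmetic on the published number (the words-per-token ratio below is a common rule of thumb for English text, not a Groq figure):

  tokens_per_second = 300              # published Groq result for Llama 2 70B, per user
  latency_per_token_ms = 1000 / tokens_per_second
  words_per_token = 0.75               # rough rule of thumb for English text
  words_per_minute = tokens_per_second * words_per_token * 60

  print(f"~{latency_per_token_ms:.1f} ms per token")
  print(f"~{words_per_minute:,.0f} words per minute")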

An LPU™ inference engine is optimized to perform the large number of calculations an LLM requires 10-100x as quickly as a Graphics Processor can. Text generation is sequential: to generate the 100th word or token in a sequence, you must first generate the 99th. On a Graphics Processor this forces a trade-off. You can trade model size against quality and usefulness, because waiting for a Graphics Processor to render text from larger models is a lot like using a dial-up modem, or you can trade batch size, which improves cost and power efficiency but slows text generation down.
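
That sequential dependency is easiest to see as a generic autoregressive decoding loop. The sketch below is illustrative pseudocode for any LLM runtime, not Groq-specific code: each step consumes everything generated so far, so total generation time is roughly per-token latency multiplied by the number of tokens produced.

  # Generic autoregressive decoding loop (illustrative, not Groq-specific).
  # `model` is a hypothetical callable that returns the next token id given
  # the tokens produced so far; the point is the serial dependency per step.
  def generate(model, prompt_tokens, max_new_tokens, eos_token=None):
      tokens = list(prompt_tokens)
      for _ in range(max_new_tokens):
          next_token = model(tokens)   # step N cannot start until step N-1 finishes
          tokens.append(next_token)
          if next_token == eos_token:
              break
      return tokens

  # Toy usage with a dummy "model" that just returns the current length:
  dummy_model = lambda toks: len(toks)
  print(generate(dummy_model, [1, 2, 3], max_new_tokens=5))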

An LPU inference engine can generate sequences of text much more quickly, even on very large language models, meaning that you no longer have to drop to smaller models to get near-instant text. And because an LPU system performs naturally well on language tasks, achieving usable responsiveness does not come with a cost or power penalty. This improves user experience, lowers cost, and saves power.

Currently, Groq is specifically focused on inference. While the GroqChip™ accelerator is powerful enough for training, it is designed and optimized for real-time scaled inference applications. 

Groq Compiler supports TensorFlow, PyTorch, and any other framework whose models can be exported to ONNX. Groq also provides a low-level programming API in the GroqWare™ developer suite.
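
For other frameworks, the ONNX path looks like a standard export from the source framework. The sketch below uses PyTorch’s built-in exporter, which is standard PyTorch; how the resulting ONNX file is then passed to Groq Compiler is not shown here and follows Groq’s own tooling and documentation.

  # Minimal sketch: export a PyTorch model to ONNX so a downstream compiler
  # (such as Groq Compiler) can consume it. The export API is standard PyTorch;
  # the Groq-specific compilation step is not shown.
  import torch
  import torch.nn as nn

  model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
  model.eval()

  example_input = torch.randn(1, 128)
  torch.onnx.export(
      model,
      example_input,
      "model.onnx",            # ONNX file to hand to the downstream toolchain
      input_names=["input"],
      output_names=["output"],
      opset_version=13,
  )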

Please reach out to the Groq team at [email protected].

Groq currently supports a broad range of models and workloads – including LLMs, FFTs, MatMuls, CNNs, Transformers, LSTMs, GNNs, FinTech workloads, and more. Furthermore, Groq is constantly broadening workload support and improving performance. Groq has also published a handful of proof points on its GitHub page (github.com/groq/groqflow), covering computer vision, natural language processing, and speech.
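
Based on the examples published in that GroqFlow repository, building a PyTorch model for Groq hardware is intended to be a one-line call. The sketch below assumes GroqFlow’s groqit entry point as shown in the repository and an environment with the GroqWare™ suite installed; treat it as a sketch of the workflow rather than verified reference code.

  # Sketch based on the GroqFlow repo (github.com/groq/groqflow); assumes the
  # `groqit` entry point from that project and a machine with GroqWare set up.
  import torch
  from groqflow import groqit

  class TinyModel(torch.nn.Module):
      def __init__(self):
          super().__init__()
          self.fc = torch.nn.Linear(64, 8)

      def forward(self, x):
          return self.fc(x)

  model = TinyModel()
  inputs = {"x": torch.randn(1, 64)}

  gmodel = groqit(model, inputs)   # compile the model for Groq hardware
  outputs = gmodel(**inputs)       # run it like the original PyTorch model
  print(outputs)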

Yes. Groq and Bittware have partnered to enable additional customers with GroqCard™ accelerator-based solutions as well as custom server form factors. Learn more here.

Groq has open positions across a number of roles. Visit groq.com/careers to learn more.
