MOUNTAIN VIEW, CA, October 26, 2023 — Groq, an artificial intelligence (AI) solutions company, announced today that it will have a booth and present multiple talks at SC23, the premier industry conference for high performance computing, November 12-17 in Denver, CO. The Groq team will showcase a demo of the world's best low-latency performance for Large Language Models (LLMs) running on a Language Processing Unit™ system, its next-gen AI accelerator. Groq subject matter experts will present four sessions during the conference on a range of HPC, AI, and research-related topics.
Jim Miller, VP of Engineering at Groq and former engineering leader at Qualcomm, Broadcom, and Intel, shared, “The scale and performance of systems used for AI today are enormous, and will only get larger if built with legacy technology. At Groq we are setting a new standard with our LPU™-based systems, which improve performance, power, and scale when serving a large customer base. This is thanks to the hard work and innovative ideas of our dedicated team of engineers at Groq, who are committed to solving truly novel problems.”
The LPU™ accelerator is Groq's response to the next level of processing power required by enterprise-scale AI applications. With a clear market need for a purpose-built, software-driven processor, the Groq LPU accelerator will power LLMs for the exploding GenAI market.
Yaniv Shemesh, Head of Cloud & HPC Software Engineering at Groq, said, “Groq’s groundbreaking speed, in the form of tokens-as-a-service, was a major milestone for my organization and the company. Running your own hardware and building a large-scale HPC system can be hard, but Groq’s tokens-as-a-service ease of use and consumption-based model are very attractive to customers. Our performance is beyond fast and is opening new possibilities and innovative customer use cases previously unimaginable given the limitations of existing market solutions.”
To date, the company has showcased record-breaking performance on the open source foundational LLM Llama-2 70B from Meta AI, now generating language at over 280 tokens per second per user. Groq also recently deployed Falcon, a powerful language model available for both research and commercial use that currently tops the Hugging Face Leaderboard for pre-trained open source LLMs, and Code Llama, one of the newest LLMs from Meta AI, which helps users generate code.
Attendees can meet Groq and interact with its technology in the following forums:
- Visit booth #1681 to see a live demo of Groq running LLMs on its LPU system.
- Schedule a time to talk with Groq specialists in their VIP Lounge by reaching out to [email protected].
You can also attend any of the following presentations led by Groq subject matter experts:
- Strong Scaling of State-of-the-Art LLM Inference with Groq Software-scheduled Deterministic Networks by Igor Arsovski
- Exploring Converged HPC and AI on the Groq AI Inference Accelerator by Tobias Becker
- From Stencils to Tensors: Running 3D Finite Difference Seismic Imaging on the Groq AI Inference Accelerator by Tobias Becker
- The Argonne National Lab hosted workshop, Programming Novel AI Accelerators for Scientific Computing, where Sanjif Shanmugavelu will be presenting
If you are interested in learning more about Groq or scheduling a private demo of our LPU system, reach out to us at [email protected]. We look forward to meeting you at SC23.
Groq is an AI solutions company and the inventor of the Language Processing Unit™ accelerator that is purpose-built and software-driven to power Large Language Models (LLMs) for the exploding AI market. For more information, visit www.groq.com.
Groq, the Groq logo, and other Groq marks are trademarks of Groq, Inc. Other names and brands may be claimed as the property of others. Reference to specific trade names or trademarks does not necessarily constitute or imply endorsement or recommendation by Groq.