Whitepapers
Groq Automatic Speech Recognition (ASR) API
The Future of AI Is Agentic...and Groq
What is a Language Processing Unit?
Energy Efficiency with the Groq LPU™, AI Inference Technology
Inference Speed Is the Key To Unleashing AI’s Potential
Inference Deployment of Large Language Models
Groq RealScale™
Low Latency
Determinism
Groq TruePoint™ Technology
Conference Papers
ECTC Groq Paper 2024
US Army Report: Groq and Entanglement Cybersecurity Anomaly and Outlier Detection Validation
Answer Fast: Accelerating BERT on the Tensor Streaming Processor
Groq at ISCA 2022
ISCA 2020 Conference
Groq LPU Leads in Inference Performance
Spec Sheets
Product Spec Sheet - GroqCard Accelerator
Product Spec Sheet - GroqChip Processor
Product Spec Sheet - GroqNode Server
Product Spec Sheet - GroqRack Compute Cluster
Product Spec Sheet - GroqWare Suite
Build Fast
Seamlessly integrate Groq starting with just a few lines of code, as in the sketch below.
Try Groq for Free
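To illustrate what "a few lines of code" looks like, here is a minimal sketch that sends one chat completion request through the official `groq` Python SDK. The model name (`llama-3.3-70b-versatile`) and the `GROQ_API_KEY` environment variable are assumptions; check the current model list and quickstart in the Groq docs before running it.

```python
import os

from groq import Groq  # pip install groq

# Read the API key from the environment (create a free key in the GroqCloud console).
client = Groq(api_key=os.environ.get("GROQ_API_KEY"))

# Send a single chat completion request to a Groq-hosted model.
# The model name below is an assumption; substitute any model currently listed in the docs.
completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {"role": "user", "content": "In one sentence, what is a Language Processing Unit?"},
    ],
)

print(completion.choices[0].message.content)
```

The SDK follows the familiar chat-completions pattern, so swapping an existing OpenAI-style integration over to Groq is largely a matter of changing the client, API key, and model name.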