Resources: Articles
Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today.
Hey Sridhar…
Hey Sridhar, Congrats on your new role as CEO of Snowflake. We’re kind of partial to Xoogler CEOs, so we wish you well! You recently…
Groundbreaking Gemma 7B Performance Running on the Groq LPU™ Inference Engine
ArtificialAnalysis.ai Shares Gemma 7B Instruct API Providers Analysis, with Groq Offering Up To 15X Greater Throughput. In the world of large language models (LLMs), efficiency…
ArtificialAnalysis.ai LLM Benchmark Doubles Axis To Fit New Groq LPU™ Inference Engine Performance Results
Groq Represents a “Step Change” in Inference Speed Performance, According to ArtificialAnalysis.ai. We’re opening the second month of the year with our second LLM benchmark…
Hey Zuck…
Hey Mark, Word has it that you’re building a fantastic second home on a big plot of land in Kauai, replete with tree houses, rope…
Hey Sam…
Hey Sam, Congratulations on finally launching your ChatGPT store! At Groq® (with a q, not a k) we’re an AI technology company too. We understand…
Groq LPU™ Inference Engine Crushes First Public LLM Benchmark
Groq Delivers up to 18x Faster LLM Inference Performance on Anyscale’s LLMPerf Leaderboard Compared to Top Cloud-based Providers. Source: https://github.com/ray-project/llmperf-leaderboard?tab=readme-ov-file Hey Groq Prompters! We’re thrilled…