Resources: Articles
Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today.
12 Hours Later, Groq Deploys Llama 3 Instruct (8 & 70B) by Meta AI on Its LPU™ Inference Engine
Llama 3 Now Available to Developers via GroqChat and GroqCloud™ Here’s what’s happened in the last 36 hours: April 18th, Noon: Meta releases versions of…
What NVIDIA Didn’t Say
Hi everyone, We were captivated by Jensen Huang’s opening keynote last week at NVIDIA GTC. He did a masterful job, mixing in humor, a…
Hey Sridhar…
Hey Sridhar, Congrats on your new role as CEO of Snowflake. We’re kind of partial to Xoogler CEOs, so we wish you well! You recently…
Groundbreaking Gemma 7B Performance Running on the Groq LPU™ Inference Engine
ArtificialAnalysis.ai Shares Gemma 7B Instruct API Providers Analysis, with Groq Offering Up To 15X Greater Throughput In the world of large language models (LLMs), efficiency…
ArtificialAnalysis.ai LLM Benchmark Doubles Axis To Fit New Groq LPU™ Inference Engine Performance Results
Groq Represents a “Step Change” in Inference Speed Performance According to ArtificialAnalysis.ai We’re opening the second month of the year with our second LLM benchmark…
Hey Zuck…
Hey Mark, Word has it that you’re building a fantastic second home on a big plot of land in Kauai, replete with tree houses, rope…