Posts
@pmddomingos We love Nvidia, for training models. For inference, well, we have another offering from @GroqInc.
@rowancheung Hey @rowancheung, another competitive differentiator is responsiveness. LLMs run faster on Groq®’s LPU™ chips than on any other hardware, so if you want better answers fast, let @elonmusk know that you want @xai to run at #GroqSpeed. That, or you can wait, and wait, and wait for them to…