News in our world moves fast and in real time. Shouldn’t your access to that ever-evolving information move at the same pace? That’s where Groq and Perigon come in.
Perigon’s new web application seamlessly retrieves the latest financial (and other) news articles using its contextual intelligence technology. The application collects 20 million data inputs daily from over 150,000 global sources, clusters the information into common themes and events, and relationally connects it across people, companies, and locations. Perigon integrates these data points with LLMs powered by Groq, providing an ultra-low latency solution with real-time contextual knowledge.
In this demo, the LLMs are enhanced with Retrieval-Augmented Generation (RAG). RAG supplements an LLM’s contextual knowledge with information stored in a vector database, helping to overcome out-of-the-box limitations such as static training data, limited domain-specific knowledge, latency, context-window constraints, and hallucination, to name a few. This is a fundamentally different approach from browsing or keyword searching the web, because it is not beholden to traditional search dynamics to provide knowledge. The result is better subject-matter, temporal, and source relevance.
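To make the RAG flow concrete, here is a minimal sketch of the pattern: embed a user question, retrieve the most similar article snippets from a vector index, and pass them as context to a Groq-hosted LLM. This is not Perigon’s actual pipeline; the toy hash-based embedding, the in-memory index, the sample snippets, and the model name are all placeholder assumptions for illustration.

```python
# Minimal RAG sketch: retrieve relevant article snippets, then ask a
# Groq-hosted LLM to answer using that retrieved context in the prompt.
import numpy as np
from groq import Groq  # pip install groq

# Toy stand-in for a real embedding model (e.g. a sentence encoder);
# it hashes tokens into a fixed-size vector just so the example runs.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# In-memory "vector database": pre-embedded sample article snippets.
articles = [
    "Acme Corp shares rose 4% after a strong earnings report on Tuesday.",
    "The central bank held interest rates steady, citing cooling inflation.",
    "Acme Corp announced a partnership to expand into European markets.",
]
index = np.stack([embed(a) for a in articles])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query (cosine similarity)."""
    scores = index @ embed(query)
    return [articles[i] for i in np.argsort(scores)[::-1][:k]]

query = "What is the latest news about Acme Corp?"
context = "\n".join(retrieve(query))

# Augment the prompt with the retrieved context and call a Groq LLM.
client = Groq()  # reads GROQ_API_KEY from the environment
response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example model name; substitute as needed
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ],
)
print(response.choices[0].message.content)
```

In a production setting, the toy embedding and in-memory list would be replaced by a real embedding model and a vector database kept current with freshly ingested articles, which is what keeps the retrieved context timely.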