Responsibility at the Core

Written by: Samidh Chakrabarti

Twenty years ago when I was a graduate student at the MIT Artificial Intelligence Lab, I burned the midnight oil trying to get neural networks running as control systems for Mars rovers. Sadly, this didn’t work at all. The idea was solid, but as it turned out, there just wasn’t sufficient computing power to make this vision a reality.

Fast forward to today and, thanks to advances in hardware, much of what was previously impossible is now within reach. Imagine being a bridge builder who has only ever known wood and now has access to steel for the first time. That is the position AI developers find themselves in today. As I witnessed as a product leader at Google and Facebook, Search and Newsfeed already essentially boil down to thousands of interacting neural nets.

My first autonomous robot competition at MIT Robotics in 1998.

Soon, every dimension of civilization will be similarly reshaped as we enter an epoch of exponential intelligence. 

Bringing such a disruptive new technology into the world opens up tremendous opportunities, but also requires a deep commitment to responsibility. Perhaps more than most, I am attuned both to the promise that AI holds to serve humanity as well as the risks that could materialize. Most recently, I founded Facebook’s Civic Integrity product initiative, where I worked to protect societies around the world from the downsides of social networks, such as election interference and ethno-religious violence.

One of the most important things I learned from leading Civic Integrity at Facebook is that you can’t solve socio-technical problems retrospectively or from the periphery. You have to have your eyes wide open from the outset and build the notion of responsibility into the core platform itself. You can’t just hope for the best or assume you can fix problems later. As we enter this new frontier of AI, it is imperative that all our tools are as forward-thinking as possible.

Groq is one such tool: a revolutionary new architecture purpose-built for accelerating machine learning. From designing safer autonomous vehicles to bringing real-time control of fusion reactors within grasp, the developer-first Groq computing platform is already enabling breakthroughs in real-time AI. Argonne National Labs, for example, recently used Groq to accelerate COVID drug discovery by over 300x relative to a leading GPU.

At the Groq Lab in 2022.

Having spent my whole career working at the intersection of technology and tough societal problems, I can imagine no more important mission at this moment in history than democratizing access to AI. While this transformative technology is still in its formative phase, it must be shepherded in a way that empowers humanity. That’s why I’m excited to publicly share that I’ve joined Groq as Chief Product Officer.

At Groq we recognize that we have the potential to build foundational infrastructure for the AI revolution, and so we are committing to the world that we will build trust and safety into our platform from the ground up. 

On this front, I’m particularly eager to work with Edward Kmett, who is our head of software and a pioneer in AI safety. We’ll share much more about this in the future.

For now, if you are an AI developer who is looking to move on from wood and build with steel, a PM or engineer who wants to build a new foundation for the AI era, or a member of the broader Responsible AI community who is curious to collaborate, please get in touch! Democratizing access to AI is truly a moonshot that matters and it will take all of us to make it happen. Let’s build this new frontier together – with responsibility at the core. 


