Groq CEO Jonathan Ross Exposes 2 Major ChatGPT Flaws, Calls for AI Speed Revolution

In a candid revelation that has sparked discussion within the tech community, Jonathan Ross, the founder and CEO of AI chip company Groq and a former key engineer at Google, has pointed out two significant shortcomings in OpenAI's widely used ChatGPT. His critique centres on the current limitations of AI processing speed and infrastructure, which he believes hinder true productivity and revenue generation.

The Two Critical Friction Points with ChatGPT

While acknowledging ChatGPT's capabilities, Ross identified two major pain points that disrupt the user experience, especially for power users and researchers. He shared these insights in a recent discussion, a clip of which was posted on the social media platform X (formerly Twitter).

The first issue is excessive latency during deep research tasks. Ross explained that when he poses complex queries to the AI chatbot, the wait time for a response can stretch to several minutes. This delay, he argues, breaks the flow of thought and collaboration. "I ask a question, and it comes back 10 minutes later. That's 10 minutes where I can't be asking subsequent questions... It slows me down. It's frustrating," he stated. This lag creates a barrier to seamless human-AI interaction.
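Ross's arithmetic can be made concrete with a back-of-the-envelope sketch. The figures below are illustrative assumptions, not measurements: if a user works serially, waiting for each answer before asking the next question, response latency puts a hard ceiling on how many exchanges fit into an hour.

```python
def questions_per_hour(latency_minutes: float) -> float:
    """Upper bound on serial question-and-answer rounds per hour,
    assuming the user waits for each answer before asking the next."""
    return 60.0 / latency_minutes

# A 10-minute round trip (Ross's example) caps a serial workflow
# at 6 exchanges per hour; a 5-second response allows 720.
slow = questions_per_hour(10)       # deep-research-style delay
fast = questions_per_hour(5 / 60)   # near-real-time response
print(slow, fast)
```

The gap between those two numbers is the productivity argument in miniature: at interactive speeds, the bottleneck moves from the machine back to the human.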

The second problem he highlighted is the 'rate limit barrier'. This is a common system restriction that caps how many requests a user can make within a given timeframe, to prevent server overload and ensure fair access. For users engaged in intensive work, the limit can feel like an artificial ceiling: requests are temporarily blocked or slowed, stifling productivity.
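Rate limits of this kind are often implemented with a token bucket: each user gets a budget of tokens that refills at a fixed rate, and a request is rejected when the budget is empty. The sketch below illustrates the general technique only; it says nothing about how OpenAI actually enforces its limits.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter: allows bursts of up to
    `capacity` requests, refilled at `rate` tokens per second.
    An illustration of the technique, not any vendor's implementation."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller is throttled until the bucket refills

# e.g. a burst of 3 requests, then one new token every 20 seconds
bucket = TokenBucket(capacity=3, rate=1 / 20)
print([bucket.allow() for _ in range(5)])  # first 3 pass, then throttled
```

This is the 'artificial ceiling' from the user's point of view: once the bucket is empty, further requests fail or queue regardless of how urgent the work is, which is precisely the friction Ross describes for intensive research sessions.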

An 'Anti-Competition' Philosophy for AI Hardware

Ross's comments extend beyond mere criticism of ChatGPT; they form part of a broader philosophy about the direction of AI hardware development. When questioned about whether he aims to challenge established players like Nvidia, Ross presented a contrarian view.

"We are not competing with Nvidia. Competition is a waste of money," Ross asserted. He elaborated that, in his view, competition often means copying what another company is already doing, which leads to wasted research and development resources. "They've done what you should be doing. You should be doing things that haven't yet been done," he added.

For Groq, the mission is not to replicate existing graphics processing units (GPUs) but to pioneer new approaches. Ross emphasised that the industry must move beyond copying and instead solve pressing problems that need immediate attention. His company's focus is on scaling up computing capacity to generate AI output at unprecedented speeds, enabling real-time collaboration between humans and artificial intelligence.

The Path Forward: Speed as a Revenue Generator

Ross's argument connects technical performance directly to economic potential. He believes that by solving the twin problems of latency and rate limits, the AI industry can unlock new levels of utility and, consequently, generate more revenue. The current hardware infrastructure, he suggests, is creating unnecessary bottlenecks.

The call is clear: to advance AI from a tool that responds with delays to a partner that interacts in real-time. This shift requires a fundamental rethinking of computing architecture, moving away from legacy designs towards solutions built specifically for the demands of modern large language models and generative AI. Ross's critique underscores a pivotal moment where the focus is shifting from mere capability to the quality and speed of the user experience.