The spectre of artificial intelligence stealing jobs has moved from speculative fear to a tangible reality in corporate corridors. From dinner-table debates to anxious boardroom discussions, the question persists: Will AI render my role obsolete? For years, Silicon Valley's answer was a resounding and swift 'yes.' The tech world seemed ready to crown AI as the ultimate employee. However, a deeper look reveals a more complex and cautionary tale, with enterprise software giant Salesforce at its heart.
The AI Layoff Gamble and Its Unraveling
Salesforce, a global leader in customer relationship management software, initially appeared to champion the narrative of AI replacing human workers. The anxiety crystallised into hard numbers when the company executed significant layoffs, reducing its support staff from roughly 9,000 to about 5,000 employees, a cut of some 4,000 roles. CEO Marc Benioff openly linked the reduction to AI agents taking over tasks previously handled by people. The statement sent shockwaves through the industry, confirming fears that even seemingly secure white-collar jobs were not immune.
Yet, the promised seamless transition to an AI-driven workforce soon hit formidable roadblocks. As deployments scaled, the limitations of Large Language Models (LLMs) became glaringly apparent. Sanjna Parulekar, Senior Vice President of Product Marketing at Salesforce, admitted that internal trust in these models had sharply declined. The industry's vision of AI as a flawless, all-purpose cognitive worker began to fray under real-world pressures.
When 'Smart' Systems Prove Unreliable
The cracks in the AI facade were not minor glitches but fundamental flaws impacting core operations. Muralidhar Krishnaprasad, CTO of Agentforce, pointed out a critical weakness: if you give a large language model more than eight instructions, it starts dropping some entirely. While tolerable in casual chat, such unpredictability is a deal-breaker in enterprise environments where precision and compliance are non-negotiable.
Real-world consequences followed. Vivint, a major home security company with 2.5 million customers, experienced a failure where AI agents tasked with sending customer satisfaction surveys simply stopped without warning or explanation. In another instance, executives noted 'AI drift,' where chatbots designed for specific tasks would lose focus if users asked irrelevant questions, abandoning their primary function altogether. These were not mere bugs but failures that struck at the heart of operational trust and reliability.
The Quiet Return to 'Boring' and Dependable Tech
In response to these setbacks, Salesforce's strategy has undergone a significant, telling pivot. The company is now championing 'deterministic' automation: rule-based systems that are less glamorous but far more predictable. This marks a retreat from its aggressive AI-first messaging. Even Marc Benioff has recalibrated, stating that strong data foundations, not AI models, are now the top strategic priority. The irony is stark: at the very moment AI is credited with eliminating thousands of jobs, its champion is pulling back from placing too much trust in it.
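What 'deterministic' automation means in practice is essentially the classic rules engine. As a hedged illustration only (the names below are hypothetical and not Salesforce's actual implementation), a rule-based ticket router behaves identically on identical input, which is precisely the property the LLM agents lacked:

```python
# A minimal sketch of deterministic, rule-based automation.
# All names (Ticket, route_ticket, RULES) are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Ticket:
    subject: str
    priority: int  # 1 (low) .. 5 (critical)


# Ordered rules: (predicate, queue). First match wins, so the outcome is
# fully determined by the input: the same ticket always lands in the same
# queue, unlike an LLM agent that may drop or reorder its instructions.
RULES = [
    (lambda t: t.priority >= 4, "escalations"),
    (lambda t: "refund" in t.subject.lower(), "billing"),
    (lambda t: "install" in t.subject.lower(), "field-service"),
]


def route_ticket(ticket: Ticket) -> str:
    for predicate, queue in RULES:
        if predicate(ticket):
            return queue
    return "general"  # explicit, auditable fallback


# Same input, same output, every time.
assert route_ticket(Ticket("Refund request", 2)) == "billing"
assert route_ticket(Ticket("Refund request", 5)) == "escalations"
```

The trade-off is equally plain: such a system only handles the cases its authors anticipated, but every decision it makes can be audited and reproduced, which is why enterprises are drifting back towards it for compliance-critical workflows.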
So, is AI taking jobs? The Salesforce story suggests we are asking the wrong question. The more precise inquiry is: What kind of work is your job built upon? Repetitive, high-volume, and error-tolerant tasks are undoubtedly vulnerable to automation. However, roles demanding human judgment, nuanced understanding, context, and accountability remain firmly out of AI's current reach.
The uncomfortable revelation is that companies may not be replacing humans because machines are superior, but because they are optimising for lower costs and a higher tolerance for error. AI, for now, acts less as an autonomous worker and more as an amplifier: it can magnify efficiency, but also mistakes and poor organisational choices. Salesforce's partial retreat is not a story of AI's failure but a crucial reality check. The future of work will be shaped not just by how fast machines learn, but by how wisely we understand what they still cannot be trusted to do.