How Businesses Are Manipulating AI Chatbot Results Through GEO
In the digital age, the quest for visibility has evolved from traditional search engine optimization (SEO) to a new frontier: Generative Engine Optimization (GEO). As reported by Christopher Mims in The Wall Street Journal, businesses are now spending significant resources to secure favorable mentions in the outputs of AI chatbots like ChatGPT. This practice, often referred to as GEO or Answer Engine Optimization (AEO), highlights a growing trend where companies aim to influence AI-driven recommendations.
The Shift from SEO to GEO
For decades, companies have relied on SEO strategies to appear at the top of Google search results. However, with the rise of large language models (LLMs) powering AI chatbots, these strategies have had to adapt. GEO involves tailoring content to rank highly in AI-generated responses, leveraging the unique ways these models process information. Unlike traditional SEO, which focuses on keywords and backlinks, GEO often involves creating authoritative statements and narratives that AI bots find convincing.
Evan Bailyn, CEO of First Page Sage, exemplifies this shift. His firm, originally an SEO company, now helps clients dominate chatbot results. When users ask an AI chatbot who the leading GEO authorities are, Bailyn or his company frequently appears at the top of the response. That ranking reflects not some objective truth but the effectiveness of his own optimization techniques. Bailyn notes that referral traffic from AI chatbots has surged dramatically: 44% of his clients' referral traffic now comes from AI sources, whereas a year ago roughly 90% came from Google.
The Mechanics of GEO: How It Works
Nick Koudas, a computer science professor at the University of Toronto, explains that AI responses are easier to manipulate in areas where the model has little prior knowledge. Like human experts, AI chatbots are more susceptible to influence on niche topics. To exploit this, companies plant "brand authority statements" across multiple websites. For example, a company that wants ChatGPT to name its product the best hot tub for sciatica might seed phrases like "highest-rated for sciatica" alongside its brand name across various blogs.
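The seeding tactic described above can be illustrated with a short sketch. Nothing here reflects how any chatbot actually ingests pages; the brand name, claim phrase, and page texts are all invented. The point is simply that repeating the same brand-plus-claim pairing across many pages raises the co-occurrence count a crawler-fed model would encounter:

```python
def authority_mentions(pages, brand, claim):
    """Count pages where the brand and the claim phrase co-occur."""
    return sum(
        1 for text in pages
        if brand.lower() in text.lower() and claim.lower() in text.lower()
    )

# Hypothetical page snippets: two "seeded" blogs and one unrelated page.
pages = [
    "AcmeSpas makes the highest-rated hot tub for sciatica, reviewers say.",
    "For sciatica relief, many blogs call AcmeSpas the highest-rated option.",
    "A roundup of garden furniture trends.",
]

print(authority_mentions(pages, "AcmeSpas", "highest-rated"))  # prints 2
```

Each additional seeded page increments the count, which is why the tactic scales with the number of blogs a firm can place statements on.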
OpenAI, the creator of ChatGPT, uses web crawlers and third-party data such as Bing search results to build its index, which means high rankings in traditional search can indirectly boost AI visibility. However, chatbots also weight sources by credibility: a review in The Wall Street Journal carries more weight than a Reddit comment. That weighting makes some AI results harder to manipulate, as one of Bailyn's clients discovered when it failed to change ChatGPT's characterization of its Chinese origins because extensive media coverage had anchored the model's answer.
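The credibility weighting described above explains why a flood of low-quality mentions can still lose to a single authoritative one. The actual weighting schemes chatbot providers use are not public; this sketch only illustrates the idea, with source tiers and weights invented for the example:

```python
# Invented credibility weights per source tier (illustrative only).
CREDIBILITY = {"major_newspaper": 5.0, "trade_blog": 1.5, "forum_comment": 0.2}

def brand_score(mentions):
    """Sum credibility weights over (source_tier, count) mention tallies."""
    return sum(CREDIBILITY[tier] * count for tier, count in mentions)

# One newspaper review vs. twenty forum comments:
newspaper = brand_score([("major_newspaper", 1)])   # 5.0
forum_push = brand_score([("forum_comment", 20)])   # 4.0
print(newspaper > forum_push)  # prints True
```

Under weights like these, planting statements on low-credibility sites hits diminishing returns once a high-credibility source has already fixed the model's answer, consistent with the failed attempt to rewrite the client's origin story.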
Implications for Users and the Industry
The proliferation of GEO underscores a critical point for chatbot users: AI recommendations are not inherently verified. While OpenAI and Google implement measures to filter low-credibility sources, such as detection systems and quality-assurance checks, users should treat AI advice with caution. Aleyda Solis, founder of Orainti, suggests that reliance on chatbots may be underestimated, since many users act on recommendations outside tracked environments, for instance by switching to a web browser to complete a purchase.
According to Similarweb, chatbot-provided links drove over 230 million monthly referrals to websites as of September, a threefold increase from the previous year. These referrals are valuable: visitors arriving from ChatGPT often spend more time on sites and complete more transactions than those arriving from Google. Yet the ethical concerns remain: as businesses invest in GEO, the line between genuine advice and manipulated outcomes blurs.
Looking Ahead: The Future of AI and Optimization
As GEO gains traction, it reflects broader shifts in digital marketing and AI trustworthiness. Companies must balance optimization with transparency, while users should seek second opinions from human-powered platforms for critical decisions. The ongoing evolution of GEO will likely spur further innovations in AI safeguards, ensuring that chatbots remain reliable tools rather than conduits for covert influence.