Why AI Became 2025's Designer of the Year, Not the God We Expected

For years, the promise of Artificial General Intelligence (AGI) loomed like a cinematic deus ex machina—a god from the machine ready to solve everything. Instead, the reality for many has been closer to Morpheus's lament in *The Matrix Reloaded*: "I have dreamed a dream, but now that dream is gone from me." The sentiment echoes among those who envisioned Terminators but got chatbots that forget the conversation after three prompts. Yet amid these adjusted expectations, one domain has seen a silent, profound revolution: generative AI, which has earned the unofficial title of Designer of the Year.

From Wilderness Prophets to Silicon Valley Kings

The journey was improbable. Just over a decade and a half ago, neural network researchers were modern-day prophets in the wilderness. Among them was Professor Geoffrey Hinton, whose research group used NVIDIA's CUDA platform to make early strides in speech recognition. The real breakthrough, however, came from his students. Alex Krizhevsky and Ilya Sutskever trained a visual neural network, the one that became known as AlexNet, on just two consumer-grade NVIDIA GTX 580 graphics cards, running the job from Krizhevsky's parents' house. In about a week, and at the cost of a hefty electricity bill, their model rivalled contemporary Google efforts that harnessed some 16,000 CPU cores.

This was the inflection point. If neural networks could see, what else was possible? The answer, as NVIDIA's Jensen Huang would later demonstrate, was nearly everything. When ChatGPT launched, powered by NVIDIA's chips, the market's perception shifted overnight. Valuations skyrocketed. Hinton would later share the Nobel Prize in Physics in 2024, while Huang became the undisputed arms dealer of the AI race, creating a company where countless employees became millionaires.

The Rise of the Deus Artifex: A God Who Builds

The initial dream of AI replacing human labour with superhuman efficiency largely fizzled, often flooding offices with AI-generated "slop." A viral post by the author Joanna Maciejewska captured the disillusionment: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." Yet beneath this surface disappointment, a transformation was brewing in visual design.

For a long time, AI images were a joke—recognizable by wax-like faces, mangled fingers, and garbled text. That changed radically in 2025. The shift was not philosophical but an engineering one. Early diffusion models such as DALL-E 2 and the first Stable Diffusion releases produced each picture by denoising random static into a single best guess, with no memory between attempts, like a myopic person trying to recreate Van Gogh's *Starry Night* from memory. The move to natively multimodal models, OpenAI's GPT-4o, Google's Gemini 2.5 Flash Image (nicknamed Nano Banana), and Stable Diffusion 3, changed everything.
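To make the contrast concrete, here is a minimal sketch of that older, stateless workflow, assuming the Hugging Face diffusers library; the checkpoint name, prompts, and file names are illustrative:

```python
# The old, stateless text-to-image loop: every call starts from fresh
# random noise, so nothing carries over between prompts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Two independent pulls of the slot machine: the "same" detective will
# come back as two different people, because the model has no memory.
first = pipe("a red-haired detective in a rainy alley").images[0]
second = pipe("the same red-haired detective drinking coffee").images[0]
first.save("take_one.png")
second.save("take_two.png")
```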

The new multimodal systems gained memory, consistency, and a grasp of composition. They understood light sources, shadows, and spatial relationships. Characters could now persist across prompts; colour palettes held. Image generation stopped being a random slot machine and became a reliable tool. This technical leap manifested culturally in the Studio Ghibli-style filter craze, where people transformed personal photos into emotionally resonant, anime-style homages.
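Under the hood, the persistence comes from statefulness: the model revises its previous output instead of re-rolling from noise. A minimal sketch of that conversational loop, assuming the OpenAI Python SDK and its image generation and editing endpoints; the model name, prompts, and file names are illustrative:

```python
# The new, stateful loop: the previous output becomes the next input,
# so the character and palette can persist across turns.
import base64
from openai import OpenAI

client = OpenAI()

# First turn: create the character.
result = client.images.generate(
    model="gpt-image-1",  # illustrative model name
    prompt="a red-haired detective in a rainy alley",
)
with open("detective.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))

# Second turn: edit the existing image instead of rolling new noise,
# which is what keeps the detective recognizably the same person.
edited = client.images.edit(
    model="gpt-image-1",
    image=open("detective.png", "rb"),
    prompt="the same detective, now drinking coffee in a diner",
)
with open("detective_diner.png", "wb") as f:
    f.write(base64.b64decode(edited.data[0].b64_json))
```

Editing an existing image constrains the model far more than a fresh prompt can, and that constraint is precisely what lets characters and palettes hold.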

Collapsing the Cost of Visual Competence

The true impact, however, moved beyond art. With layout and consistency mastered, AI exploded in practical domains: infographics, diagrams, explainers, and presentations. Here, Google's decades of studying human attention gave it an edge, producing charts with intentional white space and legible labels. AI visuals became communicative, not just decorative.

Cartoons improved because AI learned controlled exaggeration. The result is not a sentient, creative god—a deus ex machina—but a deus artifex: a god who builds. It's a system that understands composition, respects constraints, and delivers relentlessly adequate results instantly. AI has collapsed the cost of visual competence, erasing the need for lengthy apprenticeships and the humiliation of early bad drafts.

The cost of this democratisation is aesthetic convergence. When the easiest path to beauty is the most travelled, styles flatten. Van Gogh's sunflowers weren't painted because they were trending. The machine can replicate the surface of genius but not the human necessity behind it. We did not get an omniscient god or a devil. We got a better, faster, more accessible craftsman. So perhaps Morpheus was mistaken. The dream of AI wasn't taken away. It simply, and powerfully, changed form.