AI naming is a mess. Can someone fix it?

Introduction

Every time I try to pick an AI model, I feel like I’m cracking a secret code. Do I go with GPT-4o or o3-mini-high? Claude 3.7 Sonnet or Claude 3.5 Haiku? And what exactly is a “Gemini 1.5 Flash” supposed to do?

Apparently, I’m not the only one confused. Julian Lehr, a writer at Linear, tweeted: “There’s a massive opportunity for a branding agency that specializes in AI model names.” Designer Sam Henri Gold added: “Naming things is famously hard, but did AI companies even try?”

Why are AI model names so bad?

AI names are all over the place. Some sound like secret project codes (o3-mini-high). Others could be the title of a fantasy novel (Gemini, Claude, Grok). And then there are the endless version numbers—3.5, 3.7, 4.0, 4o—like someone dropped a bag of numbers on the floor and decided to use them all.

The problem? These names make it hard to tell what the models actually do. If users have to guess or look up a chart just to figure out which AI is best for their needs, something is wrong. AI is supposed to simplify our lives, not make choosing one a frustrating puzzle. It’s okay if you don’t agree, though. 😀

The reasons behind the naming mess

  • Tech jargon overload – AI companies love complicated names. Maybe they think it makes them sound more advanced. But most people don’t care about “o3-mini-high”—they just want to know if it can write an email or summarize a report.
  • The illusion of progress – Adding small version updates (3.5, 3.7, 4.0) makes it seem like big changes are happening, even when the improvements are tiny. Does GPT-4o really feel that different from GPT-4? I can’t tell, and I suspect most users can’t either.
  • Hiding the competition – If every company used names like “Fast, Faster, Fastest,” we’d instantly know which one is better. Instead, we get Sonnet, Haiku, Opus—pretty words with no clear meaning.
  • Mixing up models and assistants – Google’s “Gemini” is both the assistant and the model. OpenAI’s ChatGPT runs on GPT-4o, but also o1 and o3-mini. It’s a confusing mess that makes it hard to know what you’re using.

Tech companies have always been bad at naming

This isn’t new. Tech companies have always had bizarre naming habits:

  • Mythology & Fantasy – Palantir, Anduril, Samsara, Asana (thanks, Tolkien and Hinduism).
  • Random Acronyms – LLaMA (Large Language Model Meta AI), GPT (Generative Pre-trained Transformer).
  • Vague, Pretentious Words – Gemini, Claude, Grok (which sounds like a caveman grunt).

AI has taken this tradition and made it worse by mixing cryptic codenames with grandiose titles. The result? A mess that even tech insiders need a cheat sheet to navigate.

How to fix AI naming

  1. Keep version numbers internal – Nobody says “I’m using an iPhone with the A17 Pro chip.” They say “I have an iPhone 15.” AI companies should do the same.
  2. Simple, hierarchical naming – Basic, Pro, Ultra is clearer than Haiku, Sonnet, Opus.
  3. Focus on what it does – Instead of GPT-4o, call it Multimodal AI or Advanced Assistant.
  4. Test names on real people – If your grandma can’t guess what o3-mini-high means, it’s a bad name.
  5. Be distinct, not just clever – Copilot is a great metaphor, but Microsoft overused it. Apple Intelligence is simple and obvious—no decoding needed.

A quick guide to AI naming schemes (because you need it)

  • OpenAI – Started simple (GPT-1, GPT-2, GPT-3), now a mess (GPT-4o, o1, o3-mini-high).
  • Google – Went from Bard to Gemini, with models like Gemini 1.5 Flash (sounds like a camera feature).
  • Anthropic – Claude (the assistant) with models Haiku, Sonnet, Opus (poetic but confusing when versions like 3.7 Sonnet get added).
  • Meta – LLaMA 3.1, 3.2, 3.3 (because decimals make it sound scientific).
  • Microsoft – Everything is Copilot (even when it’s not).
  • Apple – Apple Intelligence (finally, a name I like, and one that actually makes sense).

AI is changing the world, but its names are stuck in developer mode. If companies want AI to go mainstream, they need to stop making users decode their product names.

Until then, we’ll be here poking fun. 😀

Let’s not be that judgmental.

Yes, AI naming schemes are a mess—but let’s not lose sight of what really matters. While I’m stuck debating whether o3-mini-high is better than Gemini 1.5 Flash, AI itself is advancing at a staggering pace.

Since the start of 2025, we’ve seen AI evolve from a handy tool into something far more powerful—almost like a digital collaborator. The names might be confusing, but the capabilities? Undeniably impressive.

How AI is actually getting better (despite the bad names)

1. Real-Time, Multimodal AI – Finally feels human

Remember when AI responses had a noticeable lag? Those days are gone. Models like GPT-4o and Gemini 1.5 now process text, images, and voice in real time, making conversations feel fluid and natural.

  • Voice AI that doesn’t sound robotic – ChatGPT can now detect tone, adjust cadence, and even interrupt politely (just like a human).
  • Instant visual understanding – Upload a blurry photo of a broken car part, and AI can diagnose the issue, suggest fixes, and even link to repair tutorials.
  • Seamless switching between modes – Start with text, switch to voice, throw in an image—AI keeps up without missing a beat.
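
To make that mode-switching concrete, here’s a minimal sketch of sending text plus an image in a single request through the OpenAI Python SDK. Treat it as an illustration rather than a recipe: the model name, the image URL, and the question are placeholders, and you’d need your own API key configured.

```python
# Minimal sketch: one request that mixes text and an image.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal model you actually have access to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What part is broken here, and how do I fix it?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/broken-part.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The point isn’t this particular SDK; it’s that “text or image or voice” is no longer three separate products, just different content types in the same call.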

2. Smaller, Faster, On-Device AI – No Cloud Needed

Gone are the days when using AI meant relying on massive server farms. Companies like Meta, Mistral, and Apple have optimized models to run locally on phones, laptops, and even wearables.

  • Full AI capabilities offline – Need a smart assistant on a flight? No problem.
  • Better privacy – Sensitive data stays on your device instead of being sent to the cloud.
  • Lower latency – No more waiting for servers to respond; AI reacts instantly.
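
If you want to see what “no cloud needed” looks like in practice, here’s a rough sketch of running a small open-weights model locally with Hugging Face’s transformers library. The model name is only an example, not a recommendation, and quality on a laptop CPU is modest.

```python
# Rough sketch: run a small open-weights model entirely on your own machine.
# Assumes `pip install transformers torch`; no API calls, no cloud.
# The model name below is an example, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative small model
)

prompt = "In one sentence, why are AI model names so confusing?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```

Once the weights are downloaded, that script works on a plane just as well as at your desk, which is the whole point.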

3. AI Assistants Are Becoming True Digital Agents

AI isn’t just answering questions anymore—it’s taking action.

  • Auto-scheduling & workflow automation – Tell your AI, “Plan a team meeting, book flights, and draft an agenda,” and it handles everything (see the sketch after this list).
  • Personalized research assistants – AI can now digest 100-page reports, summarize key insights, and even debate you on the findings.
  • Self-improving AI – Some models now learn from user feedback, refining responses over time without manual updates.
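
Under the hood, most of these “take action” assistants reduce to a simple loop: the model proposes a tool call, your code runs it, and the result goes back to the model until the task is done. Here’s a deliberately toy sketch of that pattern in plain Python; the tools and the ask_model function are hypothetical stand-ins, not any vendor’s real API.

```python
# Hypothetical sketch of an "agent" loop: the model chooses a tool,
# our code executes it, and the result is fed back until the task is done.
# schedule_meeting, book_flight, draft_agenda, and ask_model are stand-ins,
# not any vendor's actual API.

def schedule_meeting(team: str, topic: str) -> str:
    return f"Meeting with {team} about '{topic}' booked for Tuesday 10:00."

def book_flight(destination: str) -> str:
    return f"Flight to {destination} booked."

def draft_agenda(topic: str) -> str:
    return f"Agenda drafted: 1) {topic} status, 2) blockers, 3) next steps."

TOOLS = {
    "schedule_meeting": schedule_meeting,
    "book_flight": book_flight,
    "draft_agenda": draft_agenda,
}

def ask_model(goal: str, history: list) -> dict:
    """Stand-in for a real model call that decides the next tool to use."""
    plan = [
        {"tool": "schedule_meeting", "args": {"team": "product", "topic": "Q2 launch"}},
        {"tool": "book_flight", "args": {"destination": "Berlin"}},
        {"tool": "draft_agenda", "args": {"topic": "Q2 launch"}},
    ]
    if len(history) < len(plan):
        return plan[len(history)]
    return {"tool": None, "args": {}}  # model decides it is done

def run_agent(goal: str) -> list:
    history = []
    while True:
        step = ask_model(goal, history)
        if step["tool"] is None:
            return history
        result = TOOLS[step["tool"]](**step["args"])
        history.append(result)  # feed the result back on the next turn

print(run_agent("Plan a team meeting, book flights, and draft an agenda"))
```

Real agent frameworks add planning, retries, and permissions on top, but the shape of the loop is the same.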

4. Ethical AI & Transparency Gains

After years of criticism, AI companies are finally prioritizing explainability, bias reduction, and user control.

  • “Why did you say that?” – Many AIs now provide sources, reasoning steps, and confidence scores for answers.
  • Customizable guardrails – Users can adjust AI behavior (e.g., “Be more formal” or “Avoid speculative answers”).
  • Bias audits – Independent researchers are testing models for fairness, forcing companies to address flaws.

5. AI-Powered Creativity Is Exploding

From hyper-realistic images to AI-generated music, creative tools are advancing at an insane pace.

  • Photorealistic AI art – Midjourney v6 and the latest DALL·E models can now generate images that are hard to distinguish from real photos.
  • AI music that doesn’t suck – Tools like Udio and Suno produce full songs in any genre—some even go viral.
  • AI video generation – Runway and Pika Labs allow filmmakers to generate short clips from text prompts.

6. Enterprise AI Is Revolutionizing Industries

AI isn’t just for consumers—businesses are adopting it at scale.

  • Legal AI – Tools like Harvey AI draft contracts, predict case outcomes, and flag legal risks.
  • Medical AI – AI diagnostics are now FDA-approved for certain conditions, assisting (not replacing) doctors.
  • Financial AI – Real-time fraud detection, personalized wealth management, and automated compliance checks.

What’s coming in Q2 2025?

The next few months will bring even bigger leaps:

Faster, More Responsive AI

  • Near-instant voice interactions (no more awkward pauses).
  • AI that can watch live video and provide real-time commentary (think: sports analysis or security monitoring).

Smarter Memory & Personalization

  • AI that remembers your preferences across months, not just one chat session.
  • Custom-trained mini-models tailored to your writing style, work habits, or hobbies.

AI Coding Assistants That Feel Like a Senior Dev

  • Debugging, refactoring, and even writing entire apps from scratch.
  • AI that understands your codebase and suggests optimizations.

Emotion & Voice Intelligence

  • AI that detects stress, excitement, or hesitation in your voice—and adjusts responses accordingly.
  • More natural, expressive synthetic voices for audiobooks, podcasts, and virtual assistants.

Next-Gen AI Search

  • Conversational search that understands context, not just keywords.
  • AI that cross-references multiple sources to give balanced, nuanced answers.

The bottom line: Don’t judge AI by its name

Yes, the naming schemes are chaotic. But behind the o3-mini-highs and Gemini 1.5 Flashes lies some of the most transformative tech of our time.

Instead of getting hung up on labels, focus on what AI can do—because in 2025, it’s doing more than ever.

Interested in learning more about AI? Read our previous blogs!

 
