AI sucks at naming. There, I said it.
Tools define eras. The cotton gin, nitrogen fertilizer, the internal combustion engine, the internet—all changed the lives of millions of people. New tools bring great capabilities and promise. They also introduce new risks and problems. Tools aren’t inherently good or bad–that comes down to who’s using them and how. Nuclear energy comes to mind. France generates about 70% of its electricity with nuclear reactors. At the same time, there are over 13,000 nuclear warheads in existence, at least a couple of which are lost. So let’s take a look at one of the newest tools threatening to define an era: Artificial Intelligence (AI).
AI is an undoubtedly pervasive tool that has the potential to create big shifts in society. Some would argue it already has. AI will make some jobs easier while making others redundant. Since it’s particularly good at doing homework and taking tests, schools and other institutions have begun to change how they handle testing and evaluation. Again, this is neither good nor bad. It just is. AI, in particular tools like ChatGPT, changes our relationship to information.
For now, humans still have to ask the questions and consider the answers. AI is at times confidently incorrect. For example, I asked ChatGPT to crack an unbroken line of Enigma code from WWII and it responded that the answer appeared to be “THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG XYZ.” In this sense it’s an evolution of Google search, but the format of ChatGPT’s response is more assured than an infinite scroll of Google results from myriad sources. Ask and you shall receive. So if you ask it to name your new wine bar, it will. But are the names any good? That’s still a decision that happens between the ears. If you want to read about a namer asking ChatGPT to name things, you can check out Margaret Wolfson’s piece.
Tools are only as good as the people wielding them.
Some people see this as an opening for “prompt engineers” or whatever people want to call them. Yes, the way you ask a question plays an important role in the answer. This predates AI by however many years people have been asking questions.
The more important question is: who is validating the answers? Naming is an art. There is never a correct answer. Having deep knowledge of naming gives you a sense of how to wield the tool to aid in name generation and, more importantly, of which responses actually contain good names. This is not “Come up with a name for my new Denver-based coffee shop that specializes in Ethiopian coffees.” It’s much more like “What are some English words that originated in Amharic?” This is how I found out, or so ChatGPT claimed, that “shamrock” comes from the Amharic word for umbrella, “shamarock.” AI is a decent research tool and a poor creative decision maker.
This is not to downplay the benefits of using ChatGPT and other AI tools. That question I just asked about English words from Amharic would be difficult and time-consuming to answer using Google or other ancient yet still very persistent technologies like books. When I asked ChatGPT to calculate the tensile strength of a piece of fabric a few weeks ago, I was saved from revisiting high school physics. The way it showed how the answer was derived actually did a good job of helping me relearn how to calculate tensile strength. It did my homework and taught me the lesson at the same time. Neat.
Tools, however good or pervasive, create myopia.
To a hammer, every problem is a nail. One of the reasons people love AI is that no matter the question, it spits out an answer. So if you ask it to come up with names for something, it will. But just because a cutting-edge tool came up with an answer, that doesn’t make the answer right, or good. The black-box nature of neural networks makes it hard, if not impossible, to understand how or why an answer is presented. But for some people, the idea that a new technology came up with an answer is enough. So novel.
“The truth knocks on the door and you say, ‘Go away, I’m looking for the truth,’ and so it goes away.”
–Robert Pirsig, Zen and the Art of Motorcycle Maintenance
With naming, something amazing could be staring you in the face and you still won’t see it.
Most great names don’t fit pre-existing patterns, so looking at what’s already in use isn’t a good signal. Neither is looking at what are generally seen as “good names.” People and AI both have a hard time separating names from the underlying entities they symbolize. Is Apple a good name or a good company? Is Yahoo a good name or a good company? I can tell you that our clients stopped using Yahoo as an example of a good name when Yahoo stopped being a relevant company.
Naming is a process, not an exercise. There are a lot of considerations and components, both practical and creative. There are often a lot of people involved, and they’re often unsure about what they want. So if we’re extending the tool analogy: a saw is a very helpful tool for making a chair, but having a saw doesn’t mean you have a chair. It also doesn’t mean that you need a saw to make a chair. And it definitely doesn’t mean you have a comfortable chair, or even an Eames chair. So put ChatGPT in the hands of people who need a name but aren’t sure what the name is supposed to do, or who it’s for, or how it might be different. Even if we assume good prompts are being used, there could be (but probably won’t be) several good names hiding in plain sight.
One of my favorite things about the naming business is that people are always showing up with new companies and products. Naming lives on the edge of what’s being built. We’ve seen many different eras: crypto, DevOps, big data, platforms, apps, eCommerce. Now it’s AI. So if AI were so good at generating names, how do we end up with AI clients? Probably for the same reason I didn’t build my living room chairs even though I have a bunch of wood and a saw.