AI and customers: Which way are you going to lean in?

They’re not happy about it

piqsels, CC0, via Wikimedia Commons

Let’s say you are a car salesperson, and you are greeting a couple of prospective customers who first interacted with your website’s chatbot.

The chatbot was good enough to get them to your lot. But they aren’t exactly happy about it. They knew they were playing a game with a computer. They wanted information; the chatbot wanted them to set an appointment.

Now they’re here. And a lifetime of media has portrayed car salespeople in a pretty negative light. They expect the same treatment from you that they got from the AI. You know: it said the right things to manipulate them into coming to the store … you’ll say the right things to manipulate them into buying the car.

It’s showtime. How are you going to interact with customers like these and turn them into loyal advocates for you and your business?

Two looming consequences

To answer that, I’m thinking about the podcast episodes my partner Mike and I have recorded about AI.

Let’s assume that AI will only get better at predicting what would be the right things to say given a particular prompt. And let’s assume that the ability to deepfake video and audio continues to grow.

If we are at the advent of AI dominance, I can see two consequences:

  1. An uneducated society gets completely taken in by audio-visual media that says exactly what the AI predicts we need to hear in order to manipulate society toward a certain behavior: buying a product or service, agitating on behalf of a political viewpoint, etc.
  2. A jaded society stops trusting anything from strangers unless they are standing right in front of them and can thus prove that they are not AIs.

I think both will happen at the same time. People can be both jaded and uneducated.

CSGriffel, CC0, via Wikimedia Commons
Aristotle said you needed all three parts of this triangle in your communication.

When I say “uneducated,” I mean that we have not mastered the liberal arts: the skills needed for us to be free (“liberated”) people. For instance, we don’t study logic, so we miss it when an argument is peppered with bad reasoning. We don’t practice rhetoric, so we are hoodwinked by presentations that tug at our emotions rather than making a clear case.

This happens even when we don’t want it to. We all know that most of the news we see involves a photo op. And yet the lighting, the positioning of the camera, the wordplay, the after-the-fact summaries written by others–all of it can make us assume we know the truth. We are told what to think, because we didn’t learn how to think.

That’s the customer you are facing or will soon face. And we’d still like to serve them.

Leaning in (or not)

I had been thinking about this when two articles came across my feed.

The Wall Street Journal covered Po-Shen Loh, a Carnegie Mellon professor who is the US Math Olympiad coach. He is giving a talk in schools coast to coast titled “How to Survive the ChatGPT Invasion.” His advice:

Think about what makes humans human, and lean into that as hard as possible.

My Great Uncle Dick used to say, “If you’re smart, you’ll always have a job.”

We have to define “smart.” To Loh, it means adding value by identifying human pain points and solving those problems. AI will predict what the right answer would be. But humans can provide non-obvious answers. We can, he says, invent.

Stephen Elwyn RODDICK / Surveyor Mapping Mountain Topography
Topography and the common topics both view the same subject from different vantage points.

Through the question prompts known as the common topics, you can look at a subject from all angles–including non-obvious ones–and develop an expansive “inventory” of items that will best make your point or solve your problem. I call it an inventory because the common topics are part of the “canon of invention,” one of the five canons (rules or principles) that make up classical rhetoric.

Speaking of classical thought, Joshua Gibbs is a classical teacher who is as clever as he is insightful. And he is not worried about the fact that AI has made cheating incredibly easy. His argument: “ChatGPT Will Force Us To Be More Classical.”

Loh says to lean in to what makes you human. Gibbs adds: “First, anyone who suggests classical schools need to ‘lean in to ChatGPT’ ought to be sent to the salt mines.”

Instead, he argues that schools should ban ChatGPT and get back to the kind of assignments that can be trusted to come straight from the mind of an actual human. That means less homework, more oral examinations and memory projects.

His students recently had their final exam in the form of a recitation of a portion of Dante’s Divine Comedy. Now, if you think there is no way you or your kid or employee could do that, try it. It can be done. And having it stored in your mind will change you for the better:

The recitation assignment is the most classical kind of assessment there is. It can’t be faked. Can’t be cheated. It’s real. It involves the memory. It’s formative. And, by God, a teacher just can’t help loving his students more after he’s heard them recite beautiful poetry by heart.

Selling cars as a human

So what about your customers on the car lot? What’s the next step?

Quote Dante? Probably not. (But maybe!) Meanwhile:

Which way will you lean?

Formulas and manipulation will not work. Trying to out-AI the AI won’t either. What will work? One of the most human things on the planet: Salesmanship 101.

Loh would encourage you to identify their pain points so that you can come up with ingenious ways to solve their problems.

This will require a real conversation, not one where you steer them where you want them to go. You have to be genuinely willing to ask questions and explore. Only then do you draw on your knowledge and put together a creative solution to their needs (budget, appearance, family concerns, safety: you know the drill).

Speaking of knowledge, Gibbs would encourage you to master your product and share that knowledge. You know another term for memorizing? Knowing it “by heart.” It’s deep down in you. You might even find that you are kind of excited about it. Remember how, even before you were in the car business, you memorized horsepower figures, trim levels and the histories of your favorite cars?

Even if you don’t have the contagious excitement of an enthusiast, you will have the calming presence of an authority. Your customers will know they are in good hands.

dave_7 from Lethbridge, Canada, CC BY 2.0, via Wikimedia Commons

Both Loh and Gibbs would encourage you to be “human,” to be “real.” Humans have built-in deceit detectors, and AI is going to make people more jaded as time goes on. Your ability, in person, to address another human’s pain and to share your stored knowledge in a real and authentic way will be the antidote.

It is a tool

A LinkedIn post by Christopher Penn, who works for a marketing company that uses AI, recently went viral: he had ChatGPT rewrite a profanity-laden internal email. The original email was a response to an employee who had asked about reimbursement after dumping two months of invoices on Accounting in a single day.

ChatGPT’s rewrite was diplomatic and grammatically correct. AI can absolutely be a tool.

But here’s the thing: if you sent the ChatGPT version of the email, the recipient would smell your polite cover a mile away. It retreats into the third person, masking what the employee did to cause the situation.

And it offers the two parties no opportunity to make sure this crisis never rears its head again.

You know what a human would do before sending an email like that to an employee or customer? Figure out a way to connect, one-on-one. See our work on Crucial Conversations for more on that.

Crucial Conversations are the high-stakes, high-emotion ones. AI can help you prepare for them and suggest words to use. AI cannot have emotions in the conversation. It cannot care.

But you do. That’s what helps you in conversations–and with customers. Lean in to that.