February 28, 2023

No, ChatGPT did not write this article.

ChatGPT ushered in a new era of conversational AI. Does it present a new set of challenges and opportunities? Yes. Is it the end of human ingenuity as we know it? Probably not.


ChatGPT did not write this article. But if it did, would you be able to tell the difference?

Yes. Obviously you would. 

I mean, ChatGPT is many things — articulate, academic, nimble — but it still has the personality of, well, a chatbot. It can answer your questions, sure. Oftentimes (huge emphasis on that word) accurately … but also quite rigidly. It forms sentences and paragraphs by the book, not by the heart. I’ll show you.

ChatGPT can’t write a sentence like this.

Or this. 

It can’t make words sing or soar. Or crescendo. 

Or fall.

It can absolutely make your life easier, but it can’t really make you feel. And it can absolutely make some decisions, but it can’t always make the right ones. Because for all of its NLP horsepower, it has one significant drawback: it’s not a human being.

Now, I admit that part of my musings here serves as a sort of pep talk for those of us who make a living putting one word in front of another and who feared we may have just been poofed out of existence with a single snap (we will not go quietly into the night!). But, biased as I may be, I’m also not the first to point out that AI chatbots aren’t yet perfect. In fact, here’s a quote from a fairly reliable source on the topic:

“ChatGPT is a horrible product.”

Yikes. Who said it? Oh, just the CEO of the company that created ChatGPT.

If even the tech’s progenitor is dubious, where does that leave the rest of us? Well, caught smack-dab in the messy middle between awed and uncertain — true enough, that’s how we ended up with recent think pieces titled “ChatGPT is amazing, creative, and totally wrong” and “The brilliance and weirdness of ChatGPT.”

So, yeah … let’s just call it a mixed bag so far. 

Since Paradox is a conversational recruiting software company with tech powered by an AI assistant, you may be wondering whether we think ChatGPT is a net positive or negative for our industry. The answer, quite fittingly, is: it’s complicated. But overall, the last few weeks have very much reaffirmed our long-held view on conversational AI:

  • It’s incredibly powerful when used for the right task, with the right context.
  • It’s potentially harmful when used for more subjective decisions that require nuance.
  • It’s not going to replace humans anytime soon.
  • In our opinion, it never should. 

Conversational AI is an Iron Man suit, not a T-1000. 

Somewhere between Google investing $300 million to keep pace in the AI arms race and Bing’s chatbot threatening a philosophy professor with blackmail and bodily harm, the internet began to wonder if man’s reach had exceeded his grasp.

Is ChatGPT our version of Skynet? Is this the beginning of the end?

Not if we can help it.

“It feels like all of these things from sci-fi are coming to light,” said Paradox Head of Strategic Solutions Eleanor Vajzovic. “And that could be scary, or it could be amazing. It all comes down to what the intention is.” 

See, the problem isn’t with the technology. The problem is how we use it. Position ChatGPT or Bard or Olivia (what we call our conversational recruiting assistant) as a total replacement for human beings, and the results can become less than ideal. When we count on AI to make critical recruiting and HR decisions — like who to hire, who to promote, who to fire, or how much we should compensate someone — we wade into dangerous waters that require us to trust that an AI has better answers to complex questions than we do.

The danger of something like ChatGPT is that, unlike Google — which merely serves up different options for a user to choose from — it presents its answers as the answer. Do we really trust AI to make those decisions for us? The last few weeks have taught us that no, no we don’t. AI is simply not ready to be a T-1000 — it can’t stand on its own as a substitute for human judgment. But what it can be is a support system. A copilot. A way to enhance the capabilities of a person, not supplant them.

We like to think of our conversational AI more as an Iron Man suit for recruiters and hiring managers. 

No, it won’t give them the power to fly or make timely sarcastic remarks. But it will automate certain tasks (think screening for minimum qualifications, interview scheduling, answering common questions) to help maximize their strengths.

Hiring managers and talent professionals are incredibly valuable to an employer; they have a skill set that nobody else within a company does. They’re curious, independent, and skilled at using a range of technologies; they have a deep understanding of roles and team structures; and most importantly, they know people. Now imagine being able to remove the time-stealers from that person’s day so they can focus on all the things you can’t replace with automation, like actually interviewing candidates.

That’s a powerful asset. And that’s the true power of conversational AI — it’s an enhancement, not a replacement. 

 

There is no conversational AI that can replace the impact of actual people.

Let’s go back to the question posed at the start of this article: If ChatGPT had written it, would you have been able to tell the difference?

At this point, the answer should be a pretty definitive yes. Why? Because there are simply certain aspects of writing that even the most advanced AI in the world can’t yet replicate. It paints in broad brushstrokes, not with pinpoint accuracy.

The same is true for hiring and recruiting tasks. ChatGPT is pretty good at general-purpose applications … but it’s probably not going to be good at navigating the complexities of scheduling different kinds of interviews with multiple hiring managers. And it won’t be able to provide highly specific answers to candidate questions about things like company culture.

That’s simply not what ChatGPT was built for. But Paradox was.

When we created our tech, we saw a world where conversational experiences became the new interface for enterprise software, with the assistant behind those experiences functioning as a copilot to do the work people shouldn’t be doing — because the magic of building great teams lies in human-to-human interaction. Our mission was simply to strip away barriers and reduce friction points so hiring teams could do even more of that.

“In the demos I’ve seen over the years, the most impressive solutions I’ve seen are those which focus on a single domain,” said leading HR analyst Josh Bersin in a recent article. “Olivia is smart enough to screen, interview, and hire a McDonald’s employee with amazing effectiveness … If we 'point' the AI toward our content, we suddenly release it to the world at scale. And we, as experts or designers, can train it behind the scenes.

“Imagine the hundreds of applications in business: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, even personal and professional coaching. If you focus the AI on a trusted domain of content (most companies have oodles of this), it can solve the ‘expertise delivery’ problem at scale.”

If the last few weeks and the dawn of this new evolution of AI taught us anything, it’s that the downfall of mankind has been greatly exaggerated. We’re not being replaced. We’re simply being given new tools to help us work in different ways.

Faster.

Better.

So we can all focus on the things that ChatGPT can’t do. Like writing this article.

Written by Erik Schmidt, Director of Content
