July 15, 2024

How context has leveled up AI conversations.

Contextual Conversations makes AI responses more dynamic. Let's take a look under the hood at how that happens.


At this point, it's pretty obvious conversational AI has made recruiting better.

Gone are the rigid structures of job applications past. With AI, everything can be reduced to a simple conversation between the candidate and an AI assistant. For years, our clients have experienced incredible upticks in their conversion metrics and candidate response times following implementation.

But lately we’ve been asking ourselves an important question: How do we make conversational AI even better?

Originally, our AI assistants were powered by Natural Language Processing (NLP). When candidates asked a question, the NLP-powered AI would instantly pull an answer from a large set of pre-approved responses. This functionality alone was a game changer in terms of speed and the candidate experience. 

But each candidate is unique — they tend to ask similar questions differently, and expect tailored answers to their problems. NLP-based assistants could answer each question, sure, but the personalization was missing.

So we continued to iterate, with the mission of building an even better way to personalize each conversation. And as fate would have it, the timeline of that iteration coincided with some groundbreaking developments in the market (ChatGPT, namely). And that perfect storm led to what we're calling Contextual Conversations — the next evolution of the candidate experience and recruiting as we know it.

Let’s take a look under the hood at what we mean.

How AI contextualizes the initial message.

When candidates ask a question during the hiring process, even the slightest bit of friction can lead to drop-off. Or a diminished perception of your organization. Or confusion. Or even your recruiters having to manually step in to ensure that question is answered.

It might just be one question — but it’s one question you have to get right. Here’s an example of how Contextual Conversations ensures you get every one of those one questions right.

We’ll start with a sample exchange: the candidate asks about the company dress code, the assistant replies with the pre-approved dress code answer, and the candidate follows up with a quick “What about hats?”

Now, a human can read this exchange as a cohesive conversation — they can contextualize that the final question is a continuation of the previous one. But most chatbots aren’t that sophisticated.

NLP-based assistants aren’t naturally able to handle context well (and complex workarounds like “State” tracking, while doable, become unmanageable at scale) — so they instead analyze individual texts as siloed pieces of data. So when a candidate asks “What about hats?”, the assistant answers with the same pre-approved dress code response as before. This can create friction — and frustration — at a pivotal part of the hiring process.

With Contextual Conversations, that process changes.

Upon receiving a text, the AI assistant first runs it through an upgraded routing process. Basically, it discerns whether the candidate is asking a question or not. In this example, the question mark is obviously a dead giveaway — but sometimes it’s not as clear. Think about how you text: Do you always have perfect punctuation? I didn’t think so.
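To make that concrete, here’s a minimal sketch of what a first-pass question check could look like. It’s purely illustrative: the function name and keyword list are invented, and a simple heuristic stands in for whatever model actually handles this step in production.

```python
# Illustrative only: a cheap "is this a question?" check. A production
# router would rely on a trained classifier, not a keyword list.
QUESTION_WORDS = {"what", "when", "where", "who", "why", "how",
                  "can", "could", "do", "does", "is", "are", "will"}

def looks_like_question(message: str) -> bool:
    """Catch questions even when the punctuation isn't perfect."""
    words = message.strip().lower().split()
    if not words:
        return False
    if words[-1].endswith("?"):
        return True
    return words[0] in QUESTION_WORDS

print(looks_like_question("What about hats"))        # True, no "?" needed
print(looks_like_question("I can start on Monday"))  # False
```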

If the AI detects that it’s not being asked a question, it will proceed through other functions of our product based on the context understood from the message, such as conversational job search or conversational scheduling. If the AI does detect that the candidate has asked a question, we pass the candidate’s message through our proprietary Large Language Model (LLM) to contextualize a response. We built and fine-tuned our own LLM for a few reasons:

  • Having our own model guarantees that our clients’ data is secure, and not shared with any third parties.
  • With our own model, there is no black box. We can explain how every message is contextualized, every time.
  • We’re able to control the uptime and performance of the model, ensuring a reliable and fast experience.
  • With our own model, we can train the AI on millions of specific recruiting conversations. That way, assistants can more easily contextualize conversations with candidates.

That last point is helpful here. The LLM will look back at the previous messages sent in the conversation, and rewrite the candidate’s question in a way that improves the context.
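As a rough illustration of that rewriting step (the `llm` callable below is just a placeholder, and the function name is ours; Paradox’s fine-tuned model isn’t public), the idea looks something like this:

```python
from typing import Callable, List

def rewrite_with_context(history: List[str], question: str,
                         llm: Callable[[str], str]) -> str:
    """Rewrite a follow-up question as a standalone question, using the
    earlier messages in the conversation as context."""
    prompt = (
        "Conversation so far:\n" + "\n".join(history) + "\n\n"
        f"Follow-up from the candidate: {question}\n"
        "Rewrite the follow-up as a complete, standalone question."
    )
    return llm(prompt)  # placeholder for the fine-tuned model call

# Example: the ambiguous follow-up becomes self-contained.
history = [
    "Candidate: What is the dress code?",
    "Assistant: Business casual, Monday through Friday.",
]
# rewrite_with_context(history, "What about hats?", my_llm)
# -> e.g. "Are hats allowed under the company dress code?"
```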

How AI decides if it should contextualize a new response.

At this point, the AI determines which kind of response it's going to send to the candidate. For 90% of questions, where there can be a little give and take with the answers, the AI will scan the client’s pre-approved knowledge base and contextualize a response (see: the next section).

But some critical questions need to be answered in a specific way. For example, if a candidate asks about legal policy, clients often want the AI to respond in the same manner every time. 

Under these circumstances, once the AI recontextualizes the candidate’s question, it’ll detect that the context falls under a predefined topic, and retrieve an already-approved response to send. The same answer, every time. 
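Here’s a hypothetical sketch of that decision. The topic name, keyword check, and canned answer are invented for illustration; in the real product, the rewritten question would be matched against the client’s predefined topics by a model, not a keyword lookup.

```python
from typing import Optional

# Invented topic and answer, purely for illustration.
PREAPPROVED = {
    "legal_policy": ("For legal and compliance questions, please refer to our "
                     "official policy page. A recruiter will follow up with you."),
}

def classify_locked_topic(question: str) -> Optional[str]:
    """Return a predefined topic if the question falls under one, else None.
    (A keyword check stands in for a model-based classifier here.)"""
    if "legal" in question.lower() or "visa" in question.lower():
        return "legal_policy"
    return None

def choose_response_path(question: str) -> str:
    topic = classify_locked_topic(question)
    if topic is not None:
        return PREAPPROVED[topic]  # fixed, pre-approved answer: the same every time
    return contextualize_from_knowledge_base(question)  # dynamic path (next section)

def contextualize_from_knowledge_base(question: str) -> str:
    """Stub for the retrieval-augmented path described in the next section."""
    return "<dynamically contextualized answer>"
```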

How AI contextualizes a response.

For other topics, like this example, the AI will determine that the context (e.g. dress code) allows for a dynamic response.

When clients implement Contextual Conversations, they upload information into a knowledge base. Think: PDFs, documents, handbooks, their career site, and anything else that contains information they think candidates would find helpful in their job search. Through a process called retrieval-augmented generation (RAG), the AI will parse through the knowledge base to find relevant information pertaining to the candidate’s question. 
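To illustrate the idea (and only the idea), here’s a simplified sketch of that retrieval step. Splitting documents on blank lines and matching on shared keywords are stand-ins for the document parsing and semantic search a production RAG pipeline would use, and both function names are ours.

```python
def chunk_document(text: str, source: str) -> list[dict]:
    """Break an uploaded document (handbook, FAQ, policy text) into
    passage-sized pieces, remembering where each one came from."""
    return [
        {"source": source, "text": para.strip()}
        for para in text.split("\n\n")
        if para.strip()
    ]

def retrieve(passages: list[dict], question: str, limit: int = 3) -> list[dict]:
    """Return the passages that overlap most with the candidate's question."""
    terms = set(question.lower().split())
    overlap = lambda p: len(terms & set(p["text"].lower().split()))
    return sorted(passages, key=overlap, reverse=True)[:limit]
```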

That information will be filtered into a few passages. These passages are simply different excerpts from the knowledge base that could answer the candidate’s question.

In this example, the passages could look something like this:

  • From a DE&I policy document: Religious symbols and headwear are allowed in the workplace at all times.
  • From the employee handbook: Branded headwear is prohibited during working hours.
  • From the career site FAQ: We like to maintain a professional work environment. Please refer to our company dress code policy for things like religious headwear, hats, socks, shoes, and belts. 

Each passage is then scored on relevancy. In fact, this is called a relevancy score (clever, right?). The AI chooses the most relevant information and dynamically contextualizes a response. Before that response is sent to the candidate, it’s passed through our LLM one more time. Here, the response is run through guardrails, making sure that it’s on-topic, truthful, and accurate.
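Here’s a toy sketch of that last mile, with the relevancy scoring and guardrail check reduced to placeholders for the LLM-based versions described above (the function names and the drafting step are invented for illustration):

```python
from typing import Optional

def relevancy_score(passage: str, question: str) -> float:
    """Toy score: what share of the question's terms appear in the passage?"""
    terms = set(question.lower().split())
    if not terms:
        return 0.0
    return len(terms & set(passage.lower().split())) / len(terms)

def passes_guardrails(draft: str) -> bool:
    """Placeholder for the final LLM pass that checks the draft is on-topic,
    truthful, and accurate before anything reaches the candidate."""
    return bool(draft.strip())

def answer_from_passages(passages: list[str], question: str) -> Optional[str]:
    best = max(passages, key=lambda p: relevancy_score(p, question))
    draft = f"Good question! {best}"  # stand-in for the LLM-written reply
    return draft if passes_guardrails(draft) else None  # None -> don't send; a recruiter can step in
```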

If the response passes these final tests, it’s delivered to the candidate. And just like that, we have a dynamic, contextualized piece of content.

This may sound complicated, but in reality, this whole process takes place within a few seconds.

Candidates ask a question, and the AI assistant immediately responds with a personalized answer. So even though we have all these behind-the-scenes algorithms working 24/7, candidates feel like they’re talking to a real person. It’s a more human experience, made possible at scale entirely by AI (and that’s the Paradox!)

And better yet, it leads to a significantly reduced chance of friction in the application process. Which equals more candidates, happier candidates, and a healthier pipeline.

Oh, and a better job experience for all who apply.

Written by Stephen Ost, SVP of Product