What’s New and What’s Next in Conversational Candidate Chatbots
Nicole Mundy
What’s New: The Modern AI Chatbot Experience Is Contextual and Non-deterministic
We are at a point where candidate chatbots are fundamentally changing, and the HR tech industry is beginning to recognize it. Until recently, recruitment tech vendors treated the chatbot experience as a linear conversation with candidates: capture their contact information, match them to a job, and push them through the application process into the ATS. The concept now gaining acceptance is a chatbot experience that is free-flowing and non-deterministic, powered by the latest generation of LLMs. For example, advanced LLMs can respond with company-specific information and anticipate a candidate’s next question in natural conversation, rather than steering the conversation in a linear direction toward completing a transaction.
Chatbots built on older-generation LLMs have limited contextual awareness and language processing capabilities. They struggle to understand the broader context of a conversation, often producing responses that don’t logically follow previous interactions. In other words, if a chatbot doesn’t understand that it’s having a conversation about recruitment and job search, the quality and accuracy of its job recommendations suffer.
The Role of Contextual AI in Enhancing Candidate Experience
Newer-generation chatbots can contextualize the information they retain and reference prior conversations to build more complex responses. For example, Sense’s screening chatbot can reference a candidate’s resume or other details they’ve provided to ask follow-up questions or offer job recommendations based on earlier conversations, much as a recruiter would refer back to notes in a candidate’s record in a CRM. Contextual understanding is also improving chatbots’ ability to tailor the language of their responses to directly answer a candidate’s query. Paradox has integrated contextual conversations into its product so that it autonomously understands the context of a candidate’s questions. For example, a topic such as PTO would typically get a generic, one-size-fits-all response to a range of different questions. With Paradox’s upgrades, the chatbot can understand and respond directly to questions about specific holidays and time-off policies that affect the candidate’s personal circumstances.
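To make the mechanics concrete, here is a minimal sketch (not any vendor’s actual implementation) of the core idea: a contextual chatbot carries the candidate’s profile and the full conversation history into every model call, so a follow-up question is answered against everything said so far rather than in isolation. The `CandidateSession` class and `fake_llm` stand-in are hypothetical names for illustration; any text-in/text-out LLM callable could be substituted.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateSession:
    """Holds everything the chatbot can 'remember' about one candidate."""
    profile: dict                                 # e.g. parsed resume fields
    history: list = field(default_factory=list)   # prior (role, text) turns

    def build_prompt(self, question: str) -> str:
        """Assemble profile + prior turns + the new question into one prompt.
        Seeing all of this context, not just the latest message, is the core
        difference from a linear, transaction-driven flow."""
        context = [f"Candidate profile: {self.profile}"]
        context += [f"{role}: {text}" for role, text in self.history]
        context.append(f"candidate: {question}")
        return "\n".join(context)

    def ask(self, question: str, llm) -> str:
        reply = llm(self.build_prompt(question))  # llm: any text-in/text-out callable
        self.history.append(("candidate", question))
        self.history.append(("bot", reply))
        return reply

# A stand-in "model" that just reports how much context it was given.
def fake_llm(prompt: str) -> str:
    return f"(answer based on {prompt.count(chr(10)) + 1} lines of context)"

session = CandidateSession(profile={"skills": ["Python", "SQL"]})
first = session.ask("What roles fit my background?", fake_llm)
second = session.ask("Any of those remote?", fake_llm)
# The second call sees the profile plus both earlier turns.
```

In a real deployment the history would also be persisted between sessions, which is what lets the chatbot pick up a conversation days later the way a recruiter picks up a CRM record.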
We’ve seen from Paradox’s results that the more intelligent and insightful the chatbot’s responses are, the more likely candidates are to keep using it and ask more nuanced questions. Since Contextual Conversation has been in production, fewer candidates have asked to connect with a live recruiter because the chatbot can answer their questions.
The Importance of Data Maintenance for High-Performing AI Chatbots
For companies to fully leverage the enhanced capabilities of contextual AI chatbots, it’s essential to build a robust database of company knowledge and to maintain that body of knowledge with regular updates and modifications. This is where many organizations see chatbot performance degrade, often because TA teams can’t afford to dedicate resources to managing their conversational AI products. Insufficient data increases the rate of chatbot confusion (the number of times your chatbot shows users the default message: “I’m sorry, I couldn’t understand you”) and also increases the risk of AI hallucinations. Providers like Paradox have built in features to help monitor AI data quality, such as automated alerts when a user’s question has insufficient supporting data, so that TA teams can manage and refresh the knowledge base more efficiently.
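Chatbot confusion is easy to quantify. As a hypothetical sketch (not a feature of any particular vendor), a TA team could compute a confusion rate as the share of chatbot turns that fall back to the default message; a rising rate is a signal the knowledge base needs refreshing:

```python
def confusion_rate(responses: list,
                   fallback: str = "I'm sorry, I couldn't understand you") -> float:
    """Fraction of chatbot turns that hit the default fallback message."""
    if not responses:
        return 0.0
    misses = sum(1 for r in responses if r.strip() == fallback)
    return misses / len(responses)

# A small hypothetical response log: 2 of 4 turns fell back.
log = [
    "Our PTO policy grants 15 days in year one.",
    "I'm sorry, I couldn't understand you",
    "The night-shift differential is 10%.",
    "I'm sorry, I couldn't understand you",
]
rate = confusion_rate(log)  # -> 0.5
```

Tracking this number week over week, alongside alerts like Paradox’s insufficient-data notifications, gives TA teams an objective trigger for when the knowledge base needs attention.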
What’s Next: Is Building Your Own Candidate Chatbot Worth It?
A few years ago, organizations started framing their AI business transformation strategies around enterprise LLMs, using public text-based generative AI tools such as OpenAI’s ChatGPT and Anthropic’s Claude 2 to turn their customer and brand experiences into conversational, hyper-personalized chatbots. These developments have reached HR and recruitment in some interesting and promising ways, from EY’s “eVe,” an AI avatar that helps candidates prepare for interviews, to Walmart’s “My Assistant,” a generative AI tool that supports new hires during orientation and offers employee self-service capabilities. While it’s mainly the largest companies that are building proprietary LLM applications for talent, their influence is spreading to corporate and staffing organizations both large and small.
Over the past few months, we’ve seen more organizations looking to implement multi-modal (i.e., voice and text) conversational AI solutions. Some organizations that have the technical sophistication and resources are building their own voice AI agents using agentic tools, such as Retell AI, Vapi, Bland AI, Hume, and Sindarin. These tools are relatively inexpensive and accessible for organizations that already have some experience with LLM development. For example, it can be easier to build and train voice AI agents on an employer brand than it is to train a text-based chatbot on visual and written brand voice, because people tend to speak in fairly similar ways.
In Conclusion
In general, we are glad to see recruitment organizations begin to experiment with building agentic AI and learn what is possible with these tools. Voice AI technology in particular is moving very fast and getting noticeably better month over month, and in time this custom-build or co-build approach may become more practical and economical. Today, however, that is rarely the case: proprietary LLM applications are time- and cost-prohibitive, and companies may underestimate the amount of AI research, infrastructure, and talent required to build and maintain their own AI recruitment chatbots. Candidate-facing conversational AI tools need to be bulletproof against legal and employer-brand risks, which requires continuously fine-tuning AI models to perform consistently and implementing safeguards to detect hallucinations and sensitive topics. That said, we consider it a positive sign that talent leaders are investing in exploring this technology and sharing their successes, as these efforts help drive the HR tech industry further along the AI transformation curve.
If you would like to learn more about how Talent Tech Labs can support your talent technology strategy, join the Talent Tech Labs’ community or contact us at hello@talenttechlabs.com. Also follow us on LinkedIn to stay informed on the latest talent technology insights and updates.