A chatbot to help human resources

  • UX Design
  • Product Design
  • UX Research
Project Overview
It all started with an internal project. The challenge was to create a recruiting tool that supported the HR teams in sourcing the right candidates more effectively and that reshaped the experience of reaching out to talent.

The AI team developed an algorithm that would scan LinkedIn profiles and suggest to the HR team a list of potential candidates for the role they were looking to fill. I contributed to the project by focusing on the next step: adding a chatbot component to get in touch with the selected talents, with the ultimate goal of scheduling an in-person interview.

This chatbot also had the task of informing candidates and piquing their interest, by presenting the job and the company and by answering their questions.
My Contributions
This internal project was assigned to me during my internship and was the main focus of my first months at Avanade.

I was in charge of the whole process, while reporting to my tutor. I had to design the conversational flow and the chatbot, prototype it, and finally test it with users using both implicit and explicit measures.

This work was also part of a published paper called "Implicit Measures as a Useful Tool for Evaluating User Experience".

It was also discussed during the World Usability Day Milan 2020 public event.
Three mockups of the chatbot interaction showing the introduction phase where the bot presents itself, a Q&A segment, and the date selector
1. The idea behind it
Why a chatbot?
Chatbots aren't exactly the most loved tools in the typical UX backpack. However, they are mostly used the wrong way: the user is given the responsibility of leading the discussion, which can result in an infinite number of conversation paths.

We believed that in this particular use case, a chatbot might stand a chance. Why? We had some ideas:

1. People might feel less stressed when answering a bot rather than a real human recruiter: more time to think and no direct judgement.

2. Scheduling a date for an interview over the phone is an annoying task that can easily lead to compromises on the candidate's side.

3. Unlike most chatbots, in this case the bot would lead the conversation rather than the user. Yay for defined paths!
The question❓

Do the gender and the tone of voice of our chatbot have an effect on the user experience and on its effectiveness in contacting candidates and scheduling interviews?


And if so, which version should we build?

An illustration of four people asking questions
2. The design
What would the chatbot look like?
Approaching the design phase, we asked ourselves: what characteristics would the bot need to be most effective and drive the best experience? Two of the variables we thought might influence it the most were the bot's gender and its tone of voice.

So we designed a 2×2 experiment, combining the two variables (gender: female or male; tone of voice: formal or informal) to create four versions of the chatbot, to see whether an effect was present.

Starting with some basic avatar icons from Flaticon, we designed an avatar for each combination of variables and a sample conversation flow for each tone of voice.
The four versions of the chatbot avatar. The first is an informal female version wearing a blue t-shirt. The second is a formal female version dressed in a blazer. The third is a formal male version, dressed in a suit and tie. The fourth is an informal male version wearing a blue t-shirt
The conversation was structured in 5 main blocks, as follows (a minimal flow sketch comes right after the list):
  • First, the bot would introduce itself and the company and describe the job vacancy to the candidate. 
  • Then it would ask whether the candidate was interested or, if needed, offer additional assistance with a Q&A segment.
  • After that, assuming the candidate was interested, it would proceed to ask some basic questions (like the previous job title, or the level of knowledge of English). 
  • Then, the bot would propose some dates for the in-person interview or let the candidate suggest a date themselves.
  • Once the appointment had been scheduled, the bot would provide some final information, such as the address for the interview, and then answer any additional questions.
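To make the flow concrete, here is a minimal sketch of those five blocks as a simple state machine. It's purely illustrative: the block names, prompts, and placeholder strings are mine, not the actual intents we later built in Dialogflow.

```python
# Minimal, hypothetical sketch of the five conversation blocks as a state
# machine. Prompts and block names are illustrative placeholders.
from typing import Callable, Optional

Handler = Callable[[], Optional[str]]  # each block returns the next block's name, or None to end

def introduction() -> Optional[str]:
    print("Hi! I'm the recruiting assistant for <company>. We have an opening for <role>.")
    return "interest_check"

def interest_check() -> Optional[str]:
    answer = input("Are you interested, or do you have any questions? ").strip().lower()
    if "?" in answer:
        return "qa_segment"  # detour to the Q&A segment, then ask again
    return "screening" if answer.startswith("y") else None

def qa_segment() -> Optional[str]:
    print("<answer drawn from a Q&A knowledge base>")
    return "interest_check"

def screening() -> Optional[str]:
    input("What was your previous job title? ")
    input("How would you rate your level of English? ")
    return "scheduling"

def scheduling() -> Optional[str]:
    input("I can offer <date A> or <date B>, or you can propose your own date: ")
    return "wrap_up"

def wrap_up() -> Optional[str]:
    print("Great, you're booked! The interview will be at <address>. Any last questions?")
    return None

BLOCKS: dict[str, Handler] = {f.__name__: f for f in (
    introduction, interest_check, qa_segment, screening, scheduling, wrap_up)}

state: Optional[str] = "introduction"
while state is not None:
    state = BLOCKS[state]()
```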
3. Prototype and test
Building the interactive mockup and testing with users.
For the prototype, I chose to implement the bot in Dialogflow, to create a mockup that was as realistic as possible without writing any code. With that, we had access to some out-of-the-box natural language processing and machine learning capabilities that made the conversation quite realistic for a prototype.
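For context, a Dialogflow agent can also be driven programmatically, which is handy for scripted test runs. Below is a minimal sketch using the official google-cloud-dialogflow (v2) Python client; the project and session IDs are placeholders, and this is not code from the actual project (our prototype was built entirely in the Dialogflow console).

```python
# Hypothetical sketch: sending one user utterance to a Dialogflow (v2) agent
# and reading back the bot's reply. Requires the google-cloud-dialogflow
# package and service-account credentials; the IDs below are placeholders.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str) -> str:
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # fulfillment_text is the reply the bot would show to the candidate
    return response.query_result.fulfillment_text

print(detect_intent("my-gcp-project", "test-session-1", "What does the role involve?"))
```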

Then, to test the chatbot with users, we embedded it into Google Assistant so that we could test it on a real mobile phone.

Once we were satisfied with our design, it was time for user testing. We recruited 35 participants from among our colleagues.

We wanted to understand how the user experience compared across the four versions, but also the perceived efficacy of the bot.
To do so, we chose a simplified version of the UEQ+ questionnaire and added two questions about efficacy.
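As a rough sketch of what this kind of analysis can look like (not our actual analysis code): UEQ+ items are 7-point scales, a dimension score is the mean of its items, and the four versions can then be compared with, for example, a one-way ANOVA. The column names, item names, and CSV file below are made up for illustration.

```python
# Hypothetical analysis sketch: compare one UEQ+ dimension across the four
# chatbot versions. Column/item names and the CSV file are placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("ueq_responses.csv")                     # one row per participant
items = ["efficiency_1", "efficiency_2", "efficiency_3"]  # illustrative item names
df["efficiency"] = df[items].mean(axis=1)                 # dimension score = mean of items

# One group of scores per chatbot version (formal/informal x female/male)
groups = [g["efficiency"].to_numpy() for _, g in df.groupby("version")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p >= .05: no significant difference
```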

Additionally, we wanted to try a different approach to measuring UX: implicit measures.
Implicit measures 🔎
We tried to go a step beyond traditional UX measurements, employing a tool borrowed from modern cognitive psychology: the Implicit Association Test (IAT).

When we use explicit methods like questionnaires, we are limited to studying only what users are willing to report, under the influence of many cognitive biases.

Implicit measures, on the other hand, rely on automatic responses and are not as easily influenced.

The IAT consists of a categorisation task that measures the implicit association between two concepts: in this case, the chatbot and "being good" or "being bad".

By combining the two methods, we can tell when something in the explicit responses doesn't add up.
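IAT results are usually summarised as a D-score: the difference between mean response latencies in the two pairing conditions ("chatbot + good" vs. "chatbot + bad"), divided by the pooled standard deviation. Here is a simplified sketch of that computation; the full scoring algorithm (Greenwald et al.) also trims extreme latencies and penalises error trials, which is omitted here.

```python
# Simplified IAT D-score: latency difference between the two pairing
# conditions, divided by the pooled standard deviation of all latencies.
# (Error-trial penalties and latency trimming are omitted.)
import statistics

def d_score(compatible_ms: list[float], incompatible_ms: list[float]) -> float:
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)) / pooled_sd

# Positive D: faster when the chatbot is paired with "good", i.e. a positive association.
print(d_score(compatible_ms=[650, 700, 720, 690], incompatible_ms=[780, 810, 790, 830]))
```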
The author, Francesco Ghedin, presenting the Implicit Association Test to a colleague during a user test session
Our prediction was to find a preference for the female version, based on gender stereotypes related to the human resources field, and for the informal version, since a chatbot is a pretty informal tool in itself.

But according to our UEQ+ results, there was no preference at all. We were able to gather a score for each dimension of the questionnaire, but there was no significant difference between the four versions.

On the other hand, the implicit measure showed that there actually was a significant preference, and it was for the informal version.


A bar graph of the IAT results, reporting a D-score of 0.21 for the informal version and a D-score of -0.021 for the formal version
It was thanks to the IAT that we were able to establish this preference.

In another section of the questionnaire, we also asked participants to rate, on a scale from 1 to 7, how likely they would be to book an appointment with the chatbot and to actually show up for it.

Consistent with the IAT results, we found that using an informal tone of voice made these ratings go up significantly.

+12.3%

So, to sum it up: thanks to this combined methodology, we were able to reach an objective conclusion about this chatbot, so that if we were to go back and implement it, we would be sure that the informal version is the one to push.

4. Results
What we found out