No longer just a buzzword.
AI is here to help you find the right candidate for every role.
How does machine learning work at Vervoe?
We use a set of three different models that measure and predict performance.
The How Model works across all questions, the What Model is question-specific, and the Preference Model is employer- or role-specific.
Tracks and analyzes the way a candidate interacts with your assessment.
Analyzes the candidates' responses and benchmarks them against millions of others.
Train our AI to understand what a great answer looks like based on your preferences.
See them do the job
The How Model tracks and analyzes the way a candidate interacts with your assessment.
This model assesses interactions such as: How long do they take to complete the assessment? Do they scroll back to a previous question? Do they consistently type or are there long pauses? Do they click away and come back?
This model grades how a candidate works; it is the equivalent of standing behind the candidate and watching them complete the assessment.
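The kinds of interaction signals described above can be sketched in code. This is an illustration only, not Vervoe's actual implementation: the event names, fields, and derived features here are invented to show how a stream of timestamped interaction events might be turned into behavioral features.

```python
# Illustrative sketch only: the event kinds and features below are
# hypothetical, not Vervoe's proprietary How Model.
from dataclasses import dataclass

@dataclass
class AssessmentEvent:
    kind: str         # e.g. "keystroke", "scroll_back", "blur", "focus"
    timestamp: float  # seconds since the assessment started

def behavioral_features(events: list[AssessmentEvent]) -> dict:
    """Derive simple interaction features from a candidate's event stream."""
    if not events:
        return {"duration": 0.0, "scroll_backs": 0,
                "longest_pause": 0.0, "tab_switches": 0}
    times = [e.timestamp for e in events]
    pauses = [b - a for a, b in zip(times, times[1:])]
    return {
        # total time spent on the assessment
        "duration": times[-1] - times[0],
        # how often the candidate revisited earlier questions
        "scroll_backs": sum(1 for e in events if e.kind == "scroll_back"),
        # longest gap between consecutive interactions
        "longest_pause": max(pauses, default=0.0),
        # clicking away to another tab or window
        "tab_switches": sum(1 for e in events if e.kind == "blur"),
    }

events = [
    AssessmentEvent("keystroke", 0.0),
    AssessmentEvent("keystroke", 2.5),
    AssessmentEvent("blur", 10.0),       # candidate clicked away...
    AssessmentEvent("focus", 40.0),      # ...and came back 30 seconds later
    AssessmentEvent("scroll_back", 45.0),
    AssessmentEvent("keystroke", 50.0),
]
features = behavioral_features(events)
```

Features like these could then feed a scoring model; the real system presumably uses far richer signals.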
The How Model is the first of the three models to provide a score for each of your candidates.
AI that grades like a human
The What Model analyzes the content of the candidates' responses and benchmarks them against millions of other responses. Using natural language processing, it scans responses for words, phrases, and sentiments that accurately reflect the outcomes required.
If you’re hiring a Call Centre Agent, you might be looking for someone with skills like attention to detail and empathy.
This model quickly processes thousands of responses, looking for specific words or sentiments that accurately reflect these values.
The What Model uses existing blind data sets, new candidate data, and user input in the form of correct answer samples.
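To make the phrase-matching idea concrete, here is a minimal sketch. This is not Vervoe's NLP pipeline (which benchmarks against millions of responses with trained language models); the target phrases and the scoring rule are assumptions chosen purely to illustrate matching a response against desired words and phrases.

```python
# Hypothetical sketch of content scoring by phrase matching.
# A production system would use trained NLP models, not substring checks.
def content_score(response: str, target_phrases: list[str]) -> float:
    """Return the fraction of target words/phrases found in the response."""
    text = response.lower()
    hits = sum(1 for phrase in target_phrases if phrase.lower() in text)
    return hits / len(target_phrases)

# Example targets for a call-centre role valuing empathy and follow-through.
targets = ["apologize", "listen", "follow up", "understand"]
answer = ("First I would listen carefully to understand the customer's "
          "problem, apologize for the inconvenience, and follow up later.")
score = content_score(answer, targets)
```

Here all four target phrases appear in the answer, so the sketch scores it 1.0; a response mentioning none of them would score 0.0.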
Your own personalized AI model
The Preference Model uses the Naive Bayes method to predict probability. This model requires input from the user to train it to understand what the scale from bad to good answers looks like for their specific use case.
This method is iterative: a user blindly grades a set of candidate responses to individual questions, giving each a score from 0 to 10. The responses exposed for the user to grade are those furthest apart from each other. This scoring feedback teaches the AI to value the things you value, such as correct spelling or positive sentiment.
If a user grades one response as a 10, our model will then look for an answer that appears completely different to see how you score it.
This variation in responses helps the model quickly identify and plug the gaps in between the potential score ranges to accurately grade all candidates with your preferences in mind.
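The "show the most different answer next" step can be sketched with a simple word-set distance. This is an illustration under assumptions: the source says the Preference Model itself uses Naive Bayes, but it does not describe how "furthest apart" is measured, so the Jaccard distance used here is a stand-in for whatever similarity measure the real system uses.

```python
# Illustrative sketch of selecting the ungraded response most different
# from one the employer has already graded. Jaccard distance over word
# sets is an assumed stand-in for the real similarity measure.
def jaccard_distance(a: str, b: str) -> float:
    """1 - |intersection| / |union| of word sets; 0.0 means identical."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return 1.0 - len(wa & wb) / len(wa | wb)

def most_different(graded: str, candidates: list[str]) -> str:
    """Pick the ungraded response furthest from an already-graded one."""
    return max(candidates, key=lambda c: jaccard_distance(graded, c))

graded_ten = "I would apologize and resolve the issue immediately"
pool = [
    "I would apologize and fix the issue right away",
    "Not my problem, they should read the manual",
]
next_to_grade = most_different(graded_ten, pool)
```

Grading maximally dissimilar responses pins down both ends of the quality scale quickly, which is why the model can interpolate scores for everything in between after only a handful of grades.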
How we combat bias
Machine learning can be susceptible to human hiring bias introduced through the data set. If a model learns from biased signals, such as whether an applicant is male or female, its outcomes will be biased.
To prevent this at Vervoe, we started with a clean dataset built over a two-year period from real candidates applying for real jobs, graded by real employers.
No identifying information
Our models are blind to anything that could potentially create bias. We remove personally identifying information, such as gender, ethnicity, or background, from the assessment process.
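The blinding step amounts to stripping identifying attributes before any model sees the record. The field names below are hypothetical, invented for this sketch; the point is simply that grading models receive only response content, never identifying attributes.

```python
# Hypothetical illustration of blinding a candidate record before grading.
# Field names are invented; real systems would maintain a vetted list.
IDENTIFYING_FIELDS = {"name", "gender", "ethnicity", "age", "photo_url"}

def blind_record(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields stripped."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

record = {
    "name": "Jane Doe",
    "gender": "female",
    "responses": ["I would listen to the customer first and then apologize."],
}
blinded = blind_record(record)
```

Only the `responses` field survives the blinding, so downstream grading cannot condition on who the candidate is.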
Always-on monitoring
We monitor manual grading to ensure data integrity. If manually graded responses show a clear bias toward the mention of certain keywords, we can detect the resulting skew, flag it, and remove those responses from our learning set.
We believe it’s a huge responsibility to develop a system that decides if someone gets hired. Our approach is to question what the AI is judging candidates on. Would we want to be judged that way? If not, we don’t include it in our data.
Take the guesswork out of hiring
Our platform enables you to test the job-ready skills of every applicant, then automatically review, grade, and rank them with our AI based on how well they perform tasks specific to your role.
Frequently asked questions
Non-autogradable Google Apps questions, like spreadsheets, presentations, and documents, use the How Model.
Video, audio, and text are graded the same way. Video and audio responses are transcribed first and then graded using the How and What Models.
Autogradable question types, like multiple choice, use the correct-answer model only.
AI grading begins reviewing candidate responses as soon as they are submitted. The How Model grades first, followed by the What Model. Depending on the complexity of the assessment questions and the volume of candidates, final results may take up to a few hours to appear in the platform.
Initially you’ll be asked to blindly score a small portion of candidate responses to individual questions in your assessment. We need around 20 graded responses to start optimizing your assessment for your own tailored needs. The more you grade, the more we understand about your preferences, but we aim to collect about five grades from you across the spectrum of 1 to 10.
If you update questions to change the correct answer after some of your candidates have already taken an assessment, their scores will be recalculated to take the updates into account, giving them a new grade.