AI Scoring
Every score is traceable to the evidence that produced it. No black-box rankings, no unexplained rejections - just defensible hiring decisions backed by real work.
How it scores
Each candidate is evaluated across three complementary lenses. No single signal carries the decision - the evidence compounds.
The How
Scores how candidates actually do the work - video pitches, written plans, live role-plays. Behaviour-level signal, not a multiple-choice proxy.
The What
Targeted knowledge checks calibrated per role. Fast to answer, weighted conservatively - used to confirm fundamentals, never to rank on trivia.
The Preference
Captures how a candidate prefers to work - collaboration style, pace, ambiguity tolerance. Informs the brief, never the ranking.
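The split of duties between the three lenses can be sketched in code. This is a hypothetical illustration, not Vervoe's actual model: the weights, field names, and formula are all assumptions. The point it demonstrates is structural - performance evidence dominates, knowledge checks are weighted conservatively, and preference data never enters the score.

```python
from dataclasses import dataclass, field

# Illustrative weights, assumed for this sketch.
HOW_WEIGHT = 0.8    # performance on real tasks carries the decision
WHAT_WEIGHT = 0.2   # knowledge checks, weighted conservatively

@dataclass
class CandidateSignals:
    how_score: float                      # 0-100, behaviour-level evidence
    what_score: float                     # 0-100, knowledge check
    preference_profile: dict = field(default_factory=dict)  # informs the brief only

def composite_score(c: CandidateSignals) -> float:
    # preference_profile is deliberately absent from this formula:
    # it shapes the interview brief, never the ranking.
    return HOW_WEIGHT * c.how_score + WHAT_WEIGHT * c.what_score

candidate = CandidateSignals(how_score=85.0, what_score=70.0,
                             preference_profile={"pace": "fast"})
print(composite_score(candidate))  # 0.8 * 85 + 0.2 * 70 = 82.0
```

Changing the preference profile leaves the score untouched, which is the property the copy above promises.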
Outputs
Auto-ranking
Candidates ranked by real evidence, not keyword density. Ties resolved by the weighted skills that matter for the role - never by who submitted first.
Skill group scores
Scores broken out by skill group, weighted per role. Reviewers see the full picture instead of one opaque number.
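A minimal sketch of how per-role skill-group weighting and auto-ranking could fit together, under stated assumptions: the skill groups, weights, and tie-break rule here are invented for illustration. Ties go to the highest-weighted skill group for the role, never to submission order.

```python
# Hypothetical per-role weights - each role would define its own.
ROLE_WEIGHTS = {"communication": 0.5, "planning": 0.3, "domain": 0.2}

def weighted_total(skill_scores: dict) -> float:
    """Collapse per-skill-group scores into one weighted total."""
    return sum(ROLE_WEIGHTS[group] * score for group, score in skill_scores.items())

def rank(candidates: dict) -> list:
    """Rank candidates by weighted evidence.

    Primary key: weighted total. Tie-break: the skill group that
    matters most for the role - never who submitted first.
    """
    top_group = max(ROLE_WEIGHTS, key=ROLE_WEIGHTS.get)
    return sorted(candidates,
                  key=lambda name: (weighted_total(candidates[name]),
                                    candidates[name][top_group]),
                  reverse=True)

pool = {
    "A": {"communication": 90, "planning": 60, "domain": 70},  # total 77.0
    "B": {"communication": 70, "planning": 90, "domain": 90},  # total 80.0
}
print(rank(pool))  # ['B', 'A']
```

Reviewers still see the per-group breakdown; the weighted total is just the ordering key, not a replacement for the full picture.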
AI summaries
Plain-language summary of strengths, watch-outs, and suggested probe questions - pulled directly from what the candidate actually submitted.
Explainability
Independently validated by Holistic AI. Fairness results published openly, not hidden behind an NDA. Every model release goes through the same audit.
The same submission gets the same score - today, next quarter, by any reviewer. Rubrics are stable across the funnel, and scoring is deterministic where it needs to be.
From candidate appeals to legal review, every score links back to the evidence that produced it. No black-box rejections, no unexplained rankings.
Closed-loop learning
Most AI scoring stops at the submission. Vervoe keeps watching - four months after hire, the platform checks in with hiring managers and feeds the outcome back into scoring weights. Your funnel improves because the model improves.
Day 0
Written, video, code, or task-based answers land in the platform. Scoring runs in seconds.
Day 1–30
Shortlist, interview, reject - every recruiter action is logged against the original submission.
Day 30
Who got the offer, who didn’t, and why. The decision trail is preserved alongside the scoring evidence.
~Day 120
Platform pings the hiring manager four months in: is this person performing? The answer feeds the model.
Ongoing
Scoring weights drift toward signals that predicted real-world success - not just assessment completion.
Role-specific. Evidence-backed. Open to audit. Stop guessing which candidate will perform - know.