AI Video Assessment Platform
When a third-party AI video product sends results back via webhooks, the hard part isn’t the API—it’s making sure the right result lands in the right place.
Overview
I integrated a third-party AI video interview product into an existing recruitment platform. The platform handled assessment creation, sending candidate invites, and receiving results via webhooks. My work covered the backend API layer that talked to the video provider, the persistence of assessment config and results, and the frontend flows for creating assessments and viewing scores alongside the rest of the candidate data.
Tech Stack
Impact & Scale
- Recruiters can create video assessments and invite candidates from the main platform
- Scores and personality insights flow back into the pipeline via webhook processing
- Single source of truth for assessment status across the hiring workflow
Key Challenges & Solutions
Webhook ordering and idempotency
Results could arrive out of order or be retried. We stored a per-assessment processing state and used idempotency keys so duplicate payloads could not overwrite or double-apply scores, which was critical for keeping candidate records consistent.
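A minimal sketch of that dedupe-and-ordering logic, using an in-memory store for illustration. The store class, payload fields, and sequence number are assumptions, not the real integration's schema; in production the seen-keys set and assessment state would live in the database.

```python
import hashlib

class AssessmentStore:
    """Illustrative in-memory stand-in for the assessment tables."""

    def __init__(self):
        self.seen_keys = set()   # idempotency keys already applied
        self.assessments = {}    # assessment_id -> {"score": ..., "seq": ...}

    def apply_webhook(self, payload: dict) -> bool:
        """Apply a result payload once; ignore duplicates and stale events."""
        # Use the provider's idempotency key if present; hashing the
        # payload is a fallback for providers that do not send one.
        key = payload.get("idempotency_key") or hashlib.sha256(
            repr(sorted(payload.items())).encode()
        ).hexdigest()
        if key in self.seen_keys:
            return False  # duplicate delivery: already applied

        current = self.assessments.get(payload["assessment_id"], {"seq": -1})
        if payload["seq"] <= current["seq"]:
            self.seen_keys.add(key)
            return False  # out-of-order event older than stored state

        self.assessments[payload["assessment_id"]] = {
            "score": payload["score"],
            "seq": payload["seq"],
        }
        self.seen_keys.add(key)
        return True
```

The key point is that both checks run before any write: a retry is a no-op, and a late, older event can never clobber newer data.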
Mapping provider data to our domain
The video provider had its own schema for scores and traits. We defined a small translation layer so the rest of the product could work with a stable, internal model. New provider fields were handled in one place without changing downstream UI or reporting.
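The translation layer can be sketched as a single boundary function into a stable internal model. The provider field names here ("overall_band", "trait_scores") and the 0-10 to 0-100 rescaling are invented for illustration; the shape of the idea is what matters.

```python
from dataclasses import dataclass

@dataclass
class AssessmentResult:
    """Internal model the rest of the product works with."""
    overall_score: float   # normalised to 0-100
    traits: dict           # trait name -> 0-100 score

def from_provider(payload: dict) -> AssessmentResult:
    """Translate the provider's schema into our internal model.

    New or renamed provider fields are handled here, in one place,
    without touching downstream UI or reporting.
    """
    # Hypothetical: provider reports on a 0-10 band; we store 0-100.
    return AssessmentResult(
        overall_score=payload["overall_band"] * 10,
        traits={t["name"]: t["value"] * 10
                for t in payload.get("trait_scores", [])},
    )
```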
Handling partial or delayed results
Some assessments completed without a full result set. We designed the UI and API to show "pending" or "partial" states and to update in place when late webhooks arrived, so recruiters always saw the latest status.
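Deriving the display state can be as simple as comparing what has arrived against what is expected. The section names below are illustrative assumptions:

```python
# Hypothetical set of result sections an assessment is expected to produce.
EXPECTED_SECTIONS = {"video_score", "personality", "transcript"}

def assessment_status(received: set) -> str:
    """Return 'pending', 'partial', or 'complete' for the UI and API."""
    if not received:
        return "pending"
    if received < EXPECTED_SECTIONS:
        return "partial"   # re-derived in place as late webhooks arrive
    return "complete"
```

Because the status is derived from stored results rather than stored itself, a late webhook only has to write its data; the UI picks up the new state on the next read.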
Technical Highlights
- REST integration with the video assessment provider for create, invite, and status endpoints
- Webhook handler with signature verification, idempotency, and error retry behaviour
- Assessment and result models aligned with the rest of the candidate pipeline
- Frontend flows for creating assessments, attaching them to jobs, and viewing scores in the candidate detail view
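The signature verification mentioned above commonly follows an HMAC-SHA256-over-raw-body pattern; this sketch assumes that scheme. The header format and shared secret handling are assumptions, so the provider's own webhook docs are the authority on the real layout.

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature_header: str, secret: bytes) -> bool:
    """Check the webhook body against the provider's signature header."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking how many leading
    # characters of the signature matched.
    return hmac.compare_digest(expected, signature_header)
```

Verifying against the raw request bytes, before any JSON parsing, matters: re-serialising the parsed body can change whitespace or key order and break the comparison.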
Role and Scope
I owned the integration between the recruitment product and the AI video assessment vendor. That meant designing and implementing the API client, webhook ingestion, and the data model so that assessments and scores were first-class parts of the candidate journey. The frontend for creating and viewing assessments was built to match existing patterns in the app.
Architecture
The backend exposed endpoints that delegated to the video provider’s API for creating assessments and sending invites. When a candidate completed an assessment, the provider sent a webhook; we verified the payload, applied idempotency, and wrote scores and metadata into our database. The frontend listed assessments per job and showed scores on the candidate profile so recruiters could act on them without leaving the platform.
Lessons Learned
- Webhooks need a clear contract. Document expected payload shape, retries, and idempotency so both sides can debug and evolve safely.
- Normalise early. Translating the provider’s schema into our domain model at the boundary kept the rest of the codebase simple and made it easier to swap or add providers later.
- Partial success is still success. Designing for "pending" and "partial" results from day one avoided special-case handling when the provider behaved differently than its docs suggested.
Interested in discussing this project or working together? Get in touch.