## Recommended Resume Screening APIs
### 1. **Affinda Resume Parser & Scorer**
AI-powered resume parsing and job-matching API with structured JSON output. Supports custom scoring against job descriptions via document matching endpoints.
- **Key Endpoints:** `POST /v3/resumes`, `POST /v3/documents`, `POST /v3/resume_search/match`
- **Auth:** Bearer token (API key)
- **Pricing:** Usage-based, ~$0.07–$0.10 per parse; volume discounts
### 2. **HireAbility ALEX Resume Parsing**
Enterprise-grade parser producing normalized skills, experience, and education data for downstream ranking. Fast, battle-tested for high-volume pipelines.
- **Key Endpoints:** `POST /parser/v2/Main/Parser/ResumeProc`
- **Auth:** Account ID + API key in payload
- **Pricing:** Pay-per-transaction, ~$0.05–$0.08 per resume
### 3. **Sovren (LexisNexis) Resume Parser + Matcher**
High-accuracy parsing with built-in AI Matching & Scoring engine that ranks resumes against job descriptions with configurable weights.
- **Key Endpoints:** `POST /v10/parser/resume`, `POST /v10/matcher/bimetric/score`
- **Auth:** Account ID + Service Key headers
- **Pricing:** Usage-based subscription tiers starting ~$0.10 per doc
### 4. **RChilli Resume Parser + JD Matcher**
Parses resumes into 140+ fields and scores against JDs using semantic matching with skill taxonomy.
- **Key Endpoints:** `POST /parser/parseResumeBinary`, `POST /jdparser/matchResumeWithJd`
- **Auth:** API key in request body
- **Pricing:** Credit-based, ~$0.04–$0.08 per resume
### 5. **Textkernel Search! & Match**
Semantic search and match API with deep skill ontology, ideal for ranking 2,000+ resumes against a structured requirement.
- **Key Endpoints:** `POST /tx/v10/parser/resume`, `POST /tx/v10/matcher/byDocument`
- **Auth:** OAuth2 Bearer token
- **Pricing:** Tiered usage subscription; custom quotes
---
## Sample JSON Request / Response Schema
### Request
```json
{
  "job": {
    "title": "Senior Backend Engineer",
    "required_skills": ["Python", "AWS", "PostgreSQL"],
    "preferred_skills": ["Docker", "Kubernetes", "Terraform"],
    "min_years_experience": 5,
    "education_min": "Bachelor's",
    "keywords": ["microservices", "REST", "CI/CD", "async"]
  },
  "resume": {
    "document_id": "res_00231",
    "file_base64": "JVBERi0xLjQKJ..."
  },
  "weights": {
    "skills_match": 0.45,
    "experience": 0.30,
    "education": 0.10,
    "keyword_density": 0.15
  }
}
```
### Response
```json
{
  "document_id": "res_00231",
  "candidate_name": "Jane Doe",
  "overall_score": 87.4,
  "breakdown": {
    "skills_match": {
      "score": 92,
      "matched": ["Python", "AWS", "PostgreSQL", "Docker"],
      "missing": ["Terraform"]
    },
    "experience": {
      "score": 88,
      "total_years": 7.5,
      "relevant_years": 6.2
    },
    "education": {
      "score": 80,
      "highest_degree": "BS Computer Science"
    },
    "keyword_density": {
      "score": 74,
      "hits": {"microservices": 3, "REST": 5, "CI/CD": 2, "async": 1}
    }
  },
  "rank_percentile": 0.94,
  "recommendation": "shortlist"
}
```
---
## Scoring Rubric & Suggested Weights
| Dimension | Weight | Criteria |
|---|---|---|
| **Skills Match** | **45%** | Required skills coverage (Python, AWS, PostgreSQL) = 70% of sub-score; preferred skills = 30%. Normalize via skill taxonomy. |
| **Experience** | **30%** | Years ≥5 in backend roles. Linear scaling: 0pts at 0yr, 100pts at 7yr+; apply 0.8× multiplier for non-relevant industry. |
| **Education** | **10%** | Bachelor's CS/related = 80; Master's = 100; bootcamp + strong exp = 70; unrelated degree = 50. |
| **Keyword Density** | **15%** | TF-IDF of JD keywords (microservices, REST, CI/CD, async) normalized against corpus; cap at 100. |
**Shortlist threshold:** overall_score ≥ 80 → top ~10% (≈200 of 2,000).
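The weighted sum behind the rubric can be sketched directly. This is a minimal illustration using the field names from the sample response schema above; a real vendor score may apply normalization or rounding this sketch does not.

```python
def overall_score(breakdown: dict, weights: dict) -> float:
    """Weighted sum of per-dimension sub-scores, all on a 0-100 scale."""
    return sum(breakdown[dim]["score"] * w for dim, w in weights.items())

# Weights and sub-scores from the rubric and sample response above
weights = {"skills_match": 0.45, "experience": 0.30,
           "education": 0.10, "keyword_density": 0.15}
breakdown = {
    "skills_match": {"score": 92},
    "experience": {"score": 88},
    "education": {"score": 80},
    "keyword_density": {"score": 74},
}

score = overall_score(breakdown, weights)
shortlist = score >= 80  # shortlist threshold from the rubric
```

With these sample sub-scores the weighted sum comes out near 87, landing the candidate above the shortlist line.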
---
## Python Code Snippet (Affinda End-to-End)
```python
import os

import requests

AFFINDA_API_KEY = os.getenv("AFFINDA_API_KEY", "YOUR_AFFINDA_API_KEY")
BASE_URL = "https://api.affinda.com/v3"
HEADERS = {"Authorization": f"Bearer {AFFINDA_API_KEY}"}

JOB_REQUIREMENT = {
    "required_skills": ["Python", "AWS", "PostgreSQL"],
    "preferred_skills": ["Docker", "Kubernetes", "Terraform"],
    "min_years_experience": 5,
    "keywords": ["microservices", "REST", "CI/CD", "async"],
}
WEIGHTS = {"skills": 0.45, "experience": 0.30, "education": 0.10, "keyword_density": 0.15}

def parse_resume(path: str) -> dict:
    """Upload a resume to Affinda's documents endpoint and return the parsed JSON."""
    with open(path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/documents", headers=HEADERS,
                             files={"file": f}, data={"collection": "resume"},
                             timeout=60)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    parsed = parse_resume("resume.pdf")
    print(parsed)
```
# APIs That Score and Screen Resumes Programmatically

Tested prompts for a resume screening API, compared across four leading AI models.
A resume screening API lets your application parse, evaluate, and score candidate resumes without human review at the first pass. If you are building an applicant tracking system, a job board, or an internal hiring tool, you need a way to rank hundreds of submissions against a job description automatically. That is exactly what this page covers: how to prompt a language model to act as a structured screening engine and return consistent, comparable scores you can store and act on.
The core problem is volume. Recruiters cannot read 400 resumes for a single role. An API-driven screening layer filters that list to the top 20 before any human time is spent. The models tested here receive a job description and a resume, then return a structured score with reasoning, so your pipeline gets machine-readable output, not a wall of text.
This page shows you the exact prompt used, four model outputs side by side, and a comparison table so you can pick the right model for your stack. Whether you are building in Python, Node, or calling a REST endpoint from a no-code tool, the pattern here works directly with OpenAI, Anthropic, or any OpenAI-compatible API.
## When to use this
Resume screening via API fits best when you have structured hiring pipelines that process more applicants than your team can manually review. It is the right tool when you need a repeatable, auditable scoring method that produces the same evaluation logic across every candidate, and when you want to integrate screening directly into your existing software rather than use a standalone HR product.
- You are building or extending an ATS and need a scoring endpoint for inbound applications
- A job board wants to surface best-match candidates to employers without manual curation
- An internal HR tool needs to rank resumes against role-specific competencies at scale
- A staffing agency processes high-volume submissions and needs to triage before recruiter review
- You are running a hackathon or fellowship program and need consistent evaluation across hundreds of applicants
## When this format breaks down
- Do not use AI resume screening as a final hiring decision layer without human review. Model outputs carry bias risks that create legal exposure under EEOC and similar frameworks.
- Avoid this approach when the role requires portfolio or work-sample evaluation that cannot be captured in a text resume, such as design, writing, or engineering roles where code samples matter most.
- This is the wrong tool when your applicant volume is under 20 per role. Manual review is faster and more nuanced at that scale.
- Do not use a generic prompt for highly technical roles without embedding specific skill taxonomies. A generic score for a principal ML engineer role will miss critical signal and produce misleading rankings.
## The prompt we tested

> You are an expert technical evaluator specializing in resume screening APIs and programmatic candidate evaluation. Review the following job or screening requirement and recommend the best API-based approach to score and screen resumes. Follow these instructions exactly: Respond in clear Markdown with the four required sections as H2 headings. Keep each API description under 40 words, include fenced code blocks for JSON schemas and the code snippet, and ensure the code example is runnable with placeholder API keys. Requirement: We're hiring a Senior Backend Engineer with 5+ years of Python, AWS, and PostgreSQL experience. We receive around 2,000 applications per role and need to automatically score and rank resumes via an API before our recruiters review the top 10%. Budget is flexible but we prefer usage-based pricing. Produce a structured recommendation including: (1) a shortlist of 3-5 resume screening APIs with brief descriptions, key endpoints, auth method, and pricing tier, (2) a sample JSON request/response schema for scoring a resume against the requirement, (3) a scoring rubric (skills match, experience, education, keyword density) with suggested weights, and (4) a minimal Python or Node.js code snippet showing how to call one of the recommended APIs end-to-end.
## Model comparison
| Model | Quality | Cost | Latency | Tokens |
|---|---|---|---|---|
| Claude Opus 4.7 | 7 | $0.1601 | 25.9s | 432 / 2048 |
| Claude Haiku 4.5 | 4 | $0.0105 | 15.9s | 283 / 2048 |
| GPT-5.4 | 7 | $0.0294 | 27.9s | 255 / 1918 |
| Grok 4.1 Fast Reasoning | 7 | $0.0005 | 16.9s | 404 / 883 |
## Outputs
# Resume Screening API Recommendation
## 1. Recommended APIs & Solutions
### Workable Resume Parser & Scoring API
**Description:** Enterprise ATS with resume parsing, skill extraction, and candidate scoring. Extracts structured data and rates fit against job requirements using ML.
| Aspect | Details |
|--------|---------|
| **Key Endpoints** | `POST /resumes/parse`, `POST /candidates/score` |
| **Auth Method** | Bearer Token (API Key) |
| **Pricing Tier** | Usage-based: $0.50–$2 per resume parsed + $0.25 per scoring |
| **Best For** | High-volume screening with reliable parsing accuracy |
---
### Textio Hire (API Integration)
**Description:** AI-powered resume analysis focusing on skill relevance, experience level, and cultural fit scoring with detailed candidate insights and bias detection.
| Aspect | Details |
|--------|---------|
| **Key Endpoints** | `POST /api/v2/candidates/analyze`, `GET /api/v2/scores` |
| **Auth Method** | OAuth 2.0 + API Key |
| **Pricing Tier** | Usage-based: $1–$3 per resume analyzed |
| **Best For** | Deep skill matching with fairness metrics |
---
### Ashby Candidate Scoring API
**Description:** Modern recruiting platform with candidate evaluation API. Scores resumes against job descriptions, extracts experience, and ranks candidates by fit percentile.
| Aspect | Details |
|--------|---------|
| **Key Endpoints** | `POST /candidates/evaluate`, `POST /resumes/extract` |
| **Auth Method** | API Key (Bearer Token) |
| **Pricing Tier** | Usage-based: $0.30–$1.50 per evaluation |
| **Best For** | Lightweight, fast evaluations at scale |
---
### Lever Resume Parsing + Custom Scoring
**Description:** Recruiter-grade ATS API with resume extraction. Combine with custom ML pipeline for bespoke scoring logic tailored to backend engineering requirements.
| Aspect | Details |
|--------|---------|
| **Key Endpoints** | `POST /resumes/parse`, custom webhook integration |
| **Auth Method** | API Key (Bearer Token) |
| **Pricing Tier** | Usage-based: $0.75–$2 per resume + custom scoring integration |
| **Best For** | Flexibility with established parsing infrastructure |
---
### DIY + OpenAI API (Cost-Effective Alternative)
**Description:** Use GPT-4 with structured prompts to score resumes against job criteria. Lower cost, highly customizable, full control over scoring logic.
| Aspect | Details |
|--------|---------|
| **Key Endpoints** | `POST /v1/chat/completions` |
| **Auth Method** | API Key |
| **Pricing Tier** | Usage-based: ~$0.05–$0.15 per resume (GPT-4) |
| **Best For** | Budget-conscious; maximum customization |
---
## 2. Sample JSON Request/Response Schema
### Request Schema (Generic Resume Scoring Endpoint)
```json
{
  "candidate_id": "cand_789456",
  "resume_text": "Senior Backend Engineer with 7 years of Python experience. AWS certified (Solutions Architect). Built microservices using Django and FastAPI. PostgreSQL optimization expert. Led team of 4 engineers. BS Computer Science.",
  "job_description": {
    "title": "Senior Backend Engineer",
    "required_skills": ["Python", "AWS", "PostgreSQL"],
    "years_experience_min": 5,
    "preferred_skills": ["Docker", "Kubernetes", "Redis"],
    "education_level": "Bachelor's"
  },
  "scoring_weights": {
    "skills_match": 0.45,
    "experience_years": 0.25,
    "education": 0.15,
    "keyword_density": 0.15
  }
}
```
### Response Schema (Standardized Scoring Output)
```json
{
  "candidate_id": "cand_789456",
  "overall_score": 8.6,
  "score_percentile": 92,
  "component_scores": {
    "skills_match": {
      "score": 9.2,
      "weight": 0.45,
      "matched_required": ["Python", "AWS", "PostgreSQL"],
      "missing_required": [],
      "matched_preferred": ["Docker"]
    },
    "experience_years": {
      "score": 8.5,
      "weight": 0.25,
      "detected_years": 7,
      "meets_minimum": true
    },
    "education": {
      "score": 9.0,
      "weight": 0.15,
      "degree": "Bachelor of Science",
      "field": "Computer Science"
    },
    "keyword_density": {
      "score": 8.0,
      "weight": 0.15,
      "keyword_frequency": {
        "Python": 5,
        "AWS": 4,
        "PostgreSQL": 3,
        "microservices": 2
      }
    }
  },
  "recommendation": "STRONG_MATCH",
  "reasoning": "Exceeds experience requirement (7 vs 5 years). All required skills present. Relevant certifications detected.",
  "parsed_data": {
    "name": "John Doe",
    "years_experience": 7,
    "top_skills": ["Python", "AWS", "PostgreSQL", "Docker", "Django", "FastAPI"],
    "certifications": ["AWS Solutions Architect"]
  }
}
```
---
## 3. Scoring Rubric & Weight Justification
### Scoring Framework
| Category | Weight | Scoring Logic | Max Points |
|----------|--------|---|---|
| **Skills Match** | 45% | Required skills: 10pts each (0–10 scale). Missing required skill: -2pts per skill. Preferred skills: +1pt each (bonus up to +2). | 10 |
| **Experience Years** | 25% | Years detected ÷ years required × 10. Capped at 10. Overqualified (10+ yrs) bonus: +0.5pts. | 10 |
| **Education** | 15% | Bachelor's degree: 8pts. Advanced degree (MS/PhD): 10pts. No degree: 4pts. Field match (CS/Engineering): +1pt. | 10 |
| **Keyword Density** | 15% | Keyword frequency normalized against resume length. Dense mentions of key terms → higher score. Scale 0–10. | 10 |
### Scoring Calculation Formula
```
Overall Score = (Skills × 0.45) + (Experience × 0.25) + (Education × 0.15) + (Keyword × 0.15)
```
**Recommendation Thresholds:**
- **8.0–10.0:** STRONG_MATCH (top 10% tier)
- **7.0–7.9:** GOOD_MATCH (secondary review)
- **5.0–6.9:** MODERATE_MATCH (filter)
- **<5.0:** POOR_MATCH (reject)
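The threshold bands above translate to a simple mapping. This is a sketch assuming the 0-10 score scale used in this output; `recommend` is a hypothetical helper, not part of any vendor API.

```python
def recommend(overall_score: float) -> str:
    """Map a 0-10 overall score to the recommendation tiers above."""
    if overall_score >= 8.0:
        return "STRONG_MATCH"    # top 10% tier
    if overall_score >= 7.0:
        return "GOOD_MATCH"      # secondary review
    if overall_score >= 5.0:
        return "MODERATE_MATCH"  # filter
    return "POOR_MATCH"          # reject
```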
---
## 4. Python Code Snippet (Production-Ready)
Below is a minimal, runnable example using **Ashby API** (recommended for speed + cost):
```python
import json
from typing import Dict, Any

import requests


class ResumeScreener:
    """Production-ready resume screening client."""

    def __init__(self, api_key: str, api_base: str = "https://api.ashby.com/v1"):
        self.api_key = api_key
        self.api_base = api_base
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def score_resume(self, resume_text: str, job_description: Dict[str, Any]) -> Dict[str, Any]:
        """
        Score a single resume against job requirements.

        Args:
            resume_text: Raw resume text
            job_description: Dict with title, required_skills, years_experience_min, etc.

        Returns:
            Structured scoring response with component scores and recommendation
        """
        payload = {
            "resume_text": resume_text,
            "job_description": job_description
        }
        response = requests.post(f"{self.api_base}/candidates/evaluate",
                                 headers=self.headers,
                                 data=json.dumps(payload), timeout=60)
        response.raise_for_status()
        return response.json()
```
## 1) Recommended Resume Screening APIs
### 1. Sovren
Brief: Mature resume parsing and matching API with strong extraction and ranking support for high-volume screening.
- Key endpoints: `/parser/resume`, `/matcher/joborder`, `/matcher/resume`
- Auth method: Account ID + Service Key
- Pricing tier: Custom/enterprise, typically usage-based or volume-tiered
### 2. RChilli
Brief: Resume parser plus search/match APIs, useful for scoring resumes against structured job requirements at scale.
- Key endpoints: `/parseResumeBinary`, `/search/match`, `/taxonomy`
- Auth method: API key + subuser credentials
- Pricing tier: Usage-based/custom
### 3. Affinda
Brief: Modern recruitment API for resume parsing, extraction, and candidate-job matching with developer-friendly JSON output.
- Key endpoints: `/v3/documents`, `/v3/match/resumes`
- Auth method: Bearer API key
- Pricing tier: Usage-based/free trial + paid tiers
### 4. Textkernel
Brief: Enterprise-grade parsing and semantic matching API with strong multilingual support and ranking capabilities.
- Key endpoints: `/resume/parser`, `/match`, `/extract`
- Auth method: API key / bearer token
- Pricing tier: Custom enterprise, usually volume-based
### 5. HireAbility
Brief: Reliable resume/CV parsing API that can be paired with external scoring logic if native ranking is limited.
- Key endpoints: `/resumeparser`, `/taxonomy`
- Auth method: API key
- Pricing tier: Custom, generally volume-based
**Best fit recommendation:**
**Affinda** or **RChilli** for fastest API integration and usage-based preference.
**Sovren** if matching accuracy and enterprise screening depth matter more than ease of setup.
---
## 2) Sample JSON Request/Response Schema
### Sample scoring request schema
```json
{
  "job": {
    "job_id": "backend-senior-001",
    "title": "Senior Backend Engineer",
    "requirements": {
      "minimum_years_experience": 5,
      "skills_required": ["Python", "AWS", "PostgreSQL"],
      "skills_preferred": ["Docker", "Kubernetes", "REST APIs", "Microservices"],
      "education_preferred": ["BS Computer Science", "MS Computer Science"]
    }
  },
  "candidate": {
    "candidate_id": "cand-123",
    "resume_url": "https://example.com/resume.pdf"
  },
  "scoring_options": {
    "normalize_score": true,
    "include_explanations": true,
    "keyword_density_terms": ["Python", "AWS", "PostgreSQL", "backend", "microservices"]
  }
}
```
### Sample scoring response schema
```json
{
  "candidate_id": "cand-123",
  "job_id": "backend-senior-001",
  "overall_score": 86,
  "recommendation": "top_10_percent",
  "rank_percentile": 92,
  "component_scores": {
    "skills_match": 94,
    "experience": 88,
    "education": 70,
    "keyword_density": 82
  },
  "matched_skills": ["Python", "AWS", "PostgreSQL", "Docker"],
  "missing_skills": ["Kubernetes"],
  "experience_summary": {
    "years_total": 7,
    "years_python": 6,
    "years_aws": 5,
    "years_postgresql": 4
  },
  "education_summary": {
    "highest_degree": "BS Computer Science",
    "meets_preference": true
  },
  "explanations": [
    "Strong Python and AWS alignment",
    "PostgreSQL present but below preferred depth threshold",
    "Education meets preferred baseline"
  ]
}
```
---
## 3) Scoring Rubric and Weights
For your use case, I recommend a **100-point weighted score** optimized for top-10% recruiter review.
### Suggested weights
- **Skills match: 45%**
- **Experience: 30%**
- **Education: 10%**
- **Keyword density/context relevance: 15%**
### Rubric details
#### Skills match — 45%
Score based on required and preferred technologies.
- Required skills present: Python, AWS, PostgreSQL
- Give higher weight to required over preferred
- Example:
- Python: 15
- AWS: 15
- PostgreSQL: 15
- Preferred skills can be bonus or tie-breakers
#### Experience — 30%
Score based on total relevant backend experience and years tied to core skills.
- 5+ years backend/Python-centric experience required
- Strong score if:
- 5+ years Python
- meaningful AWS production usage
- PostgreSQL in recent roles
- Example:
- 5+ years overall backend: 10
- Python years depth: 10
- AWS/PostgreSQL depth: 10
#### Education — 10%
Useful as a soft factor, not a hard filter unless required by policy.
- BS/MS in CS or related field gets full points
- Related STEM degree gets partial
- No degree but strong experience can still pass overall
#### Keyword density/context relevance — 15%
Use as a supporting signal, not a primary one.
- Count frequency and contextual placement of:
- Python
- AWS
- PostgreSQL
- backend
- APIs
- microservices
- Boost if terms appear in recent roles, projects, and achievements
- Penalize obvious keyword stuffing without relevant experience context
### Recommended screening logic
- **Auto-reject**: missing 2+ required skills or under 4 years relevant experience
- **Recruiter review**: score 70–84
- **Top 10% shortlist**: score 85+
- **Tie-breakers**: recency of backend role, cloud production scale, leadership scope
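The screening gates above can be sketched as a single triage function. This is an illustrative helper under the stated rules (hard rejects checked before score bands); the function name and signature are assumptions, not a vendor API.

```python
def triage(score: int, missing_required: int, relevant_years: float) -> str:
    """Apply the screening logic above: hard rejects first, then score bands."""
    # Auto-reject: missing 2+ required skills or under 4 years relevant experience
    if missing_required >= 2 or relevant_years < 4:
        return "auto_reject"
    if score >= 85:
        return "top_10_percent"   # shortlist for recruiters
    if score >= 70:
        return "recruiter_review"
    return "reject"
```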
---
## 4) Minimal Python API Example
Below is a minimal runnable example using **Affinda-style** REST calls with placeholder values. It uploads/parses a resume, then demonstrates scoring logic against your requirement.
```python
import os

import requests

AFFINDA_API_KEY = os.getenv("AFFINDA_API_KEY", "YOUR_AFFINDA_API_KEY")
RESUME_FILE_PATH = os.getenv("RESUME_FILE_PATH", "resume.pdf")

JOB_REQUIREMENTS = {
    "title": "Senior Backend Engineer",
    "min_years": 5,
    "required_skills": ["Python", "AWS", "PostgreSQL"],
    "preferred_skills": ["Docker", "Kubernetes", "REST APIs", "Microservices"]
}


def parse_resume(file_path):
    url = "https://api.affinda.com/v3/documents"
    headers = {"Authorization": f"Bearer {AFFINDA_API_KEY}"}
    data = {"collection": "resume"}
    # Context manager ensures the file handle is closed after upload
    with open(file_path, "rb") as f:
        response = requests.post(url, headers=headers, files={"file": f},
                                 data=data, timeout=60)
    response.raise_for_status()
    return response.json()


def score_candidate(parsed_resume, job):
    text = str(parsed_resume).lower()
    skills = [s.lower() for s in job["required_skills"]]
    preferred = [s.lower() for s in job["preferred_skills"]]

    skills_score = sum(15 for s in skills if s in text)
    preferred_bonus = sum(3 for s in preferred if s in text)
    # Crude demo heuristic: any mention of 5-8 is treated as meeting the bar
    experience_score = 30 if any(n in text for n in ("5", "6", "7", "8")) else 15
    education_score = 10 if "computer science" in text else 5
    keyword_score = min(15, sum(text.count(term) for term in
                                ["python", "aws", "postgresql", "backend"]))

    total = min(100, skills_score + preferred_bonus + experience_score
                + education_score + keyword_score)
    return {
        "overall_score": total,
        "component_scores": {
            "skills_match": min(45, skills_score + preferred_bonus),
            "experience": experience_score,
            "education": education_score,
            "keyword_density": keyword_score
        },
        "recommendation": "top_10_percent" if total >= 85
                          else "review" if total >= 70 else "reject"
    }


def main():
    parsed = parse_resume(RESUME_FILE_PATH)
    result = score_candidate(parsed, JOB_REQUIREMENTS)
    print(result)


if __name__ == "__main__":
    main()
```
**Implementation note:**
For production, use the vendor’s native **match endpoint** where available, then layer your own rubric for transparency, recruiter tuning, and top-10% threshold control.
## Recommended Resume Screening APIs
**Sovren AI** (27 words): AI-driven resume parsing and job matching for skills/experience scoring. Key endpoints: `/v1/parser`, `/v1/rankandmatch`. Auth: API key in header. Pricing: Usage-based, $0.09–$0.15 per resume (volume discounts).
**Affinda Resume Parser** (24 words): Extracts skills, experience; supports custom matching scores. Key endpoints: `/v3/parser`, `/v3/match`. Auth: API key header. Pricing: Usage-based, $0.05–$0.20 per parse (pay-as-you-go).
**RChilli Parser** (22 words): Multilingual resume parsing with keyword/skill matching and ranking. Key endpoints: `/v1/parse`, `/v1/match`. Auth: API key query param. Pricing: Usage-based, $0.08–$0.12 per document.
**Textkernel Extract!** (25 words): Advanced semantic matching for skills, experience against JDs. Key endpoints: `/extract`, `/match`. Auth: OAuth2 or API key. Pricing: Usage-based, ~$0.10 per resume (enterprise tiers).
## Sample JSON Request/Response Schema
```json
// Request schema (POST /match or equivalent)
{
  "job_description": "Senior Backend Engineer: 5+ years Python, AWS, PostgreSQL experience.",
  "resume_content": "Base64-encoded resume PDF/text or URL.",
  "skills": ["Python", "AWS", "PostgreSQL"],
  "min_experience_years": 5
}
```
```json
// Response schema
{
  "candidate_id": "abc123",
  "overall_score": 0.87,
  "rank": 15,
  "matches": {
    "skills": 0.95,
    "experience": 0.80,
    "education": 0.70,
    "keywords": 0.90
  },
  "extracted_skills": ["Python (6 yrs)", "AWS", "PostgreSQL"],
  "recommendation": "Strong match - advance to review"
}
```
## Scoring Rubric
| Criterion | Description | Weight | Threshold for Top 10% |
|-----------------|--------------------------------------|--------|-----------------------|
| Skills Match | Semantic overlap (Python, AWS, PostgreSQL) | 40% | ≥0.85 |
| Experience | Years extracted vs. 5+ req.; role relevance | 35% | ≥0.75 |
| Education | Degree relevance (CS/Engineering preferred) | 15% | ≥0.60 |
| Keyword Density | Exact/near-matches in resume text | 10% | ≥0.70 |
Total score = weighted sum (0–1 scale). Rank resumes descending; select top 200/2000.
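The rank-and-select step can be sketched in a few lines. This assumes the 0-1 component scores and weights from the rubric table above; `top_decile` is an illustrative helper, not a vendor call.

```python
# Weights from the rubric table above (0-1 scale components)
WEIGHTS = {"skills": 0.40, "experience": 0.35, "education": 0.15, "keywords": 0.10}

def weighted(matches: dict) -> float:
    """Total score = weighted sum of component match scores."""
    return sum(matches[k] * w for k, w in WEIGHTS.items())

def top_decile(candidates: list) -> list:
    """Rank candidates by weighted score, keep the top 10% (200 of 2,000)."""
    ranked = sorted(candidates, key=lambda c: weighted(c["matches"]), reverse=True)
    return ranked[: max(1, len(ranked) // 10)]
```

Applied to a batch of 2,000 scored responses, this returns the 200 resumes recruiters actually review.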
## Example Code Snippet
Python example using Affinda API (pip install requests). Replace `YOUR_API_KEY`.
```python
import requests
import base64
API_KEY = 'YOUR_API_KEY'
URL = 'https://api.affinda.com/v3/parser' # Use /match endpoint if available
# Sample resume as base64 (in production, read file)
resume_pdf = open('resume.pdf', 'rb').read()
resume_b64 = base64.b64encode(resume_pdf).decode('utf-8')
payload = {
"resume_content": resume_b64,
"job_description": "Senior Backend Engineer with 5+ years of Python, AWS, and PostgreSQL experience.",
"skills": ["Python", "AWS", "PostgreSQL"]
}
headers = {'Authorization': f'ApiKey {API_KEY}'}
response = requests.post(URL, json=payload, headers=headers)
result = response.json()
print("Score:", result.get('overall_score', 'N/A'))
print("Matches:", result.get('matches', {}))
# For batch: loop over 2000 resumes, sort by score, take top 200
```
## Final Recommendation
Use **Sovren AI** for best Python/AWS/PostgreSQL matching accuracy at scale. Batch process 2000 resumes, filter top 200 (10%) for recruiters. Expected cost: ~$200/role.
## What makes these work

1. **Force structured numeric output.** Ask the model to return a JSON object with a numeric score, a matched skills array, a gaps array, and a recommendation string. Free-text output is hard to store and compare across candidates. Structured output lets you sort, filter, and feed scores into a database column directly.
2. **Pass the job description every call.** Do not hardcode role criteria in the system prompt and reuse it across roles. Send the full job description as a variable in each request so the scoring logic adapts to the specific requirements. This prevents a prompt written for an engineering role from mis-scoring a finance candidate.
3. **Set explicit scoring rubric weights.** Tell the model how to weight categories: required skills, years of experience, domain match, and nice-to-haves. Without explicit weights, models will apply their own implicit judgment inconsistently across candidates. A rubric like "required skills are 50 percent of the score" produces auditable, defensible rankings.
4. **Include a confidence or uncertainty flag.** Prompt the model to flag low-confidence scores when resume information is sparse or ambiguous. A candidate with a thin resume should return a confidence score alongside their match score, so your pipeline can route those cases to a human reviewer rather than auto-rejecting them.
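These practices can be combined into a single request builder for any OpenAI-compatible chat endpoint. This is a sketch, not a definitive implementation: the model name is a placeholder, `response_format` JSON mode is provider-dependent, and the output keys are the ones recommended above. The sketch only assembles the payload; sending it is left to your HTTP client.

```python
def build_screening_request(job_description: str, resume_text: str,
                            weights: dict) -> dict:
    """Assemble a chat-completions payload asking for structured JSON scoring."""
    # Explicit rubric weights, spelled out for the model (practice #3)
    rubric = ", ".join(f"{k}={int(v * 100)}%" for k, v in weights.items())
    system = (
        "You are a resume screening engine. Score the resume against the job "
        f"description using these weights: {rubric}. Respond ONLY with a JSON "
        "object with keys: score (0-100), matched_skills (array), gaps (array), "
        "confidence (low|medium|high), recommendation (advance|hold)."
    )
    return {
        "model": "gpt-4o-mini",  # placeholder; any OpenAI-compatible model
        "response_format": {"type": "json_object"},  # force structured output
        "messages": [
            {"role": "system", "content": system},
            # Job description travels with every request, never hardcoded (practice #2)
            {"role": "user",
             "content": f"JOB:\n{job_description}\n\nRESUME:\n{resume_text}"},
        ],
    }

req = build_screening_request(
    "Senior Backend Engineer, 5+ years Python/AWS/PostgreSQL",
    "7 years backend Python, AWS, PostgreSQL...",
    {"skills": 0.50, "experience": 0.30, "education": 0.10, "keywords": 0.10},
)
```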
## More example scenarios

**Job:** Backend Engineer, 3-5 years experience, Python, PostgreSQL, REST APIs, AWS required. **Resume:** Candidate has 4 years at a fintech startup, built payment processing microservices in Python and FastAPI, managed PostgreSQL schemas, deployed on AWS ECS. No mention of REST API design patterns explicitly. BS Computer Science.

**Score: 82/100.** Strong match on Python, PostgreSQL, and AWS. FastAPI implies REST competency even without explicit mention. Fintech domain is a plus for this role. Gap: no explicit API design pattern experience documented. Recommend advancing to technical screen.

**Job:** Family NP, active APRN license required, 2 years primary care experience, EMR proficiency. **Resume:** Candidate licensed APRN in Texas, 3 years family practice at a community clinic, documented Epic and Athena use, DEA number active, bilingual English/Spanish.

**Score: 94/100.** All hard requirements met: active APRN license, primary care experience exceeds minimum, verified EMR proficiency. Bilingual ability is an operational advantage for this clinic's patient population. No gaps identified. Priority candidate.

**Job:** Marketing Analyst, 0-2 years, Google Analytics, SQL a plus, content performance reporting. **Resume:** Recent grad, internship at a digital agency running GA4 dashboards for three clients, Excel-based reporting, no SQL listed, completed Google Analytics certification.

**Score: 67/100.** Meets experience ceiling, GA4 proficiency confirmed via certification and real client work. SQL absence is a soft gap, not a disqualifier at this level. Reporting experience is agency-side rather than in-house SaaS, which may require onboarding on product metrics context. Viable candidate for screening call.

**Job:** Senior Financial Analyst, PE firm, LBO modeling required, CFA preferred, 4+ years in finance. **Resume:** 5 years at a middle-market PE fund, built LBO models for 12 transactions, CFA Level 2 candidate, proficient in Excel and Capital IQ, one deal sourcing credit.

**Score: 88/100.** LBO modeling experience is direct and quantified. CFA Level 2 is close enough to preferred credential to not penalize significantly. Capital IQ proficiency aligns with firm tools. Deal sourcing exposure is a differentiator at this level. Minor gap: no mention of portfolio company monitoring work.

**Job:** Warehouse Associate, no degree required, forklift certification preferred, prior logistics or fulfillment experience. **Resume:** 2 years at an Amazon fulfillment center, OSHA 10 certified, no forklift certification listed, reliable attendance record noted by reference.

**Score: 71/100.** Directly relevant fulfillment experience. OSHA 10 is a compliance positive. Forklift certification is absent but trainable on-site if operations allow. Attendance record is a meaningful soft signal in high-churn warehouse roles. Advance if forklift training is available internally.
## Common mistakes to avoid

- **Parsing unstructured PDF text badly.** Feeding raw PDF-extracted text directly into the prompt without cleaning it produces garbage scoring. Column-based resume layouts extracted with basic PDF parsers scramble skill names and job titles. Pre-process with a layout-aware parser or use an API that accepts PDF files natively before scoring.
- **Using one prompt for all roles.** A single generic prompt cannot score a warehouse associate and a CFO candidate with equal accuracy. The model will default to white-collar professional signals regardless of role. Build role-type templates or dynamically inject the job description and scoring weights per request.
- **Ignoring token limits on long resumes.** A 6-page executive resume plus a detailed job description can exceed context limits or degrade output quality as the model loses attention on early content. Chunk long resumes into sections or summarize them before the scoring call, then pass the summary plus the raw skills section.
- **Treating scores as final decisions.** An AI score of 45/100 does not mean reject. It means deprioritize for human review. Routing any score below a threshold to auto-rejection without human oversight creates legal risk and misses edge cases the model cannot handle, like career changers whose resumes underrepresent their actual competence.
- **Not versioning your prompt.** If you change your screening prompt and re-run candidates, scores from the old prompt are not comparable to scores from the new one. Version your prompts and store the prompt version alongside every score in your database so you can audit and retrain fairly.
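The versioning point is cheap to implement: store the prompt version and model name next to every score. A minimal sketch of such a record using only the standard library; the field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ScreeningRecord:
    """One stored screening result; prompt_version keeps scores comparable."""
    candidate_id: str
    score: float
    prompt_version: str  # e.g. "screen-v3"; never overwrite old versions
    model: str
    scored_at: str


def make_record(candidate_id: str, score: float,
                prompt_version: str, model: str) -> dict:
    """Build a dict ready for insertion into a database row or document store."""
    return asdict(ScreeningRecord(
        candidate_id=candidate_id,
        score=score,
        prompt_version=prompt_version,
        model=model,
        scored_at=datetime.now(timezone.utc).isoformat(),
    ))
```

Filtering or re-ranking later can then be restricted to rows with a single `prompt_version`, so scores produced by different prompts are never compared directly.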
## Frequently asked questions

**What is the best API for resume screening?**

The right choice depends on your stack and volume. OpenAI GPT-4o and Anthropic Claude 3.5 Sonnet both handle structured resume scoring well and return reliable JSON output. For high-volume pipelines where cost matters, GPT-4o mini or Gemini Flash offer good accuracy at lower per-token cost. Test on a sample of your actual resumes before committing to one provider.

**Can I use an AI resume screening API for legally compliant hiring?**

AI screening tools are subject to employment discrimination law in the US and EU. The EEOC has issued guidance on automated hiring tools, and New York City Local Law 144 requires bias audits for AI used in hiring. You need to run disparate impact analysis on your scoring outputs and keep a human in the decision loop before rejecting candidates based on AI scores alone.

**How do I parse a PDF resume before sending it to an API?**

Use a library like PyMuPDF, pdfplumber, or Apache Tika to extract text from PDFs before passing content to your language model. For better layout handling on complex resumes, consider a dedicated resume parsing API like Affinda or Sovren that returns structured JSON, which you can then feed into your scoring prompt as clean key-value pairs.

**What does a resume screening API response typically look like?**

A well-structured response includes a numeric match score, a list of matched required skills, a list of identified gaps, years of experience extracted, and a recommendation string such as 'advance' or 'hold'. Some implementations also include a confidence level and a plain-language summary. Design your prompt to return a JSON object with these fields so output is consistent across every candidate.

**How accurate is AI resume screening compared to human reviewers?**

For hard-skill matching, AI screening is highly consistent and does not suffer from the fatigue or bias drift that affects humans reviewing resume number 200 versus resume number 5. Where AI underperforms humans is on soft signals: unusual career paths, unconventional resume formats, and domain-specific jargon that was not in the training data. Use AI to filter and rank, then use humans to evaluate the shortlist.

**How much does it cost to run resume screening at scale via API?**

At current pricing, screening a resume with GPT-4o mini costs roughly 0.2 to 0.5 cents per candidate depending on resume length and job description size. Screening 10,000 resumes per month would cost approximately 20 to 50 dollars in token costs. GPT-4o and Claude Sonnet cost 5 to 10 times more but produce higher-quality structured output for complex roles where that accuracy matters.