Data Labeling Pricing Guide 2026: How Much Does Annotation Cost?
Pricing Models: Per-Image vs Per-Hour vs Per-Annotation
Data labeling vendors use three main pricing models. Each has trade-offs:
| Model | Best For | Risk |
|---|---|---|
| Per-image | Consistent complexity (e.g., all street scenes with similar object count) | Vendor may rush complex images or you overpay for simple ones |
| Per-hour | Variable complexity, new annotation types, exploratory projects | Less predictable total cost — but you only pay for actual work |
| Per-annotation | Simple tasks with known object counts (e.g., exactly 5 bounding boxes per image) | Edge cases and difficult images get the same price as easy ones |
Our recommendation: Start with per-hour pricing on your first project. It's the most transparent — you see exactly how long tasks take and can estimate future costs. Switch to per-image once you have baseline time-per-image data.
Real Pricing by Annotation Type
These ranges are based on real production projects with professional annotation teams — not crowdsourced platforms where quality varies widely.
| Annotation Type | Price Range | What Drives Cost |
|---|---|---|
| Bounding boxes | $0.02–0.10 per box | Number of objects per image, occlusion, classification complexity |
| Image classification | $0.01–0.05 per image | Number of categories, ambiguity between classes |
| Polygon / instance segmentation | $0.20–1.50 per object | Object shape complexity, number of vertices, overlapping objects |
| Semantic segmentation (pixel-level) | $0.50–3.00 per image | Number of classes, image resolution, required precision |
| Video annotation (per-frame tracking) | $0.03–0.15 per keyframe | Keyframe frequency, number of tracked objects; interpolation between keyframes lowers cost |
| Multi-attribute classification | $0.05–0.15 per object | Number of attributes (age, gender, clothing, etc.) |
What Makes Annotation Expensive (or Cheap)
Factors that increase cost
- Dense scenes — 50+ objects per image vs 5 objects per image can mean 10x the annotation time
- Ambiguous edge cases — "is this a truck or a van?" requires guidelines, discussion, and sometimes multiple review rounds
- Pixel-level precision — semantic segmentation costs 5-10x more than bounding boxes on the same image
- Multiple annotation types — bounding box + classification + attributes on the same image compounds the work
- Small batches — 100 images cost more per image than 10,000 do, because setup and guideline development overhead is spread across fewer units
Factors that decrease cost
- Consistent image type — same camera angle, same objects, same scene type = faster annotation
- Good annotation guidelines — clear, visual instructions with edge case examples reduce rework by 20-40%
- Pre-labeling — using model predictions as a starting point, with human review and correction
- Volume — larger batches amortize setup costs and let annotators build speed through repetition
- Ongoing partnership — annotators who know your domain get faster over time without losing quality
Not sure which annotation type you need? Read our Semantic vs Instance Segmentation guide — choosing the right method before you request quotes can cut your annotation budget by a factor of 2-5.
The Pilot Batch: How to Test Before You Commit
Never sign a large contract without a pilot batch first. Here's the standard approach:
- Prepare 100-500 representative images — include your hardest cases, not just easy ones
- Write annotation guidelines — or ask your vendor to help draft them
- Run the pilot — typically takes 3-7 days
- Review quality — check edge cases specifically, not random samples
- Measure time per image — this gives you cost predictability for production batches
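The last step above is what makes a pilot financially useful: measured pilot throughput turns a production quote into simple arithmetic. A minimal sketch of that extrapolation, assuming per-hour pricing (the function name, hourly rate, and all numbers are hypothetical examples, not real quotes):

```python
# Sketch: extrapolate a production budget from pilot-batch timing.
# All inputs are hypothetical; plug in your own pilot measurements.

def production_estimate(pilot_images, pilot_hours, hourly_rate,
                        production_images, buffer=0.25):
    """Project a production budget from measured pilot throughput."""
    hours_per_image = pilot_hours / pilot_images
    base_cost = production_images * hours_per_image * hourly_rate
    return base_cost * (1 + buffer)  # buffer for revisions and edge cases

# Example: a 200-image pilot took 25 annotator-hours at $8/hour,
# and production is 10,000 images.
estimate = production_estimate(200, 25, 8.0, 10_000)
print(f"${estimate:,.0f}")  # 0.125 h/image -> $10,000 base -> $12,500 with buffer
```

The buffer mirrors the 20-30% margin recommended in the budgeting section below: pilot batches tend to be cleaner than production data, so unbuffered extrapolation usually underestimates.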
Red flag: If a vendor won't do a pilot batch, or insists on a large minimum commitment before you've seen their work, walk away. Any team confident in its quality will let you test first.
Want a real estimate for your project? Send us a sample of 10-20 images and your annotation requirements — we'll give you a detailed quote within 24 hours. Book a free call or email us directly.
Hidden Costs to Watch For
- Revision rounds — some vendors charge extra for corrections. Others include 1-2 rounds in the base price. Ask upfront.
- Guideline development — writing clear annotation instructions takes time. Some vendors help with this, some expect you to deliver perfect guidelines day one.
- Format conversion — if your vendor delivers in CVAT format but your pipeline needs COCO, who converts? This should be included.
- Project management — dedicated PM vs ticket system makes a big difference in communication speed and quality feedback loops.
- Quality rework — the cheapest per-unit price means nothing if 30% of annotations need fixing. Calculate cost per usable annotation, not cost per annotation.
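The last point is worth making concrete: dividing the unit price by the fraction of annotations that are actually usable often reverses a vendor comparison. A small sketch with hypothetical prices and rework rates:

```python
# Sketch: compare vendors on cost per usable annotation, not unit price.
# Prices and rework rates below are hypothetical examples.

def cost_per_usable(unit_price, usable_fraction):
    """Effective price once unusable annotations are discounted."""
    return unit_price / usable_fraction

cheap = cost_per_usable(0.03, 0.70)   # $0.03/box, but 30% need fixing
solid = cost_per_usable(0.04, 0.98)   # $0.04/box, only 2% need fixing
print(f"cheap vendor: ${cheap:.4f}, solid vendor: ${solid:.4f}")
# The 'cheap' vendor ends up more expensive per usable annotation.
```

This ignores the cost of the review cycles needed to find and fix the bad 30%, so in practice the gap is even wider.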
How to Budget: A Quick Formula
For planning purposes:
- Estimate your total images/frames
- Determine the annotation type (bbox, polygon, segmentation)
- Estimate objects per image (average)
- Multiply: images × objects × per-annotation cost
- Add 20-30% buffer for revisions, edge cases, and guideline iterations
Example: 5,000 images × 8 objects/image × $0.05/bbox = $2,000 base. With 25% buffer = ~$2,500 total budget. This gives your CFO a number while leaving room for reality.
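The formula above is trivial to script, which makes it easy to rerun as your object-count or price estimates change. A minimal sketch using the worked example from the text (the function name is ours; swap in your own inputs):

```python
# Sketch of the budgeting formula: images x objects x price, plus buffer.

def annotation_budget(images, objects_per_image, price_per_annotation,
                      buffer=0.25):
    """Return (base cost, buffered total) for a labeling project."""
    base = images * objects_per_image * price_per_annotation
    return base, base * (1 + buffer)

# Example from the text: 5,000 images, 8 boxes/image, $0.05/box, 25% buffer.
base, total = annotation_budget(5_000, 8, 0.05)
print(f"base ${base:,.0f}, with buffer ${total:,.0f}")  # base $2,000, with buffer $2,500
```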
Bottom Line
Data labeling is not a commodity. The cheapest option almost never delivers the best cost-per-usable-label. Focus on:
- Getting a pilot batch before committing
- Measuring time per image to predict production costs
- Working with a dedicated team that learns your domain
- Calculating total cost including rework, not just unit price