ContentQuo Evaluate MT
Evaluate MT enables scalable human evaluation of machine translation using Adequacy-Fluency, MQM, and post-editing analysis.
Vendor
ContentQuo
Product details
Manage your human evaluations of Machine Translation quality effortlessly at any scale
ContentQuo Evaluate MT is a cloud-based platform for evaluating machine translation quality through structured human assessments. It supports Adequacy-Fluency scoring, MQM error annotation, and post-editing analysis, helping localization teams select the best MT engines, optimize training, and reduce evaluation overhead. Designed for flexibility, it integrates with any TMS and supports unlimited users.
- Perform quick human assessment with Adequacy-Fluency
- Deeply analyze MT output with MQM error annotation
- Run post-editing “lab tests” for Edit Distance
Features
- **Quick eval with Adequacy-Fluency:** The rating-scale approach is the fastest and cheapest way to measure how humans perceive raw MT quality
- **Detailed eval with Error Annotation (MQM):** The error-typology approach is the best way to analyze an MT engine's mistakes in depth and get specific ideas for retraining
- **Post-editing analysis with Edit Distance:** Perform a “mock post-edit” online or import PEMT jobs from your Translation Management System to calculate TER
- **Central database of MT quality scores:** Collect and manage the results of all your human quality evaluations of Machine Translation engines in one place
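The Edit Distance metric mentioned above can be illustrated with a minimal sketch. TER-style scoring is commonly computed as the number of word-level edits needed to turn raw MT output into its post-edited version, divided by the length of the post-edited reference. This is not ContentQuo's implementation, just an illustration of the idea; function names are made up for the example, and the shift operation used in full TER is omitted for brevity:

```python
def word_edit_distance(hyp: str, ref: str) -> int:
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    h, r = hyp.split(), ref.split()
    # Single-row dynamic-programming table over the reference words.
    dp = list(range(len(r) + 1))
    for i in range(1, len(h) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(r) + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                      # delete h[i-1]
                dp[j - 1] + 1,                  # insert r[j-1]
                prev + (h[i - 1] != r[j - 1]),  # substitute (or match)
            )
            prev = cur
    return dp[-1]

def ter_score(mt_output: str, post_edited: str) -> float:
    """Edits normalized by the post-edited (reference) length; lower is better."""
    edits = word_edit_distance(mt_output, post_edited)
    return edits / max(len(post_edited.split()), 1)
```

For example, `ter_score("the cat sat on mat", "the cat sat on the mat")` yields 1/6: one inserted word over a six-word post-edited reference.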