ContentQuo Evaluate MT

Evaluate MT enables scalable human evaluation of machine translation using Adequacy-Fluency, MQM, and post-editing analysis.

Vendor

ContentQuo

Company Website

Product details

Manage your human evaluations of Machine Translation quality effortlessly at any scale

ContentQuo Evaluate MT is a cloud-based platform for evaluating machine translation quality through structured human assessments. It supports Adequacy-Fluency scoring, MQM error annotation, and post-editing analysis, helping localization teams select the best MT engines, optimize training, and reduce evaluation overhead. Designed for flexibility, it integrates with any TMS and supports unlimited users.

  • Perform quick human assessment with Adequacy-Fluency
  • Deeply analyze MT output with MQM error annotation
  • Run post-editing “lab tests” for Edit Distance

Features

  • **Quick eval with Adequacy-Fluency:** the Rating Scale approach is the fastest and cheapest way to measure how humans perceive raw MT quality
  • **Detailed eval with Error Annotation (MQM):** the Error Typology approach is the best way to analyze in depth the mistakes an MT engine makes and get specific ideas for retraining
  • **Post-Editing analysis with Edit Distance:** perform a “mock post-edit” online, or import PEMT jobs from your Translation Management System, to calculate TER
  • **Central database of MT quality scores:** collect and manage all results of your human quality evaluations of Machine Translation engines in one place
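To make the Edit Distance feature concrete, here is a minimal sketch of the idea behind TER-style post-editing analysis: count the word-level edits needed to turn raw MT output into its post-edited version, normalized by the length of the post-edit. This is a simplified approximation (real TER also counts block "shifts"), and the function names are illustrative, not ContentQuo's API.

```python
def word_edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Levenshtein distance over word tokens (insert/delete/substitute)."""
    m, n = len(hyp), len(ref)
    dp = list(range(n + 1))  # single-row dynamic-programming table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # delete a hypothesis word
                        dp[j - 1] + 1,  # insert a reference word
                        prev + cost)    # substitute (or match)
            prev = cur
    return dp[n]

def simple_ter(mt_output: str, post_edited: str) -> float:
    """Edits between raw MT and its post-edit, per post-edited word."""
    hyp, ref = mt_output.split(), post_edited.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

mt = "the cat sat in the mat"
pe = "the cat sat on the mat"
print(round(simple_ter(mt, pe), 3))  # one substitution over six words -> 0.167
```

A lower score means the post-editor changed less, i.e. the raw MT was closer to publishable quality; comparing these scores across engines is one way to rank them.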