Quality Metrics
The relative quality of translations is hard to assess, especially when texts are translated into many target languages that the people seeking quality metrics do not themselves read.
There are a number of ways to measure the quality of translators and translations; which methods are appropriate depends on the type of publication and the user community.
- Editorial or top-down reputation tracking, where "superusers" supervise or police system activity, watching and assessing the quality of work contributed by users.
- User-provided reputation tracking, where large numbers of users vote on documents, translations, and translators. This is a form of distributed peer review that can generate a large amount of statistically useful data (see the sketch after this list).
- Self-assessed reputations, where people provide metrics about their own abilities or about the quality of their work.
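
To make the user-voting approach concrete, here is a minimal sketch of aggregating votes into per-translator reputations. The vote log, the 1-5 scale, and the Bayesian prior are illustrative assumptions, not part of any particular system described above.

```python
from collections import defaultdict

# Hypothetical vote log: (translator_id, score on an assumed 1-5 scale).
votes = [
    ("alice", 5), ("alice", 4), ("alice", 5),
    ("bob", 2), ("bob", 1),
    ("carol", 5),  # a single vote carries little statistical weight
]

PRIOR_MEAN = 3.0   # assumed neutral starting reputation
PRIOR_WEIGHT = 5   # how many "virtual votes" the prior is worth

def reputations(votes):
    """Bayesian-averaged reputation per translator.

    Pulling sparse vote counts toward the prior keeps one enthusiastic
    (or hostile) voter from dominating a translator's score.
    """
    by_translator = defaultdict(list)
    for translator, score in votes:
        by_translator[translator].append(score)
    return {
        t: (PRIOR_MEAN * PRIOR_WEIGHT + sum(s)) / (PRIOR_WEIGHT + len(s))
        for t, s in by_translator.items()
    }

print(reputations(votes))
# carol's single 5 lands near the neutral prior; alice's three
# consistently high votes move her score noticeably further.
```

The Bayesian average is one design choice among many; it matters here because peer-review data is sparse for new translators, and a raw mean would let a single vote set their entire reputation.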
There is no single solution to this issue: each method has its strengths and weaknesses, and a combination is usually best. The results are used to
- learn which translators consistently submit good or bad work.
- generate rules for allowing or rejecting translations from specific users or user populations (see the sketch after this list).
- learn where users are coming from and which languages and topics they are interested in.
- identify and deal with suspicious or malicious behaviour, such as bad-faith edits or robotic scoring.
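
As a sketch of the rule-generation point, reputation scores can be turned into a simple accept/review/reject policy. The thresholds, the hold-for-review tier, and the fallback for unknown users are all illustrative assumptions.

```python
def triage(translator, reputation, accept_at=4.0, reject_at=2.0):
    """Route a submitted translation based on the submitter's reputation."""
    score = reputation.get(translator, 3.0)  # unknown users get a neutral prior
    if score >= accept_at:
        return "publish"        # consistently good work goes straight through
    if score <= reject_at:
        return "reject"         # consistently bad or malicious work is blocked
    return "hold-for-review"    # everything else waits for a superuser

reps = {"alice": 4.6, "bob": 1.8}
for user in ("alice", "bob", "dave"):
    print(user, "->", triage(user, reps))
# alice -> publish, bob -> reject, dave (unknown) -> hold-for-review
```

Note how the middle tier routes borderline work to the editorial layer, combining the bottom-up voting data with top-down supervision as suggested above.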