enow.com Web Search

Search results

  2. Evaluation of machine translation - Wikipedia

    en.wikipedia.org/wiki/Evaluation_of_machine...

    Several survey works on machine translation evaluation [17] [18] [19] describe in more detail which human evaluation methods were used and how they work, such as intelligibility, fidelity, fluency, adequacy, comprehension, and informativeness. For automatic evaluation, they also give some clear ...

  3. BLEU - Wikipedia

    en.wikipedia.org/wiki/BLEU

    BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU.
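The central idea above can be sketched as a minimal sentence-level BLEU. This is a simplified illustration assuming a single reference translation; it uses the standard clipped (modified) n-gram precisions and brevity penalty, whereas BLEU proper is usually computed at the corpus level over multiple references:

```python
from collections import Counter
import math

def modified_ngram_precision(candidate, reference, n):
    # Clipped n-gram precision: each candidate n-gram count is clipped
    # to the count of that n-gram in the reference.
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def bleu(candidate, reference, max_n=4):
    # Geometric mean of 1..max_n clipped precisions, times a brevity
    # penalty that punishes candidates shorter than the reference.
    precisions = [modified_ngram_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    c, r = len(candidate), len(reference)
    brevity_penalty = 1.0 if c > r else math.exp(1 - r / c)
    return brevity_penalty * math.exp(log_avg)
```

A candidate identical to the reference scores 1.0, and the score drops toward 0 as the n-gram overlap with the reference shrinks.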

  4. Category:Evaluation of machine translation - Wikipedia

    en.wikipedia.org/wiki/Category:Evaluation_of...

    Pages in category "Evaluation of machine translation": The following 11 pages are in this category, out of 11 total. This list may not reflect recent changes. ...

  5. Machine translation - Wikipedia

    en.wikipedia.org/wiki/Machine_translation

    Machine translation is the use of computational techniques to translate text or speech from one language to another, ... Even though human evaluation is time-consuming, ...

  6. Machine translation software usability - Wikipedia

    en.wikipedia.org/wiki/Machine_translation...

    Annual machine translation system evaluations and evaluation plan. Papineni, Kishore, Salim Roukos, Todd Ward and Wei-Jing Zhu. (2002) BLEU: A Method for automatic evaluation of machine translation.

  7. METEOR - Wikipedia

    en.wikipedia.org/wiki/METEOR

    METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision.
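A minimal sketch of that recall-weighted harmonic mean, assuming exact surface-form unigram matching; full METEOR also matches stems and synonyms and applies a fragmentation penalty for word order, which is omitted here:

```python
from collections import Counter

def unigram_matches(candidate, reference):
    # Exact-match unigram overlap between candidate and reference.
    cand, ref = Counter(candidate), Counter(reference)
    return sum(min(count, ref[word]) for word, count in cand.items())

def meteor_fmean(candidate, reference):
    # Harmonic mean of unigram precision P and recall R, with recall
    # weighted 9 times higher than precision: F = 10PR / (R + 9P).
    m = unigram_matches(candidate, reference)
    if m == 0:
        return 0.0
    p = m / len(candidate)
    r = m / len(reference)
    return 10 * p * r / (r + 9 * p)
```

Because recall dominates the mean, a candidate that covers most of the reference words scores well even if it also contains extra words, which matches METEOR's design goal of correlating better with human judgments than pure precision metrics.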

  8. Comparison of different machine translation approaches

    en.wikipedia.org/wiki/Comparison_of_different...

    A rendition of the Vauquois triangle, illustrating the various approaches to the design of machine translation systems. The direct, transfer-based, and interlingual methods of machine translation all belong to RBMT but differ in the depth of analysis of the source language and the extent to which they attempt to reach a language-independent ...

  9. LEPOR - Wikipedia

    en.wikipedia.org/wiki/LEPOR

    LEPOR [4] is designed with the factors of enhanced length penalty, precision, n-gram word order penalty, and recall. The enhanced length penalty ensures that the hypothesis translation, which is usually produced by machine translation systems, is penalized if it is longer or shorter than the reference translation.
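The enhanced length penalty described above can be sketched as follows. This follows the published LEPOR formulation: the penalty is 1 when hypothesis and reference lengths match, and decays exponentially as they diverge; the remaining factors (precision, recall, and the n-gram word order penalty) are combined with it separately and are not shown here:

```python
import math

def enhanced_length_penalty(cand_len, ref_len):
    # 1.0 when lengths match; otherwise an exponential decay driven by
    # the ratio of reference length to candidate length (or vice versa),
    # so both over- and under-length hypotheses are penalized.
    if cand_len == ref_len:
        return 1.0
    if cand_len < ref_len:
        return math.exp(1 - ref_len / cand_len)
    return math.exp(1 - cand_len / ref_len)
```

For example, a hypothesis half or double the reference length receives the same penalty, exp(-1), roughly 0.368 of the unpenalized score.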
