Scored Systems
System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
---|---|---|---|---|---|---|---|---|---|
uedin-nmt-ensemble | rsennrich (University of Edinburgh) | BPE neural MT system with monolingual training data (back-translated); ensemble of 4 L2R and 4 R2L models | yes | | 28.9 | 28.3 | 0.612 | 0.592 | 0.518 |
UniMelb-NMT-Transformer-BT | vhoang (The University of Melbourne, Australia) | NMT with Transformer architecture (medium size: 4 heads, 4 encoder/decoder layers), enhanced with WMT'16 back-translation data; decoding with the single best system | yes | | 27.8 | 27.3 | 0.612 | 0.585 | 0.540 |
en_de_transformer_test | hpulfc (hpulfc) | Transformer base system | yes | | 27.7 | 27.2 | 0.627 | 0.583 | 0.538 |
en_de_transformer_test | hpulfc (hpulfc) | Transformer base system | yes | | 27.7 | 27.1 | 0.626 | 0.584 | 0.540 |
LMU-nmt-reranked-wmt17-en-de | Matthias.Huck (LMU Munich) | NMT, single model plus R2L reranking, linguistically motivated target word segmentation | yes | | 27.9 | 27.1 | 0.618 | 0.583 | 0.538 |
SYSTRAN-single | jmcrego (SYSTRAN) | OpenNMT + BPE + back-translated monolingual data + hyper-specialization | yes | | 28.0 | 26.7 | 0.611 | 0.580 | 0.542 |
xmu-ensemble | Zhixing Tan (Xiamen University) | Ensemble of 4 models + BPE + back-translation | yes | | 27.2 | 26.7 | 0.622 | 0.580 | 0.540 |
lium-nmt-backtrans-ensemble-ftuned | ozancaglayan (LIUM, Le Mans University) | Ensemble of 2 back-translation-augmented, fine-tuned NMT models trained with nmtpy | yes | | 27.2 | 26.6 | 0.633 | 0.581 | 0.544 |
LMU-nmt-single-wmt17-en-de | Matthias.Huck (LMU Munich) | NMT, single model (contrastive), linguistically motivated target word segmentation | yes | | 27.3 | 26.6 | 0.626 | 0.579 | 0.544 |
uedin-nmt-single | rsennrich (University of Edinburgh) | BPE neural MT system with monolingual training data (back-translated); single model (contrastive) | yes | | 27.2 | 26.6 | 0.627 | 0.582 | 0.539 |
fbk-nmt-combination | Mattia Di Gangi (AppTek) | OpenNMT + BPE + back-translations + system combination | yes | | 26.9 | 26.3 | 0.636 | 0.574 | 0.592 |
KIT-cG-mA-single | thanhleha (KIT) | | yes | | 26.9 | 26.3 | 0.631 | 0.578 | 0.547 |
xmu-single-backtrans | Zhixing Tan (Xiamen University) | Single model on preprocessed data + BPE + back-translation (contrastive) | yes | | 26.7 | 26.1 | 0.628 | 0.576 | 0.544 |
KIT primary | eunah.cho (KIT) | NMT, BPE, rescoring using five models | yes | | 26.7 | 26.1 | 0.641 | 0.577 | 0.546 |
RWTH NMT | jtp (RWTH Aachen University) | Ensemble of 3 models using back-translated data and BPE | yes | | 26.8 | 26.0 | 0.628 | failed | failed |
KIT correct BT single | thanhleha (KIT) | | yes | | 26.5 | 25.9 | 0.628 | 0.577 | 0.568 |
xmu-single | Zhixing Tan (Xiamen University) | Single model on preprocessed data + BPE (contrastive) | yes | | 26.3 | 25.7 | 0.639 | 0.570 | 0.562 |
uedin-nmt-2016 | rsennrich (University of Edinburgh) | Single system of WMT16 (uedin-nmt-single); contrastive | yes | | 25.5 | 24.9 | 0.649 | 0.571 | 0.559 |
fbk-nmt-single | Mattia Di Gangi (AppTek) | OpenNMT + BPE + back-translations (contrastive) | yes | | 25.3 | 24.8 | 0.644 | 0.569 | 0.576 |
C-3MA | mphi (University of Tartu) | Nematus + filtered monolingual back-translated data + NE forcing + n-gram deduplication | yes | NeuralMonkey + filtered monolingual back-translated data + NE forcing + n-gram deduplication | 23.2 | 22.7 | 0.669 | 0.553 | 0.598 |
Moses Phrase-Based, word clusters | jhu-smt (Johns Hopkins University) | Moses with word clusters in the OSM and LM; preliminary run (full run may not finish) | yes | Moses phrase-based, word-cluster LM | 22.2 | 21.6 | 0.709 | 0.557 | 0.596 |
TALP-UPC | cescolano (TALP-UPC) | Character-to-character NMT system with additional monolingual training data (back-translated) and rescoring with the inverse language pair model | yes | Character-to-character NMT system with extra corpus and rescoring with the inverse language pair model | 21.9 | 21.2 | 0.685 | 0.548 | 0.587 |
BaseNematusEnDe | m4t1ss (Tilde) | | yes | This should be right | 21.4 | 21.0 | 0.706 | 0.544 | 0.599 |
ParFDA | bicici | en-de ParFDA Moses phrase-based SMT system | yes | en-de (after the deadline) | 19.1 | 18.5 | 0.749 | 0.533 | 0.647 |
PROMT Rule-based | Alex Molchanov (PROMT LLC) | | no | | 17.0 | 16.6 | 0.752 | 0.527 | 0.602 |
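
How to read the scores: BLEU, BLEU-cased, and BEER 2.0 are similarity scores (higher is better), while TER and CharacTER are edit-rate metrics (lower is better). As a rough illustration only, not the matrix's official scoring pipeline, the sketch below shows how the two BLEU columns can be approximated with the sacrebleu package, which reimplements the WMT mteval-v13a tokenization; the file names `hypotheses.txt` and `reference.txt` are placeholders.

```python
# Minimal sketch of the BLEU / BLEU-cased columns using sacrebleu.
# Assumes sacrebleu is installed; file names are placeholders.
import sacrebleu

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

hyps = read_lines("hypotheses.txt")  # one system output per line
refs = read_lines("reference.txt")   # one reference translation per line

# BLEU-cased: case-sensitive scoring (sacrebleu's default).
bleu_cased = sacrebleu.corpus_bleu(hyps, [refs])

# BLEU: case-insensitive scoring, as in the uncased column.
bleu = sacrebleu.corpus_bleu(hyps, [refs], lowercase=True)

print(f"BLEU: {bleu.score:.1f}   BLEU-cased: {bleu_cased.score:.1f}")
```

TER, BEER 2.0, and CharacTER each have their own reference implementations and are not reproduced here.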