Interested in Contributing?
- Check out the available resources.
- Create an account and start submitting your own systems.
Scored Systems
BLEU, BLEU-cased, and BEER 2.0 are higher-is-better scores; TER and CharacTER are lower-is-better error rates. "failed" marks a metric run that did not complete. A scoring sketch follows the table.

| System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
|---|---|---|---|---|---|---|---|---|---|
| sharpL (Details) | sharp sharp | | yes | trained on data from wmt2019 | 50.3 | 49.7 | 0.398 | 0.704 | 0.356 |
| Microsoft-Marian (Details) | marcinjd Microsoft | Marian, Transformer-big ensemble x4, with filtered, clean, and domain-weighted ParaCrawl; domain-weighting also applied to the original parallel data. Decoder-time ensemble with in-domain Transformer-LM. Right-to-left scoring with Transformer-big models. | yes | | 48.9 | 48.3 | 0.407 | 0.697 | 0.362 |
| NMT-SMT Hybrid (Details) | fstahlberg University of Cambridge | MBR-based combination of neural models and SMT | yes | | 47.1 | 46.6 | 0.415 | 0.691 | 0.369 |
| NTT Transformer-based System (Details) | makoto-mr NTT | Based on the Transformer Big model. Trained with filtered versions of CommonCrawl and ParaCrawl plus a synthetic corpus from newscrawl2017. R2L reranking. | yes | | 47.0 | 46.5 | 0.426 | 0.688 | 0.370 |
| KIT Primary Submission (Details) | pianist Karlsruhe Institute of Technology | Primary submission | yes | | 46.9 | 46.3 | 0.428 | 0.687 | 0.382 |
| MMT production system (Details) | nicolabertoldi MMT srl | Transformer-based neural MT; single model; single-pass decoding. | no | Trained on public and proprietary data. | 46.7 | 46.2 | 0.432 | 0.682 | 0.387 |
| Facebook-FAIR (Details) | edunov Facebook FAIR | Ensemble of six self-attentional models with back-translation data, following https://arxiv.org/abs/1808.09381 | yes | | 46.5 | 46.1 | 0.423 | 0.689 | 0.381 |
| Ubiqus-NMT (Details) | vince62s Ubiqus | Base Transformer; includes a selection of ParaCrawl and Rico's WMT16 back-translated data. | yes | | 46.1 | 45.6 | 0.422 | 0.685 | 0.383 |
| Contrastive (Single) (Details) | pianist Karlsruhe Institute of Technology | Single Transformer (Base) model | yes | | 45.7 | 45.1 | 0.435 | 0.682 | 0.389 |
| uedin-en-de-single-transfomer (Details) | ugermann University of Edinburgh | Single Transformer trained on WMT2017 data plus a selection of ParaCrawl. | yes | | 44.9 | 44.4 | 0.441 | 0.676 | 0.391 |
| MMT contrastive 2 (Details) | nicolabertoldi MMT srl | Transformer-based neural MT; single model, single-pass decoding. | yes | Trained on a filtered version of the supplied data. | 44.6 | 44.2 | failed | 0.673 | 0.399 |
| JHU (Details) | jhu-nmt Johns Hopkins University | Marian Deep RNN | yes | Contrastive run, fine-tuned to previous test sets (but no R2L reranking). | 43.9 | 43.4 | 0.448 | 0.672 | 0.391 |
| JHU (Details) | jhu-nmt Johns Hopkins University | Marian Deep RNN | yes | Marian deep model, ensemble of 4 runs using base data (without ParaCrawl), re-back-translated news 2016. R2L reranking. Primary. | 44.0 | 43.4 | 0.449 | 0.672 | 0.392 |
| uedin-en-de-single-transformer-reranked (Details) | ugermann University of Edinburgh | Single Transformer, reranked with two R2L Transformers. | yes | | 43.8 | 43.2 | 0.450 | 0.669 | 0.397 |
| JHU (Details) | jhu-nmt Johns Hopkins University | Marian Deep RNN | yes | Marian deep model, ensemble of 4 runs using base data (without ParaCrawl), re-back-translated news 2016. Not the final system yet. | 43.6 | 43.0 | 0.453 | 0.670 | 0.394 |
| MMT contrastive (Details) | nicolabertoldi MMT srl | Transformer-based neural MT; single model, single-pass decoding. | yes | Trained on a filtered version of the supplied data. German de-compounding applied. | 42.9 | 42.5 | 0.463 | 0.667 | 0.411 |
| uedin-en-de-2+2-transformer (Details) | ugermann University of Edinburgh | Two Transformers ensembled, reranked with two R2L systems. Includes ParaCrawl. | yes | | 42.3 | 41.8 | 0.463 | 0.663 | 0.405 |
| LMU-nmt-reranked-wmt18-en-de (Details) | Matthias.Huck LMU Munich | Nematus encoder-decoder NMT (single model + R2L reranking), like last year | yes | | 40.6 | 40.0 | 0.480 | 0.655 | 0.421 |
| NJUNMT (Details) | ZhaoChengqi Nanjing University | Transformer base without back-translation | yes | Transformer base without back-translation | 40.6 | 40.0 | 0.496 | 0.647 | 0.436 |
| LMU-nmt-single-wmt18-en-de (Details) | Matthias.Huck LMU Munich | Nematus encoder-decoder NMT (single model), like last year | yes | | 39.3 | 38.8 | 0.492 | 0.647 | 0.433 |
| parfda (Details) | bicici | | yes | en-de using PRO for tuning | 27.3 | 26.7 | 0.620 | 0.591 | 0.570 |
| Wink (Details) | anything uni saarland | Out-of-domain data | no | | 22.3 | 21.9 | 0.706 | 0.540 | 0.741 |
| LMU-unsupervised-nmt-wmt18-en-de (Details) | Matthias.Huck LMU Munich | Unsupervised NMT (no parallel training corpora) | yes | | 15.8 | 15.5 | 0.762 | 0.500 | failed |
| RWTH Unsupervised NMT Ensemble (Details) | yunsukim RWTH Aachen University | (Unsupervised) Transformer with shared encoder/decoder, separate top-50k word vocabularies, iterative back-translation, ensemble x4 | yes | | 15.9 | 14.8 | 0.753 | 0.514 | 0.607 |
| RWTH Unsupervised NMT Single (Details) | yunsukim RWTH Aachen University | (Unsupervised) Transformer with shared encoder/decoder, separate top-50k word vocabularies, iterative back-translation | yes | | 15.6 | 14.5 | 0.758 | 0.510 | 0.615 |
| LMU-unsupervised-pbt-wmt18-en-de (Details) | Matthias.Huck LMU Munich | Unsupervised (no parallel training corpora), BWEs + PBT | yes | | 14.6 | 14.3 | 0.791 | 0.518 | 0.627 |
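The BLEU, BLEU-cased, and TER columns can be reproduced with standard scoring tools. Below is a minimal sketch using the sacreBLEU Python library, assuming line-aligned plain-text hypothesis and reference files (`system-output.de` and `reference.de` are hypothetical names); the matrix's own evaluation pipeline may differ in tokenization and normalization details. BEER 2.0 and CharacTER come from their own standalone scorers and are not covered here.

```python
# Minimal scoring sketch with sacreBLEU (pip install sacrebleu).
# File names are hypothetical; tokenization and normalization details
# may differ from the pipeline used to produce the table above.
import sacrebleu

# One segment per line, hypotheses and references in the same order.
with open("system-output.de", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("reference.de", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

# Case-sensitive BLEU corresponds to the "BLEU-cased" column;
# lowercasing both sides gives the uncased "BLEU" column.
bleu_cased = sacrebleu.corpus_bleu(hypotheses, [references])
bleu = sacrebleu.corpus_bleu(hypotheses, [references], lowercase=True)

# TER is an error rate, so lower is better. Scale conventions vary
# across tools (the table shows 0-1 fractions), so check your tool's
# convention before comparing against the table.
ter = sacrebleu.corpus_ter(hypotheses, [references])

print(f"BLEU        = {bleu.score:.1f}")
print(f"BLEU-cased  = {bleu_cased.score:.1f}")
print(f"TER         = {ter.score:.1f}")
```

With sacreBLEU's defaults the BLEU numbers use the metric's standard tokenization, so small differences from the table are expected.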