Scored Systems

BLEU, BLEU-cased, and BEER 2.0: higher is better. TER and CharacTER: edit rates (reported as fractions), lower is better. Recurring techniques from the notes column (ensembling, R2L reranking, MBR combination, iterative back-translation) are sketched in code after the table.

| System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
|---|---|---|---|---|---|---|---|---|---|
| RWTH Aachen ensemble of Transformer models | jschamper (RWTH Aachen) | Ensemble of the 3 strongest Transformer models; contains Edinburgh's back-translation from WMT16; contains a filtered version of ParaCrawl (18M sentences); contains back-translation of news.2017.en.shuffled (13M) | yes | | 49.9 | 48.4 | 0.381 | 0.695 | 0.369 |
| NMT-SMT Hybrid | fstahlberg (University of Cambridge) | MBR-based combination of neural models and SMT | yes | Fix quotes | 49.3 | 48.0 | 0.385 | 0.692 | 0.374 |
| RWTH Aachen Transformer model (single) | jschamper (RWTH Aachen) | Trained on 4 GPUs; num-embed: 1024; num-layers: 6; attention-heads: 16; transformer-feed-forward-num-hidden: 4096; transformer-model-size: 1024; 50k joint BPE; contains Edinburgh's back-translation from WMT16; contains a filtered version of ParaCrawl (18M sentences); contains back-translation of news.2017.en.shuffled (13M) | yes | | 49.1 | 47.6 | 0.388 | 0.691 | 0.376 |
| NTT Transformer-based System | makoto-mr (NTT) | Based on the Transformer Big model. Trained with filtered versions of CommonCrawl and ParaCrawl and a synthetic corpus from newscrawl2017. R2L reranking. | yes | | 48.2 | 46.8 | 0.398 | 0.687 | 0.379 |
| JHU | jhu-nmt (Johns Hopkins University) | Marian Deep RNN | yes | Contrastive run, fine-tuned to previous test sets (but no R2L reranking) | 46.6 | 45.3 | 0.409 | 0.679 | 0.392 |
| JHU | jhu-nmt (Johns Hopkins University) | Marian Deep RNN | yes | Marian deep model, ensemble of 4 runs using base data (without ParaCrawl) and 1 run with partial ParaCrawl, re-back-translated news 2016. R2L reranking. Primary. | 46.5 | 45.3 | 0.406 | 0.680 | 0.391 |
| MLLP-UPV Transformer Ensemble | mllp (MLLP group, Univ. Politècnica de València) | Transformer base model. Trained with 10M filtered sentences, including ParaCrawl, and 20M back-translated sentences from news-2017. Ensemble of 4 models. | yes | | 46.4 | 45.1 | 0.408 | 0.681 | 0.387 |
| JHU | jhu-nmt (Johns Hopkins University) | Marian Deep RNN | yes | Marian deep model, ensemble of 4 runs using base data (without ParaCrawl) and 1 run with partial ParaCrawl, re-back-translated news 2016. Not the final system yet. | 46.1 | 44.9 | 0.412 | 0.677 | 0.395 |
| MLLP-UPV Transformer Single | mllp (MLLP group, Univ. Politècnica de València) | Transformer base model. Trained with 10M filtered sentences, including ParaCrawl, and 20M back-translated sentences from news-2017. Single model (contrastive). | yes | | 45.9 | 44.7 | 0.411 | 0.677 | 0.392 |
| Ubiqus-NMT | vince62s (Ubiqus) | OpenNMT Transformer Base system with Rico's back-translation from WMT16. Does not include ParaCrawl. | yes | Single system. | 45.1 | 44.1 | 0.412 | 0.674 | 0.407 |
| uedin-de-en-3ens-2rerank | ugermann (University of Edinburgh) | 3 Transformers ensembled, reranked with 3 R2L systems. Trained with a selection of ParaCrawl. | yes | Fixed run; the first submission had the wrong input (no BPE). | 45.1 | 43.9 | 0.417 | 0.673 | 0.399 |
| LMU-nmt-wmt18-de-en | Matthias.Huck (LMU Munich) | Nematus encoder-decoder NMT, single hidden layer, no R2L reranking | yes | | 42.5 | 40.9 | 0.446 | 0.658 | 0.426 |
| NJUNMT-private | ZhaoChengqi (Nanjing University) | | yes | Transformer base | 39.6 | 38.3 | 0.479 | 0.644 | 0.450 |
| parfda | bicici | parfda Moses phrase-based SMT | yes | de-en using PRO for tuning | 34.6 | 33.4 | 0.529 | 0.619 | 0.514 |
| RWTH Unsupervised NMT Ensemble | yunsukim (RWTH Aachen University) | (Unsupervised) Transformer with shared encoder/decoder, separate top-50k word vocabularies, iterative back-translation, ensemble of 4 | yes | | 19.9 | 18.6 | 0.663 | 0.518 | 0.622 |
| RWTH Unsupervised NMT Single | yunsukim (RWTH Aachen University) | (Unsupervised) Transformer with shared encoder/decoder, separate top-50k word vocabularies, iterative back-translation | yes | | 19.5 | 18.1 | 0.669 | 0.513 | 0.648 |
| LMU-unsupervised-nmt-wmt18-de-en | Matthias.Huck (LMU Munich) | Unsupervised NMT (no parallel training corpora) | yes | | 18.8 | 17.9 | 0.684 | 0.511 | 0.679 |
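
Technique Sketches

The score columns can be approximated from plain-text system output with sacrebleu. A minimal sketch, assuming hypothetical detokenized, line-aligned files hyps.txt and ref.txt; the matrix's official scoring pipeline may differ slightly, and sacrebleu reports TER as a percentage where the table uses fractions.

```python
# Sketch: approximate the BLEU / BLEU-cased / TER columns with sacrebleu.
# "hyps.txt" and "ref.txt" are hypothetical detokenized, line-aligned files.
import sacrebleu

with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("ref.txt", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

bleu_cased = sacrebleu.corpus_bleu(hyps, [refs])            # BLEU-cased column
bleu = sacrebleu.corpus_bleu(hyps, [refs], lowercase=True)  # BLEU column (uncased)
ter = sacrebleu.corpus_ter(hyps, [refs])                    # percentage; the table shows a fraction

print(f"BLEU {bleu.score:.1f}  BLEU-cased {bleu_cased.score:.1f}  TER {ter.score / 100:.3f}")
```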
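Several entries (RWTH Aachen, JHU, MLLP-UPV, Edinburgh) ensemble multiple models. The usual mechanics: at each decoding step, average the per-model next-token distributions before extending the hypothesis. A minimal greedy sketch; Model and next_token_probs are hypothetical stand-ins, and the actual submissions use beam search.

```python
import numpy as np

def ensemble_greedy_decode(models, src_ids, bos_id, eos_id, max_len=256):
    """Greedy decoding with an ensemble: average per-step probabilities."""
    out = [bos_id]
    for _ in range(max_len):
        # each (hypothetical) model returns a probability distribution
        # over the target vocabulary given the source and the prefix so far
        dists = [m.next_token_probs(src_ids, out) for m in models]
        avg = np.mean(dists, axis=0)  # arithmetic mean in probability space
        nxt = int(np.argmax(avg))     # greedy pick; real systems extend beams instead
        out.append(nxt)
        if nxt == eos_id:
            break
    return out
```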
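R2L reranking (mentioned by the NTT, JHU primary, and Edinburgh entries) rescores an n-best list from the left-to-right system with models trained on reversed target sentences. A minimal sketch; l2r_score and r2l_score are hypothetical log-probability scorers, and the interpolation weight would be tuned on a development set.

```python
def rerank_with_r2l(nbest, l2r_score, r2l_score, weight=0.5):
    """Pick the hypothesis with the best interpolated L2R/R2L score.

    nbest: list of token lists for one source sentence.
    l2r_score / r2l_score: hypothetical log-prob scorers; the R2L model
    is trained on reversed targets, so it scores the reversed tokens.
    """
    def combined(hyp):
        return (1 - weight) * l2r_score(hyp) + weight * r2l_score(hyp[::-1])
    return max(nbest, key=combined)
```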
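The Cambridge NMT-SMT Hybrid is an MBR-based combination: rather than taking any single system's 1-best output, it selects the candidate with the highest expected utility against the pooled hypothesis space. A simplified candidate-list sketch using sentence-level BLEU as the utility; the submitted system's exact MBR formulation may differ, and the posterior weights are assumed given.

```python
import sacrebleu

def mbr_select(candidates, weights):
    """Minimum Bayes risk selection over a pooled candidate list.

    candidates: detokenized hypothesis strings from all component systems.
    weights: normalized posterior weights for those candidates (assumed given).
    """
    def expected_utility(hyp):
        # expected sentence-BLEU of `hyp`, treating each candidate as a pseudo-reference
        return sum(w * sacrebleu.sentence_bleu(hyp, [evid]).score
                   for evid, w in zip(candidates, weights))
    return max(candidates, key=expected_utility)
```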
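The unsupervised entries (RWTH Aachen, LMU Munich) train without parallel data via iterative back-translation: models for the two directions bootstrap each other on monolingual corpora. A schematic sketch; the translate and train methods are hypothetical stand-ins for full training rounds.

```python
def iterative_back_translation(mono_src, mono_tgt, fwd, bwd, rounds=3):
    """Schematic loop: fwd is src->tgt, bwd is tgt->src (hypothetical models)."""
    for _ in range(rounds):
        # translate target monolingual data back to source; train src->tgt on it
        synth_src = [bwd.translate(t) for t in mono_tgt]
        fwd.train(list(zip(synth_src, mono_tgt)))
        # translate source monolingual data to target; train tgt->src on it
        synth_tgt = [fwd.translate(s) for s in mono_src]
        bwd.train(list(zip(synth_tgt, mono_src)))
    return fwd, bwd
```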