Scored Systems

| System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharactTER |
|---|---|---|---|---|---|---|---|---|---|
| MSRA.MADL | Microsoft | Multi-Agent dual learning + transformer_big | yes | | 45.2 | 44.9 | 0.450 | 0.677 | 0.401 |
| sharpL | sharp | | yes | | 45.0 | 44.6 | 0.462 | 0.673 | 0.408 |
| sharpL | sharp | | yes | | 44.7 | 44.4 | 0.465 | 0.672 | 0.412 |
| Microsoft-WMT19-sentence/document | marcinjd (Microsoft) | Sentence-level/document-level combination via 2-pass decoding from sentence to document-level. | yes | | 44.3 | 43.9 | 0.458 | 0.676 | 0.402 |
| Microsoft-WMT19-document-level | marcinjd (Microsoft) | Pure document-level system | yes | | 44.2 | 43.9 | 0.459 | 0.675 | 0.402 |
| sharpL | sharp | | yes | fairseq ensemble | 44.0 | 43.6 | 0.470 | 0.668 | 0.417 |
| NEU | NiuTrans (Northeastern University) | Ensemble of 8 deep Transformer (30 layers) models + back-translation with sampling + distillation by ensemble teachers + hypothesis combination + fix quotes | yes | | 43.9 | 43.5 | 0.456 | 0.672 | 0.412 |
| UCAM | fstahlberg (University of Cambridge) | | yes | 1 sentence-level LM, 1 document-level LM, 4 NMT models fine-tuned with EWC, fixed quotes | 43.4 | 43.0 | 0.463 | 0.668 | 0.417 |
| Microsoft-WMT19-sentence-level | marcinjd (Microsoft) | Pure sentence-level system | yes | | 43.3 | 43.0 | 0.466 | 0.670 | 0.413 |
| JHU | kelly-yash-jhu (Johns Hopkins University) | 2 Transformer base ensemble + filtered backtranslation with restricted sampling + filtered ParaCrawl & CommonCrawl + continued training on newstest15-18 + reranking with R2L models + fix quotes | yes | Post Submission Work | 43.2 | 42.8 | 0.467 | 0.664 | 0.417 |
| Facebook FAIR | edunov (Facebook FAIR) | | yes | + fix-quotes | 43.1 | 42.7 | failed | 0.670 | 0.409 |
| JHU | kelly-yash-jhu (Johns Hopkins University) | 2 Transformer base ensemble + filtered backtranslation with restricted sampling + filtered ParaCrawl & CommonCrawl + continued training on newstest15-18 + reranking with R2L models + fix quotes | yes | Fix quotes | 42.9 | 42.5 | 0.472 | 0.663 | 0.420 |
| Microsoft WMT18-baseline | marcinjd (Microsoft) | | yes | WMT18-baseline, same as last year, fixed quotes. | 42.3 | 41.9 | 0.479 | 0.660 | 0.429 |
| eTranslation | eTranslation (DGT/CNECT) | base transformer ensemble of 3 models plus LM, fine-tuned on devset | yes | FQ | 42.3 | 41.9 | 0.473 | 0.663 | 0.421 |
| sharpL | sharp | | yes | | 42.1 | 41.8 | 0.484 | 0.661 | 0.427 |
| MLLP-UPV | mllp (MLLP group, Univ. Politècnica de València) | Transformer big model. Includes 10M sentences from ParaCrawl and 18M backtranslated sentences. Fine-tuned on newstest08-16. Single model. | yes | | 42.1 | 41.7 | 0.483 | 0.660 | 0.427 |
| dfki-nmt | zhangjingyi (DFKI) | | yes | | 43.0 | 41.6 | 0.476 | 0.661 | 0.420 |
| Helsinki-NLP | aarnetalman (University of Helsinki) | | yes | Transformer ensemble | 41.9 | 41.4 | 0.476 | 0.662 | 0.429 |
| test | kxyg (SYU) | test | yes | | 41.1 | 40.7 | 0.495 | 0.653 | 0.439 |
| lmu-ctx-tf-single-en-de | dario (LMU Munich) | context-aware single transformer big model | yes | fixed quotes | 40.6 | 40.3 | 0.496 | 0.653 | 0.438 |
| UdS-DFKI | cristinae (UdS-DFKI) | Coreference-aware NMT (ensemble 4, first version) | yes | | 39.9 | 39.5 | 0.511 | 0.644 | 0.464 |
| UdS-DFKI | cristinae (UdS-DFKI) | Coreference-aware NMT, ensemble, small models | yes | | 39.7 | 39.3 | 0.512 | 0.643 | 0.471 |
| UdS-DFKI | cristinae (UdS-DFKI) | Coreference-aware NMT (quotes fixed) | yes | | 39.0 | 38.6 | 0.510 | 0.644 | 0.468 |
| PROMT NMT | Alex Molchanov (PROMT LLC) | test | no | | 38.8 | 38.4 | failed | 0.643 | 0.458 |
| PROMT NMT | Alex Molchanov (PROMT LLC) | test | no | test2 | 38.5 | 38.1 | 0.515 | 0.642 | 0.455 |
| PROMT NMT EN-DE | Alex Molchanov (PROMT LLC) | transformer, single model, fine-tuned | no | transformer, single model, fine-tuned | 38.3 | 37.9 | 0.514 | 0.642 | 0.454 |
| UdS-DFKI | cristinae (UdS-DFKI) | Coreference-aware NMT (best transformer, no ensembling) | yes | | 37.9 | 37.6 | 0.528 | 0.635 | 0.486 |
| TartuNLP-c | andre (University of Tartu) | | yes | baselines | 36.8 | 36.4 | 0.531 | 0.627 | 0.466 |
| helloword | yly (jinx) | | yes | | 33.3 | 33.0 | 0.561 | 0.606 | 0.506 |
| parfda | bicici | | yes | en-de with hyphen splitting | 24.3 | 23.9 | 0.641 | 0.572 | 0.612 |
| parfda | bicici | | yes | en-de with nplm | 24.1 | 23.8 | 0.642 | 0.572 | 0.603 |
| parfda | bicici | | yes | en-de with nplm and kenlm | 24.0 | 23.6 | 0.652 | 0.575 | 0.592 |
| parfda | bicici | | yes | en-de | 23.9 | 23.5 | 0.647 | 0.570 | 0.614 |
| en_de_task | abhranil08 (Jadavpur University, Kolkata) | | no | | 17.1 | 16.9 | 0.807 | 0.499 | 0.663 |
| | kylogong | | no | | failed | failed | 0.000 | 0.948 | 0.047 |
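BLEU, the primary ranking metric in the table, is a geometric mean of modified n-gram precisions scaled by a brevity penalty. As a rough illustration only, here is a simplified single-reference corpus BLEU in plain Python — whitespace tokenization, uniform 4-gram weights, no smoothing — so its scores will not exactly match the official (sacreBLEU/mteval-style) numbers above:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Simplified corpus-level BLEU (0-100), single reference per segment."""
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # candidate n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            matches[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:        # any zero precision -> BLEU is 0 without smoothing
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100.0 * bp * math.exp(log_prec)
```

For example, a hypothesis identical to its reference scores 100.0, and any deviation lowers the score. Reproducing the table's exact values would additionally require the official tokenization and the shared test-set references.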