Scored Systems

Higher is better for BLEU, BLEU-cased, and BEER 2.0; lower is better for TER and CharacTER.

| System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
|--------|-----------|--------------|------------|-----------|------|------------|-----|----------|-----------|
| CUNI-DocTransformer-T2T | popel (UFAL, Charles University in Prague) | document-level trained Transformer | yes | | 30.4 | 29.9 | 0.600 | 0.587 | 0.524 |
| CUNI-Transformer-T2T-2018 | popel (UFAL, Charles University in Prague) | my system from WMT2018 | yes | my system from WMT2018 | 30.3 | 29.9 | 0.599 | 0.587 | 0.523 |
| CUNI-Transformer-T2T-2019 | popel (UFAL, Charles University in Prague) | Same model as CUNI-DocTransformer-T2T, but applied to single sentences (i.e. with no cross-sentence context) | yes | | 29.9 | 29.4 | 0.604 | 0.579 | 0.536 |
| CUNI-DocTransformer-Marian | dominik-machacek (CUNI) | | yes | | 28.6 | 28.1 | 0.620 | 0.574 | 0.554 |
| CUNI-Transformer-Marian | dominik-machacek (CUNI) | | yes | | 28.6 | 28.1 | 0.619 | 0.573 | 0.540 |
| uedin | romang-wmt19 (University of Edinburgh) | | yes | | 28.3 | 27.9 | 0.612 | 0.576 | 0.534 |
| TartuNLP-c | andre (University of Tartu) | | yes | Baseline | 23.4 | 22.8 | 0.673 | 0.542 | 0.581 |
| test submission (edited) | barry-wmt18 (University of Edinburgh) | | yes | | 23.2 | 22.0 | 0.669 | 0.541 | 0.581 |
| Benchmark-Supervised | kvapili (Charles University) | | no | | 19.3 | 18.8 | 0.719 | 0.517 | 0.636 |
| parfda | bicici | | yes | en-cs with nplm and kenlm | 16.0 | 15.6 | 0.741 | 0.498 | 0.694 |
| parfda | bicici | | yes | en-cs with hyphen splitting | 15.5 | 15.2 | 0.741 | 0.498 | 0.695 |
| parfda | bicici | | yes | en-cs | 15.5 | 15.2 | 0.746 | 0.496 | 0.695 |
| parfda | bicici | | yes | en-cs with nplm | 14.6 | 14.2 | 0.760 | 0.486 | 0.699 |
| Benchmark-fromEN | kvapili (Charles University) | | no | | 13.6 | 13.3 | 0.793 | 0.482 | 0.683 |
| CUNI-Unsupervised-base | kvapili (Charles University) | | no | | 13.6 | 13.3 | 0.799 | 0.482 | 0.688 |
| caiwoss | rabbit | | no | | failed | failed | failed | 0.000 | 0.000 |
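
The BLEU columns above come from corpus-level scoring of tokenized system output against the reference translations (WMT uses a standardized scorer such as sacrebleu). As a rough illustration of the underlying formula only — not the official scorer — here is a minimal single-sentence sketch with whitespace tokenization and no smoothing; the function name `simple_bleu` is our own:

```python
import math
from collections import Counter

def simple_bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing,
    whitespace tokenization only -- an illustration, not the official scorer."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = sum(hyp_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return 100.0 * bp * math.exp(sum(log_precisions) / max_n)

print(simple_bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
print(round(simple_bleu("the cat sat on the mat today", "the cat sat on the mat"), 1))
```

A real evaluation averages clipped n-gram counts over the whole test set before taking the ratio, which is why corpus-level BLEU is not the mean of sentence scores.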