Interested in Contributing?
- Check out the available resources.
- Create an account and start submitting your own systems.
Scored Systems
System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
---|---|---|---|---|---|---|---|---|---|
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | In-house tokenization and detokenization system + reranking system | 40.9 | 39.9 | 0.494 | 0.672 | 0.500 |
MSRA.MASS (Details) | Microsoft Microsoft | | no | MASS pretraining + back translation + knowledge distillation + ensemble + reranking + crawling data from the web + speculation | 40.5 | 39.3 | 0.487 | 0.667 | 0.489 |
MSRA.MASS (Details) | Microsoft Microsoft | | no | MASS pretraining + back translation + knowledge distillation + ensemble + reranking + crawling data from the web | 39.5 | 38.3 | 0.495 | 0.662 | 0.504 |
Baidu-system (Details) | Baidu-MT Baidu | | yes | | 39.3 | 38.0 | 0.513 | 0.659 | 0.489 |
KSAI-system (Details) | Aida KSAI | | yes | | 38.8 | 37.5 | 0.524 | 0.651 | 0.506 |
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | Single model + reranking | 37.8 | 36.5 | 0.515 | 0.667 | 0.500 |
mt-semantic-d (Details) | Chen, Tanfang DiDiChuxing | | yes | | 37.2 | 36.1 | failed | 0.656 | 0.500 |
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | Re-ranked using in-house ranking model | 36.5 | 35.3 | 0.531 | 0.656 | 0.519 |
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | 1224new reranking model | 35.4 | 34.2 | 0.538 | 0.648 | 0.547 |
NEU (Details) | NiuTrans Northeastern University | Ensemble of 8 deep Transformer (30 layers) models + back-translation with beam search + iterative distillation by ensemble teachers + hypothesis combination | yes | | 35.4 | 34.2 | 0.546 | 0.639 | 0.536 |
BTRANS-ensemble (Details) | BTRANS test2019xxx | | yes | | 35.1 | 33.8 | 0.559 | 0.637 | 0.541 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | no | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | 34.9 | 33.7 | 0.548 | 0.637 | 0.559 |
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | Single base transformer + back translation of web crawled text + own tokenization and detokenization system | 34.7 | 33.5 | 0.557 | 0.635 | 0.566 |
BTRANS (Details) | BTRANS test2019xxx | | yes | | 34.7 | 33.4 | 0.564 | 0.635 | 0.544 |
wyx_mt (Details) | wyx | baseline test | yes | | 34.8 | 32.9 | 0.561 | 0.629 | 0.553 |
Captain Marvel (Details) | marvel S.H.I.E.L.D | | yes | | 33.6 | 32.3 | failed | 0.630 | 1.819 |
RWTH System (Details) | wwang RWTH Aachen University | Ensemble of 4 Transformer models, back translation data, split long sentences, beam size 16 | yes | | 33.0 | 31.7 | 0.561 | 0.624 | 0.576 |
Captain Marvel (Details) | marvel S.H.I.E.L.D | | yes | | 32.5 | 31.2 | 0.565 | 0.627 | 0.579 |
NICT (Details) | Nedved NICT | Transformer, back-translation, ensemble, and fine-tuning | yes | | 32.3 | 31.0 | 0.599 | 0.615 | 0.569 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | no | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | 30.6 | 29.3 | failed | 0.618 | 0.598 |
ForyorMT_Chinese2English (Details) | Zeng Hui | | no | not ranked | 29.4 | 28.0 | 0.589 | 0.609 | 0.619 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | no | Single base transformer + back translation of web crawled text + own tokenization and detokenization system + checkpoint averaging | 29.2 | 27.8 | failed | 0.611 | 0.611 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back translation + finetune + checkpoint averaging | no | Single base transformer + back translation + finetune + checkpoint averaging | 29.0 | 27.7 | 0.589 | 0.611 | 0.610 |
UEDIN (Details) | XapaJIaMnu UEDIN | | yes | 48, b12 | 28.9 | 27.7 | 0.630 | 0.604 | 0.607 |
IIE.STRANS (Details) | Xiangpeng Wei IIE, CAS | Base Transformer | yes | | 28.9 | 27.5 | 0.641 | 0.597 | 0.625 |
test2019news (Details) | len2618187 18810296219 | | yes | | 28.4 | 27.3 | 0.607 | 0.598 | 0.678 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back translation + finetune | no | Single base transformer + back translation + finetune | 28.5 | 27.2 | 0.602 | 0.607 | 0.626 |
baseline (Details) | research Machine Translator | baseline model | yes | baseline with back-translation data | 28.2 | 27.0 | 0.609 | 0.600 | 0.626 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back-translation of thousands of in-domain sentences + checkpoint averaging | no | Single base transformer + back-translation of thousands of in-domain sentences + checkpoint averaging | 28.0 | 26.9 | failed | 0.606 | 0.591 |
txshi_baseline (Details) | txshi | Baseline (Single Transformer) | yes | | 27.9 | 26.7 | 0.616 | 0.600 | 0.624 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer + back-translation of thousands of in-domain sentences | no | Single base transformer + back-translation of thousands of in-domain sentences | 27.6 | 26.5 | failed | 0.602 | 0.600 |
try (Details) | testone | | yes | | 26.1 | 24.9 | 0.622 | 0.587 | 0.707 |
ForyorMT_Chinese2English (Details) | Zeng Hui | Single base transformer. | yes | | 25.6 | 24.6 | 0.621 | 0.588 | 0.675 |
Wonder Woman (Details) | fansiawang Personal | Ensemble of 4 transformer models with beam 12 | yes | | 26.6 | 24.4 | 0.631 | 0.587 | 0.706 |
Wonder Woman (Details) | fansiawang Personal | Ensemble of 4 transformer models with beam 12 | yes | | 26.6 | 24.4 | 0.631 | 0.588 | 0.706 |
Wonder Woman (Details) | fansiawang Personal | Ensemble of 4 transformer models with beam 12 | yes | | 26.2 | 24.0 | 0.623 | 0.588 | 0.707 |
Wonder Woman (Details) | fansiawang Personal | Ensemble of 4 transformer models with beam 12 | yes | | 26.1 | 23.9 | 0.624 | 0.587 | 0.713 |
Wonder Woman (Details) | fansiawang Personal | Ensemble of 4 transformer models with beam 12 | yes | | 26.0 | 23.8 | 0.635 | 0.582 | 0.736 |
Apprentice-c (Details) | nickeilf | | yes | | 17.7 | 16.9 | 0.745 | 0.512 | 0.866 |
Apprentice-g (Details) | nickeilf | | yes | | 17.3 | 16.6 | 0.746 | 0.509 | 0.869 |
Sunny (Details) | sunny | | yes | | failed | failed | failed | 0.000 | 0.000 |
BTRANS-ensemble (Details) | BTRANS test2019xxx | | yes | | failed | failed | failed | 0.000 | 0.000 |
rabbit (Details) | rabbit rabbit | | yes | | failed | failed | failed | 0.000 | 0.000 |
BTRANS-ensemble (Details) | BTRANS test2019xxx | | yes | | failed | failed | failed | 0.000 | 0.000 |
BTRANS-ensemble (Details) | BTRANS test2019xxx | | yes | | failed | failed | failed | 0.000 | 0.000 |
BTRANS-ensemble (Details) | BTRANS test2019xxx | | yes | | failed | failed | failed | 0.000 | 0.000 |
submit (Details) | littlexzt | | yes | | failed | failed | failed | 0.000 | 0.000 |
submit (Details) | littlexzt | | yes | | failed | failed | failed | 0.000 | 0.000 |
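The BLEU and CharacTER columns above are computed by the official evaluation pipeline, not by the sketch below. As a rough illustration of what the numbers mean, here is a minimal pure-Python sketch of corpus-level BLEU (uniform n-gram weights up to 4, with brevity penalty, assuming a single reference per sentence and whitespace tokenization) and a simplified character edit rate (Levenshtein distance normalized by hypothesis length, ignoring the shift operations that the real CharacTER metric also counts):

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus BLEU (0-100): clipped n-gram precision + brevity penalty."""
    clipped = [0] * max_n  # clipped n-gram matches per order
    totals = [0] * max_n   # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            clipped[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)


def char_edit_rate(hypothesis, reference):
    """Character-level Levenshtein distance / hypothesis length."""
    h, r = hypothesis, reference
    prev = list(range(len(r) + 1))
    for i, hc in enumerate(h, 1):
        cur = [i]
        for j, rc in enumerate(r, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (hc != rc)))
        prev = cur
    return prev[-1] / len(h)
```

Lower is better for the edit-rate metrics (TER, CharacTER), higher for BLEU and BEER; the official scores are produced with standard tooling (e.g. sacrebleu-style preprocessing), so this sketch will not reproduce the table's values exactly.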