Scored Systems
System | Submitter | System Notes | Constraint | Run Notes | BLEU | BLEU-cased | TER | BEER 2.0 | CharacTER |
---|---|---|---|---|---|---|---|---|---|
Tencent ensemble system | Mr Translator (Tencent) | Reranked ensemble outputs with 48 features (including t2t R2L, t2t L2R, rnn L2R, rnn R2L, etc.); back-translation; joint training with English-to-Chinese systems; fine-tuning with selected data; knowledge distillation | yes | | 30.8 | 29.3 | failed | 0.594 | 0.574 |
NiuTrans | NiuTrans (Northeastern University) | Ensemble of 15 Transformer models with re-ranking | yes | Ensemble of 15 Transformer models with re-ranking | 29.9 | 28.7 | failed | 0.589 | 0.589 |
Uni-NMT-Transformer-ZhEn | Unisound (Unisound AI Labs) | Back-translation + ensemble + rerank (ZhEn-L2R + ZhEn-R2L + EnZh-L2R + EnZh-R2L + LM) | yes | Back-translation + ensemble + rerank (ZhEn-L2R + ZhEn-R2L + EnZh-L2R + EnZh-R2L + LM, average rescore weight) | 30.1 | 28.4 | failed | 0.590 | 0.589 |
TencentFmRD-zhen-new | Bojie Hu (TencentFmRD) | Transformer, ensemble, reranking, fine-tuning, back-translation | yes | Transformer, ensemble, reranking, fine-tuning, back-translation | 30.2 | 28.3 | failed | 0.593 | 0.576 |
TencentFmRD-zhen | Bojie Hu (TencentFmRD) | Transformer, ensemble, reranking, fine-tuning, back-translation | yes | Transformer, ensemble, reranking, fine-tuning, back-translation | 30.1 | 28.2 | failed | 0.593 | 0.577 |
NMT-SMT Hybrid | fstahlberg (University of Cambridge) | MBR-based combination of neural models and SMT | yes | | 29.0 | 27.7 | failed | 0.587 | 0.589 |
Uni-NMT Transformer | Unisound (Unisound AI Labs) | Back-translation + ensemble + rerank + SMT | yes | Average-weight rerank | 29.3 | 27.7 | failed | 0.589 | 0.593 |
Tencent single system | Mr Translator (Tencent) | Single LSTM system with a 6-layer encoder and a 3-layer decoder; back-translation (ensemble) using the parallel target side and a 20-million-sentence monolingual corpus; trained with R2L regularization and target-to-source joint training; fine-tuned with data selected by CNN and RNN classification models | yes | | 29.8 | 27.5 | failed | 0.585 | 0.583 |
Li Muze | Li Muze (CCNI) | Ensemble of 4 averaged Transformer models plus 1 zh-en R2L and 1 en-zh T2S averaged Transformer model, all in the Transformer-big configuration, trained on the official training data plus 4.5M back-translated sentences from the news2016 and news2017 data; English vocabulary of 36k BPE subwords | yes | | 28.5 | 27.4 | failed | 0.585 | 0.603 |
NICT | rui.wang (NICT) | Same team as benjamin.marie of NICT | yes | Transformer, back-translation, ensemble and reranking | 28.0 | 26.7 | failed | 0.578 | 0.610 |
test_normal | wangwei | | yes | | 27.5 | 26.4 | failed | 0.578 | 0.612 |
zzy_zh2en2 | wangwei | test | yes | | 27.3 | 26.1 | failed | 0.577 | 0.613 |
RWTH Transformer | pbahar (RWTH Aachen University) | Ensemble of 4 checkpoints, back-translated data | yes | | 27.3 | 26.1 | failed | 0.573 | 0.622 |
ForyorMT_Chinese2English | Zeng Hui | | no | | 27.3 | 25.9 | failed | 0.580 | failed |
bit-zhen | bit-nmt | | yes | | 27.2 | 25.8 | failed | 0.576 | 0.621 |
ForyorMT_Chinese2English | Zeng Hui | | no | | 27.0 | 25.7 | failed | 0.572 | failed |
bit-zhen | bit-nmt | | yes | | 26.6 | 25.3 | failed | 0.573 | 0.639 |
Wonder Woman | fansiawang (Personal) | | yes | | 27.6 | 25.0 | failed | 0.570 | 0.638 |
PERCY-trans | PERCY-sys (ATT) | Single system | yes | | 25.8 | 24.7 | failed | 0.570 | 0.648 |
Wonder Woman | fansiawang (Personal) | | yes | | 27.0 | 24.5 | failed | 0.566 | 0.638 |
bit-zhen | bit-nmt | | yes | | 25.9 | 24.5 | failed | 0.568 | 0.721 |
bit-zhen | bit-nmt | | yes | | 25.9 | 24.4 | failed | 0.567 | 0.723 |
transformer | weijia (University of Maryland) | | yes | Ensemble of 3 Transformer models, reranking with R2L and T2S | 25.6 | 24.4 | failed | 0.570 | failed |
Wonder Woman | fansiawang (Personal) | | yes | | 26.4 | 24.1 | failed | 0.569 | 0.666 |
Wonder Woman | fansiawang (Personal) | | yes | | 26.4 | 24.0 | failed | 0.569 | 0.666 |
UEDIN | XapaJIaMnu (UEDIN) | | yes | Best deep model with layer normalization and multi-head attention; small vocabulary (18k) + ensembles, trained for about 40 hours | 25.1 | 24.0 | failed | 0.562 | 0.680 |
Wonder Woman | fansiawang (Personal) | | yes | | 26.4 | 23.8 | failed | 0.568 | 0.667 |
Wonder Woman | fansiawang (Personal) | | yes | | 26.4 | 22.0 | failed | 0.558 | 0.675 |
Wonder Woman | fansiawang (Personal) | | yes | | 26.4 | 21.6 | failed | 0.556 | 0.677 |
S-MT | Hongxin Shao (Shopee) | | yes | | 22.1 | 21.1 | failed | 0.546 | 0.697 |
Wonder Woman | fansiawang (Personal) | | yes | Ensemble of 4 Transformer models | failed | failed | failed | 0.556 | 0.677 |
yanghaocsg | yanghaocsg (dr) | | yes | | | | | | |
Wonder Woman | fansiawang (Personal) | | yes | Ensemble of 4 Transformer models | failed | failed | failed | 0.556 | 0.677 |
A3-180 | saumitray (IIIT Hyderabad) | Baseline NMT system with Global Attention | no | | failed | failed | failed | 0.447 | 0.915 |
NiuTrans | NiuTrans (Northeastern University) | Ensemble of 15 Transformer models with re-ranking | yes | Ensemble of 15 Transformer models with re-ranking | failed | failed | failed | 0.589 | 0.589 |
A3-180 | saumitray (IIIT Hyderabad) | Baseline NMT system with Global Attention | no | | | | | | |
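The BLEU, BLEU-cased, and TER columns are corpus-level automatic scores computed against the reference translations. As a minimal sketch, scores of this kind can be recomputed from a run's output file with sacrebleu; the file names below are hypothetical, and exact numbers may differ slightly from the table depending on tokenization and sacrebleu version:

```python
import sacrebleu

# Hypothetical file names: substitute a run's actual detokenized output
# and the official reference translations for the test set.
hyps = [line.rstrip("\n") for line in open("system_output.en", encoding="utf-8")]
refs = [[line.rstrip("\n") for line in open("reference.en", encoding="utf-8")]]

bleu_cased = sacrebleu.corpus_bleu(hyps, refs)            # BLEU-cased column
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)  # uncased BLEU column
ter = sacrebleu.corpus_ter(hyps, refs)                    # TER column

print(f"BLEU: {bleu.score:.1f}  BLEU-cased: {bleu_cased.score:.1f}  TER: {ter.score:.3f}")
```

BEER 2.0 and CharacTER are separate metric tools and are not part of sacrebleu.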
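Many of the system notes above mention reranking: the submitter decodes an n-best list with an ensemble, scores each hypothesis with auxiliary models (right-to-left, target-to-source, a language model), and keeps the hypothesis with the best weighted combination of feature scores. A minimal sketch of that idea, with hypothetical hypotheses, feature scores, and hand-set weights:

```python
# Minimal n-best reranking sketch. Feature names, scores, and weights
# are hypothetical; real submissions tune the weights on a dev set.
candidates = [
    # (hypothesis, {feature: log-probability under that model})
    ("translation A", {"l2r": -2.1, "r2l": -2.4, "t2s": -1.9, "lm": -3.0}),
    ("translation B", {"l2r": -2.0, "r2l": -2.9, "t2s": -2.2, "lm": -2.7}),
]
weights = {"l2r": 1.0, "r2l": 0.5, "t2s": 0.5, "lm": 0.3}

def rerank_score(features):
    # Weighted linear combination of the per-model scores.
    return sum(weights[name] * score for name, score in features.items())

best, _ = max(candidates, key=lambda cand: rerank_score(cand[1]))
print(best)  # hypothesis with the highest combined score
```

With equal weights this reduces to plain score averaging across the models; weight tuning is typically done on a development set, e.g. with MERT or k-best MIRA.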