Scored Systems

| System | Submitter | Institution | System Notes | Constraint | Run Notes | BLEU | BLEU (11b) | BLEU-cased | BLEU-cased (11b) | TER |
|---|---|---|---|---|---|---|---|---|---|---|
| CMU HIEN | armatthe | Carnegie Mellon University | Hiero with transliteration, synthetic determiner rules, source-side paraphrasing, segmentation lattices, Brown-cluster language model, and lexical features. | yes | | 18.0 | 18.0 | 16.7 | 16.7 | 0.767 |
| uedin-syntax-hi-en | Phil Williams | University of Edinburgh | String-to-tree with transliteration. | yes | | 16.4 | 16.4 | 15.1 | 15.1 | 0.823 |
| uedin-stanford-unconstrained | heafield | Stanford | Very late submission (March 7), so probably not official. Edinburgh's official WMT phrase-based system plus a CommonCrawl 2012 English model with fr-en truecasing (so a truecasing mismatch). The model was used for decoding and transliteration. | no | | 16.2 | 16.2 | 14.8 | 14.8 | 0.828 |
| uedin-wmt14-hi-en | Nadir | University of Edinburgh | Phrase-based Moses. | yes | | 15.3 | 15.3 | 13.9 | 13.9 | 0.840 |
| iitb_hi_en_ranked_ppl | cfilt | IIT Bombay | Output ranked by perplexity score of: 1) phrase-based, source-reordered, with TAG as factor; 2) phrase-based, source-reordered, with case and number as factor. | yes | | 14.5 | 14.5 | 13.5 | 13.5 | 0.897 |
| iitb_hi_en_pb_src_reordered | cfilt | IIT Bombay | Phrase-based, source-reordered. | yes | | 13.7 | 13.7 | 12.6 | 12.6 | 0.902 |
| AFRL hi-en | jeremy.gwinnup | AFRL | Phrase-based Moses. | yes | Variant 3 | 13.1 | 13.1 | 12.1 | 12.1 | 0.857 |
| DCU-HiEn | xiaofengwu | CNGL, DCU | Moses phrase-based. | yes | 2pArE.tran | 13.2 | 13.2 | 11.7 | 11.7 | 0.879 |
| DCU-HiEn | xiaofengwu | CNGL, DCU | Moses phrase-based. | yes | 3pArE.tran | 13.1 | 13.1 | 11.6 | 11.6 | 0.864 |
| DCU-HIEN-T | DCU-HIEN | DCU | | yes | | 12.4 | 12.4 | 11.2 | 11.2 | 0.871 |
| FDA5-DCU | bicici | Centre for Next Generation Localisation, School of Computing, Dublin City University | Feature Decay Algorithms (FDA) 5 and Moses RELEASE-2.1: instance selection with FDA5, decoding with the Moses phrase-based system. | yes | hi-en_popt | 11.5 | 11.5 | 10.5 | 10.5 | 0.878 |
| DCU-HIEN-T | DCU-HIEN | DCU | | yes | | 11.3 | 11.3 | 10.2 | 10.2 | 0.919 |
| DCU-HIEN | DCU-HIEN | DCU | Phrase-based. | yes | PB-base | 11.5 | 11.5 | 10.1 | 10.1 | 0.864 |
| DCU-HIEN | DCU-HIEN | DCU | Phrase-based. | yes | context-based (POS) | 11.5 | 11.5 | 10.1 | 10.1 | 0.856 |
| DCU-HIEN | DCU-HIEN | DCU | Phrase-based. | yes | source suffix stripped | 11.4 | 11.4 | 10.0 | 10.0 | 0.873 |
| DCU-HIEN | DCU-HIEN | DCU | Phrase-based. | yes | source stemmed | 10.4 | 10.4 | 9.2 | 9.2 | 0.884 |
| UdS-MaN | alvations | Universität des Saarlandes | Uses MWE-extractor and NE-tagger outputs to improve MT; simply adding the NLP outputs to the Moses input improves MT results. | yes | HI-EN: MWEs extracted with a threshold of PMI > 10; NEs extracted with a CRF tagger trained on NERSSEAL shared-task data (http://ltrc.iiit.ac.in/ner-ssea-08/index.cgi?topic=5) | 9.9 | 9.9 | 7.1 | 7.1 | 0.869 |
| UdS-MaNaWi | alvations | Universität des Saarlandes | Uses MWE-extractor and NE-tagger outputs, plus NEs from Wikipedia titles, to improve MT; simply adding the NLP outputs to the Moses input improves MT results. | yes | HI-EN: same as the MaN system, but with Wikipedia titles added as NEs to Moses | 10.0 | 10.0 | 7.1 | 7.1 | 0.869 |
| system1 | user | IIIT-Hyderabad | | no | | 8.0 | 8.0 | 7.0 | 7.0 | 0.900 |
| system1 | user | IIIT-Hyderabad | | no | | 7.8 | 7.8 | 6.9 | 6.9 | 0.928 |
| system1 | user | IIIT-Hyderabad | | no | | 7.2 | 7.2 | 6.4 | 6.4 | 0.976 |
| UdS-MaNaWi | alvations | Universität des Saarlandes | Uses MWE-extractor and NE-tagger outputs, plus NEs from Wikipedia titles, to improve MT; simply adding the NLP outputs to the Moses input improves MT results. | yes | HI-EN: MaNaWi runs with cleaned inputs | 7.8 | 7.8 | 5.7 | 5.7 | 0.869 |
| UdS-MaNaWi | alvations | Universität des Saarlandes | Uses MWE-extractor and NE-tagger outputs, plus NEs from Wikipedia titles, to improve MT; simply adding the NLP outputs to the Moses input improves MT results. | yes | HI-EN: training data cleaned by phrase length | 7.0 | 7.0 | 4.9 | 4.9 | 0.865 |

All BLEU variants are percentages (higher is better); TER is translation edit rate (lower is better). "Constraint: yes" marks constrained systems, i.e. those trained only on the data provided for the task.
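For intuition about what the score columns measure: BLEU is modified n-gram precision combined by a geometric mean and scaled by a brevity penalty (the "(11b)" columns presumably come from the NIST mteval-v11b scorer). The sketch below is a minimal, unsmoothed sentence-level BLEU written for illustration only; it is not the official mteval implementation, so real scores will differ in tokenization and smoothing details.

```python
import math
from collections import Counter

def bleu(hypothesis, reference, max_n=4):
    """Unsmoothed sentence-level BLEU over token lists (illustrative sketch)."""
    if not hypothesis:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Modified precision: clip each hypothesis n-gram count by its
        # count in the reference, so repeating a word is not rewarded.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        if clipped == 0 or total == 0:
            return 0.0  # no smoothing: any zero precision zeroes BLEU
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hypothesis) >= len(reference) else \
        math.exp(1.0 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(log_precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                              # -> 1.0 (perfect match)
print(round(bleu("the cat sat on".split(), ref), 3))  # -> 0.607 (BP = exp(-0.5))
```

Doubling every token count in the table's scores, the same logic applies corpus-wide in the official scorer: clipped counts and lengths are summed over all segments before the precisions and brevity penalty are computed, which is why corpus BLEU is not the average of sentence BLEUs.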