InstructionReranking¶
- Number of tasks: 5
Core17InstructionRetrieval¶
Measuring retrieval instruction following ability on Core17 narratives for the FollowIR benchmark.
Dataset: jhu-clsp/core17-instructions-mteb
• License: mit
Task category | Score | Languages | Domains | Annotations Creators | Sample Creation |
---|---|---|---|---|---|
text to text (t2t) | p-MRR | eng | News, Written | derived | found |
Citation
@misc{weller2024followir,
archiveprefix = {arXiv},
author = {Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
eprint = {2403.15246},
primaryclass = {cs.IR},
title = {FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions},
year = {2024},
}
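All tasks in this benchmark report p-MRR, a paired metric from the FollowIR paper that compares the rank of a relevant document before and after the query's instruction is changed. As an illustrative proxy for the idea (the exact pairwise normalization is defined in the paper; the `ranks_og`/`ranks_new` names and sample data below are hypothetical), one can average the per-query change in reciprocal rank:

```python
def paired_mrr_delta(ranks_og, ranks_new):
    """Mean per-query change in reciprocal rank of the first relevant
    document when the instruction is altered. Positive values mean the
    model moved relevant documents up after the change.
    (Illustrative proxy; FollowIR's p-MRR uses a pairwise rank-ratio form.)
    """
    assert len(ranks_og) == len(ranks_new)
    deltas = [1.0 / rn - 1.0 / ro for ro, rn in zip(ranks_og, ranks_new)]
    return sum(deltas) / len(deltas)

# Hypothetical ranks of the first relevant document per query,
# without vs. with the revised instruction.
ranks_og = [1, 5, 10]
ranks_new = [1, 2, 20]
print(round(paired_mrr_delta(ranks_og, ranks_new), 4))  # → 0.0833
```

A score near zero indicates the model ignores the instruction; a clearly positive score indicates it re-ranks documents in the instructed direction.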
News21InstructionRetrieval¶
Measuring retrieval instruction following ability on News21 narratives for the FollowIR benchmark.
Dataset: jhu-clsp/news21-instructions-mteb
• License: mit
Task category | Score | Languages | Domains | Annotations Creators | Sample Creation |
---|---|---|---|---|---|
text to text (t2t) | p-MRR | eng | News, Written | derived | found |
Citation
@misc{weller2024followir,
archiveprefix = {arXiv},
author = {Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
eprint = {2403.15246},
primaryclass = {cs.IR},
title = {FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions},
year = {2024},
}
Robust04InstructionRetrieval¶
Measuring retrieval instruction following ability on Robust04 narratives for the FollowIR benchmark.
Dataset: jhu-clsp/robust04-instructions-mteb
• License: mit
Task category | Score | Languages | Domains | Annotations Creators | Sample Creation |
---|---|---|---|---|---|
text to text (t2t) | p-MRR | eng | News, Written | derived | found |
Citation
@misc{weller2024followir,
archiveprefix = {arXiv},
author = {Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
eprint = {2403.15246},
primaryclass = {cs.IR},
title = {FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions},
year = {2024},
}
mFollowIR¶
This task measures retrieval instruction-following ability on NeuCLIR narratives for the mFollowIR benchmark in Farsi, Russian, and Chinese.
Dataset: jhu-clsp/mFollowIR-parquet-mteb
• License: odc-by
Task category | Score | Languages | Domains | Annotations Creators | Sample Creation |
---|---|---|---|---|---|
text to text (t2t) | p-MRR | fas, rus, zho | News, Written | expert-annotated | found |
Citation
@article{weller2024mfollowir,
author = {Weller, Orion and Chang, Benjamin and Yang, Eugene and Yarmohammadi, Mahsa and Barham, Sam and MacAvaney, Sean and Cohan, Arman and Soldaini, Luca and Van Durme, Benjamin and Lawrie, Dawn},
journal = {arXiv preprint TODO},
title = {{mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval}},
year = {2024},
}
mFollowIRCrossLingual¶
This task measures retrieval instruction-following ability on NeuCLIR narratives for the mFollowIR benchmark in Farsi, Russian, and Chinese, with English queries and instructions.
Dataset: jhu-clsp/mFollowIR-cross-lingual-parquet-mteb
• License: odc-by
Task category | Score | Languages | Domains | Annotations Creators | Sample Creation |
---|---|---|---|---|---|
text to text (t2t) | p-MRR | eng, fas, rus, zho | News, Written | expert-annotated | found |
Citation
@article{weller2024mfollowir,
author = {Weller, Orion and Chang, Benjamin and Yang, Eugene and Yarmohammadi, Mahsa and Barham, Sam and MacAvaney, Sean and Cohan, Arman and Soldaini, Luca and Van Durme, Benjamin and Lawrie, Dawn},
journal = {arXiv preprint TODO},
title = {{mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval}},
year = {2024},
}