
Adding a benchmark

The MTEB Leaderboard is available here, and we encourage additions of new benchmarks.

To add a new benchmark:

  1. Add your benchmark to benchmarks.py as a Benchmark object and select the MTEB tasks it should contain (see the sketch after this list). If some of those tasks do not yet exist in MTEB, follow the "add a dataset" instructions to add them first.
  2. Open a PR to the results repository with results of models on your benchmark (a sketch of how to produce results follows this list).
  3. [optional] Once your PR with benchmark results is merged, you can add your benchmark to the most fitting section in benchmark_selector.py so that it is shown on the leaderboard.
  4. Once the PRs are merged, your benchmark will be added to the leaderboard automatically at the next workflow trigger, which runs daily at midnight Pacific Time (8 AM UTC).
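
For step 1, a minimal sketch of what a Benchmark entry in benchmarks.py could look like. The import path and field names (name, tasks, description, reference, citation) follow existing entries in that file but may differ slightly across mteb versions, and the benchmark name, task names, and reference URL below are placeholders.

```python
import mteb
from mteb.benchmarks import Benchmark  # import path assumed; matches recent mteb layouts

# Hypothetical benchmark definition. Field names mirror existing entries
# in benchmarks.py; replace the placeholders with your own values.
MY_BENCHMARK = Benchmark(
    name="MyBenchmark(eng)",  # placeholder benchmark name
    tasks=mteb.get_tasks(
        tasks=["Banking77Classification", "STS12"],  # placeholder names of tasks already in MTEB
    ),
    description="A short description of what the benchmark measures.",
    reference="https://example.com/my-benchmark",  # hypothetical reference URL
    citation=None,
)
```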
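For step 2, the result files submitted to the results repository can be produced with the standard mteb evaluation loop. A minimal sketch, assuming the placeholder tasks above; the model name is only an example, and any model supported by mteb works.

```python
import mteb

# Load an example embedding model.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Select the tasks that make up the benchmark (placeholder task names).
tasks = mteb.get_tasks(tasks=["Banking77Classification", "STS12"])

# Run the evaluation; result JSON files are written to the `results` folder
# and can then be submitted as a PR to the results repository.
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results")
```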