Leaderboard Guidelines
Every task defined on every relational database in RelBench is a benchmark: for each one we provide fixed training, validation, and test splits together with an official performance evaluation metric.
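As a quick orientation, the snippet below enumerates the available datasets and the tasks defined on one of them. This is a minimal sketch assuming the get_dataset_names and get_task_names helpers exposed by the relbench Python package; names and module paths may differ between versions, so consult the RelBench documentation for the current API.

```python
# Minimal sketch: list RelBench datasets and the tasks defined on each.
# Assumes the get_dataset_names / get_task_names helpers in the relbench
# package; these names may change across versions.
from relbench.datasets import get_dataset_names
from relbench.tasks import get_task_names

for dataset_name in get_dataset_names():
    # Each task on each dataset is a separate benchmark with its own
    # splits and evaluation metric.
    print(dataset_name, get_task_names(dataset_name))
```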
To participate in the leaderboard for a specific benchmark, follow these steps (a code sketch follows the list):
1. Use the RelBench data loader to retrieve the data and splits.
2. Use the training and/or validation set to train your model.
3. Use the RelBench model evaluator to calculate the performance of your model on the test set.
4. Submit the test set performance to the RelBench leaderboard.
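The sketch below walks through these four steps end to end. It is illustrative only: train_my_model and model.predict are hypothetical placeholders for your own training and inference code, and the get_task / get_table / evaluate calls reflect the relbench Python API as we understand it, which may change between versions.

```python
# Illustrative workflow sketch; see the RelBench docs for authoritative calls.
from relbench.tasks import get_task

# Step 1: retrieve the task and its official data splits.
task = get_task("rel-stack", "user-engagement", download=True)
train_table = task.get_table("train")  # training labels
val_table = task.get_table("val")      # validation labels
test_table = task.get_table("test")    # test labels are withheld

# Step 2: train your model on the train/val splits
# (train_my_model is a hypothetical placeholder for your own code).
model = train_my_model(train_table, val_table)

# Step 3: predict on the test rows and score with the official evaluator.
test_pred = model.predict(test_table)  # hypothetical placeholder
metrics = task.evaluate(test_pred)     # computes the benchmark's test metric

# Step 4: report these numbers via the submission form.
print(metrics)
```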
How to submit?
Use the Google form to make a submission. Results are posted after we check the model's validity, which typically takes about a week.
The FAIR Guiding Principles
ML tools have become essential for research. To improve the findability, accessibility, interoperability, and reuse of ML tools, we apply the FAIR4RS principles and implementation guidelines to all software and ML tools included in RelBench leaderboards. We strongly believe that software and ML tools should be open and adhere to FAIR principles to encourage repeatability, reproducibility, and reuse, and RelBench itself follows these guidelines.