Rules


Statement of interest

If you are interested in participating in any of the tasks, please drop us an email at dcasechallenge@gmail.com stating your interest and your preferred task(s). You can take part in a single task or in multiple tasks.

Technical report

Because the challenge is premised upon the sharing of ideas and results, all participants are expected to submit a technical report of at most five pages about the submitted system, to help us and the community better understand how the algorithm works. The deadline for the technical report is the same as the results submission deadline.

Reproducibility of results

To obtain fully comparable results between participants, a file list defining the train/test folds is provided for each task. If you are using these datasets for research, please publish results using the given setup, so that the performance of different methods and approaches can be compared directly.
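
As an illustration of running an evaluation with the provided setup, here is a minimal sketch of a fold-based evaluation loop. The tab-separated fold-list format and the train_system / classify functions are assumptions made for this example, not part of the official challenge materials.

    import csv

    def load_fold(fold_file):
        """Read a fold list, assumed here to contain one 'filename<TAB>label' pair per line."""
        with open(fold_file, newline="") as f:
            return [(row[0], row[1]) for row in csv.reader(f, delimiter="\t")]

    def evaluate_fold(train_list, test_list, train_system, classify):
        """Train on one fold's training files and report accuracy on its test files."""
        model = train_system(load_fold(train_list))       # placeholder: your training routine
        test_items = load_fold(test_list)
        correct = sum(classify(model, filename) == label  # placeholder: your classifier
                      for filename, label in test_items)
        return correct / len(test_items)

    # Hypothetical usage: average the accuracy over all provided folds
    # (the fold file names here are assumptions, not the official ones).
    # accuracies = [evaluate_fold("fold%d_train.txt" % i, "fold%d_test.txt" % i,
    #                             train_system, classify) for i in range(1, 5)]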

After the challenge deadline, the full ground truth for the evaluation datasets will be published for some tasks (currently tasks 1 and 3).

Development dataset

Audio and ground truth annotations for development will be available through this website (development datasets). Participants are not allowed to use external data for training; manipulation of the provided data is allowed.

The evaluation dataset (audio only) will be provided close to the submission deadline. Participants are not allowed to make subjective judgments of the evaluation data, nor to annotate it.

Submission

Participants are requested to submit their results files in the required format. Participants may train their system using any subset, or the complete set, of the available development dataset.

More details about the submission steps and formatting of the output can be found on the submission page.

Evaluation

The evaluation of the submitted results will be done by the organizers.

Ranking:

  • The submissions for task 1 will be ranked based on classification accuracy.
  • The submissions for task 3 will be ranked based on the total error rate (main metric).
  • The submissions for task 4 will be ranked based on the equal error rate (EER).
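
The official evaluation is carried out by the organizers; the following is only a rough, unofficial sketch of what these ranking metrics compute, assuming plain NumPy arrays of reference and estimated labels and of detection scores with binary ground-truth tags.

    import numpy as np

    def classification_accuracy(reference, estimated):
        """Task 1 style metric: fraction of items whose estimated label matches the reference."""
        return float(np.mean(np.asarray(reference) == np.asarray(estimated)))

    def total_error_rate(substitutions, deletions, insertions, n_reference_events):
        """Task 3 style total error rate, commonly defined as ER = (S + D + I) / N
        over the reference events."""
        return (substitutions + deletions + insertions) / n_reference_events

    def equal_error_rate(scores, is_positive):
        """Task 4 style EER, approximated by scanning thresholds over the observed scores
        for the point where false acceptance and false rejection rates meet.
        Assumes both positive and negative examples are present."""
        scores = np.asarray(scores, dtype=float)
        is_positive = np.asarray(is_positive, dtype=bool)
        eer = 1.0
        for threshold in np.unique(scores):
            far = float(np.mean(scores[~is_positive] >= threshold))  # false acceptance rate
            frr = float(np.mean(scores[is_positive] < threshold))    # false rejection rate
            eer = min(eer, max(far, frr))
        return eer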