Rules


Challenge

Statement of interest

You do not need to register in order to participate. All challenge submissions made before the deadline will be accepted.

Technical report

Because the challenge is premised on the sharing of ideas and results, all participants are required to submit a technical report on their system to help the organizers and the community better understand how the algorithm works. The technical report must describe the system in sufficient detail; the maximum length of the document is five pages. The report is submitted at the same time as the results.

Cross-validation setup

To obtain fully comparable results across participants, a file list defining train/test folds is provided for all tasks. Use this setup when reporting development set results. If you use these datasets in your own research, please also publish results using the provided setup; this makes it possible to compare the performance of different methods and approaches fairly. A minimal sketch of running an experiment over the provided folds is shown below.
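
For illustration only, the Python sketch below shows one way to run an experiment over the provided folds. The directory name evaluation_setup, the file names fold<k>_train.txt / fold<k>_test.txt, the tab-separated format, the fold count, and the train/score stubs are all assumptions made for this example; the actual names and formats are given in each task's documentation.

    import csv
    from pathlib import Path

    SETUP_DIR = Path("evaluation_setup")  # assumed location of the fold lists
    NUM_FOLDS = 4                         # the fold count varies per task

    def load_file_list(path):
        # Read a tab-separated file list: one "<audio file>\t<label>" row per line.
        with open(path, newline="") as f:
            return list(csv.reader(f, delimiter="\t"))

    def train(items):
        # Stand-in for the participant's own training routine.
        return {"n_train": len(items)}

    def score(model, items):
        # Stand-in for the task-specific metric; returns a dummy value.
        return 0.0

    fold_scores = []
    for fold in range(1, NUM_FOLDS + 1):
        # Use the provided folds exactly as given; do not re-split the data.
        train_items = load_file_list(SETUP_DIR / f"fold{fold}_train.txt")
        test_items = load_file_list(SETUP_DIR / f"fold{fold}_test.txt")
        fold_scores.append(score(train(train_items), test_items))

    print(f"Development result (mean over folds): {sum(fold_scores) / len(fold_scores):.3f}")

Reporting the mean over the given folds, rather than over a custom split, is what makes development results comparable across participants.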

Challenge rules

  • Manipulation of the provided training and development data is allowed. Task-specific modifications of this rule are indicated on the individual task pages.
  • Participants are not allowed to make subjective judgments of the evaluation data, nor to annotate it.
  • The evaluation dataset cannot be used to train the submitted system. Using statistics about the evaluation data in decision making is also forbidden.
  • System outputs that do not respect the challenge rules will be evaluated on request, but they will not be officially included in the challenge rankings.
  • Persons from the institutes of the DCASE organizing teams are eligible to participate in the challenge if they have an arrangement that prevents them from gaining any unfair advantage. This arrangement should be briefly documented in the technical report describing the submission.
  • By submitting their results, participants grant the organizers the right to evaluate their system output and to publish the results on the challenge web page. To support open science and keep the challenge fair, withdrawing a submission is not allowed.
  • Other task-specific rules may apply; these are indicated and highlighted on the individual task pages.
  • The organizers reserve the right to make changes to the rules and schedule of the challenge.

Use of external data

Information on the use of external data is available on each task page.

Where allowed, the use of external data must satisfy the following conditions:

  • The external resource is clearly referenced and freely accessible to any other research group in the world. External data refers to public datasets or trained models; these must be public and freely available before 1 April 2019.
  • Participants submit at least one system trained without external data, so that the contribution of such resources can be studied. The external data sources used in training must be clearly listed in the technical report.
  • Participants inform the organizers about such data sources in advance, so that all competitors know about them and have an equal opportunity to use them. Please email the task coordinators; the list of external datasets on the web page will be updated accordingly. Once the evaluation set is published, the list of allowed external data resources is locked, and no further external sources are allowed.

Submission

Participants are requested to submit their results in the required submission package. Participants may train their systems using any subset, or the complete set, of the available development dataset.

Multiple system outputs can be submitted (a maximum of 4 per participant). When submitting multiple systems, the individual system output files should be packaged into the same submission package; a sketch of one possible packaging step is shown below. Please follow the instructions carefully when forming the submission package.
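
For illustration only, a minimal Python sketch of bundling several system outputs into one archive. The system names, file names, and directory layout here are invented for this example; the authoritative package structure and naming scheme are given in the official submission instructions.

    import zipfile
    from pathlib import Path

    # Hypothetical example: two submitted systems, each with an output
    # file and a metadata file in its own directory.
    systems = ["Lastname_Inst_task1_1", "Lastname_Inst_task1_2"]

    with zipfile.ZipFile("submission_package.zip", "w") as package:
        for name in systems:
            package.write(Path(name) / f"{name}.output.csv")  # system output
            package.write(Path(name) / f"{name}.meta.yaml")   # system metadata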

Evaluation

The evaluation of the submitted results will be carried out by the organizers. The metric used to rank submissions differs for each task and is described on the corresponding task page.

The evaluation results will be announced online and discussed at the DCASE 2019 Workshop.