Rules

Statement of interest

You can drop us an email at dcasechallenge@gmail.com if you are interested in participating in any of the tasks. However, this is not required, and all challenge submissions will be accepted.

Technical report

Because the challenge is premised on the sharing of ideas and results, all participants are expected to submit a technical report on the submitted system to help us and the community better understand how the algorithm works. The technical report must describe the system in sufficient detail. The maximum length for the document is 5 pages. The technical report is submitted at the same time as the results.

Detailed information on required content can be found on the submission page.

Cross-validation setup

To obtain fully comparable results between participants, a file list containing the train/test folds is provided for all tasks. Use the provided setup when reporting development set results. If you use this dataset for research, please also publish results using the given setup; this makes it possible to compare performance fairly across different methods and approaches.
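
For illustration, here is a minimal sketch (in Python) of loading such a fold list so that the split stays fixed across experiments. The file names and the tab-separated format are assumptions for the example; check the setup files shipped with each task:

    import csv

    def load_fold_list(path):
        # Each row is assumed to be '<audio_file>\t<label>'; the exact format is task specific.
        with open(path, newline='') as f:
            return [{'file': row[0], 'label': row[1]}
                    for row in csv.reader(f, delimiter='\t')]

    # Hypothetical file names; use the fold lists provided with the task data.
    train_items = load_fold_list('evaluation_setup/fold1_train.txt')
    test_items = load_fold_list('evaluation_setup/fold1_test.txt')
    print(len(train_items), 'training files,', len(test_items), 'test files')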

After the challenge, reference annotations of the evaluation datasets will also be published.

Challenge rules

  • Participants are not allowed to use external data for system development. Data from another task is considered external data.
  • Manipulation of the provided training and development data is allowed. Task-specific modifications of this rule are indicated on the individual task pages.
  • Participants are not allowed to make subjective judgments of the evaluation data, nor to annotate it.
  • The evaluation dataset cannot be used to train the submitted system; the use of statistics about the evaluation data in decision making is also forbidden.
  • System outputs that do not respect the challenge rules will be evaluated on request, but they will not be officially included in the challenge rankings.
  • Other task-specific rules may apply; these are indicated and highlighted on the individual task pages.

Submission

Participants are requested to submit their results in the required submission package. Participants may train their systems using any subset, or the complete set, of the available development dataset.

Multiple system outputs can be submitted (maximum 4 per participant). If submitting multiple systems, the individual system output files should be packaged into the same submission package. Please follow the instructions carefully when forming the submission package.

More details about the whole submission process can be found on the submission page.

Evaluation

The evaluation of the submitted results will be done by the organizers.

Ranking:

  • The submissions for task 1 will be ranked based on the classification accuracy.
  • The submissions for task 2 will be ranked based on the event-based error rate.
  • The submissions for task 3 will be ranked based on the segment-based error rate (see the sketch after this list).
  • The submissions for task 4 will be ranked independently for each of the two metrics (with and without timestamps).
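
Both error-rate metrics combine substitutions (S), deletions (D), and insertions (I), normalized by the number of reference events (N): ER = (S + D + I) / N. The following is a minimal sketch of the segment-based variant, assuming the per-segment active event labels have already been extracted; it illustrates the formula and is not the official evaluation code:

    def segment_based_error_rate(reference_segments, estimated_segments):
        # Each argument is a list of sets: the event labels active in each
        # fixed-length segment according to the reference / the system output.
        S = D = I = N = 0
        for ref, est in zip(reference_segments, estimated_segments):
            fn = len(ref - est)   # reference events the system missed
            fp = len(est - ref)   # events the system reported in error
            S += min(fn, fp)      # substitutions
            D += max(0, fn - fp)  # deletions
            I += max(0, fp - fn)  # insertions
            N += len(ref)         # active reference events
        return (S + D + I) / N if N else 0.0

    # Toy example over three one-second segments:
    reference = [{'car'}, {'car', 'speech'}, set()]
    estimated = [{'car'}, {'speech', 'dog'}, {'dog'}]
    print(segment_based_error_rate(reference, estimated))
    # 0.67: one substitution and one insertion over three reference events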

The evaluation results will be announced online and discussed at the DCASE 2017 Workshop.