There are two important updates on Task 4's data.
1. Verifying downloaded data
Thanks to an inquiry from Justin Salamon, we are providing two methods for participants to check that their development set downloaded properly, not only in the number of files but also in the duration of each clip.
For the first method, we have computed lists containing the duration of each clip in the training and testing sets, released in Task 4's GitHub repository. Additionally, a script is provided that computes the duration of every audio file in a given path. The lists and script are:
- Training Set Audio Duration:
- Testing Set Audio Duration:
- Script to compute duration:
Second, we are sharing direct download links to the clips of the training and testing sets owned by the organisers. The training set is Here (Password: DCASE_2017_training_set) and the testing set is Here (Password: DCASE_2017_testing_set). Later on, we will do the same for the evaluation set.
Note that AudioSet inherently contains clips shorter than 10 seconds: 85/488 (~17%) of testing-set clips and 10785/51172 (~21%) of training-set clips are shorter than 10 seconds, and the evaluation set has a lower proportion.
2. Verified strong labels
We thank Kyriakos Poutos for pointing out an issue with the file
groundtruth_strong_labels_testing_set.csv. A few rows of the strong labels contained the class "Car horn", which is not one of the classes in our set. We reviewed the annotations, found that the impact on performance is minor, and have updated the file on GitHub. Participants are asked to update this file and to review the "Update Log" in the repository's README.
Apologies for the inconvenience, and please let us know if further clarification is needed.