Submission


Instructions

Introduction

The challenge submission consists of a single submission package (one zip package) containing the system outputs, the system meta information, and a technical report (PDF file). The technical report can be the same as your DCASE2018 Workshop submission, but please use the template provided for each.

In short, the submission process is as follows:

  1. Participants run their system on the evaluation dataset and produce the system output in the specified format. Up to 4 different system outputs may be submitted per task or subtask.
  2. Participants create a meta information file to accompany each system output, describing the system used to produce that particular output. The meta information file has a predefined format to enable automatic handling of the challenge submissions; the information provided in it will later be used to produce the challenge results.
  3. Participants describe their system in sufficient detail in a technical report. A template is provided for the technical report.
  4. Participants prepare the submission package (zip file). The package contains the system outputs (maximum 4 per task), the system meta information, and the technical report.
  5. Participants submit the submission package and the technical report to the DCASE2018 Challenge.

Please read the requirements for the files included in the submission package carefully!

Submission system

The submission system is now available:

Submission system

  • Create a user account and log in
  • Go to the "All Conferences" tab in the system and type DCASE to filter the list
  • Create a new submission by selecting the DCASE2018 challenge in the dropdown menu

Submission package

Participants are instructed to pack their system output(s), system meta information, and technical report into one zip-package. Example package:


Please prepare your submission zip file as in the provided example. Follow the same file structure, and fill in the meta information following the structure of the *.meta.yaml files. The zip file should contain system outputs for all tasks/subtasks (maximum 4 submissions per task/subtask), separate meta information for each system, and technical report(s) covering all submitted systems.
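Assembling the archive can be done with Python's standard library, for example (a sketch; the directory name `submission/` is a placeholder for your prepared package root containing the task folders):

```python
import os
import shutil

# "submission" is a placeholder for your package root (task1/, task2/, ...)
os.makedirs("submission/task1", exist_ok=True)

# Pack the directory tree into submission.zip
archive = shutil.make_archive("submission", "zip", root_dir="submission")
print(archive)
```

Any tool that produces a standard zip archive with the required directory structure works equally well.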

If you submit similar systems for multiple tasks, you can describe everything in one technical report. If your approaches for different tasks are significantly different, prepare one technical report for each and include it in the corresponding task folder.

More detailed instructions for constructing the package can be found in the following sections. The technical report template is available here.

Submission label

The submission label is used to index all your submissions (systems per task). To avoid overlapping labels among all submitted systems, form your label as follows:

[Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number][subtask letter (optional)]_[index number of your submission (1-4)]

For example, the baseline systems would have the following labels:

  • Heittola_TUT_task1a_1
  • Heittola_TUT_task1b_1
  • Heittola_TUT_task1c_1
  • Fonseca_UPF_task2_1
  • Stowell_QMUL_task3_1
  • Serizel_ULO_task4_1
  • Dekkers_KUL_task5_1
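A label following this pattern can be sanity-checked with a simple regular expression. A sketch (the allowance for hyphenated last names is an assumption, not part of the official format):

```python
import re

# [Last name]_[Institute abbreviation]_task[number][optional subtask letter]_[index 1-4]
LABEL_RE = re.compile(r"^[A-Za-z][A-Za-z-]*_[A-Za-z]+_task\d+[a-z]?_[1-4]$")

for label in ["Heittola_TUT_task1a_1", "Dekkers_KUL_task5_1"]:
    assert LABEL_RE.match(label), f"malformed submission label: {label}"
```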

Package structure

Make sure your zip package follows the file naming convention and directory structure shown below:

Zip-package root
│  
└───task1                                           Task1 submissions
│   │   Heittola_TUT_task1.technical_report.pdf     Technical report covering all subtasks
│   │   Heittola_TUT_task1a.technical_report.pdf    (optional) Technical report for subtask A system only
│   │   Heittola_TUT_task1b.technical_report.pdf    (optional) Technical report for subtask B system only
│   │
│   └───Heittola_TUT_task1a_1                       Subtask A System 1 submission files
│   │       Heittola_TUT_task1a_1.meta.yaml         Subtask A System 1 meta information
│   │       Heittola_TUT_task1a_1.output.csv        Subtask A System 1 output
│   :
│   └───Heittola_TUT_task1a_4                       Subtask A System 4 submission files
│   │       Heittola_TUT_task1a_4.meta.yaml         Subtask A System 4 meta information
│   │       Heittola_TUT_task1a_4.output.csv        Subtask A System 4 output
│   │            
│   └───Heittola_TUT_task1b_1                       Subtask B System 1 submission files
│   │       Heittola_TUT_task1b_1.meta.yaml         Subtask B System 1 meta information
│   │       Heittola_TUT_task1b_1.output.csv        Subtask B System 1 output
│   :
│   └───Heittola_TUT_task1b_4                       Subtask B System 4 submission files
│   │       Heittola_TUT_task1b_4.meta.yaml         Subtask B System 4 meta information
│   │       Heittola_TUT_task1b_4.output.csv        Subtask B System 4 output
│   │      
│   └───Heittola_TUT_task1c_1                       Subtask C System 1 submission files
│   │       Heittola_TUT_task1c_1.meta.yaml         Subtask C System 1 meta information
│   │       Heittola_TUT_task1c_1.output.csv        Subtask C System 1 output
│   :            
│   └───Heittola_TUT_task1c_4                       Subtask C System 4 submission files
│           Heittola_TUT_task1c_4.meta.yaml         Subtask C System 4 meta information
│           Heittola_TUT_task1c_4.output.csv        Subtask C System 4 output
│          
└───task2                                           Task2 submissions
│   │   Fonseca_UPF_task2.technical_report.pdf      Technical report                       
│   │
│   └───Fonseca_UPF_task2_1                         System 1 submission files
│   │     Fonseca_UPF_task2_1.meta.yaml             System 1 meta information
│   │     Fonseca_UPF_task2_1.output.csv            System 1 output   
│   :
│   │
│   └───Fonseca_UPF_task2_2                         System 2 submission files
│         Fonseca_UPF_task2_2.meta.yaml             System 2 meta information
│         Fonseca_UPF_task2_2.output.csv            System 2 output   
│   
└───task3                                           Task3 submissions
│   │   Stowell_QMUL_task3.technical_report.pdf     Technical report                      
│   │
│   └───Stowell_QMUL_task3_1                        System 1 submission files
│   │     Stowell_QMUL_task3_1.meta.yaml            System 1 meta information
│   │     Stowell_QMUL_task3_1.output.csv           System 1 output
│   :
│   │
│   └───Stowell_QMUL_task3_4                        System 4 submission files
│         Stowell_QMUL_task3_4.meta.yaml            System 4 meta information
│         Stowell_QMUL_task3_4.output.csv           System 4 output
│
└───task4                                           Task4 submissions
│   │   Serizel_ULO_task4.technical_report.pdf      Technical report                      
│   │
│   └───Serizel_ULO_task4_1                         System 1 submission files
│   │     Serizel_ULO_task4_1.meta.yaml             System 1 meta information
│   │     Serizel_ULO_task4_1.output.csv            System 1 output
│   :
│   │
│   └───Serizel_ULO_task4_4                         System 4 submission files
│         Serizel_ULO_task4_4.meta.yaml             System 4 meta information
│         Serizel_ULO_task4_4.output.csv            System 4 output
│
└───task5                                           Task5 submissions
    │   Dekkers_KUL_task5.technical_report.pdf      Technical report                      
    │
    └───Dekkers_KUL_task5_1                         System 1 submission files
    │     Dekkers_KUL_task5_1.meta.yaml             System 1 meta information
    │     Dekkers_KUL_task5_1.output.csv            System 1 output
    :
    │
    └───Dekkers_KUL_task5_4                         System 4 submission files
          Dekkers_KUL_task5_4.meta.yaml             System 4 meta information
          Dekkers_KUL_task5_4.output.csv            System 4 output
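Before uploading, it is worth verifying that every system folder contains a matching .meta.yaml and .output.csv pair named after its submission label. A minimal sketch (`check_package` is a hypothetical helper, not part of the challenge tooling; pass it the path of your unpacked package root):

```python
import os

def check_package(root):
    """Report expected files that are missing from system submission folders."""
    missing = []
    for task in sorted(os.listdir(root)):
        task_dir = os.path.join(root, task)
        if not os.path.isdir(task_dir):
            continue
        for entry in sorted(os.listdir(task_dir)):
            sys_dir = os.path.join(task_dir, entry)
            if not os.path.isdir(sys_dir):
                continue  # e.g. the technical report PDF
            # Each system folder must contain files named after its submission label
            for suffix in (".meta.yaml", ".output.csv"):
                path = os.path.join(sys_dir, entry + suffix)
                if not os.path.isfile(path):
                    missing.append(path)
    return missing
```

An empty return value means every system folder carries its expected file pair.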

System outputs

Participants must submit the results for the provided evaluation datasets.

  • Follow the system output format specified in the task description.

  • Tasks are independent. You can participate in a single task or in multiple tasks.

  • Multiple submissions for the same task are allowed (maximum 4 per task). Use a running index in the submission label, and give more descriptive names to the submitted systems in the system meta information files. Please mark clearly the connection between the submitted systems and the system descriptions in the technical report (for example, by referring to the systems by their submission label or by the system name given in the meta information file).

  • Submitted system outputs will later be published on the DCASE2018 website to allow future evaluations.

Technical report

All participants are expected to submit a technical report about the submitted system, to help the DCASE community better understand how the algorithm works.

Technical reports are not peer-reviewed. They will be published on the challenge website together with all other information about the submitted system. The technical report does not need to follow the structure of a scientific publication closely (for example, there is no need for an extensive literature review), but it should contain a sufficient description of the system.

Please report the system performance using the provided cross-validation setup or development set, according to the task. For participants taking part in multiple tasks, one technical report covering all tasks is sufficient if the systems have only small differences; describe the task-specific parameters in the report.

Participants can also submit the same report as a scientific paper to the DCASE2018 Workshop. In this case, the paper must follow the structure of a scientific publication and be prepared according to the provided Workshop paper instructions and template. Please note that the template is slightly different, and you will have to create a separate submission to the DCASE2018 Workshop track in the submission system. Please refer to the workshop webpage for more details. DCASE2018 Workshop papers will be peer-reviewed.

Template

Reports follow a 4+1 page format: papers are a maximum of 5 pages, including all text, figures, and references, with the 5th page containing only references. The templates for the technical report are available here:

Latex template (279 KB)
version 1.0 (.zip)


Word template (38 KB)
version 1.0 (.docx)



Meta information

To allow meta analysis of the submitted systems, participants should provide high-level meta information in a structured and correctly formatted YAML file.

See the example meta files below for each baseline system. These examples are also available in the example submission package. The meta file structure is mostly the same for all tasks; only the metrics collected in the results->development_dataset section differ per challenge task.

Example meta information file for Task 1 baseline system task1/Heittola_TUT_task1a_1/Heittola_TUT_task1a_1.meta.yaml:

# Submission information
submission:
  # Submission label
  # Label is used to index submissions; to avoid overlapping codes among submissions,
  # form your label as follows:
  # [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
  label: Heittola_TUT_task1a_1

  # Submission name
  # This name will be used in the results tables when space permits
  name: DCASE2018 baseline system

  # Submission name abbreviated
  # This abbreviated name will be used in the results table when space is tight, maximum 10 characters
  abbreviation: Baseline

  # Submission authors in order, mark one of the authors as corresponding author.
  authors:
    # First author
    - lastname: Heittola
      firstname: Toni
      email: toni.heittola@tut.fi                     # Contact email address
      corresponding: true                             # Mark true for one of the authors

      # Affiliation information for the author
      affiliation:
        abbreviation: TUT
        institute: Tampere University of Technology
        department: Laboratory of Signal Processing
        location: Tampere, Finland

    # Second author
    - lastname: Mesaros
      firstname: Annamaria
      email: annamaria.mesaros@tut.fi                 # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: TUT
        institute: Tampere University of Technology
        department: Laboratory of Signal Processing
        location: Tampere, Finland

# System information
system:
  # System description; the metadata provided here will be used for
  # meta analysis of the submitted systems. Use general-level tags; where possible, use the tags provided in the comments.
  # If a field is not applicable to the system, use "!!null".
  description:

    # Audio input
    input_channels: mono                  # e.g. one or multiple [mono, binaural, left, right, mixed, ...]
    input_sampling_rate: 48kHz            #

    # Acoustic representation
    acoustic_features: log-mel energies   # e.g. one or multiple [MFCC, log-mel energies, spectrogram, CQT, ...]

    # Data augmentation methods
    data_augmentation: !!null             # [time stretching, block mixing, pitch shifting, ...]

    # Machine learning
    # If using ensemble methods, please specify all methods used (comma separated list).
    machine_learning_method: CNN          # e.g. one or multiple [GMM, HMM, SVM, kNN, MLP, CNN, RNN, CRNN, NMF, random forest, ensemble, ...]

    # Ensemble method subsystem count
    # If an ensemble method is not used, mark !!null.
    ensemble_method_subsystem_count: !!null # [2, 3, 4, 5, ... ]

    # Decision making methods
    decision_making: !!null               # [majority vote, ...]

  # System complexity, meta data provided here will be used to evaluate
  # submitted systems from the computational load perspective.
  complexity:

    # Total number of parameters used in the acoustic model. For neural networks, this
    # information is usually given in the network summary before the training process.
    # For methods other than neural networks, if the parameter count is not directly
    # available, estimate it as accurately as possible.
    # In case of ensemble approaches, add up the parameters of all subsystems.
    total_parameters: 116118

  # URL to the source code of the system [optional]
  source_code: https://github.com/DCASE-REPO/dcase2018_baseline/tree/master/task1

# System results
results:
  # Full results are not mandatory, but they are recommended for a thorough analysis of the challenge submissions.
  # If you cannot provide all results, incomplete results can also be reported.

  development_dataset:
    # System results for the development dataset with the provided cross-validation setup.

    # Overall accuracy (mean of class-wise accuracies)
    overall:
      accuracy: 59.7

    # Class-wise accuracies
    class_wise:
      airport:
        accuracy: 72.9
      bus:
        accuracy: 62.9
      metro:
        accuracy: 51.2
      metro_station:
        accuracy: 55.4
      park:
        accuracy: 79.1
      public_square:
        accuracy: 40.4
      shopping_mall:
        accuracy: 49.6
      street_pedestrian:
        accuracy: 50.0
      street_traffic:
        accuracy: 80.5
      tram:
        accuracy: 55.1

  public_leaderboard:
    # System score from public leaderboard (https://www.kaggle.com/c/dcase2018-task1a-leaderboard)
    overall:
      accuracy: 62.5                     
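The overall accuracy in the development_dataset section above is the mean of the class-wise accuracies, so the two fields can be cross-checked before packaging. A quick sanity check with the Task 1 baseline numbers (plain Python; parsing the YAML file itself, e.g. with PyYAML, is left out here):

```python
from statistics import mean

# Class-wise accuracies copied from the Task 1 baseline meta file
class_wise = {
    "airport": 72.9, "bus": 62.9, "metro": 51.2, "metro_station": 55.4,
    "park": 79.1, "public_square": 40.4, "shopping_mall": 49.6,
    "street_pedestrian": 50.0, "street_traffic": 80.5, "tram": 55.1,
}

overall = round(mean(class_wise.values()), 1)
assert overall == 59.7  # matches the reported overall accuracy
```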
                

Example meta information file for Task 2 baseline system task2/Fonseca_UPF_task2_1/Fonseca_UPF_task2_1.meta.yaml:

# Submission information
submission:
  # Submission label
  # Label is used to index submissions; to avoid overlapping codes among submissions,
  # form your label as follows:
  # [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
  label: Fonseca_UPF_task2_1

  # Submission name
  # This name will be used in the results tables when space permits
  name: DCASE2018 baseline system

  # Submission name abbreviated
  # This abbreviated name will be used in the results table when space is tight, maximum 10 characters
  abbreviation: Baseline

  # Submission authors in order, mark one of the authors as corresponding author.
  authors:
    # First author
    - lastname: Fonseca
      firstname: Eduardo
      email: eduardo.fonseca@upf.edu                 # Contact email address
      corresponding: true                         # Mark true for one of the authors

      # Affiliation information for the author
      affiliation:
        abbreviation: UPF
        institute: Universitat Pompeu Fabra, Barcelona
        department: Music Technology Group
        location: Barcelona, Spain

    # Second author
    - lastname: Font
      firstname: Frederic
      email: frederic.font@upf.edu                  # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: UPF
        institute: Universitat Pompeu Fabra, Barcelona
        department: Music Technology Group
        location: Barcelona, Spain

    # Third author
    - lastname: Plakal
      firstname: Manoj
      email: plakal@google.com                # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: GOOGLE
        institute: Google Research
        department: Machine Perception Team
        location: New York, USA

    # Fourth author
    - lastname: Ellis
      firstname: Daniel P. W.
      email: dpwe@google.com               # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: GOOGLE
        institute: Google Research
        department: Machine Perception Team
        location: New York, USA


# System information
system:
  # System description; the metadata provided here will be used for
  # meta analysis of the submitted systems. Use general-level tags; where possible, use the tags provided in the comments.
  # If a field is not applicable to the system, use "!!null".
  description:

    # Audio input
    input_channels: mono                  # e.g. one or multiple [mono, binaural, left, right, mixed, ...]
    input_sampling_rate: 44.1kHz            #

    # Acoustic representation
    acoustic_features: log-mel energies   # e.g. one or multiple [MFCC, log-mel energies, spectrogram, CQT, ...]

    # Data augmentation methods
    data_augmentation: !!null             # [time stretching, block mixing, pitch shifting, ...]

    # Machine learning
    # If using ensemble methods, please specify all methods used (comma separated list).
    machine_learning_method: CNN          # e.g. one or multiple [GMM, HMM, SVM, kNN, MLP, CNN, RNN, CRNN, NMF, random forest, ensemble, ...]

    # Ensemble method subsystem count
    # If an ensemble method is not used, mark !!null.
    ensemble_method_subsystem_count: !!null # [2, 3, 4, 5, ... ]

    # Decision making methods
    decision_making: !!null               # [majority vote, ...]

    # External data approach
    external_data: !!null                 # [pre-trained model, audio data, ...]

    # Re-labeling of the non-verified portion of the train set
    re_labeling: !!null                  # [automatic, manual, ...]

  # System complexity, meta data provided here will be used to evaluate
  # submitted systems from the computational load perspective.
  complexity:

    # Total number of parameters used in the acoustic model. For neural networks, this
    # information is usually given in the network summary before the training process.
    # For methods other than neural networks, if the parameter count is not directly
    # available, estimate it as accurately as possible.
    # In case of ensemble approaches, add up the parameters of all subsystems.
    total_parameters: 658100

  # URL to the source code of the system [optional]
  source_code: https://github.com/DCASE-REPO/dcase2018_baseline/tree/master/task2/

# System results
results:
  # Full results are not mandatory, but they are recommended for a thorough analysis of the challenge submissions.
  # If you cannot provide all results, incomplete results can also be reported.

  public_leaderboard:
    # System score from public leaderboard (https://www.kaggle.com/c/freesound-audio-tagging/leaderboard)
    overall:
      mAP: 0.704              
                

Example meta information file for Task 3 baseline system task3/*.meta.yaml:

# Submission information
submission:
  # Submission label
  # Label is used to index submissions; to avoid overlapping codes among submissions,
  # form your label as follows:
  # [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
  label: Stowell_QMUL_task3_1

  # Submission name
  # This name will be used in the results tables when space permits
  name: DCASE2018 all-zero example

  # Submission name abbreviated
  # This abbreviated name will be used in the results table when space is tight, maximum 10 characters
  abbreviation: AllZeros

  # Submission authors in order, mark one of the authors as corresponding author.
  authors:
    # First author
    - lastname: Stowell
      firstname: Dan
      email: dan.stowell@qmul.ac.uk                   # Contact email address
      corresponding: true                             # Mark true for one of the authors

      # Affiliation information for the author
      affiliation:
        abbreviation: QMUL
        institute: Queen Mary University of London
        department: Machine Listening Lab / Centre for Digital Music
        location: London, UK

# System information
system:
  # System description; the metadata provided here will be used for
  # meta analysis of the submitted systems. Use general-level tags; where possible, use the tags provided in the comments.
  # If a field is not applicable to the system, use "!!null".
  description:

    # Audio input
    input_channels: mono                  # e.g. one or multiple [mono, binaural, left, right, mixed, ...]
    input_sampling_rate: 44.1kHz          #

    # Acoustic representation
    acoustic_features: log-mel energies   # e.g. one or multiple [MFCC, log-mel energies, spectrogram, CQT, ...]

    # Data augmentation methods
    data_augmentation: !!null             # [time stretching, block mixing, pitch shifting, ...]

    # Machine learning
    # If using ensemble methods, please specify all methods used (comma separated list).
    machine_learning_method: CNN          # e.g. one or multiple [GMM, HMM, SVM, kNN, MLP, CNN, RNN, CRNN, NMF, random forest, ensemble, ...]

    # Ensemble method subsystem count
    # If an ensemble method is not used, mark !!null.
    ensemble_method_subsystem_count: !!null # [2, 3, 4, 5, ... ]

    # Decision making methods
    decision_making: !!null               # [majority vote, ...]

  # System complexity, meta data provided here will be used to evaluate
  # submitted systems from the computational load perspective.
  complexity:

    # Total number of parameters used in the acoustic model. For neural networks, this
    # information is usually given in the network summary before the training process.
    # For methods other than neural networks, if the parameter count is not directly
    # available, estimate it as accurately as possible.
    # In case of ensemble approaches, add up the parameters of all subsystems.
    total_parameters: 116118

  # URL to the source code of the system [optional]
  source_code: https://github.com/DCASE-REPO/dcase2018_baseline/tree/master/task3

# System results
results:

  development_dataset:
    # System results for the development dataset with the provided cross-validation setup.

    # Overall score (harmonic mean of AUCs for each of the folds/datasets)
    overall:
      auc: 50.0

    # These detailed results are not mandatory, but they are recommended for a thorough analysis of the challenge submissions.
    # If you cannot provide all results, incomplete results can also be reported; simply delete these lines.
    perdataset:
      warblrb10k:
        auc: 50.0
      chern:
        auc: 50.0
      PolandNFC:
        auc: 50.0

  public_leaderboard:
    # System score from public leaderboard http://lsis-argo.lsis.org:8005/
    overall:
      auc: 50.0           
                

Example meta information file for Task 4 baseline system task4/Serizel_ULO_task4_1/Serizel_ULO_task4_1.meta.yaml:

    
# Submission information
submission:
  # Submission label
  # Label is used to index submissions; to avoid overlapping codes among submissions,
  # form your label as follows:
  # [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
  label: Serizel_ULO_task4_1

  # Submission name
  # This name will be used in the results tables when space permits
  name: DCASE2018 baseline system

  # Submission name abbreviated
  # This abbreviated name will be used in the results table when space is tight, maximum 10 characters
  abbreviation: Baseline

  # Submission authors in order, mark one of the authors as corresponding author.
  authors:
    # First author
    - lastname: Serizel
      firstname: Romain
      email: romain.serizel@loria.fr                  # Contact email address
      corresponding: true                             # Mark true for one of the authors

      # Affiliation information for the author
      affiliation:
        abbreviation: ULO
        institute: University of Lorraine, Loria
        department: Department of Natural Language Processing & Knowledge Discovery
        location: Nancy, France

    # Second author
    - lastname: Eghbal-Zadeh
      firstname: Hamid
      email: hamid.eghbal-zadeh@jku.at                # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: JKP
        institute: Johannes Kepler University
        department: Department of Computational Perception
        location: Linz, Austria

    - lastname: Turpault
      firstname: Nicolas
      email: nicolas.turpault@inria.fr                # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: INR
        institute: Inria Nancy Grand-Est
        department: Department of Natural Language Processing & Knowledge Discovery
        location: Nancy, France

    - lastname: Parag Shah
      firstname: Ankit
      email: aps1@andrew.cmu.edu                      # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: CMU
        institute: Carnegie Mellon University
        department: School of Computer Science
        location: Pittsburgh, United States

# System information
system:
  # System description; the metadata provided here will be used for
  # meta analysis of the submitted systems. Use general-level tags; where possible, use the tags provided in the comments.
  # If a field is not applicable to the system, use "!!null".
  description:

    # Audio input
    input_channels: mono                  # e.g. one or multiple [mono, binaural, left, right, mixed, ...]
    input_sampling_rate: 44.1kHz          #

    # Acoustic representation
    acoustic_features: log-mel energies   # e.g. one or multiple [MFCC, log-mel energies, spectrogram, CQT, ...]

    # Data augmentation methods
    data_augmentation: !!null             # [time stretching, block mixing, pitch shifting, ...]

    # Machine learning
    # If using ensemble methods, please specify all methods used (comma separated list).
    machine_learning_method: CRNN         # e.g. one or multiple [GMM, HMM, SVM, kNN, MLP, CNN, RNN, CRNN, NMF, random forest, ensemble, ...]

    # Ensemble method subsystem count
    # If an ensemble method is not used, mark !!null.
    ensemble_method_subsystem_count: !!null # [2, 3, 4, 5, ... ]

    # Decision making methods
    decision_making: !!null               # [majority vote, ...]

  # System complexity, meta data provided here will be used to evaluate
  # submitted systems from the computational load perspective.
  complexity:

    # Total number of parameters used in the acoustic model. For neural networks, this
    # information is usually given in the network summary before the training process.
    # For methods other than neural networks, if the parameter count is not directly
    # available, estimate it as accurately as possible.
    # In case of ensemble approaches, add up the parameters of all subsystems.
    total_parameters: 126090

  # URL to the source code of the system [optional]
  source_code: https://github.com/DCASE-REPO/dcase2018_baseline/tree/master/task4/

# System results
results:
  # Full results are not mandatory, but they are recommended for a thorough analysis of the challenge submissions.
  # If you cannot provide all results, incomplete results can also be reported.

  development_dataset:
    # System results for the development dataset with the provided cross-validation setup.
    overall:
      ER: 1.54

    # Class-wise error rates
    class_wise:
      Alarm_bell_ringing:
        ER: 1.33
      Blender:
        ER: 1.60
      Cat:
        ER: 1.70
      Dishes:
        ER: 1.22
      Dog:
        ER: 1.21
      Electric_shaver_toothbrush:
        ER: 1.71
      Frying:
        ER: 2.08
      Running_water:
        ER: 1.84
      Speech:
        ER: 1.42
      Vacuum_cleaner:
        ER: 1.28                       
                

Example meta information file for Task 5 baseline system task5/Dekkers_KUL_task5_1/Dekkers_KUL_task5_1.meta.yaml:

     
# Submission information
submission:
  # Submission label
  # Label is used to index submissions; to avoid overlapping codes among submissions,
  # form your label as follows:
  # [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
  label: Dekkers_KUL_task5_1

  # Submission name
  # This name will be used in the results tables when space permits
  name: DCASE2018 Task 5 baseline system

  # Submission name abbreviated
  # This abbreviated name will be used in the results table when space is tight, maximum 10 characters
  abbreviation: Baseline

  # Submission authors in order, mark one of the authors as corresponding author.
  authors:
    # First author
    - lastname: Dekkers
      firstname: Gert
      email: gert.dekkers@kuleuven.be                   # Contact email address
      corresponding: true                               # Mark true for one of the authors

      # Affiliation information for the author
      affiliation:
        abbreviation: KUL
        institute: KU Leuven - ADVISE
        department: Computer Science
        location: Geel, Belgium

    # Second author
    - lastname: Karsmakers
      firstname: Peter
      email: peter.karsmakers@kuleuven.be                # Contact email address

      # Affiliation information for the author
      affiliation:
        abbreviation: KUL
        institute: KU Leuven - ADVISE
        department: Computer Science
        location: Geel, Belgium

# System information
system:
  # System description; the metadata provided here will be used for
  # meta analysis of the submitted systems. Use general-level tags; where possible, use the tags provided in the comments.
  # If a field is not applicable to the system, use "!!null".
  description:

    # Audio input
    input_channels: all                   # e.g. one or multiple [all, mixed, mono, ...]
    input_sampling_rate: 16kHz            #

    # Features
    acoustic_features: log-mel energies   # e.g. one or multiple [MFCC, log-mel energies, spectrogram, CQT, ...]
    spatial_features: !!null              # e.g. one or multiple [GCC-PHAT, MUSIC, ...]

    # Data augmentation methods
    data_augmentation: !!null             # [time stretching, block mixing, pitch shifting, GAN, ...]

    # External data/model
    external_data: !!null                 # [AudioSet, TUT Acoustic Scenes 2017, ...]
    external_model: !!null                # [VGGish, ResNet50, ...]

    # Machine learning
    # If using ensemble methods, please specify all methods used (comma separated list).
    machine_learning_method: CNN          # e.g. one or multiple [GMM, HMM, SVM, kNN, MLP, CNN, RNN, CRNN, NMF, random forest, ensemble, ...]

    # Fusion level
    fusion_level: decision                # [audio, feature, classifier, decision, ...]

    # Fusion method (specifying which method is used to fuse)
    fusion_method: average                # e.g. one or multiple [product, average, majority vote, stacking, ...]

    # Ensemble method subsystem count
    ensemble_method_subsystem_count: !!null # [2, 3, 4, 5, ... ]

    # Decision making methods (in case multiple estimations are available in one 10s window)
    decision_making: !!null               # [majority vote, product, average]

  # URL to the source code of the system [optional]
  source_code: https://github.com/DCASE-REPO/dcase2018_baseline/tree/master/task5

# System results
results:
  # Full results are not mandatory, but they are recommended for a thorough analysis of the challenge submissions.
  # If you cannot provide all results, incomplete results can also be reported.

  development_dataset:
    # System results for the development dataset with the provided cross-validation setup.

    # Overall f1-score (mean of class-wise F1-scores)
    overall:
      f1: 84.50

    # Class-wise F1-scores
    class_wise:
      other:
        f1: 44.76
      social_activity:
        f1: 93.92
      eating:
        f1: 83.64
      working:
        f1: 82.03
      absence:
        f1: 85.41
      vacuum_cleaner:
        f1: 99.31
      dishwashing:
        f1: 76.73
      watching_tv:
        f1: 99.59
      cooking:
        f1: 95.14