Sound Event Localization and Detection


Task description

The goal of this task is to recognize individual sound events, detect their temporal activity, and estimate their location during it.

The challenge has ended. Full results for this task can be found on the Results page.

Description

Given multichannel audio input, a sound event localization and detection (SELD) system outputs a temporal activation track for each of the target sound classes, along with one or more corresponding spatial trajectories when the track indicates activity. This results in a spatio-temporal characterization of the acoustic scene that can be used in a wide range of machine cognition tasks, such as inference on the type of environment, self-localization, navigation without visual input or with occluded targets, tracking of specific types of sound sources, smart-home applications, scene visualization systems, and audio surveillance, among others.

Figure 1: Overview of sound event localization and detection system.


Audio dataset

The TAU-NIGENS Spatial Sound Events 2020 dataset contains multiple spatial sound-scene recordings, consisting of sound events of distinct categories integrated into a variety of acoustical spaces, and from multiple source directions and distances as seen from the recording position. The spatialization of all sound events is based on filtering through real spatial room impulse responses (RIRs), captured in multiple rooms of various shapes, sizes, and acoustical absorption properties. Furthermore, each scene recording is delivered in two spatial recording formats: a microphone array format (MIC) and a first-order Ambisonics format (FOA). The sound events are spatialized as either stationary sound sources in the room or moving sound sources, in which case time-variant RIRs are used. Each sound event in the sound scene is associated with a trajectory of its direction-of-arrival (DoA) to the recording point, and a temporal onset and offset time. The isolated sound event recordings used for the synthesis of the sound scenes are obtained from the NIGENS general sound events database. These recordings serve as the development dataset for the Sound Event Localization and Detection task of the DCASE 2020 Challenge.

The RIRs were collected in Finland by staff of Tampere University between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The older measurements from five rooms were also used for the earlier development and evaluation datasets TAU Spatial Sound Events 2019, while ten additional rooms were added for this dataset. The data collection received funding from the European Research Council, grant agreement 637422 EVERYSOUND.


A detailed description of the dataset collection and generation, along with details on the baseline and evaluation can be found in the following paper:

Publication

Archontis Politis, Sharath Adavanne, and Tuomas Virtanen. A dataset of reverberant spatial sound scenes with moving sources for sound event localization and detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), 165–169. Tokyo, Japan, November 2020. URL: https://dcase.community/workshop2020/proceedings.


A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection

Abstract

This report details the dataset and the evaluation setup of the Sound Event Localization \& Detection (SELD) task for the DCASE 2020 Challenge. Training and testing SELD systems requires datasets of diverse sound events occurring under realistic acoustic conditions. A significantly more complex dataset is created for DCASE 2020 compared to the previous challenge. The two key differences are a more diverse range of acoustical conditions, and dynamic conditions, i.e. moving sources. The spatial sound scene recordings for all conditions are generated using real room impulse responses, while ambient noise recorded on location is added to the spatialized sound events. Additionally, an improved version of the SELD baseline used in the previous challenge is included, providing benchmark scores for the task.


A longer, non-peer-reviewed version with some additional details is available as the challenge technical report.

Recording procedure

To construct a realistic dataset, real-life impulse response (IR) recordings were collected using an Eigenmike spherical microphone array. A Genelec G Three loudspeaker was used to play back a maximum length sequence (MLS) around the Eigenmike. The IRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and the far-field recording, independently at each frequency.
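In the STFT domain, this least-squares estimate has a simple closed form at each frequency bin: the cross-spectrum between the far-field recording and the reference MLS divided by the auto-spectrum of the MLS. Purely as an illustrative sketch (the function name `estimate_ir_stft` and the parameter choices are ours, not those of the measurement code used for the dataset):

```python
# Minimal sketch of per-frequency least-squares IR estimation from an MLS
# measurement. Illustrative only; not the routine used to produce the dataset.
import numpy as np
from scipy.signal import stft

def estimate_ir_stft(mls, rec, fs, nfft=2048, eps=1e-12):
    """Least-squares transfer function between the played MLS and one recorded
    channel, computed independently at each frequency bin, plus a time-domain IR."""
    _, _, X = stft(mls, fs, nperseg=nfft)   # reference (played) signal
    _, _, Y = stft(rec, fs, nperseg=nfft)   # far-field recording
    frames = min(X.shape[1], Y.shape[1])
    X, Y = X[:, :frames], Y[:, :frames]
    # closed-form least-squares solution per frequency: H = <X* Y> / <|X|^2>
    H = np.sum(np.conj(X) * Y, axis=1) / (np.sum(np.abs(X) ** 2, axis=1) + eps)
    ir = np.fft.irfft(H, n=nfft)            # impulse-response view of the estimate
    return H, ir
```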

The IRs were recorded at fifteen different indoor locations inside the Tampere University campus at Hervanta, Finland. Apart from the five spaces measured and used for the same task in DCASE2019, we added ten new spaces. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. Contrary to DCASE2019, the new IRs were not measured on a spherical grid of fixed azimuth and elevation resolution and at fixed distances. Instead, the IR directions and distances differ from space to space. Possible azimuths span the whole range of \(\phi\in[-180,180)\) degrees, while the elevations span approximately the range \(\theta\in[-45,45]\) degrees. A summary of the measured spaces is as follows:


DCASE2019

  1. Large common area with multiple seating tables and carpet flooring. People chatting and working.
  2. Large cafeteria with multiple seating tables and carpet flooring. People chatting and having food.
  3. High ceiling corridor with hard flooring. People walking around and chatting.
  4. Corridor with classrooms around and hard flooring. People walking around and chatting.
  5. Large corridor with multiple sofas and tables, hard and carpet flooring at different parts. People walking around and chatting.

DCASE2020

  1. (2x) Large lecture halls with inclined floor. Ventilation noise.
  2. (2x) Modern classrooms with multiple seating tables and carpet flooring. Ventilation noise.
  3. (2x) Meeting rooms with hard floor and partially glass walls. Ventilation noise.
  4. (2x) Old-style large classrooms with hard floor and rows of desks. Ventilation noise.
  5. Large open space in underground bomb shelter, with plastic floor and rock walls. Ventilation noise.
  6. Large open gym space. People using weights and gym equipment.

Recording formats

The array responses of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors), modeling the two formats, describe the directional response of each channel to a source incident from a direction-of-arrival (DOA) given by azimuth angle \(\phi\) and elevation angle \(\theta\).

For the first-order ambisonics (FOA):

\begin{eqnarray} H_1(\phi, \theta, f) &=& 1 \\ H_2(\phi, \theta, f) &=& \sin(\phi)\cos(\theta) \\ H_3(\phi, \theta, f) &=& \sin(\theta) \\ H_4(\phi, \theta, f) &=& \cos(\phi)\cos(\theta) \end{eqnarray}

The FOA format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that the formulas above assume a frequency-independent encoding, which holds true up to around 9 kHz for this specific microphone array; at higher frequencies the actual encoded responses deviate gradually from the ideal ones given above.
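As a quick illustration of these formulas, an ideal frequency-independent FOA steering vector can be computed directly from a DOA; the helper name below is ours:

```python
import numpy as np

def foa_steering_vector(azi_deg, ele_deg):
    """Ideal FOA response [H1, H2, H3, H4] for a plane wave from the given
    azimuth/elevation in degrees, following the formulas above."""
    phi, theta = np.radians(azi_deg), np.radians(ele_deg)
    return np.array([
        1.0,                          # H1: omnidirectional
        np.sin(phi) * np.cos(theta),  # H2
        np.sin(theta),                # H3
        np.cos(phi) * np.cos(theta),  # H4
    ])

# e.g. a source to the front-left, slightly elevated:
# foa_steering_vector(45, 10)
```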

For the tetrahedral microphone array (MIC):

The four microphones have the following positions, in spherical coordinates \((\phi, \theta, r)\):

\begin{eqnarray} M1: &\quad& (45^\circ, 35^\circ, 4.2\,\mathrm{cm})\nonumber\\ M2: &\quad& (-45^\circ, -35^\circ, 4.2\,\mathrm{cm})\nonumber\\ M3: &\quad& (135^\circ, -35^\circ, 4.2\,\mathrm{cm})\nonumber\\ M4: &\quad& (-135^\circ, 35^\circ, 4.2\,\mathrm{cm})\nonumber \end{eqnarray}

Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:

\begin{equation} H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m)) \end{equation}

where \(m\) is the channel number, \((\phi_m, \theta_m)\) are the specific microphone's azimuth and elevation position, \(\omega = 2\pi f\) is the angular frequency, \(R = 0.042\) m is the array radius, \(c = 343\) m/s is the speed of sound, \(\cos(\gamma_m)\) is the cosine of the angle between the microphone position and the DOA, \(P_n\) is the unnormalized Legendre polynomial of degree \(n\), and \(h_n'^{(2)}\) is the derivative, with respect to the argument, of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which provides negligible modeling error up to 20 kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found here.
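The linked routines are the reference; purely as an unofficial illustration, a direct NumPy/SciPy transcription of the expansion above could look as follows (function names and the small guard against division by zero at DC are our own choices):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def unit_vector(azi, ele):
    """Cartesian unit vector from azimuth/elevation in radians."""
    return np.array([np.cos(ele) * np.cos(azi),
                     np.cos(ele) * np.sin(azi),
                     np.sin(ele)])

def rigid_sphere_response(mic_dir, doa, freqs, R=0.042, c=343.0, n_max=30):
    """Frequency response H_m(f) of one microphone at mic_dir = (azi, ele) on a
    rigid spherical baffle, for a plane wave incident from doa = (azi, ele).
    Angles in radians, freqs in Hz."""
    cos_gamma = float(np.dot(unit_vector(*mic_dir), unit_vector(*doa)))
    kR = 2.0 * np.pi * np.asarray(freqs, dtype=float) * R / c
    kR = np.maximum(kR, 1e-6)  # avoid division by zero at DC
    H = np.zeros_like(kR, dtype=complex)
    for n in range(n_max + 1):
        # derivative of the spherical Hankel function of the second kind
        dhn2 = spherical_jn(n, kR, derivative=True) - 1j * spherical_yn(n, kR, derivative=True)
        H += (1j ** (n - 1)) / dhn2 * (2 * n + 1) * eval_legendre(n, cos_gamma)
    return H / kR ** 2
```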

Sound event classes

To generate the spatial sound scenes, the measured room IRs are convolved with dry recordings of sound samples belonging to distinct sound classes. The database of sound samples used for this purpose is the recent NIGENS general sound events database.

The 14 sound classes of the spatialized events are:

  1. alarm
  2. crying baby
  3. crash
  4. barking dog
  5. running engine
  6. female scream
  7. female speech
  8. burning fire
  9. footsteps
  10. knocking on door
  11. male scream
  12. male speech
  13. ringing phone
  14. piano

Dataset specifications

The specifications of the dataset can be summarized as follows:

  • 600 one-minute long sound scene recordings with metadata (development dataset).
  • 200 one-minute long sound scene recordings without metadata (evaluation dataset).
  • Sampling rate 24kHz.
  • About 700 sound event samples spread over 14 classes (see here for more details).
  • Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array.
  • Realistic spatialization and reverberation through RIRs collected in 15 different enclosures.
  • From about 1500 to 3500 possible RIR positions across the different rooms.
  • Both static reverberant and moving reverberant sound events.
  • Three possible angular speeds for moving sources of about 10, 20, or 40deg/sec.
  • Up to two overlapping sound events possible, temporally and spatially.
  • Realistic spatial ambient noise collected from each room is added to the spatialized sound events, at varying signal-to-noise ratios (SNR) ranging from noiseless (30dB) to noisy (6dB).

Each recording corresponds to a single room and contains either up to two simultaneous (overlapping) sources or no overlap at all. Each event spatialized in the recording has an equal probability of being either static or moving, and is randomly assigned one of the room's RIR positions, or motion along one of the predefined trajectories. The moving sound events are synthesized with a slow (10 deg/sec), moderate (20 deg/sec), or fast (40 deg/sec) angular speed. A partitioned time-frequency interpolation scheme of the RIRs, extracted from the measurements at regular intervals, is used to approximate the time-variant room response corresponding to the target motion.

In the development dataset, eleven out of the fifteen rooms, along with the NIGENS event samples, are assigned to 6 disjoint sets, and their combinations form 6 distinct splits of 100 recordings each. The splits permit testing and validation across different acoustic conditions.

Reference labels and directions-of-arrival

For each recording in the development dataset, the labels and DoAs are provided in a plain text CSV file of the same filename as the recording, in the following format:

[frame number (int)], [active class index (int)], [track number index (int)], [azimuth (int)], [elevation (int)]

Frame, class, and track enumeration begins at 0. Frames correspond to a temporal resolution of 100msec. Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth \(\phi \in [-180^{\circ}, 180^{\circ}]\), and elevation \(\theta \in [-90^{\circ}, 90^{\circ}]\). Note that the azimuth angle is increasing counter-clockwise (\(\phi = 90^{\circ}\) at the left).

The track index indicates instances of the same class in the recording, overlapping or non-overlapping, and it increases for each new occurring instance. By instance we mean a single sound event that is spatialized with a distinct static position in the room, or with a coherent continuous spatial trajectory in the case of moving events. This information is mostly redundant in recordings with no overlap, but it becomes more important when overlap occurs. For example, when two same-class events occur at the same time and the user would like to resample their positions to a resolution higher than 100 msec, the track index can be used directly to disentangle the DoAs for interpolation, without the user having to solve the association problem themselves.

Overlapping sound events are indicated with duplicate frame numbers, and can belong to a different or the same class. An example sequence could be:

10,     1,  0,  -50,  30
11,     1,  0,  -50,  30
11,     1,  1,   10, -20
12,     1,  1,   10, -20
13,     1,  1,   10, -20
13,     4,  0,  -40,   0

which describes that the first instance (track 0) of class crying baby (class 1) is active in frames 10-11; at frame 11 a second instance (track 1) of the same class appears simultaneously at a different direction and remains active until frame 13, where an additional event of class 4 also appears. Frames that contain no sound events are not included in the sequence. Note that track information is only included in the development metadata and is not required to be provided by the participants in their results.
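For reference, the metadata of one recording can be read into a per-frame structure with a few lines of Python; the helper name `load_metadata` is ours:

```python
import csv
from collections import defaultdict

def load_metadata(path):
    """Map frame index -> list of (class_idx, track_idx, azimuth, elevation)."""
    events = defaultdict(list)
    with open(path, newline="") as f:
        for frame, cls, track, azi, ele in csv.reader(f):
            events[int(frame)].append((int(cls), int(track), int(azi), int(ele)))
    return events

# meta = load_metadata("fold1_room1_mix001_ov2.csv")   # hypothetical filename
# meta[11] -> [(1, 0, -50, 30), (1, 1, 10, -20)] for the example sequence above
```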

In the scenario that a participant would like to use a higher temporal resolution in their estimation than 100 msec, we recommend using an integer number of sub-frames, to simplify processing of the metadata. A simple example routine performing (linear) spherical interpolation of directions is provided with the baseline code here (where e.g. for a sub-frame of 20 msec, four interpolated directions are returned between the two input directions spaced at 100 msec).
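The baseline repository contains the actual routine; the following is only a sketch of such a spherical linear interpolation (slerp) between two consecutive 100 msec labels, returning the directions of the intermediate sub-frames (e.g. four directions for 20 msec sub-frames):

```python
import numpy as np

def azel_to_vec(azi_deg, ele_deg):
    a, e = np.radians(azi_deg), np.radians(ele_deg)
    return np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])

def vec_to_azel(v):
    return (np.degrees(np.arctan2(v[1], v[0])),
            np.degrees(np.arcsin(np.clip(v[2], -1.0, 1.0))))

def slerp_doa(azel0, azel1, n_sub):
    """Interpolate n_sub - 1 directions strictly between two labeled DOAs
    given as (azimuth, elevation) in degrees."""
    v0, v1 = azel_to_vec(*azel0), azel_to_vec(*azel1)
    omega = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    out = []
    for k in range(1, n_sub):
        t = k / n_sub
        if omega < 1e-8:                     # identical directions
            v = v0
        else:
            v = (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
        out.append(vec_to_azel(v))
    return out

# 20 msec sub-frames -> 5 sub-frames per 100 msec frame -> 4 interpolated DOAs:
# slerp_doa((-50, 30), (-40, 25), n_sub=5)
```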

Download

The dataset has been updated to a new version including the evaluation files.




Task setup

In order to allow a fair comparison of methods on the development dataset, participants are required to report results using the following split:

Training splits | Validation split | Testing split
3, 4, 5, 6 | 2 | 1

The evaluation dataset is released a few weeks before the final submission deadline. This dataset consists of only audio recordings without any metadata/labels. Participants can decide the training procedure, i.e. the amount of training and validation files in the development dataset, the number of ensemble models etc., and submit the results of the SELD performance on the evaluation dataset.

Development dataset

The recordings in the development dataset follow the naming convention:

fold[split number]_room[room number per split]_mix[recording number per room per split]_ov[number of overlapping sound events].wav

Note that the room number only distinguishes the different rooms used within a split. For example, room1 in the first split is not the same as room1 in the second split. The room and overlap information is provided so that users of the dataset can analyze the performance of their method with respect to different conditions.
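If needed, the fields of this naming convention can be recovered with a small regular expression; the helper below is purely illustrative:

```python
import re

# Illustrative parser for the development-set naming convention.
_NAME_RE = re.compile(r"fold(\d+)_room(\d+)_mix(\d+)_ov(\d+)\.wav$")

def parse_dev_filename(name):
    """Return (split, room, recording, n_overlap) parsed from a filename."""
    m = _NAME_RE.search(name)
    if m is None:
        raise ValueError(f"not a development-set filename: {name}")
    return tuple(int(g) for g in m.groups())

# parse_dev_filename("fold1_room1_mix001_ov1.wav") -> (1, 1, 1, 1)
```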

Evaluation dataset

The evaluation dataset consists of 200 one-minute recordings. Their naming convention, shown below, provides no information on the recording location or the number of overlapping sound events:

mix[recording number].wav


Submission

The results for each of the 200 recordings in the evaluation dataset should be collected in individual CSV files. Each result file should have the same name as the file name of the respective audio recording, but with the .csv extension, and should contain the same information at each row as the reference labels, excluding the track id:

[frame number (int)],[active class index (int)],[azimuth (int)],[elevation (int)]

Enumeration of frame and class indices begins at zero. The class indices are as ordered in the class descriptions mentioned above. The evaluation will be performed at a temporal resolution of 100msec. In case the participants use a different frame or hop length for their study, we expect them to use a suitable method to extract the information at the specified resolution before submitting the evaluation results.
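As a minimal illustration of the expected row format, a result file could be written as sketched below, assuming the predictions have already been pooled to the 100 msec resolution (function and variable names are ours):

```python
import csv

def write_result_file(path, events):
    """events: dict mapping a 100 msec frame index to a list of
    (class_idx, azimuth, elevation) predictions for that frame."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for frame in sorted(events):
            for cls, azi, ele in events[frame]:
                writer.writerow([frame, cls, int(round(azi)), int(round(ele))])

# write_result_file("mix001.csv", {10: [(1, -50, 30)],
#                                  11: [(1, -50, 30), (1, 10, -20)]})
```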

In addition to the CSV files, the participants are asked to update the information of their method in the provided file and submit a technical report describing the method. We allow up to 4 system output submissions per participant/team. For each system, meta-information should be provided in a separate file, containing the task-specific information. All files should be packaged into a zip file for submission. Detailed information regarding the submission process can be found on the submission page.

General information for all DCASE submissions can be found on the Submission page.



Task rules

  • Use of external data is not allowed.
  • Manipulation of the provided cross-validation split in the development dataset is not allowed.
  • The development dataset can be augmented without the use of external data (e.g. using techniques such as pitch shifting or time stretching).
  • Participants are not allowed to make subjective judgments of the evaluation data, nor to annotate it. The evaluation dataset cannot be used to train the submitted system.


Evaluation

Contrary to the SELD task of DCASE 2019, we do not rate the systems in terms of independent sound event detection performance and localization performance. In order to have a more representative evaluation of the task, we introduce modified metrics that consider the joint nature of localization-and-detection.

Metrics

The first two metrics are the classic sound event detection (SED) metrics of F-score (\(F_{\leq T^\circ}\)) and Error Rate (\(ER_{\leq T^\circ}\)), but are now location-dependent, counting as true positives only those predictions within a distance threshold \(T^\circ\) (angular in our case) from the reference. For the evaluation of this challenge we take this threshold to be \(T = 20^\circ\).

The next two metrics focus on the localization part, but are now classification-dependent, meaning that they are computed per class rather than across all outputs. The first is the localization error \(LE_{\mathrm{CD}}\), expressing the average angular distance between predictions and references of the same class. The second is a simple localization recall metric \(LR_{\mathrm{CD}}\), expressing how many of the class instances were detected with a localization estimate, out of the total number of class instances. Unlike the location-dependent detection metrics, these localization metrics do not use any threshold.
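All four metrics rely on the angular (great-circle) distance between a predicted and a reference DOA, both for the \(20^\circ\) threshold and for \(LE_{\mathrm{CD}}\); a minimal version of that computation could look as follows (the helper name is ours):

```python
import numpy as np

def angular_distance(azi1, ele1, azi2, ele2):
    """Great-circle distance in degrees between two (azimuth, elevation)
    directions given in degrees."""
    a1, e1, a2, e2 = np.radians([azi1, ele1, azi2, ele2])
    cos_d = np.sin(e1) * np.sin(e2) + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2)
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

# A prediction counts as a location-dependent true positive only if
# angular_distance(pred_azi, pred_ele, ref_azi, ref_ele) <= 20.0
```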

All metrics are computed in one-second non-overlapping frames. For a more thorough analysis on the joint SELD metrics please refer to:

Publication

Annamaria Mesaros, Sharath Adavanne, Archontis Politis, Toni Heittola, and Tuomas Virtanen. Joint measurement of localization and detection of sound events. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). New Paltz, NY, USA, October 2019.


Joint Measurement of Localization and Detection of Sound Events

Abstract

Sound event detection and sound localization or tracking have historically been two separate areas of research. Recent development of sound event detection methods approach also the localization side, but lack a consistent way of measuring the joint performance of the system; instead, they measure the separate abilities for detection and for localization. This paper proposes augmentation of the localization metrics with a condition related to the detection, and conversely, use of location information in calculating the true positives for detection. An extensive evaluation example is provided to illustrate the behavior of such joint metrics. The comparison to the detection only and localization only performance shows that the proposed joint metrics operate in a consistent and logical manner, and characterize adequately both aspects.

Keywords

Sound event detection and localization, performance evaluation


Ranking

Overall ranking will be based on the cumulative rank of the four metrics mentioned above, sorted in ascending order. By cumulative rank we mean the following: if system A was ranked individually for each metric as \(ER:1, F1:1, LE:3, LR:1\), then its cumulative rank is \(1+1+3+1=5\). Then if system B has \(ER:3, F1:2, LE:2, LR:3\) (10), and system C has \(ER:2, F1:3, LE:1, LR:2\) (8), then the overall rank of the systems is A,C,B. If two systems end up with the same cumulative rank, then they are assumed to have equal place in the challenge, even though they will be listed alphabetically in the ranking tables.
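Numerically, the worked example above amounts to the following:

```python
# Cumulative rank from the per-metric ranks in the example above.
per_metric_ranks = {
    "A": {"ER": 1, "F1": 1, "LE": 3, "LR": 1},
    "B": {"ER": 3, "F1": 2, "LE": 2, "LR": 3},
    "C": {"ER": 2, "F1": 3, "LE": 1, "LR": 2},
}
cumulative = {system: sum(ranks.values()) for system, ranks in per_metric_ranks.items()}
ordering = sorted(cumulative, key=cumulative.get)  # ties share the same place

print(cumulative)  # {'A': 5, 'B': 10, 'C': 8}
print(ordering)    # ['A', 'C', 'B']
```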



Results

The SELD task received 49 submissions in total from 14 teams across the world. The main results for these submissions are as follows (the table below includes only the best-performing system per submitting team):

Submission name | Author | Affiliation | Rank | \(ER_{20^\circ}\) | \(F_{20^\circ}\) (%) | \(LE_{\mathrm{CD}}\) (°) | \(LR_{\mathrm{CD}}\) (%)
Du_USTC_task3_4 | Jun Du | University of Science and Technology of China | 1 | 0.20 | 84.9 | 6.0 | 88.5
Nguyen_NTU_task3_2 | Thi Ngoc Tho Nguyen | Nanyang Technological University | 2 | 0.23 | 82.0 | 9.3 | 90.0
Shimada_SONY_task3_4 | Kazuki Shimada | SONY Corporation | 3 | 0.25 | 83.2 | 7.0 | 86.2
Cao_Surrey_task3_4 | Yin Cao | University of Surrey | 4 | 0.36 | 71.2 | 13.3 | 81.1
Park_ETRI_task3_4 | Sooyoung Park | Electronics and Telecommunications Research Institute | 5 | 0.43 | 65.2 | 16.8 | 81.9
Phan_QMUL_task3_3 | Huy Phan | Queen Mary University of London | 6 | 0.49 | 61.7 | 15.2 | 72.4
PerezLopez_UPF_task3_2 | Andres Perez-Lopez | Pompeu Fabra University | 7 | 0.51 | 60.1 | 12.4 | 65.1
Sampathkumar_TUC_task3_1 | Arunodhayan Sampathkumar | Technische Universität Chemnitz | 8 | 0.53 | 56.6 | 14.8 | 66.5
Patel_MST_task3_4 | Sohel Patel | Missouri University of Science and Technology | 9 | 0.55 | 55.5 | 14.4 | 65.5
Ronchini_UPF_task3_2 | Francesca Ronchini | Pompeu Fabra University | 10 | 0.58 | 50.8 | 16.9 | 65.5
Naranjo-Alcazar_VFY_task3_2 | Javier Naranjo-Alcazar | Universitat de Valencia | 11 | 0.61 | 49.1 | 19.5 | 67.1
Song_LGE_task3_3 | Ju-man Song | LG Electronics | 12 | 0.57 | 50.4 | 20.0 | 64.3
Tian_PKU_task3_1 | Congzhou Tian | Peking University | 13 | 0.64 | 47.6 | 24.5 | 67.5
Singla_SRIB_task3_2 | Rohit Singla | Samsung Research Institute Bangalore | 14 | 0.88 | 18.0 | 53.4 | 66.2
DCASE2020_MIC_baseline | Archontis Politis | Tampere University | 15 | 0.69 | 41.3 | 23.1 | 62.4

Complete results and technical reports can be found on the results page.



Baseline system

As the baseline, we use the recently published SELDnet, a CRNN-based method that uses the confidence of SED to estimate one DOA for each sound class. The SED is obtained as a multiclass-multilabel classification, whereas DOA estimation is performed as a multioutput regression.

Publication

Sharath Adavanne, Archontis Politis, Joonas Nikunen, and Tuomas Virtanen. Sound event localization and detection of overlapping sources using convolutional recurrent neural networks. IEEE Journal of Selected Topics in Signal Processing, 13(1):34–48, March 2019. URL: https://ieeexplore.ieee.org/abstract/document/8567942, doi:10.1109/JSTSP.2018.2885636.


Sound Event Localization and Detection of Overlapping Sources Using Convolutional Recurrent Neural Networks

Abstract

In this paper, we propose a convolutional recurrent neural network for joint sound event localization and detection (SELD) of multiple overlapping sound events in three-dimensional (3D) space. The proposed network takes a sequence of consecutive spectrogram time-frames as input and maps it to two outputs in parallel. As the first output, the sound event detection (SED) is performed as a multi-label classification task on each time-frame producing temporal activity for all the sound event classes. As the second output, localization is performed by estimating the 3D Cartesian coordinates of the direction-of-arrival (DOA) for each sound event class using multi-output regression. The proposed method is able to associate multiple DOAs with respective sound event labels and further track this association with respect to time. The proposed method uses separately the phase and magnitude component of the spectrogram calculated on each audio channel as the feature, thereby avoiding any method- and array-specific feature extraction. The method is evaluated on five Ambisonic and two circular array format datasets with different overlapping sound events in anechoic, reverberant and real-life scenarios. The proposed method is compared with two SED, three DOA estimation, and one SELD baselines. The results show that the proposed method is generic and applicable to any array structures, robust to unseen DOA values, reverberation, and low SNR scenarios. The proposed method achieved a consistently higher recall of the estimated number of DOAs across datasets in comparison to the best baseline. Additionally, this recall was observed to be significantly better than the best baseline method for a higher number of overlapping sound events.

Keywords

Direction-of-arrival estimation;Estimation;Task analysis;Azimuth;Microphone arrays;Recurrent neural networks;Sound event detection;direction of arrival estimation;convolutional recurrent neural network


Baseline changes

Compared to DCASE 2019 and the published SELDnet version, a few modifications have been integrated in the model, in order to take into account some of the simplest effective improvements demonstrated by the participants in the previous year. Some of them are:

  • instead of raw multichannel magnitude and phase spectrograms as SED features, the more compressed log-mel spectral coefficients are used
  • instead of raw multichannel magnitude and phase spectrograms as localization features, generalized cross-correlation (GCC) features are used for the MIC format, and the acoustic intensity vector for the FOA format
  • the localization part of the joint loss is masked with the ground-truth activations of each class, hence not contributing to the training when an event is not active (a minimal sketch of this masking follows below)
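A minimal NumPy sketch of that masking idea (not the baseline's actual loss implementation): the DOA regression error of a class contributes only in frames where the reference activity of that class is 1.

```python
import numpy as np

def masked_doa_mse(doa_pred, doa_ref, activity_ref, eps=1e-8):
    """doa_pred, doa_ref: arrays of shape (frames, classes, 3) with Cartesian DOAs;
    activity_ref: (frames, classes) binary ground-truth activations."""
    mask = activity_ref[..., None]                   # broadcast the mask over x, y, z
    sq_err = mask * (doa_pred - doa_ref) ** 2        # zero out inactive class-frames
    return sq_err.sum() / (3.0 * mask.sum() + eps)   # average over active entries only
```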

Note that these changes, among others, were first introduced by the second-best performing team in DCASE2019. A more thorough description and analysis of the effects of these changes can be found in their report:

Publication

Yin Cao, Qiuqiang Kong, Turab Iqbal, Fengyan An, Wenwu Wang, and Mark Plumbley. Polyphonic sound event detection and localization using a two-stage strategy. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), 30–34. New York University, NY, USA, October 2019.


Polyphonic Sound Event Detection and Localization using a Two-Stage Strategy

Abstract

Sound event detection (SED) and localization refer to recognizing sound events and estimating their spatial and temporal locations. Using neural networks has become the prevailing method for SED. In the area of sound localization, which is usually performed by estimating the direction of arrival (DOA), learning-based methods have recently been developed. In this paper, it is experimentally shown that the trained SED model is able to contribute to the direction of arrival estimation (DOAE). However, joint training of SED and DOAE degrades the performance of both. Based on these results, a two-stage polyphonic sound event detection and localization method is proposed. The method learns SED first, after which the learned feature layers are transferred for DOAE. It then uses the SED ground truth as a mask to train DOAE. The proposed method is evaluated on the DCASE 2019 Task 3 dataset, which contains different overlapping sound events in different environments. Experimental results show that the proposed method is able to improve the performance of both SED and DOAE, and also performs significantly better than the baseline method.


Furthermore, the newly introduced joint SELD metrics are used for tuning, and instead of (azimuth, elevation) angles the localization regressors output the estimated direction in Cartesian coordinates (x, y, z), as in the original SELDnet publication.

Repository

This repository implements SELDnet and performs cross-validation in the manner we recommend. We also provide scripts to visualize your SELD results and estimate the relevant metric scores before final submission.


Results for the development dataset

The evaluation metric scores for the test split of the development dataset are given below. The location-dependent detection metrics are computed within a 20° threshold from the reference.

Dataset | \(ER_{20^\circ}\) | \(F_{20^\circ}\) | \(LE_{\mathrm{CD}}\) | \(LR_{\mathrm{CD}}\)
Ambisonic | 0.72 | 37.4 % | 22.8° | 60.7 %
Microphone array | 0.78 | 31.4 % | 27.3° | 59.0 %

For comparison, using the independent detection and localization metrics, as done in DCASE2019, would result in the following:

Dataset | ER | F | LE | LR
Ambisonic | 0.54 | 60.9 % | 20.4° | 66.6 %
Microphone array | 0.56 | 59.2 % | 22.6° | 66.8 %

Note: The reported baseline system performance is not exactly reproducible due to varying setups. However, you should be able to obtain very similar results.

Citation

If you are participating in this task or using the dataset and code please consider citing the following papers:

Publication

Archontis Politis, Sharath Adavanne, and Tuomas Virtanen. A dataset of reverberant spatial sound scenes with moving sources for sound event localization and detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), 165–169. Tokyo, Japan, November 2020. URL: https://dcase.community/workshop2020/proceedings.


A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection

Abstract

This report details the dataset and the evaluation setup of the Sound Event Localization \& Detection (SELD) task for the DCASE 2020 Challenge. Training and testing SELD systems requires datasets of diverse sound events occurring under realistic acoustic conditions. A significantly more complex dataset is created for DCASE 2020 compared to the previous challenge. The two key differences are a more diverse range of acoustical conditions, and dynamic conditions, i.e. moving sources. The spatial sound scene recordings for all conditions are generated using real room impulse responses, while ambient noise recorded on location is added to the spatialized sound events. Additionally, an improved version of the SELD baseline used in the previous challenge is included, providing benchmark scores for the task.

Publication

Sharath Adavanne, Archontis Politis, Joonas Nikunen, and Tuomas Virtanen. Sound event localization and detection of overlapping sources using convolutional recurrent neural networks. IEEE Journal of Selected Topics in Signal Processing, 13(1):34–48, March 2019. URL: https://ieeexplore.ieee.org/abstract/document/8567942, doi:10.1109/JSTSP.2018.2885636.


Sound Event Localization and Detection of Overlapping Sources Using Convolutional Recurrent Neural Networks

Abstract

In this paper, we propose a convolutional recurrent neural network for joint sound event localization and detection (SELD) of multiple overlapping sound events in three-dimensional (3D) space. The proposed network takes a sequence of consecutive spectrogram time-frames as input and maps it to two outputs in parallel. As the first output, the sound event detection (SED) is performed as a multi-label classification task on each time-frame producing temporal activity for all the sound event classes. As the second output, localization is performed by estimating the 3D Cartesian coordinates of the direction-of-arrival (DOA) for each sound event class using multi-output regression. The proposed method is able to associate multiple DOAs with respective sound event labels and further track this association with respect to time. The proposed method uses separately the phase and magnitude component of the spectrogram calculated on each audio channel as the feature, thereby avoiding any method- and array-specific feature extraction. The method is evaluated on five Ambisonic and two circular array format datasets with different overlapping sound events in anechoic, reverberant and real-life scenarios. The proposed method is compared with two SED, three DOA estimation, and one SELD baselines. The results show that the proposed method is generic and applicable to any array structures, robust to unseen DOA values, reverberation, and low SNR scenarios. The proposed method achieved a consistently higher recall of the estimated number of DOAs across datasets in comparison to the best baseline. Additionally, this recall was observed to be significantly better than the best baseline method for a higher number of overlapping sound events.

Keywords

Direction-of-arrival estimation;Estimation;Task analysis;Azimuth;Microphone arrays;Recurrent neural networks;Sound event detection;direction of arrival estimation;convolutional recurrent neural network
