Automatic creation of textual content descriptions for general audio signals.
Challenge has ended. Full results for this task can be found on the Results page.
Description
Automated audio captioning (AAC) is the task of general audio content description using free text. It is an inter-modal translation task (not speech-to-text), where a system accepts as an input an audio signal and outputs the textual description (i.e. the caption) of that signal. AAC methods can model concepts (e.g. "muffled sound"), physical properties of objects and environment (e.g. "the sound of a big car", "people talking in a small and empty room"), and high-level knowledge (e.g. "a clock rings three times"). This modeling can be used in various applications, ranging from automatic content description to intelligent and content-oriented machine-to-machine interaction.
The task of AAC is a continuation of the AAC task from DCASE2020. Compared to DCASE2020, this year the AAC task allows the usage of any external data and/or pre-trained models. For example, participants are now allowed to use other AAC datasets, or even datasets for sound event detection/tagging, acoustic scene classification, or any other task that might be deemed fit. Additionally, participants can now use pre-trained models, like (but not limited to) Word2Vec, BERT, and YAMNet, wherever they want in their model. Please see below for some recommendations for datasets and pre-trained models. Finally, this year the Clotho dataset will be augmented with around 40% more data, providing a publicly available validation split and extra data in the training split, which participants can use in order to develop their methods. The new version of Clotho will be referred to as Clotho v2; it is expected to be available in late March, and the exact numbers for Clotho v2 (e.g. exact amount of words and exact amount of audio samples) will be known upon release.
This year, the task of Automated Audio Captioning is partially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.
Audio dataset
The AAC task in DCASE2021 will use the Clotho v2 dataset for the evaluation of the submissions. However, participants can use any other dataset, from any other task, for the development of their methods. In this section, we describe the Clotho v2 dataset.
Clotho dataset
Clotho v2 is an extension of the original Clotho dataset (i.e. v1) and consists of audio samples of 15 to 30 seconds duration, each audio sample having five captions of eight to 20 words length. There is a total of 6974 audio samples in Clotho (4981 from v1 and 1993 new in v2), with 34 870 captions (i.e. 6974 audio samples × 5 captions per sample). Clotho v2 is built with focus on audio content and caption diversity, and the splits of the data are not hampering the training or evaluation of methods. The new data in Clotho v2 will not affect the splits used to assess the performance of methods using the previous version of Clotho (i.e. the evaluation and testing splits of Clotho v1). All audio samples are from the Freesound platform, and captions are crowdsourced using Amazon Mechanical Turk and annotators from English-speaking countries. Unique words, named entities, and speech transcription are removed with post-processing.
Clotho v2 has a total of around 4500 words and is divided into four splits: development, validation, evaluation, and testing. Audio samples are publicly available for all four splits, but captions are publicly available only for the development, validation, and evaluation splits. There are no overlapping audio samples between the four splits, and there is no word that appears in the evaluation, validation, or testing splits without also appearing in the development split. Likewise, there is no word that appears in the development split without appearing in at least one of the other three splits. All words appear proportionally between splits (the word distribution is kept similar across splits): 55% in the development, 15% in the validation, 15% in the evaluation, and 15% in the testing split.
Words that could not be divided using the above scheme of 55-15-15-15 (e.g. words that appear only two times in all four splits combined) appear at least one time in the development split and at least one time in one of the other three splits. This splitting process is similar to the one used for the previous version of Clotho. More detailed info about the splitting process can be found in the paper presenting Clotho, freely available online here and cited as:
Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. Clotho: An audio captioning dataset. In 45th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Barcelona, Spain, May 2020. URL: https://arxiv.org/abs/1910.09387.
Clotho: An Audio Captioning Dataset
Abstract
Audio captioning is the novel task of general audio content description using free text. It is an intermodal translation task (not speech-to-text), where a system accepts as an input an audio signal and outputs the textual description (i.e. the caption) of that signal. In this paper we present Clotho, a dataset for audio captioning consisting of 4981 audio samples of 15 to 30 seconds duration and 24 905 captions of eight to 20 words length, and a baseline method to provide initial results. Clotho is built with focus on audio content and caption diversity, and the splits of the data are not hampering the training or evaluation of methods. All sounds are from the Freesound platform, and captions are crowdsourced using Amazon Mechanical Turk and annotators from English speaking countries. Unique words, named entities, and speech transcription are removed with post-processing. Clotho is freely available online (https://zenodo.org/record/3490684).
The data collection of Clotho received funding from the European Research Council, grant agreement 637422 EVERYSOUND.
Audio samples in Clotho
Audio samples in Clotho were extracted from Freesound files that have durations ranging from 10 s to 300 s, no spelling errors in the first sentence of their Freesound description, good quality (44.1 kHz and 16-bit), and no Freesound tags indicating sound effects, music, or speech. Before extraction, all 12k files were normalized and the preceding and trailing silences were trimmed.
The content of audio samples in Clotho greatly varies, ranging from ambiance in a forest (e.g. water flowing over some rocks), animal sounds (e.g. goats bleating), and crowd yelling or murmuring, to machines and engines operating (e.g. inside a factory) or revving (e.g. cars, motorbikes), and devices functioning (e.g. container with contents moving, doors opening/closing). For a thorough description of how the audio samples are selected and filtered, you can check the paper that presents the Clotho dataset.
The following figure shows the distribution of the durations of audio files in Clotho; a similar distribution is expected for Clotho v2.
**Please note** that since the new split of Clotho is not yet available, the information about Clotho v2 is based on the initial expectation of having 2000 extra audio files. The exact information will be provided when the new version of Clotho is released; the expected release date is late March. Until then, participants are encouraged to use the original version of Clotho and any (or all) of the available external resources.
Captions in Clotho
The captions in the Clotho dataset range from 8 to 20 words in length, and were gathered by employing the crowdsourcing platform Amazon Mechanical Turk and a three-step framework. The three steps are:
- audio description,
- description editing, and
- description scoring.
In step 1, five initial captions were gathered for each audio clip from distinct annotators. In step 2, these initial captions were edited to fix grammatical errors. Grammatically correct captions were instead rephrased, in order to acquire diverse captions for the same audio clip. In step 3, the initial and edited captions were scored based on accuracy, i.e. how well the caption describes the audio clip, and fluency, i.e. the English fluency in the caption itself. The initial and edited captions were scored by three distinct annotators. The scores were then summed together and the captions were sorted by the total accuracy score first, total fluency score second. The top five captions, after sorting, were selected as the final captions of the audio clip. More information about the caption scoring (e.g. scoring values, scoring threshold, etc.) is at the corresponding paper of the three-step framework.
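As an illustration, the sorting-and-selection logic of step 3 can be sketched in a few lines of Python; the captions and score values below are hypothetical, and the actual scoring scale is described in the framework paper.

```python
# Sketch of step 3: sum the per-annotator scores, sort by total accuracy
# first and total fluency second, and keep the top five captions.
# The captions and scores below are hypothetical examples.

candidates = [
    # (caption, [accuracy scores from 3 annotators], [fluency scores])
    ("a dog barks while cars pass by", [4, 4, 3], [4, 4, 4]),
    ("dog barking near a busy road", [4, 3, 3], [3, 4, 4]),
    ("an animal makes noise", [2, 3, 2], [4, 3, 4]),
    ("a dog is barking repeatedly outdoors", [4, 4, 4], [4, 3, 4]),
    ("loud barking with traffic in the background", [3, 4, 4], [4, 4, 3]),
    ("some sound is heard", [1, 2, 1], [3, 3, 3]),
]

def select_top_captions(candidates, k=5):
    """Sort by (total accuracy, total fluency), descending, and keep top k."""
    ranked = sorted(
        candidates,
        key=lambda c: (sum(c[1]), sum(c[2])),
        reverse=True,
    )
    return [caption for caption, _, _ in ranked[:k]]

final_captions = select_top_captions(candidates)
```

With these hypothetical scores, the lowest-rated candidate is dropped and the caption with the highest total accuracy is ranked first.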
We then manually sanitized the final captions of the dataset by removing apostrophes, making compound words consistent, removing phrases describing the content of speech, and replacing named entities. We used in-house annotators to replace transcribed speech in the captions. If the resulting caption was under eight words, we attempted to find captions among the lower-scored ones (i.e. those not selected in step 3) that still had decent scores for accuracy and fluency. If there were no such captions, or if these captions could not be rephrased to at least eight words, the audio file was removed from the dataset entirely. The same in-house annotators were also used to replace unique words that appeared only in the captions of one audio clip. Since audio clips are not shared between splits, if there are words that appear only in the captions of one audio clip, then these words would appear only in one split.
A thorough description of the three-step framework can be found at the corresponding paper, freely available online here and cited as:
Samuel Lipping, Konstantinos Drossos, and Tuomas Virtanen. Crowdsourcing a dataset of audio captions. In Proceedings of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE). Nov. 2019. URL: https://arxiv.org/abs/1907.09238.
Crowdsourcing a Dataset of Audio Captions
Abstract
Audio captioning is a novel field of multi-modal translation and it is the task of creating a textual description of the content of an audio signal (e.g. "people talking in a big room"). The creation of a dataset for this task requires a considerable amount of work, rendering crowdsourcing a very attractive option. In this paper we present a three-step framework for crowdsourcing an audio captioning dataset, based on concepts and practices followed for the creation of widely used image captioning and machine translation datasets. During the first step, initial captions are gathered. A grammatically corrected and/or rephrased version of each initial caption is obtained in the second step. Finally, the initial and edited captions are rated, keeping the top ones for the produced dataset. We objectively evaluate the impact of our framework during the process of creating an audio captioning dataset, in terms of diversity and amount of typographical errors in the obtained captions. The obtained results show that the resulting dataset has fewer typographical errors than the initial captions, and on average each sound in the produced dataset has captions with a Jaccard similarity of 0.24, roughly equivalent to two ten-word captions having four words with the same root in common, indicating that the captions are dissimilar while they still contain some of the same information.
Development, validation, and evaluation datasets of Clotho
Clotho v1 is currently divided into a development split of 2893 audio clips with 14465 captions, an evaluation split of 1045 audio clips with 5225 captions, and a testing split of 1043 audio clips with 5215 captions. These splits are created by first constructing the sets of unique words of the captions of each audio clip. These sets of words are combined to form the bag of words of the whole dataset, from which we can derive the frequency of a given word. With the unique words of audio files as classes, we use multi-label stratification. More information on the splits of Clotho can be found at the corresponding paper.
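The first part of this procedure (per-clip sets of unique words, combined into the dataset's bag of words) can be sketched as follows; the clips and captions are hypothetical.

```python
from collections import Counter

# Hypothetical captions for two audio clips (Clotho has five per clip).
clip_captions = {
    "clip_a.wav": ["a dog barks loudly", "the dog is barking"],
    "clip_b.wav": ["water flows over rocks", "a stream runs over the rocks"],
}

# Set of unique words of the captions of each audio clip.
clip_word_sets = {
    clip: set(w for caption in captions for w in caption.split())
    for clip, captions in clip_captions.items()
}

# Combine the sets to form the bag of words of the whole dataset,
# from which the frequency of a given word can be derived.
bag_of_words = Counter()
for words in clip_word_sets.values():
    bag_of_words.update(words)
```

The resulting per-clip word sets are what the multi-label stratification operates on, with the unique words acting as classes.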
At the moment, we are in the process of finalizing the annotation of 2000 more sounds with five captions each, following the above-mentioned three-step framework (i.e. Clotho v2). The new data will be publicly available and allocated as a validation split and as extra training data for Clotho, keeping the existing evaluation and testing splits of Clotho v1 intact. When the new split is ready and publicly available, we will update the information here to include the exact new information. We estimate that the new data will be available at the end of March.
The name of the splits for Clotho differ from the DCASE terminology. To avoid confusion for participants, the correspondence of splits between Clotho and DCASE challenge is:
| Clotho naming of splits | DCASE Challenge naming of splits |
| --- | --- |
| development | development |
| validation | development |
| evaluation | development |
| testing | evaluation |
The Clotho development and validation splits are meant for optimizing audio captioning methods. The performance of the audio captioning methods can then be assessed (e.g. for reporting results in a conference or journal paper) using the Clotho evaluation split. The Clotho testing split is meant only for usage in scientific challenges, e.g. the DCASE challenge. For the rest of this text, the DCASE challenge terminology will be used. For differentiating between the Clotho development, validation, and evaluation splits, the terms development-training, development-validation, and development-testing will be used, wherever necessary. Development-training refers to the Clotho development split, development-validation refers to the Clotho validation split, and development-testing refers to the Clotho evaluation split.
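For reference while reading, the correspondence above can be written as a simple mapping:

```python
# Mapping from the Clotho split names to the DCASE challenge terminology
# used in the rest of this text.
CLOTHO_TO_DCASE = {
    "development": "development-training",
    "validation": "development-validation",
    "evaluation": "development-testing",
    "testing": "evaluation",
}
```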
Clotho download
The DCASE Development split of Clotho can be found at the online Zenodo repository. Make sure that you download Clotho v2.1, as there were some minor fixes in the dataset (fixes to file naming and some corrupted files).
Development-training data are:
- `clotho_audio_development.7z`: the development-training audio clips.
- `clotho_captions_development.csv`: the captions of the development-training audio clips.
- `clotho_metadata_development.csv`: the meta-data of the development-training audio clips.
Development-validation data are:
- `clotho_audio_validation.7z`: the development-validation audio clips.
- `clotho_captions_validation.csv`: the captions of the development-validation audio clips.
- `clotho_metadata_validation.csv`: the meta-data of the development-validation audio clips.
Development-testing data are:
- `clotho_audio_evaluation.7z`: the development-testing audio clips.
- `clotho_captions_evaluation.csv`: the captions of the development-testing audio clips.
- `clotho_metadata_evaluation.csv`: the meta-data of the development-testing audio clips.
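As a sketch of how the caption files can be consumed, the following reads a captions CSV with Python's standard csv module; the two-line CSV content here is a hypothetical stand-in for the real file, which has one row per audio clip with columns `file_name` and `caption_1` through `caption_5`.

```python
import csv
import io

# Hypothetical stand-in for clotho_captions_development.csv.
csv_text = (
    "file_name,caption_1,caption_2,caption_3,caption_4,caption_5\n"
    "clip_a.wav,cap one,cap two,cap three,cap four,cap five\n"
)

# Map each audio file name to its list of five captions.
captions = {}
with io.StringIO(csv_text) as f:  # in practice: open("clotho_captions_development.csv")
    for row in csv.DictReader(f):
        captions[row["file_name"]] = [row[f"caption_{i}"] for i in range(1, 6)]
```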
DCASE Evaluation split of Clotho (i.e. Clotho testing) can be found at the online Zenodo repository.
Evaluation data are:
- `clotho_audio_test.7z`: the evaluation (testing) audio clips.
- `clotho_metadata_test.csv`: the meta-data of the evaluation (testing) audio clips, containing only authors and licence.
**Note:** Participants are strictly prohibited from using any additional information for the DCASE evaluation (testing) of their method, apart from the provided audio files of the DCASE Evaluation split.
Other suggested resources
This year, participants are allowed to use any available resource (e.g. pre-trained models, other datasets) for developing their AAC methods. In order to help participants discover and use other resources, we have set up a GitHub repository with AAC resources.
Participants are encouraged to browse the information at the GitHub repository with the other suggested AAC resources and use any resource that they deem proper. Additionally, participants are allowed to use any other resource, no matter if it is listed or not at the information of the GitHub repository.
Task setup
Participants are free to use any dataset and pre-trained model, in order to develop their AAC method(s). The assessment of the methods will be performed using the withheld split of Clotho, which is the same as last year, offering direct comparison with the results of the AAC task at DCASE2020.
Task rules
Participants are allowed to:
- Use external data (e.g. audio files, text, annotations).
- Use pre-trained models (e.g. text models like Word2Vec, audio tagging models, sound event detection models).
- Augment the development dataset (i.e. development-training and development-testing) with or without the use of external data.
- Use all the available metadata provided, but they must explicitly state that they use it. This will not affect the rating of their method.
Participants are NOT allowed to:
- Make subjective judgments of the evaluation data, nor to annotate it.
- Use additional information of the DCASE evaluation (testing) data for their method, apart from the provided audio files from the DCASE Evaluation split.
Submission
All participants should submit:
- the output of their audio captioning method on the Clotho-testing split (`*.csv` file, the split will be announced later),
- metadata for their submission (`*.yaml` file), and
- a technical report for their submission (`*.pdf` file).
We allow up to 4 system output submissions per participant/team. For each system, metadata should be provided in a separate file, containing the task-specific information. All files should be packaged into a zip file for submission. Please make a clear connection between the system name in the submitted metadata (the `*.yaml` file), the submitted system output (the `*.csv` file), and the technical report (the `*.pdf` file)! To indicate the connection between your files, you can consider using the following naming convention:
<author>_<institute>_task6_submission_<submission_index>_<output or metadata or report>.<csv or yaml or pdf>
For example:
drossos_tau_task6_submission_1_output.csv
drossos_tau_task6_submission_1_metadata.yaml
drossos_tau_task6_submission_1_report.pdf
The `<submission_index>` field serves to differentiate your submissions in case you have multiple submissions.
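The convention can also be generated programmatically; the helper below is hypothetical (not part of the challenge tooling) and simply assembles the suggested pattern.

```python
def submission_filename(author, institute, submission_index, kind):
    """Build a file name following the suggested naming convention.

    `kind` is one of 'output', 'metadata', or 'report'; the file
    extension (csv, yaml, or pdf) is chosen accordingly.
    """
    extensions = {"output": "csv", "metadata": "yaml", "report": "pdf"}
    return (f"{author}_{institute}_task6_submission_"
            f"{submission_index}_{kind}.{extensions[kind]}")

name = submission_filename("drossos", "tau", 1, "output")
# -> "drossos_tau_task6_submission_1_output.csv"
```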
System output file
The system output file should be a *.csv
file, and should have the following two columns:
file_name
, which will contain the file name of the audio file.caption_predicted
, which will contain the output of the audio captioning method for the file with file name as specified in thefile_name
column.
For example, if a file has the file name `test_0001.wav` and the predicted caption of the audio captioning method for that file is "hello world", then the CSV file should have the entry:

| file_name | caption_predicted |
| --- | --- |
| ... | ... |
| test_0001.wav | hello world |
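Since the results are parsed automatically, writing the file with Python's csv module and the exact column names is a safe approach; the predictions below are hypothetical, and the in-memory buffer stands in for a real output file.

```python
import csv
import io

# Hypothetical predictions: audio file name -> predicted caption.
predictions = {
    "test_0001.wav": "hello world",
    "test_0002.wav": "a dog barks in the distance",
}

# In a real submission: open("output.csv", "w", newline="") instead of StringIO.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["file_name", "caption_predicted"])
writer.writeheader()
for file_name, caption in predictions.items():
    writer.writerow({"file_name": file_name, "caption_predicted": caption})

csv_content = buffer.getvalue()
```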
**Please note:** automated procedures will be used for the evaluation of the submitted results. Therefore, the column names should be exactly as indicated above.
Metadata file
For each system, metadata should be provided in a separate file. The file format should be as indicated below.
# Submission information for task 6
submission:
# Submission label
# Label is used to index submissions.
# Generate your label in the following way to avoid
# overlapping codes among submissions:
# [Last name of corresponding author]_[Abbreviation of institute of the corresponding author]_task[task number]_[index number of your submission (1-4)]
label: drossos_tau_task6_1
#
# Submission name
# This name will be used in the results tables when space permits
name: DCASE2021 baseline system
#
# Submission name abbreviated
# This abbreviated name will be used in the results table when space is tight.
# Use maximum 10 characters.
abbreviation: Baseline
# Authors of the submitted system. Mark authors in
# the order you want them to appear in submission lists.
# One of the authors has to be marked as corresponding author,
# this will be listed next to the submission in the results tables.
authors:
# First author
- lastname: Drossos
firstname: Konstantinos
email: konstantinos.drossos@tuni.fi # Contact email address
corresponding: true # Mark true for one of the authors
# Affiliation information for the author
affiliation:
abbreviation: TAU
institute: Tampere University
department: Computing Sciences # Optional
location: Tampere, Finland
# Second author
- lastname: Lipping
firstname: Samuel
email: samuel.lipping@tuni.fi # Contact email address
# Affiliation information for the author
affiliation:
abbreviation: TAU
institute: Tampere University
department: Computing Sciences # Optional
location: Tampere, Finland
# Third author
- lastname: Virtanen
firstname: Tuomas
email: tuomas.virtanen@tuni.fi
# Affiliation information for the author
affiliation:
abbreviation: TAU
institute: Tampere University
department: Computing Sciences
location: Tampere, Finland
# System information
system:
# System description, meta data provided here will be used to do
# meta analysis of the submitted system.
# Use general level tags, when possible use the tags provided in comments.
# If information field is not applicable to the system, use "!!null".
description:
# Audio input / sampling rate
# e.g. 16kHz, 22.05kHz, 44.1kHz, 48.0kHz
input_sampling_rate: 44.1kHz
# Acoustic representation
# Here you should indicate what kind of audio representation
# you used. If your system used hand-crafted features (e.g.
# mel band energies), then you can do:
#
# `acoustic_features: mel energies`
#
# Else, if you used some pre-trained audio feature extractor,
# you can indicate the name of the system, for example:
#
# `acoustic_features: audioset`
acoustic_features: log-mel energies
# Word embeddings
# Here you can indicate how you treated word embeddings.
# If your method learned its own word embeddings (i.e. you
# did not use any pre-trained word embeddings) then you can
# do:
#
# `word_embeddings: learned`
#
# Else, specify the pre-trained word embeddings that you used
# (e.g. Word2Vec, BERT, etc).
word_embeddings: one-hot
# Data augmentation methods
# e.g. mixup, time stretching, block mixing, pitch shifting, ...
data_augmentation: !!null
# Method scheme
# Here you should indicate the scheme of the method that you
# used. For example:
machine_learning_method: encoder-decoder
# Learning scheme
# Here you should indicate the learning scheme.
# For example, you could specify either
# supervised, self-supervised, or even
# reinforcement learning.
learning_scheme: supervised
# Ensemble
# Here you should indicate if you used ensemble
# of systems or not.
ensemble: No
# Audio modelling
# Here you should indicate the type of system used for
# audio modelling. For example, if you used some stacked CNNs, then
# you could do:
#
# audio_modelling: cnn
#
# If you used some pre-trained system for audio modelling,
# then you should indicate the system used (e.g. COALA, COLA,
# transformer).
audio_modelling: cnn
# Word modelling
# Similarly, here you should indicate the type of system used
# for word modelling. For example, if you used some RNNs,
# then you could do:
#
# word_modelling: rnn
#
# If you used some pre-trained system for word modelling,
# then you should indicate the system used (e.g. transformer).
word_modelling: rnn
# Loss function
# Here you should indicate the loss function that you employed.
loss_function: crossentropy
# Optimizer
# Here you should indicate the name of the optimizer that you
# used.
optimizer: adam
# Learning rate
# Here you should indicate the learning rate of the optimizer
# that you used.
learning_rate: 1e-3
# Gradient clipping
# Here you should indicate if you used any gradient clipping.
# You do this by indicating the value used for clipping. Use
# 0 for no clipping.
gradient_clipping: 0
# Gradient norm
# Here you should indicate the norm of the gradient that you
# used for gradient clipping. This field is used only when
# gradient clipping has been employed.
gradient_norm: !!null
# Metric monitored
# Here you should report the monitored metric
# for optimizing your method. For example, did you
# monitor the loss on the validation data (i.e. validation
# loss)? Or did you monitor the SPIDEr metric? Maybe the training
# loss?
metric_monitored: validation_loss
# System complexity, meta data provided here will be used to evaluate
# submitted systems from the computational load perspective.
complexity:
# Total amount of parameters used in the acoustic model.
# For neural networks, this information is usually given before training process
# in the network summary.
# For other than neural networks, if parameter count information is not directly
# available, try estimating the count as accurately as possible.
# In case of ensemble approaches, add up parameters for all subsystems.
# In case embeddings are used, add up parameter count of the embedding
# extraction networks and classification network
# Use numerical value (do not use comma for thousands-separator).
total_parameters: 46246
# List of external datasets used in the submission.
# The development dataset is used here only as an example; list only external datasets
external_datasets:
# Dataset name
- name: Clotho
# Dataset access url
url: https://doi.org/10.5281/zenodo.3490683
# Has audio:
has_audio: Yes
# Has images
has_images: No
# Has video
has_video: No
# Has captions
has_captions: Yes
# Number of captions per audio
nb_captions_per_audio: 5
# Total amount of examples used
total_audio_length: 24430
# Used for (e.g. audio_modelling, word_modelling, audio_and_word_modelling)
used_for: audio_and_word_modelling
# URL to the source code of the system [optional]
source_code: https://github.com/audio-captioning/dcase-2021-baseline
# System results
results:
development_evaluation:
# System results for development evaluation split.
# Full results are not mandatory, however, they are highly recommended
# as they are needed for thorough analysis of the challenge submissions.
# If you are unable to provide all results, also incomplete
# results can be reported.
bleu1: 0.378
bleu2: 0.119
bleu3: 0.050
bleu4: 0.017
rougel: 0.263
meteor: 0.078
cider: 0.075
spice: 0.028
spider: 0.051
Open and reproducible research
Finally, for supporting open and reproducible research, we kindly ask each participant/team to consider making available the code of their method (e.g. on GitHub) and their pre-trained models, after the challenge is over.
Evaluation
The submitted systems will be evaluated according to their performance on the withheld evaluation split. For the evaluation, the captions will not have any punctuation and all letters will be lower case. Therefore, participants are advised to optimize their methods using captions which have no punctuation and only lower-case letters. The freely available online tools of the Clotho dataset already provide such functionality.
All of the following metrics will be reported for every submitted method:
- BLEU1
- BLEU2
- BLEU3
- BLEU4
- ROUGEL
- METEOR
- CIDEr
- SPICE
- SPIDEr
Ranking of the methods will be performed according to the SPIDEr metric, which is a combination of CIDEr and SPICE. Specifically, the evaluation will be performed based on the average of CIDEr and SPICE, referred to as SPIDEr and shown to have the combined benefits of CIDEr and SPICE. More information is available in the corresponding paper, available online here.
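Because SPIDEr is simply the average of CIDEr and SPICE, it can be computed directly from the two scores; the example below uses the baseline's CIDEr and SPICE values reported in the metadata example above.

```python
def spider(cider, spice):
    """SPIDEr is defined as the average of the CIDEr and SPICE scores."""
    return 0.5 * (cider + spice)

# Baseline development-evaluation scores: CIDEr 0.075, SPICE 0.028.
score = spider(0.075, 0.028)  # 0.0515, reported rounded as 0.051
```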
For a brief introduction and more pointers on the above mentioned metrics, you can refer to the original paper of audio captioning:
Konstantinos Drossos, Sharath Adavanne, and Tuomas Virtanen. Automated audio captioning with recurrent neural networks. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). New Paltz, New York, U.S.A., Oct. 2017. URL: https://arxiv.org/abs/1706.10006.
Automated Audio Captioning with Recurrent Neural Networks
Abstract
We present the first approach to automated audio captioning. We employ an encoder-decoder scheme with an alignment model in between. The input to the encoder is a sequence of log mel-band energies calculated from an audio file, while the output is a sequence of words, i.e. a caption. The encoder is a multi-layered, bi-directional gated recurrent unit (GRU) and the decoder a multi-layered GRU with a classification layer connected to the last GRU of the decoder. The classification layer and the alignment model are fully connected layers with shared weights between timesteps. The proposed method is evaluated using data drawn from a commercial sound effects library, ProSound Effects. The resulting captions were rated through metrics utilized in machine translation and image captioning fields. Results from metrics show that the proposed method can predict words appearing in the original caption, but not always correctly ordered.
Results
Complete results and technical reports can be found on the results page.
Baseline system
To provide a starting point and some initial results for the challenge, there is a baseline system for the task of automated audio captioning. The baseline system is freely available online, is a sequence-to-sequence model, and is implemented using PyTorch.
The baseline system consists of four parts:
- the caption evaluation part,
- the dataset pre-processing/feature extraction part,
- the data handling part for PyTorch library, and
- the deep neural network (DNN) method part
You can find the baseline system of automated audio captioning task at GitHub.
Caption evaluation
Caption evaluation is performed using a version of the caption evaluation tools used for the MS COCO challenge. This version of the code has been updated to be compliant with Python 3.6 and above, and with the needs of the automated audio captioning task. The code for the evaluation is included in the baseline system, but it can also be found online.
Dataset pre-processing/feature extraction
Clotho data are WAV and CSV files. In order to be used by an audio captioning method, features have to be extracted from the audio clips (i.e. the WAV files), and the captions in the CSV files have to be turned into a more computationally friendly form (e.g. one-hot encoding). Finally, the extracted features and processed words have to be matched and used as input-output pairs for optimizing the parameters of an audio captioning method.
In the baseline system there is code that implements the above. This code is also available as stand-alone, in the following repository:
Data handling for PyTorch library
In PyTorch there is the DataLoader class, which offers a convenient way of handling the data iteration. The Clotho dataset has an associated data loader for PyTorch, available in the baseline system and online.
Deep neural network (DNN) method
Finally, the DNN of the baseline is a sequence-to-sequence system, consisting of an encoder and a decoder. The encoder takes as input 64 log mel-band energies, consists of three bi-directional GRU layers, and outputs the summary of the input sequence of features. Each GRU of the encoder has 256 output features.
The input sequence to the encoder (i.e. the extracted audio features) has a different length from the targeted output sequence (i.e. the words). For that reason, there has to be some kind of alignment between these two sequences. Our baseline system does not employ any alignment mechanism. Instead, the encoder outputs the summary vector of the input sequence, and this summary vector is then repeated as an input to the decoder.
The decoder consists of one GRU and one classifier (a trainable affine transform with bias and a non-linearity at the end), accepts the output of the encoder, and outputs a probability for each of the unique words. The decoder iterates for 22 time steps, which is the length of the longest caption.
**Please note:** The DNN method serves as an example. It is not meant to be used as a solid method for audio captioning, and it has not been subject to further hyper-parameter optimization.
Hyper-parameters
Feature extraction and caption processing: The baseline system uses the following hyper-parameters for the feature extraction:
- 64 log mel-band energies
- Hamming window of 46 ms length with 50% overlap
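These settings translate into STFT parameters roughly as follows; the conversion assumes the 44.1 kHz sampling rate of Clotho, and the exact rounding used in the baseline implementation may differ.

```python
# Convert the 46 ms Hamming window and 50% overlap into sample counts
# at the 44.1 kHz sampling rate of Clotho. The exact rounding used by
# the baseline implementation may differ.
sampling_rate = 44100                        # Hz
window_length = int(0.046 * sampling_rate)   # 2028 samples (~46 ms)
hop_length = window_length // 2              # 50% overlap -> 1014 samples
n_mels = 64                                  # number of log mel bands
```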
Captions are pre-processed according to the following:
- Removal of all punctuation
- All letters to lower case
- Tokenization
- Pre-pending of the start-of-sentence token (i.e. <sos>)
- Appending of the end-of-sentence token (i.e. <eos>)
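The caption pre-processing steps above can be sketched with the Python standard library. The function name is an illustrative assumption, not the baseline's actual code:

```python
import string

def preprocess_caption(caption):
    """Apply the listed pre-processing steps to one caption string."""
    # Remove all punctuation and lower-case every letter.
    caption = caption.translate(str.maketrans('', '', string.punctuation))
    caption = caption.lower()
    # Tokenize on whitespace, then add the sentence boundary tokens.
    return ['<sos>'] + caption.split() + ['<eos>']

tokens = preprocess_caption('A dog barks, loudly!')
# tokens is ['<sos>', 'a', 'dog', 'barks', 'loudly', '<eos>']
```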
Neural network and optimization hyper-parameters: The deep neural network used in the baseline system has the following hyper-parameters:
- Three layers of bi-directional gated recurrent units (GRUs).
- The first GRU layer has an input dimensionality of 64 and outputs 256 features (i.e. 256 * 2 = 512 for the two directions).
- The second and third GRU layers have an input dimensionality of 512 and output 256 features (i.e. 256 * 2 = 512 for the two directions).
- The outputs of the second GRU are added to its inputs before being used as input to the third GRU (i.e. there is a residual connection around the second GRU).
- The outputs of the third GRU are also added to its inputs (a second residual connection) before the result is passed on to the decoder.
- One GRU layer, with an input dimensionality of 512 and an output dimensionality of 256.
- One trainable affine transform with bias, acting as a classifier, with an input dimensionality of 256 and an output dimensionality of 4637 (i.e. the size of the one-hot encoding of the words).
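Where the two residual additions sit in the encoder can be sketched with NumPy. Fixed random matrices stand in for the second and third bi-directional GRU layers; only the 512-in / 512-out shapes and the additions are taken from the description above, everything else is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
T, DIM = 10, 512  # sequence length; 512 = 256 features * 2 directions

# Random linear maps stand in for the second and third bi-GRU layers;
# a real GRU is recurrent, but the residual wiring is the same.
W2 = rng.standard_normal((DIM, DIM))
W3 = rng.standard_normal((DIM, DIM))

h1 = rng.standard_normal((T, DIM))  # output of the first bi-GRU
h2 = h1 @ W2 + h1                   # residual connection around layer 2
h3 = h2 @ W3 + h2                   # residual connection around layer 3
```

The additions require the layer input and output dimensionalities to match, which is why layers two and three both map 512 features to 512 features.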
The optimization of the DNN parameters is performed using the Adam optimizer for 300 epochs, a batch size of 16, and the cross-entropy loss. The learning rate of Adam is 10^-4 and, before every weight update, the 2-norm of the gradients is clipped using a threshold value of 2. The validation split is used for early stopping, with a patience of 50 epochs.
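In the baseline, Adam and gradient clipping come from PyTorch; the early-stopping bookkeeping, however, fits in a few lines of plain Python. The function below is an illustrative sketch (a patience of 2 is used in the example to keep it short; the baseline uses 50):

```python
def early_stopping_epoch(val_losses, patience):
    """Return the epoch at which training stops under the early-stopping
    paradigm: stop once `patience` epochs pass with no improvement of
    the best validation loss seen so far."""
    best_loss = float('inf')
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # patience exhausted: stop here
    return len(val_losses) - 1  # never triggered: train to the end

# Best loss at epoch 1; with patience 2, training stops at epoch 3.
stop = early_stopping_epoch([3.0, 2.0, 2.5, 2.6, 2.7], patience=2)
```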
All input audio features and captions in a batch are padded to the longest length in the batch. That is, the input audio feature sequences are padded with zero vectors at the beginning, so that all feature sequences in the batch have the same number of vectors. The output sequences of words are padded with <eos> tokens at the end, so that all word sequences have the same number of words.
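This padding scheme can be sketched in NumPy. The function name is an illustrative assumption; the zero-prepending for features and <eos>-appending for words follow the description above:

```python
import numpy as np

def pad_batch(feature_seqs, word_seqs, eos_token):
    """Pad a batch as described: zero vectors prepended to the audio
    features, <eos> tokens appended to the word sequences."""
    max_t = max(f.shape[0] for f in feature_seqs)
    max_l = max(len(w) for w in word_seqs)
    # Prepend zero feature vectors so every sequence has max_t vectors.
    padded_features = np.stack([
        np.concatenate([np.zeros((max_t - f.shape[0], f.shape[1]), f.dtype), f])
        for f in feature_seqs
    ])
    # Append <eos> tokens so every caption has max_l words.
    padded_words = [w + [eos_token] * (max_l - len(w)) for w in word_seqs]
    return padded_features, padded_words

feats = [np.ones((3, 64), dtype=np.float32), np.ones((5, 64), dtype=np.float32)]
words = [[0, 4, 1], [0, 4, 4, 1]]
padded_feats, padded_words = pad_batch(feats, words, eos_token=1)
```

In the baseline this logic would live in the DataLoader's collate function, so every batch comes out as rectangular arrays.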
Results for the development dataset
The results of the baseline system for the development dataset of Clotho v2.1 are:
| Metric | Value |
| ------ | ----- |
| BLEU1  | 0.378 |
| BLEU2  | 0.119 |
| BLEU3  | 0.050 |
| BLEU4  | 0.017 |
| ROUGEL | 0.263 |
| METEOR | 0.078 |
| CIDEr  | 0.075 |
| SPICE  | 0.028 |
| SPIDEr | 0.051 |
The pre-trained weights for the DNN of the baseline system yielding the above results are freely available on Zenodo:
Citations
If you are participating in this task, you might want to check the following papers. If you find a paper that should be cited here but is not (e.g. a paper for one of the suggested resources), please contact us and report it.
- The initial publication on audio captioning:
Konstantinos Drossos, Sharath Adavanne, and Tuomas Virtanen. Automated audio captioning with recurrent neural networks. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). New Paltz, New York, U.S.A., Oct. 2017. URL: https://arxiv.org/abs/1706.10006.
Automated Audio Captioning with Recurrent Neural Networks
Abstract
We present the first approach to automated audio captioning. We employ an encoder-decoder scheme with an alignment model in between. The input to the encoder is a sequence of log mel-band energies calculated from an audio file, while the output is a sequence of words, i.e. a caption. The encoder is a multi-layered, bi-directional gated recurrent unit (GRU) and the decoder a multi-layered GRU with a classification layer connected to the last GRU of the decoder. The classification layer and the alignment model are fully connected layers with shared weights between timesteps. The proposed method is evaluated using data drawn from a commercial sound effects library, ProSound Effects. The resulting captions were rated through metrics utilized in machine translation and image captioning fields. Results from metrics show that the proposed method can predict words appearing in the original caption, but not always correctly ordered.
- The three-step framework, employed for collecting the annotations of Clotho (if you use the three-step framework, consider citing the paper):
Samuel Lipping, Konstantinos Drossos, and Tuomas Virtanen. Crowdsourcing a dataset of audio captions. In Proceedings of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE). Nov. 2019. URL: https://arxiv.org/abs/1907.09238.
Crowdsourcing a Dataset of Audio Captions
Abstract
Audio captioning is a novel field of multi-modal translation and it is the task of creating a textual description of the content of an audio signal (e.g. “people talking in a big room”). The creation of a dataset for this task requires a considerable amount of work, rendering the crowdsourcing a very attractive option. In this paper we present a three steps based framework for crowdsourcing an audio captioning dataset, based on concepts and practises followed for the creation of widely used image captioning and machine translations datasets. During the first step initial captions are gathered. A grammatically corrected and/or rephrased version of each initial caption is obtained in second step. Finally, the initial and edited captions are rated, keeping the top ones for the produced dataset. We objectively evaluate the impact of our framework during the process of creating an audio captioning dataset, in terms of diversity and amount of typographical errors in the obtained captions. The obtained results show that the resulting dataset has less typographical errors than the initial captions, and on average each sound in the produced dataset has captions with a Jaccard similarity of 0.24, roughly equivalent to two ten-word captions having in common four words with the same root, indicating that the captions are dissimilar while they still contain some of the same information.
- The Clotho dataset (if you use Clotho consider citing the Clotho paper):
Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. Clotho: An audio captioning dataset. In 45th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Barcelona, Spain, May 2020. URL: https://arxiv.org/abs/1910.09387.
Clotho: An Audio Captioning Dataset
Abstract
Audio captioning is the novel task of general audio content description using free text. It is an intermodal translation task (not speech-to-text), where a system accepts as an input an audio signal and outputs the textual description (i.e. the caption) of that signal. In this paper we present Clotho, a dataset for audio captioning consisting of 4981 audio samples of 15 to 30 seconds duration and 24 905 captions of eight to 20 words length, and a baseline method to provide initial results. Clotho is built with focus on audio content and caption diversity, and the splits of the data are not hampering the training or evaluation of methods. All sounds are from the Freesound platform, and captions are crowdsourced using Amazon Mechanical Turk and annotators from English speaking countries. Unique words, named entities, and speech transcription are removed with post-processing. Clotho is freely available online (https://zenodo.org/record/3490684).