Publications

The Use of Voice Source Features for Sung Speech Recognition.

Published in ICASSP, 2021

In this paper, we ask whether voice source features (pitch, shimmer, jitter, etc.) can improve the performance of automatic sung speech recognition, arguing that conclusions previously drawn from spoken speech studies may not be valid in the sung speech domain. We first use a parallel singing/speaking corpus (NUS-48E) to illustrate differences in sung versus spoken voicing characteristics, including pitch range, syllable duration, vibrato, jitter and shimmer. We then use this analysis to inform speech recognition experiments on the sung speech DSing corpus, using a state-of-the-art acoustic model and augmenting conventional features with various voice source parameters. Experiments are run with three standard, increasingly large training sets: DSing1 (15.1 hours), DSing3 (44.7 hours) and DSing30 (149.1 hours). Pitch combined with degree of voicing produces a significant decrease in WER from 38.1% to 36.7% when training with DSing1; however, the smaller WER decreases observed when training with the larger, more varied DSing3 and DSing30 sets were not statistically significant. Voice quality characteristics did not improve recognition performance, although analysis suggests that they do contribute to improved discrimination between voiced/unvoiced phoneme pairs.
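
As an illustration of the kind of features involved, below is a minimal sketch of how frame-level pitch, a degree-of-voicing proxy, and utterance-level jitter/shimmer could be extracted with the praat-parselmouth Python package. The file name and the pitch floor/ceiling values are placeholder choices; this is not the paper's exact feature pipeline.

```python
# Rough sketch: extracting voice source features (F0, voicing, jitter, shimmer)
# from a sung-speech recording with praat-parselmouth.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sung_phrase.wav")  # placeholder file name

# Frame-level F0 track; unvoiced frames are reported as 0 Hz.
pitch = snd.to_pitch(time_step=0.01, pitch_floor=75, pitch_ceiling=600)
f0 = pitch.selected_array["frequency"]
voiced_fraction = (f0 > 0).mean()  # crude "degree of voicing" proxy

# Utterance-level jitter and shimmer from detected glottal pulses.
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 600)
jitter_local = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, pulses], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

print(f"voiced fraction: {voiced_fraction:.2f}")
print(f"jitter (local): {jitter_local:.4f}, shimmer (local): {shimmer_local:.4f}")
```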

Recommended citation: Roa Dabike, Gerardo and Barker, Jon. (2021). "The Use of Voice Source Features for Sung Speech Recognition." ICASSP 2021, pages 6513–6517.

The Sheffield University System for the MIREX 2020: Lyrics Transcription Task.

Published in MIREX, 2020

This extended abstract describes the system we submitted to the MIREX 2020 Lyrics Transcription task. The system consists of two modules: a source separation front-end and an ASR back-end. The first module separates the vocal from a polyphonic song using a convolutional time-domain audio separation network (Conv-TasNet). The second module transcribes the lyrics from the separated vocal using a factored-layer time-delay neural network (fTDNN) acoustic model and a 4-gram language model. Both the separation and ASR modules are trained on large open-source singing corpora, namely Smule DAMP-VSEP and Smule DAMP-MVP. Using the separation module as an audio pre-processing step reduced the transcription error by roughly 11% absolute WER for polyphonic songs compared with transcription without vocal separation. However, the best WER achieved was 52.06%, which is still very high compared with the WERs as low as 19.60% that we achieved previously for unaccompanied singing.
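
To make the two-module pipeline concrete, here is a minimal sketch of a Conv-TasNet separation front-end using the asteroid toolkit. The checkpoint identifier, file names, and the assumption that the first output source is the vocal stem are placeholders, not the submitted system's configuration.

```python
# Rough sketch: Conv-TasNet vocal separation as an ASR pre-processing step.
import torch
import torchaudio
from asteroid.models import ConvTasNet

# Placeholder checkpoint: any Conv-TasNet trained to split a song into
# vocal and accompaniment stems would do here.
model = ConvTasNet.from_pretrained("path/to/vocal_separation_checkpoint")
model.eval()

mixture, sr = torchaudio.load("polyphonic_song.wav")  # placeholder file
mixture = mixture.mean(dim=0, keepdim=True)  # downmix stereo to mono

with torch.no_grad():
    est_sources = model(mixture)  # shape: (batch, n_sources, n_samples)

vocals = est_sources[0, 0].unsqueeze(0)  # assume source 0 is the vocal stem
torchaudio.save("separated_vocals.wav", vocals, sr)
# "separated_vocals.wav" would then be fed to the fTDNN ASR back-end.
```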

Recommended citation: Roa Dabike, Gerardo and Barker, Jon. (2020). "The Sheffield University System for the MIREX 2020: Lyrics Transcription Task." MIREX 2020.

Automatic lyric transcription from karaoke vocal tracks: Resources and a baseline system.

Published in Interspeech, 2019

Automatic sung speech recognition is a relatively understudied topic that has been held back by a lack of large, freely available datasets. This has recently changed thanks to the release of the DAMP Sing! dataset, an 1100-hour karaoke dataset originating from the social music-making company Smule. This paper presents work undertaken to define an easily replicable automatic speech recognition benchmark for this data. In particular, we describe how transcripts and alignments have been recovered from karaoke prompts and timings; how suitable training, development and test sets have been defined with varying degrees of accent variability; and how language models have been developed using lyric data from the LyricWikia website. Initial recognition experiments have been performed using factored-layer TDNN acoustic models with lattice-free MMI training using Kaldi. The best WER is 19.60%, a new state of the art for this type of data. The paper concludes with a discussion of the many challenging problems that remain to be solved. Dataset definitions and Kaldi scripts have been made available so that the benchmark is easily replicable.
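
For reference, word error rate figures such as the 19.60% quoted above can be computed from reference/hypothesis transcript pairs in a few lines of Python. The sketch below uses the jiwer package with invented lyric strings purely as an illustration; it is not the benchmark's official scoring script.

```python
# Toy word error rate (WER) computation with the jiwer package.
# The lyric strings are invented examples, not taken from the DSing corpus.
import jiwer

references = [
    "twinkle twinkle little star how i wonder what you are",
    "row row row your boat gently down the stream",
]
hypotheses = [
    "twinkle twinkle little star now i wonder what you are",
    "row row row your boat gently down stream",
]

# WER = (substitutions + deletions + insertions) / reference word count.
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer * 100:.2f}%")
```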

Recommended citation: Roa Dabike, Gerardo and Barker, Jon. (2019). "Automatic lyric transcription from karaoke vocal tracks: Resources and a baseline system." Interspeech 2019, pages 549–583.