Unsupervised neural and Bayesian models for zero-resource speech processing
Date: 07/07/2017
Author: Kamper, Herman
Abstract
Zero-resource speech processing is a growing research area which aims to develop methods
that can discover linguistic structure and representations directly from unlabelled speech
audio. Such unsupervised methods would allow speech technology to be developed
in settings where transcriptions, pronunciation dictionaries, and text for language
modelling are not available. Similar methods are required for cognitive models of
language acquisition in human infants, and for developing robotic applications that are
able to automatically learn language in a novel linguistic environment.
There are two central problems in zero-resource speech processing: (i) finding frame-level feature representations which make it easier to discriminate between linguistic units
(phones or words), and (ii) segmenting and clustering unlabelled speech into meaningful
units. The claim of this thesis is that both top-down modelling (using knowledge of
higher-level units to learn, discover and gain insight into their lower-level constituents)
as well as bottom-up modelling (piecing together lower-level features to give rise to
more complex higher-level structures) are advantageous in tackling these two problems.
The thesis is divided into three parts. The first part introduces a new autoencoder-like
deep neural network for unsupervised frame-level representation learning. This
correspondence autoencoder (cAE) uses weak top-down supervision from an unsupervised
term discovery system that identifies noisy word-like terms in unlabelled speech data.
In an intrinsic evaluation of frame-level representations, the cAE outperforms several
state-of-the-art bottom-up and top-down approaches, achieving a relative improvement
of more than 60% over the previous best system. This shows that the cAE is particularly
effective in using top-down knowledge of longer-spanning patterns in the data; at the
same time, we find that the cAE is only able to learn useful representations when it is
initialized using bottom-up pretraining on a large set of unlabelled speech.
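To make the cAE training objective concrete, the following is a minimal sketch, assuming 39-dimensional input frames, 100-unit hidden layers, and PyTorch (all illustrative choices, not the thesis's actual configuration): each frame from one discovered word is trained to predict its aligned frame in the matched word, rather than being reconstructed.

```python
# Illustrative sketch of correspondence autoencoder (cAE) training.
# Dimensions, layer sizes, and the framework are assumptions for illustration.
import torch
import torch.nn as nn

D = 39   # assumed per-frame feature dimensionality (e.g. MFCCs + deltas)
H = 100  # assumed hidden layer size

cae = nn.Sequential(
    nn.Linear(D, H), nn.Sigmoid(),   # in the thesis, weights are initialized
    nn.Linear(H, H), nn.Sigmoid(),   # from bottom-up autoencoder pretraining
    nn.Linear(H, D),
)
optimiser = torch.optim.Adam(cae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x_frames, y_frames):
    """x_frames, y_frames: aligned frames from a discovered word pair.
    The network maps x_frames towards the *other* word's frames (y_frames)."""
    optimiser.zero_grad()
    loss = loss_fn(cae(x_frames), y_frames)
    loss.backward()
    optimiser.step()
    return loss.item()
```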
The second part of the thesis presents a novel unsupervised segmental Bayesian model that segments unlabelled speech data and clusters the segments into hypothesized
word groupings. The result is a complete unsupervised tokenization of the input speech
in terms of discovered word types: the system essentially performs unsupervised speech
recognition. In this approach, a potential word segment (of arbitrary length) is embedded
in a fixed-dimensional vector space. The model, implemented as a Gibbs sampler, then
builds a whole-word acoustic model in this embedding space while jointly performing
segmentation. We first evaluate the approach in a small-vocabulary multi-speaker
connected digit recognition task, where we report unsupervised word error rates (WER)
by mapping the unsupervised decoded output to ground truth transcriptions. The model
achieves around 20% WER, outperforming a previous HMM-based system by about 10% absolute. To achieve this performance, the acoustic word embedding function (which
maps variable-duration segments to single vectors) is refined in a top-down manner by
using terms discovered by the model in an outer loop of segmentation.
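As an illustration of the embedding idea, a variable-length segment can be mapped to a fixed-dimensional vector by uniformly sampling a fixed number of frames and flattening them. This is a sketch only; the dimensions and the uniform-downsampling choice are illustrative assumptions, not necessarily the exact embedding function used in the thesis.

```python
# Sketch of a simple acoustic word embedding by uniform downsampling.
import numpy as np

def embed_segment(frames, k=10):
    """Map a variable-length segment (T x D array of frames) to a fixed
    k*D-dimensional vector by uniformly sampling k frames and flattening."""
    T, D = frames.shape
    idx = np.linspace(0, T - 1, k).round().astype(int)
    return frames[idx].reshape(-1)

# Toy usage: segments of different lengths land in the same embedding space.
seg_a = np.random.randn(53, 13)   # hypothetical 53-frame segment of 13-d MFCCs
seg_b = np.random.randn(87, 13)   # hypothetical 87-frame segment
e_a, e_b = embed_segment(seg_a), embed_segment(seg_b)
assert e_a.shape == e_b.shape == (130,)
```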
The third and final part of the study extends the small-vocabulary system to handle larger vocabularies in conversational speech data. To our knowledge, this is the
first full-coverage segmentation and clustering system that is applied to large-vocabulary
multi-speaker data. To improve efficiency, the system incorporates a bottom-up syllable
boundary detection method to eliminate unlikely word boundaries. We compare the
system on English and Xitsonga datasets to several state-of-the-art baselines. We
show that by imposing a consistent top-down segmentation while also using bottom-up
knowledge from detected syllable boundaries, both single-speaker and multi-speaker
versions of our system outperform a purely bottom-up single-speaker syllable-based
approach. We also show that the discovered clusters can be made less speaker- and
gender-specific by using features from the cAE (which incorporates both top-down and
bottom-up learning). The system's discovered clusters are still less pure than those of
two multi-speaker unsupervised term discovery systems, but provide far greater coverage.
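As a hypothetical sketch of how detected syllable boundaries can prune the segmentation search, the enumeration below allows word boundaries only at syllable boundaries, so every hypothesised word spans whole syllables. The frame indices and the cap on syllables per word are illustrative assumptions, not the thesis's actual search procedure.

```python
# Sketch: restrict candidate word boundaries to detected syllable boundaries.
def candidate_segmentations(bounds, max_syls=3):
    """bounds: sorted frame indices of detected syllable boundaries, including
    the utterance start and end. Yields segmentations as lists of (start, end)
    frame spans, each word covering between 1 and max_syls syllables."""
    if len(bounds) == 1:
        yield []
        return
    start = bounds[0]
    for j in range(1, min(max_syls, len(bounds) - 1) + 1):
        for rest in candidate_segmentations(bounds[j:], max_syls):
            yield [(start, bounds[j])] + rest

# Toy example: an utterance with hypothetical syllable boundaries at these frames.
for seg in candidate_segmentations([0, 30, 55, 90]):
    print(seg)
```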
In summary, the different models and systems presented in this thesis show that both
top-down and bottom-up modelling can improve representation learning, segmentation
and clustering of unlabelled speech data.