Show simple item record

dc.contributor.advisor	Williams, Chris
dc.contributor.advisor	Ferrari, Vittorio
dc.contributor.author	Nash, Charlie
dc.date.accessioned	2021-01-05T19:08:45Z
dc.date.available	2021-01-05T19:08:45Z
dc.date.issued	2020-11-30
dc.identifier.uri	https://hdl.handle.net/1842/37476
dc.identifier.uri	http://dx.doi.org/10.7488/era/760
dc.description.abstract	Latent variable models assume the existence of unobserved factors that are responsible for generating observed data. Deep latent variable models that make use of neural components are effective at modelling and learning representations of data. In this thesis we present specialised deep latent variable models for a range of complex data domains, address challenges associated with the presence of missing data, and develop tools for the analysis of the representations learned by neural networks.

First, we present the shape variational autoencoder (ShapeVAE), a deep latent variable model of part-structured 3D objects. Given an input collection of part-segmented objects with dense point correspondences, the ShapeVAE is capable of synthesizing novel, realistic shapes, and by performing conditional inference it can impute missing parts or surface normals. In addition, by generating both points and surface normals, our model enables us to use powerful surface-reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape-completion and test-set log-likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples and that it captures a semantically meaningful shape embedding. In addition, we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.

Latent variable models can be used to probabilistically “fill in” missing data entries. The variational autoencoder architecture (Kingma and Welling, 2014; Rezende et al., 2014) includes a “recognition” or “encoder” network that infers the latent variables given the data variables. However, it is not clear how to handle missing data variables in these networks. The factor analysis (FA) model is a basic autoencoder, using linear encoder and decoder networks. We show how to calculate exactly the latent posterior distribution for the FA model in the presence of missing data, and note that this solution exhibits a non-trivial dependence on the pattern of missingness. We also discuss various approximations to the exact solution. Experiments compare the effectiveness of various approaches to imputing the missing data.

Next, we present an approach for learning latent, object-based representations from image data, called the “multi-entity variational autoencoder” (MVAE), whose prior and posterior distributions are defined over a set of random vectors. Object-based representations are closely linked with human intelligence, yet relatively little work has explored how object-based representations can arise through unsupervised learning. We demonstrate that the model can learn interpretable representations of visual scenes that disentangle objects and their properties.

Finally, we present a method for the analysis of neural network representations that trains autoregressive decoders, called inversion models, to express a distribution over input features conditioned on intermediate model representations. Insights into the invariances learned by supervised models can be gained by viewing samples from these inversion models. In addition, we can use these inversion models to estimate the mutual information between a model's inputs and its intermediate representations, thus quantifying the amount of information preserved by the network at different stages. Using this method we examine the types of information preserved at different layers of convolutional neural networks, and explore the invariances induced by different architectural choices. We also show that the mutual information between inputs and network layers initially increases and then decreases over the course of training, supporting recent work by Shwartz-Ziv and Tishby (2017) on the information bottleneck theory of deep learning.	en
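As a companion note to the abstract's claim of an exact latent posterior for factor analysis with missing data, the following is a minimal sketch obtained from standard Gaussian conditioning; the notation (loading matrix W, diagonal noise covariance Psi, mean mu, and subscript o selecting the observed entries) is generic rather than taken from the thesis, so details of the thesis's own derivation may differ.

% Factor analysis: x = W z + mu + eps, with z ~ N(0, I) and eps ~ N(0, Psi), Psi diagonal.
% W_o, Psi_o, mu_o keep only the rows/entries corresponding to the observed values x_o.
\begin{align}
  p(\mathbf{z} \mid \mathbf{x}_o) &= \mathcal{N}\!\left(\mathbf{z} \mid \mathbf{m}_o,\ \boldsymbol{\Sigma}_o\right), \\
  \boldsymbol{\Sigma}_o &= \left(\mathbf{I} + \mathbf{W}_o^{\top} \boldsymbol{\Psi}_o^{-1} \mathbf{W}_o\right)^{-1}, \\
  \mathbf{m}_o &= \boldsymbol{\Sigma}_o\, \mathbf{W}_o^{\top} \boldsymbol{\Psi}_o^{-1} \left(\mathbf{x}_o - \boldsymbol{\mu}_o\right).
\end{align}

Because \Sigma_o is built from only the observed rows of W and Psi, the posterior changes with which entries are missing and not just with the observed values, which is the non-trivial dependence on the pattern of missingness mentioned in the abstract. In the same spirit, the mutual-information estimate described for the inversion models can be read through the identity I(X; H) = H(X) - H(X | H), with the intractable conditional entropy replaced by the inversion model's average negative log-likelihood; this is one standard reading, not necessarily the exact estimator used in the thesis.

\[
  I(X; H) \;=\; H(X) - H(X \mid H) \;\approx\; H(X) - \mathbb{E}_{(x,\,h)}\!\left[-\log q_{\theta}(x \mid h)\right],
\]
where q_{\theta} denotes the autoregressive inversion model and h the intermediate representation of input x.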
dc.contributor.sponsor	Engineering and Physical Sciences Research Council (EPSRC)	en
dc.language.iso	en	en
dc.publisher	The University of Edinburgh	en
dc.relation.hasversion	Nash, C. and Williams, C. K. I. (2017). The shape variational autoencoder: A deep generative model of part-segmented 3D objects. In Computer Graphics Forum, volume 36, pages 1-12. Wiley Online Library.	en
dc.relation.hasversion	Williams, C. K. I., Nash, C. and Nazábal, A. (2018). Autoencoders and probabilistic inference with missing data: An exact solution for the factor analysis case. Available at arXiv:1801.03851	en
dc.relation.hasversion	Nash, C., Kushman, N., and Williams, C. K. I. (2019). Inverting supervised representations with autoregressive neural density models. In AISTATS, Proceedings of Machine Learning Research.	en
dc.relation.hasversion	“The Shape Variational Autoencoder: A Deep Generative Model of Part-Segmented 3D Objects” (Nash and Williams, 2017), published at the Symposium on Geometry Processing.	en
dc.relation.hasversion	“Inverting supervised representations with autoregressive neural density models” (Nash et al., 2019), presented at AISTATS 2019.	en
dc.relation.hasversion	C. Nash, S. A. Eslami, C. Burgess, I. Higgins, D. Zoran, T. Weber, and P. Battaglia. The multi-entity variational autoencoder. NeurIPS Learning Disentangled Features workshop, 2017.	en
dc.subject	latent variable models	en
dc.subject	latent variable model of 3D objects	en
dc.subject	approximation method analysis	en
dc.subject	machine learning	en
dc.subject	analysing neural network layers	en
dc.title	Unsupervised learning with neural latent variable models	en
dc.type	Thesis or Dissertation	en
dc.type.qualificationlevel	Doctoral	en
dc.type.qualificationname	PhD Doctor of Philosophy	en

