Edinburgh Research Archive

Meta-learning to optimise: loss functions and update rules

View/Open
GaoB_2023.pdf (5.703 MB)
Date
07/02/2023
Author
Gao, Boyan
Abstract
Meta-learning, also known as "learning to learn", aims to extract invariant meta-knowledge from a group of tasks in order to improve the generalisation of base models on novel tasks. The learned meta-knowledge takes various forms, such as neural architectures, network initialisations, loss functions and optimisers. In this thesis, we study learning to optimise through meta-learning, with its two main components: loss function learning and optimiser learning. At a high level, these two components play complementary roles: optimisers provide update rules that modify the model parameters using gradient information generated from the loss function. We focus on the meta-model's re-usability across tasks. In the ideal case, the learned meta-model should provide a "plug-and-play" drop-in that can be used, without further modification or computational expense, with any new dataset or even new model architecture. We apply these ideas to address three challenges in machine learning: improving the convergence rate of optimisers, learning with noisy labels, and learning models that are robust to domain shift.

We first study how to meta-learn loss functions. Unlike most prior work, which parameterises a loss function in a black-box fashion with neural networks, we meta-learn a Taylor polynomial loss and apply it to improve the robustness of the base model to label noise in the training data. The good performance of deep neural networks relies on gold-standard labelled data; in practice, however, wrongly labelled data is common due to human error and imperfect automatic annotation processes. We draw inspiration from hand-designed losses that modify the training dynamics to reduce the impact of noisy labels. Going beyond existing hand-designed robust losses, we develop a bi-level optimisation meta-learner, Automated Robust Loss (ARL), that discovers novel robust losses which outperform the best prior hand-designed robust losses.

A second contribution, ITL, extends the loss function learning idea to the problem of Domain Generalisation (DG). DG is the challenging scenario of deploying a model trained on one data distribution to a novel data distribution. Compared to ARL, where the target loss function is optimised with a genetic algorithm, ITL benefits from gradient-based optimisation of the loss parameters. By leveraging the mathematical guarantee provided by the Implicit Function Theorem, the hypergradient required to update the loss can be computed efficiently without differentiating through the whole base-model training trajectory. This dramatically reduces the computational cost of the meta-learning stage and accelerates the loss function learning process by providing a more accurate hypergradient. Applying our learned loss to the DG problem, we learn base models that exhibit increased robustness to domain shift compared to the state of the art. Importantly, the modular plug-and-play nature of our learned loss means that it is simple to use, requiring just a few lines of code change to standard Empirical Risk Minimisation (ERM) learners.

We finally study accelerating the optimisation process itself by designing a meta-learning algorithm, termed MetaMD, that searches for efficient optimisers. We tackle this problem by meta-learning Mirror Descent-based optimisers through learning the strongly convex function parameterising a Bregman divergence. While standard meta-learners require a validation set to define a meta-objective for learning, MetaMD instead optimises a convergence rate bound.
The resulting learned optimiser uniquely has mathematically guaranteed convergence and generalisation properties.
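To make the Taylor-polynomial loss idea concrete, the sketch below is a minimal illustration in PyTorch (assumed here; the class name, polynomial order and coefficient initialisation are hypothetical). It expresses a classification loss as a learnable polynomial in the probability assigned to the true class; the coefficients play the role of the meta-parameters that a meta-learner such as ARL would search over. The thesis's exact parameterisation and search procedure may differ.

```python
import torch
import torch.nn as nn

class TaylorPolynomialLoss(nn.Module):
    """Illustrative robust loss: a low-order polynomial in the probability
    assigned to the ground-truth class. The coefficients are the
    meta-parameters a meta-learner (e.g. an evolutionary search, as ARL
    uses) would optimise; this is a sketch, not the thesis's exact form."""

    def __init__(self, order: int = 4):
        super().__init__()
        # One learnable coefficient per polynomial term (hypothetical init).
        self.coeffs = nn.Parameter(0.1 * torch.randn(order))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Probability the model assigns to the ground-truth class.
        p_true = torch.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
        # Polynomial in (1 - p_true): cross-entropy's Taylor expansion has this
        # form, so the search space contains both CE-like and flatter,
        # noise-robust losses.
        loss = torch.zeros_like(p_true)
        for k, a_k in enumerate(self.coeffs, start=1):
            loss = loss + a_k * (1.0 - p_true) ** k
        return loss.mean()
```

Because such a loss is a drop-in replacement for a standard criterion such as nn.CrossEntropyLoss in an ordinary ERM training loop, the "few lines of code change" mentioned in the abstract amounts, in this sketch, to swapping the criterion object.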
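The ITL hypergradient can be sketched in general terms: if the base-model parameters approximately minimise the training loss for loss meta-parameters phi, the Implicit Function Theorem gives the hypergradient as a mixed second derivative applied to an inverse-Hessian-vector product, with no unrolling of the training trajectory. The snippet below approximates that inverse-Hessian-vector product with a truncated Neumann series, one common choice; the thesis's actual solver and derivation may differ, and the function name and arguments here are assumptions for illustration.

```python
import torch

def ift_hypergradient(val_loss, train_loss, theta, phi,
                      neumann_steps=5, alpha=0.1):
    """Approximate d(val_loss)/d(phi) via the Implicit Function Theorem.

    theta: base-model parameters (assumed near a minimum of train_loss);
    phi:   loss-function meta-parameters that train_loss depends on.
    The inverse Hessian-vector product is approximated by a truncated
    Neumann series, H^{-1} v ~= alpha * sum_j (I - alpha*H)^j v.
    """
    # v = dL_val / dtheta
    v = torch.autograd.grad(val_loss, theta, retain_graph=True)
    # dL_train / dtheta, kept differentiable for second derivatives.
    grads = torch.autograd.grad(train_loss, theta, create_graph=True)
    cur = [vi.clone() for vi in v]
    acc = [vi.clone() for vi in v]
    for _ in range(neumann_steps):
        # Hessian-vector product: d/dtheta <dL_train/dtheta, cur>
        hvp = torch.autograd.grad(grads, theta, grad_outputs=cur, retain_graph=True)
        cur = [ci - alpha * hi for ci, hi in zip(cur, hvp)]
        acc = [ai + ci for ai, ci in zip(acc, cur)]
    inv_hvp = [alpha * ai for ai in acc]  # ~= H^{-1} v
    # Mixed second derivative w.r.t. phi, applied to inv_hvp, gives the
    # (negated) hypergradient.
    mixed = torch.autograd.grad(grads, phi, grad_outputs=inv_hvp)
    return [-m for m in mixed]
```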
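Finally, as a toy illustration of the mirror-descent framing behind MetaMD, the sketch below uses a quadratic mirror map psi(w) = 0.5 * w^T M w with a learnable positive-definite M, for which the Bregman divergence is a Mahalanobis distance and the mirror-descent update reduces to a preconditioned gradient step. The quadratic choice, the class name and the interface are assumptions made for illustration only; the thesis meta-learns the strongly convex function more generally and fits it by optimising a convergence-rate bound, which is not shown here.

```python
import torch

class QuadraticMirrorDescent:
    """Illustrative mirror-descent optimiser with a learnable quadratic
    mirror map psi(w) = 0.5 * w^T M w, M = L L^T + eps*I (strongly convex).
    For this psi, D(w, w') = 0.5 (w - w')^T M (w - w'), and the mirror
    step w <- grad_psi^{-1}(grad_psi(w) - lr * g) reduces to a
    preconditioned gradient step w <- w - lr * M^{-1} g."""

    def __init__(self, dim: int, lr: float = 0.1, eps: float = 1e-3):
        self.lr = lr
        self.eps = eps
        # Meta-parameters defining the mirror map: what a MetaMD-style
        # meta-learner would tune against a convergence-rate bound.
        self.L = torch.eye(dim, requires_grad=True)

    def M(self) -> torch.Tensor:
        # Positive-definite by construction.
        return self.L @ self.L.T + self.eps * torch.eye(self.L.shape[0])

    def step(self, w: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
        # grad_psi(w) = M w, so the dual update followed by the inverse
        # map yields a preconditioned step.
        return w - self.lr * torch.linalg.solve(self.M(), grad)
```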
URI
https://hdl.handle.net/1842/39821

http://dx.doi.org/10.7488/era/3069
Collections
  • Informatics thesis and dissertation collection

Related items

Showing items related by title, author, creator and subject.

  • Learning and generalization in radial basis function networks 

    Freeman, Jason Alexis Sebastian (The University of Edinburgh, 1998)
    The aim of supervised learning is to approximate an unknown target function by adjusting the parameters of a learning model in response to possibly noisy examples generated by the target function. The performance of the ...
  • Cognitive biases for sequential learning in language: A functional and evolutionary approach 

    Galante, Lara (The University of Edinburgh, 2012-11)
    The cognitive mechanisms involved in forming the structure of language are the subject of much discussion. The world's languages tend to show a strong pattern for various properties, aptly termed 'language universals'. The ...
  • The relationship between executive functions, creativity, and learning in young children 

    Comes, Aurelie (The University of Edinburgh, 2015)
