Learning and generalization in radial basis function networks
dc.contributor.author
Freeman, Jason Alexis Sebastian
en
dc.date.accessioned
2018-09-13T15:55:44Z
dc.date.available
2018-09-13T15:55:44Z
dc.date.issued
1998
dc.description.abstract
The aim of supervised learning is to approximate an unknown target function
by adjusting the parameters of a learning model in response to possibly noisy
examples generated by the target function. The performance of the learning model
at this task can be quantified by examining its generalization ability. Initially,
the concept of generalization is reviewed; various methods of measuring it, such as
generalization error, prediction error, PAC learning and the evidence, are discussed,
and the relations between them are examined. Some of these relations depend on
the architecture of the learning model.
en
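For reference, the generalization error used throughout is conventionally defined as the squared deviation between the trained model and the target, averaged over the input distribution. A minimal statement of that standard definition (the textbook form is assumed here, not quoted from the thesis):

```latex
% Generalization error: the squared deviation of the trained model f
% from the target function f^{*}, averaged over the input distribution.
\epsilon_g = \left\langle \left( f(\mathbf{x}) - f^{*}(\mathbf{x}) \right)^{2} \right\rangle_{\mathbf{x}}
```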
dc.description.abstract
Two architectures are prevalent in practical supervised learning: the multi-layer
perceptron (MLP) and the radial basis function network (RBF). While the RBF
has previously been examined from a worst-case perspective, this gives little insight
into the performance and phenomena that can be expected in the typical case.
This thesis focusses on the properties of learning and generalization that can be
expected on average in the RBF.
en
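For concreteness, an RBF network maps an input to a weighted sum of basis functions, each localized around a centre in input space. A minimal sketch of the forward computation, assuming Gaussian basis functions; the function and parameter names are illustrative, not taken from the thesis:

```python
import numpy as np

def rbf_forward(x, centres, widths, weights):
    """Output of an RBF network with Gaussian basis functions.

    x       : input vector, shape (d,)
    centres : basis-function centres, shape (B, d)
    widths  : per-basis widths, shape (B,)
    weights : output weights, shape (B,)
    """
    # Squared Euclidean distance from the input to each centre.
    sq_dist = np.sum((centres - x) ** 2, axis=1)
    # Gaussian activation of each basis function.
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))
    # The network output is linear in the basis activations.
    return weights @ phi
```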
dc.description.abstract
There are two methods in use for training the RBF. The basis functions can be
fixed in advance, utilising an unsupervised learning algorithm, or can adapt during
the training process. For the case in which the basis functions are fixed, the
typical generalization error given a data set of particular size is calculated by
employing the Bayesian framework. The effects of noisy data and regularization
are examined, the optimal settings of the parameters that control the learning
process are calculated, and the consequences of a mismatch between the learning
model and the data-generating mechanism are demonstrated.
en
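With the basis functions fixed, the network is linear in its output weights, so training those weights reduces to regularized linear least squares; under a Gaussian weight prior this coincides with the Bayesian maximum a posteriori solution. A minimal sketch of that step (the ridge form of the regularizer and all names are assumptions for illustration):

```python
import numpy as np

def fit_output_weights(Phi, y, reg=1e-3):
    """Regularized least-squares fit of the output weights.

    Phi : N x B design matrix of basis-function activations
    y   : N target values
    reg : regularization strength (plays the role of weight decay)
    """
    B = Phi.shape[1]
    # Solve the regularized normal equations:
    #   (Phi^T Phi + reg * I) w = Phi^T y
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(B), Phi.T @ y)
```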
dc.description.abstract
The second case, in which the basis functions are adapted, is studied utilising the
on-line learning paradigm. The average evolution of generalization error is calculated
in a manner that allows the phenomena of the learning process, such as the
specialization of the basis functions, to be elucidated. The three most important
stages of training (the symmetric phase, the symmetry-breaking phase and the
convergence phase) are analyzed in detail; the convergence-phase analysis allows
the derivation of maximal and optimal learning rates. Noise on both the inputs
and outputs of the data-generating mechanism is introduced, and the consequences
examined. Regularization via weight decay is also studied, as are the effects of the
learning model being poorly matched to the data generator.
en
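In the adaptive case, on-line learning presents examples one at a time and takes a gradient step on each, so the basis-function parameters and output weights evolve together. A minimal sketch of a single on-line update with weight decay on the output weights, again assuming Gaussian basis functions and illustrative names (width gradients omitted for brevity):

```python
import numpy as np

def online_step(x, y, centres, widths, weights, lr=0.05, decay=1e-4):
    """One on-line update on a single example (x, y): stochastic
    gradient descent on the squared error, adapting both the basis
    centres and the output weights."""
    diff = centres - x                                   # (B, d)
    phi = np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * widths ** 2))
    err = weights @ phi - y                              # residual on this example
    # Gradient w.r.t. output weights, with a weight-decay term.
    grad_w = err * phi + decay * weights
    # Gradient w.r.t. centres, via the chain rule through the Gaussian.
    grad_c = -err * (weights * phi / widths ** 2)[:, None] * diff
    return centres - lr * grad_c, weights - lr * grad_w
```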
dc.identifier.uri
http://hdl.handle.net/1842/32226
dc.publisher
The University of Edinburgh
en
dc.relation.ispartof
Annexe Thesis Digitisation Project 2018 Block 20
en
dc.title
Learning and generalization in radial basis function networks
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en
Files
Original bundle
- Name: FreemanJAS_1998redux.pdf
- Size: 19.13 MB
- Format: Adobe Portable Document Format