LWPR: A Scalable Method for Incremental Online Learning in High Dimensions
Neural Computation
Date: 06/2005
Authors: Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
Abstract
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. To remain computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can work successfully in high-dimensional spaces and compare various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it i) learns rapidly with second-order learning methods based on incremental training, ii) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, iii) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference during incremental learning, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 50-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental, spatially localized learning method that can operate successfully and efficiently in very high-dimensional spaces.
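To make the prediction mechanism described above concrete, here is a minimal sketch in Python, not the authors' reference implementation: each local model projects the input onto a single PLS-style direction, performs one univariate regression along that projection, and the global prediction blends the local outputs with Gaussian receptive-field weights. All class and function names, and the one-projection-per-model simplification, are illustrative assumptions; the incremental training updates, distance-metric adaptation, and predictive-variance computation mentioned in the abstract are omitted.

import numpy as np

# Minimal sketch of LWPR-style prediction (training omitted). Names and the
# single-projection simplification are illustrative assumptions, not the
# authors' reference implementation.

class LocalModel:
    def __init__(self, center, metric, u, beta, b0):
        self.center = center  # receptive-field center c_k
        self.metric = metric  # positive-definite distance metric D_k
        self.u = u            # projection direction in input space (PLS-style)
        self.beta = beta      # slope of the univariate regression along u
        self.b0 = b0          # local intercept

    def weight(self, x):
        # Gaussian receptive field: w_k = exp(-0.5 (x - c_k)^T D_k (x - c_k))
        d = x - self.center
        return np.exp(-0.5 * d @ self.metric @ d)

    def predict(self, x):
        # One univariate regression on the projected input, in the spirit of
        # partial least squares
        s = self.u @ (x - self.center)
        return self.b0 + self.beta * s

def lwpr_predict(models, x):
    # Global prediction: weight-normalized blend of all local linear models
    ws = np.array([m.weight(x) for m in models])
    ys = np.array([m.predict(x) for m in models])
    return (ws @ ys) / max(ws.sum(), 1e-12)  # guard against unsupported regions

# Usage with a single hand-set local model (values are arbitrary):
m = LocalModel(center=np.zeros(2), metric=np.eye(2),
               u=np.array([1.0, 0.0]), beta=0.5, b0=0.1)
print(lwpr_predict([m], np.array([0.2, -0.1])))

In the full algorithm, each local model would maintain several projection directions found incrementally, and each distance metric D_k would be adapted online via the stochastic leave-one-out cross-validation criterion noted in the abstract.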