Show simple item record

dc.contributor.author: Buchan, Alexander Fairley
dc.date.accessioned: 2018-03-29T12:21:29Z
dc.date.available: 2018-03-29T12:21:29Z
dc.date.issued: 1939
dc.identifier.uri: http://hdl.handle.net/1842/29466
dc.description.abstract: One of the many ways of obtaining from a set of observations a second, smoothed or graduated, set is to assume that the second set is a linear combination of the first. Thus if u denotes the column vector of n observed values, y that of the graduated values, and C the matrix performing the linear transformation, then y = Cu. This method was considered by W.F. Sheppard in the case where the observed data are equidistant, equally weighted and uncorrelated; the assumptions being that the sum of the squared coefficients in the transformation shall be a minimum, and that each y shall differ from a specified u by differences of u of order exceeding j, i.e. if the u's are already polynomial values of degree j, then the linear transformation leaves them unaltered. In this way each graduated value depends upon every observation, and not simply on those on either side as, for example, in the case of the centred finite summation formulae of Spencer or Woolhouse. Sheppard points out that the solution of this problem yields precisely the same final results as that of fitting a curve of degree j to the u's by the method of least squares. A.C. Aitken has shown more recently how this problem in its two aspects may be solved much more concisely by using the matrix calculus, and indeed he gives the solution for the case where the u's are not subject to the above restricted conditions but may be of arbitrary functional type. The transformations which he derives for the restricted and general cases are
dc.description.abstract: y = P(P′P)⁻¹P′u (no correlation and equal weights)
dc.description.abstract: and y = P(P′V⁻¹P)⁻¹P′V⁻¹u respectively,
dc.description.abstract: where P is a matrix of prescribed functional values in terms of which the y's are expressed, and V is the symmetric variance matrix associated with the data u, V = [vᵢⱼ] = [ρᵢⱼσᵢσⱼ].
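The two transformations above can be sketched numerically. The following is a minimal illustration, not the thesis's own computation: it builds the graduation matrix C = P(P′V⁻¹P)⁻¹P′V⁻¹ (reducing to P(P′P)⁻¹P′ for equal weights and no correlation) and checks the stated invariance property, that values already of polynomial degree j are left unaltered. The function and variable names are assumptions for illustration.

```python
import numpy as np

def graduation_matrix(P, V=None):
    """Return C such that y = C @ u is the least-squares graduation.

    P : matrix of prescribed functional values (columns of the basis).
    V : symmetric variance matrix of the observations; None means
        equal weights and no correlation, giving C = P (P'P)^-1 P'.
    """
    if V is None:
        return P @ np.linalg.solve(P.T @ P, P.T)
    Vi = np.linalg.inv(V)
    return P @ np.linalg.solve(P.T @ Vi @ P, P.T @ Vi)

n, j = 12, 2
x = np.arange(n, dtype=float)
P = np.vander(x, j + 1, increasing=True)   # polynomial basis 1, x, x^2

C = graduation_matrix(P)
u_poly = 3.0 - 2.0 * x + 0.5 * x**2        # already polynomial of degree j
assert np.allclose(C @ u_poly, u_poly)     # left unaltered, as claimed

rng = np.random.default_rng(0)
u = u_poly + rng.normal(scale=0.1, size=n) # noisy observations
y = C @ u                                  # graduated values
```

Note that C is a projection onto the column space of P, so applying it twice changes nothing: graduating already-graduated values is a no-op, which matches the least-squares-fit interpretation Sheppard gives.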
dc.description.abstract: In Chapter I of this thesis the problem of graduation by linear combination is again considered, but with different minimal conditions. Firstly, what linear combination y = Cu is such that the set of kth differences Δᵏy = Δᵏ(Cu) has minimum sum of squared residuals; and secondly, what linear combination CΔᵏu of the kth differences Δᵏu of the observed values produces a set of smoothed kth differences with minimum sum of squared residuals. Examples are given using both factorial polynomials and the orthogonal polynomials of Tchebychef. It is also shown that this problem leads to the same solution as that obtained by using Sheppard's original assumptions.
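The kth differences used in these minimal conditions are themselves a linear transformation of the data, so Δᵏ can be written as a matrix. The sketch below, an assumed construction rather than the thesis's notation, builds that matrix and checks it against repeated forward differencing.

```python
import numpy as np

def diff_matrix(n, k):
    """Return the (n-k) x n matrix D such that D @ u gives the
    kth forward differences of the vector u."""
    D = np.eye(n)
    for _ in range(k):
        D = D[1:] - D[:-1]     # apply one forward difference
    return D

n, k = 8, 2
u = np.arange(n, dtype=float) ** 2     # u_i = i^2
D2 = diff_matrix(n, k)
assert np.allclose(D2 @ u, np.diff(u, k))  # matches repeated differencing
assert np.allclose(D2 @ u, 2.0)            # second difference of i^2 is constant 2
```

With Δᵏ in matrix form, the first minimal condition amounts to minimising the squared residuals of D @ C @ u rather than of C @ u itself, which is why it can be handled by the same matrix calculus as the original problem.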
dc.description.abstract: In Chapter II the linear combination C of observed data is considered where the y's are expressed in terms of the harmonic functions. The properties of the transforming matrix are established, and the Fourier coefficients are given in matrix form. The question of estimating errors from residuals, which is of prime importance in the examination of any physical phenomena associated with harmonic analysis, is also considered.
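The harmonic case replaces the polynomial basis by columns of cosines and sines, after which the same least-squares machinery yields the Fourier coefficients in matrix form. A minimal sketch under assumed conventions (equally spaced points over one period; names are illustrative, not the thesis's):

```python
import numpy as np

def harmonic_design(m, k):
    """Design matrix with columns 1, cos(jt), sin(jt) for j = 1..k,
    at m equally spaced points t over [0, 2*pi)."""
    t = 2.0 * np.pi * np.arange(m) / m
    cols = [np.ones(m)]
    for j in range(1, k + 1):
        cols.append(np.cos(j * t))
        cols.append(np.sin(j * t))
    return np.column_stack(cols)

m, k = 12, 3
P = harmonic_design(m, k)
rng = np.random.default_rng(1)
u = np.sin(2.0 * np.pi * np.arange(m) / m) + 0.05 * rng.normal(size=m)

coef = np.linalg.lstsq(P, u, rcond=None)[0]   # Fourier coefficients, matrix form
C = P @ np.linalg.solve(P.T @ P, P.T)         # the transforming matrix C
y = C @ u                                     # graduated values
assert np.allclose(y, P @ coef)               # same least-squares fit
```

The residual u − y is what the error-estimation question concerns: its sum of squares measures how much of the data the k harmonics fail to account for.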
dc.description.abstract: In the appendix, tables of C are given for values of 2n, the number of data, equal to …, 6, 10, 12, 16 and 24, with k = 1, 2, …, n, the number of harmonics in the series. A bibliography of works consulted is also given.
dc.publisher: The University of Edinburgh
dc.relation.ispartof: Annexe Thesis Digitisation Project 2018 Block 17
dc.relation.isreferencedby:
dc.title: Linear combination of data with least error of differences
dc.type: Thesis or Dissertation
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD Doctor of Philosophy

