dc.contributor.author Buchan, Alexander Fairley en
dc.date.accessioned 2018-03-29T12:21:29Z
dc.date.available 2018-03-29T12:21:29Z
dc.date.issued 1939
dc.identifier.uri http://hdl.handle.net/1842/29466
dc.description.abstract One of the many ways of obtaining from a set of observations a second, smoothed or graduated, set is to assume that the second set is a linear combination of the first. Thus if u denotes the column vector of n observed values, y that of the graduated values, and C the matrix performing the linear transformation, then y = Cu. This method was considered by W.F. Sheppard in the case where the observed data are equidistant, equally weighted and uncorrelated; the assumptions being that the sum of the squared coefficients in the transformation shall be a minimum, and that each y shall differ from the corresponding u by differences of u of order exceeding j, i.e. if the u's are already polynomial values of degree j, then the linear transformation leaves them unaltered. In this way each graduated value depends upon every observation, and not simply on those on either side as, for example, in the case of the centred finite summation formulae of Spencer or Woolhouse. Sheppard points out that the solution of this problem yields precisely the same final results as that of fitting a curve of degree j to the u's by the method of least squares. A.C. Aitken has shown more recently how this problem in its two aspects may be solved much more concisely by using the matrix calculus, and indeed he gives the solution for the case where the u's are not subject to the above restricted conditions but may be of arbitrary functional type. en
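The equivalence Sheppard points out — graduation by an optimal linear combination versus least-squares fitting of a degree-j curve — can be illustrated with a short NumPy sketch. This is an illustration added for this record, not material from the thesis; the variable names and the projection form C = P(P'P)⁻¹P' are the standard least-squares construction.

```python
import numpy as np

# Illustrative sketch: graduation as a linear transformation y = C u,
# where C = P (P'P)^{-1} P' projects onto the space spanned by the
# fitting polynomials (columns of the design matrix P).
n, j = 11, 2                            # 11 equidistant observations, degree-2 fit
x = np.arange(n, dtype=float)
P = np.vander(x, j + 1)                 # n x (j+1) polynomial design matrix
C = P @ np.linalg.inv(P.T @ P) @ P.T    # graduation (projection) matrix

# If the u's are already polynomial values of degree j, the
# transformation leaves them unaltered, exactly as the abstract states.
u = 0.5 * x**2 - 3.0 * x + 7.0
y = C @ u
assert np.allclose(y, u)
```

Note that C is symmetric and idempotent (C² = C), which is why applying the graduation a second time changes nothing, and why each graduated value is a weighted combination of every observation rather than only its neighbours.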
dc.description.abstract The transformations which he derives for the restricted and general cases are y = P(P'P)⁻¹P'u (no correlation and equal weights) and y = P(P'V⁻¹P)⁻¹P'V⁻¹u respectively, where P is a matrix of prescribed functional values in terms of which the y's are expressed, and V = [v_ij] = [ρ_ij σ_i σ_j] is the symmetric variance matrix associated with the data u. en
dc.description.abstract In Chapter I of this thesis the problem of graduation by linear combination is again considered, but with different minimal conditions: firstly, what linear combination y = Cu is such that the set of kth differences Δᵏy has minimum sum of squared residuals; and secondly, what linear combination CₖΔᵏu of the kth differences Δᵏu of the observed values produces a set of smoothed kth differences with minimum sum of squared residuals. Examples are given using both factorial polynomials and the orthogonal polynomials of Tchebychef. It is also shown that this problem leads to the same solution as that obtained by using Sheppard's original assumptions. en
dc.description.abstract In Chapter II the linear combination C of observed data is considered where the y's are expressed in terms of the harmonic functions. The properties of the transforming matrix are established, and the Fourier coefficients are given in matrix form. The question of estimating errors from residuals, which is of prime importance in the examination of any physical phenomena associated with harmonic analysis, is also considered. en
dc.description.abstract In the appendix, tables of C are given for values of 2n, the number of data, equal to 6, 10, 12, 16 and 24, with k = 1, 2, …, n, the number of harmonics in the series. A bibliography of works consulted is also given. en
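The Chapter II setting — the y's expressed in terms of harmonic functions, with Fourier coefficients obtained in matrix form — can be sketched the same way. The sketch below is an assumption of this edit, not the thesis's own computation: it takes 2n equidistant points, builds a design matrix P of cosines and sines, and recovers the coefficients by least squares; the discrete orthogonality of the harmonics makes P'P diagonal, so the matrix solution reduces to the familiar Fourier sums.

```python
import numpy as np

# Hedged sketch of harmonic graduation: 2n = 12 equidistant data points,
# harmonics 1..k plus the mean term. Names (m, k, coef) are illustrative.
m = 12
t = 2 * np.pi * np.arange(m) / m
k = 3
cols = [np.ones(m)]
for h in range(1, k + 1):
    cols += [np.cos(h * t), np.sin(h * t)]
P = np.column_stack(cols)               # m x (2k+1) harmonic design matrix

u = 2.0 + np.cos(t) - 0.5 * np.sin(2 * t)       # synthetic observations
coef, *_ = np.linalg.lstsq(P, u, rcond=None)    # Fourier coefficients in matrix form
C = P @ np.linalg.inv(P.T @ P) @ P.T            # smoothing matrix, y = C u
y = C @ u

# Discrete orthogonality: P'P is diagonal, so each coefficient is just
# the usual discrete Fourier sum divided by n (or 2n for the mean term).
assert np.allclose(P.T @ P, np.diag(np.diag(P.T @ P)))
```

Here u lies entirely in the span of the retained harmonics, so C reproduces it exactly; residuals appear only when the data contain harmonics beyond k, which is what makes the residual-based error estimates discussed in Chapter II meaningful.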
dc.publisher The University of Edinburgh en
dc.relation.ispartof Annexe Thesis Digitisation Project 2018 Block 17 en
dc.relation.isreferencedby en
dc.title Linear combination of data with least error of differences en
dc.type Thesis or Dissertation en
dc.type.qualificationlevel Doctoral en
dc.type.qualificationname PhD Doctor of Philosophy en