The \chi^2 function

For a linear fit (a first-order polynomial) we do not need to invert a matrix. Minimizing

\chi^2(\beta_0, \beta_1) = \sum_{i=0}^{n-1}\frac{\left(y_i - \beta_0 - \beta_1 x_i\right)^2}{\sigma_i^2}

with respect to \beta_0 and \beta_1 gives two normal equations that can be solved in closed form. Defining

\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},\quad \gamma_x = \sum_{i=0}^{n-1}\frac{x_i}{\sigma_i^2},\quad \gamma_y = \sum_{i=0}^{n-1}\frac{y_i}{\sigma_i^2},\quad \gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_i^2}{\sigma_i^2},\quad \gamma_{xy} = \sum_{i=0}^{n-1}\frac{x_iy_i}{\sigma_i^2},

we obtain

\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},\quad \beta_1 = \frac{\gamma\gamma_{xy}-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
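
As a quick check of these formulas, here is a minimal Python sketch (not from the original notes; the function name weighted_linear_fit and the synthetic test data are illustrative) that accumulates the \gamma sums and evaluates \beta_0 and \beta_1 directly:

```python
# Minimal sketch: closed-form weighted linear fit y = beta0 + beta1*x
# using the gamma sums defined above. Names and test data are illustrative.
import numpy as np

def weighted_linear_fit(x, y, sigma):
    """Return (beta0, beta1) minimizing sum((y - beta0 - beta1*x)^2 / sigma^2)."""
    w = 1.0 / sigma**2
    g   = np.sum(w)          # gamma
    gx  = np.sum(w * x)      # gamma_x
    gy  = np.sum(w * y)      # gamma_y
    gxx = np.sum(w * x**2)   # gamma_xx
    gxy = np.sum(w * x * y)  # gamma_xy
    det = g * gxx - gx**2    # must be nonzero for a unique solution
    beta0 = (gxx * gy - gx * gxy) / det
    beta1 = (g * gxy - gx * gy) / det
    return beta0, beta1

# Synthetic test: data drawn around y = 1 + 2x with equal uncertainties.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1 * np.ones_like(x)
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)
print(weighted_linear_fit(x, y, sigma))   # should be close to (1, 2)
```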

This approach to linear (and non-linear) regression often suffers from the system of equations being either underdetermined or overdetermined in the unknown coefficients \beta_i. A better approach, sketched below, is to use the Singular Value Decomposition (SVD) method discussed next week.
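
As a preview of that approach, here is a short sketch (an assumption on my part, not the notes' implementation) that solves the same weighted problem through NumPy's SVD-based least-squares solver, after scaling each equation by 1/\sigma_i:

```python
# Preview sketch: the same weighted fit via an SVD-based solver.
import numpy as np

def svd_linear_fit(x, y, sigma):
    """Fit y = beta0 + beta1*x by weighting rows with 1/sigma_i and calling lstsq."""
    X = np.column_stack([np.ones_like(x), x])  # design matrix for a first-order polynomial
    Xw = X / sigma[:, None]                    # weight each equation by 1/sigma_i
    yw = y / sigma
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)  # lstsq uses the SVD internally
    return beta  # array [beta0, beta1]
```

Because the SVD handles (nearly) singular design matrices gracefully, this route remains stable even when the normal-equation determinant \gamma\gamma_{xx}-\gamma_x^2 is close to zero.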