This post is part 1 of a 3 part series: Part I, Part II, Part III.

Imagine that we have a bunch of points that we want to fit a line to:

In the plot above, $y_i = a\cdot x_i + b + n_i$ where $n_i$ is random noise. We know every point $(x_i, y_i)$ and would like to estimate $a$ and $b$ in the presence of noise.

What method can we use to find an estimation for $a$ and $b$? What constraints does this method have and when does it fail? This is a classic problem in signal processing – we are given some noisy observations and would like to determine how the data was generated.

## Least squares

Once we express this problem in the language of matrix multiplication, the solution becomes more straightforward: we can abstract some of the details away and focus on the larger concepts. In doing this I will denote scalars with plain math font, vectors with lower-case bold math font, and matrices with upper-case bold math font.

To represent this with matrix multiplication, we’ll stack our observations $y_i$ into a vector $\yb$. When we define $\Ab$ and $\wb$ (our data and weights respectively) in certain ways, we can say that $\yb = \Ab\cdot\wb + \nb$, where $\nb$ is a vector of noise. In particular, we’ll define $\Ab$ so that each row is $[x_i,~1]$ and our weights $\wb$ as $[a,~b]^T$.

$$\Align{ \yb &= \Ab\cdot\Matrix{a\\b} + \nb\\ &= \Ab\cdot\wb + \nb }$$
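As a concrete check, here's a minimal NumPy sketch of this setup. The actual data behind the plot isn't given, so the true weights ($a = 1$, $b = 0$, matching the example later in the post) and the Gaussian noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true weights (matching the [1, 0] example later in the post)
a_true, b_true = 1.0, 0.0

x = np.linspace(0, 1, 50)
n = 0.1 * rng.standard_normal(x.size)       # noise vector n (assumed level)
y = a_true * x + b_true + n                 # observations y_i = a*x_i + b + n_i

# Each row of A is [x_i, 1]; the weights are w = [a, b]^T
A = np.column_stack([x, np.ones_like(x)])
w = np.array([a_true, b_true])

# By construction, y = A @ w + n holds exactly
```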

We are collecting more data than we have unknowns and fitting a line, which means that this problem is overdetermined: our only unknowns are $a$ and $b$, but we have many measurements.

Our approximation of $\wb$ will be called $\est{\wb}$. The key to finding this approximation is deciding on an objective. Should we minimize the error? Should large errors be penalized more than small errors?

We want to minimize the error, or the distance between our observations and the values that our approximation would predict. That is, we want to minimize the distance between $\yb$ and $\Ab \cdot \est{\wb}$ and penalize large errors more. We can represent this as a concise mathematical equation:

$$\est{\wb} = \arg \min_\wb \norm{\yb - \Ab\cdot\wb}_2^2$$

This says that our estimate $\est{\wb}$ will be the argument that minimizes the squared-error term. After writing this equation down and knowing what to Google, we can find a closed-form solution with the help of the Wikipedia page for least squares!

$$\est{\wb} = \left( \Ab^T \Ab\right)^{-1} \Ab^T \yb$$
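As a quick sanity check, the closed form can be evaluated with NumPy. The data here is simulated with assumed true weights $[1,~0]^T$, since the original points aren't available:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = x + 0.1 * rng.standard_normal(x.size)   # simulated noisy line, a=1, b=0
A = np.column_stack([x, np.ones_like(x)])

# Normal equations: (A^T A) w_hat = A^T y
w_hat = np.linalg.solve(A.T @ A, A.T @ y)

# np.linalg.lstsq minimizes the same objective, but more stably
w_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In practice `np.linalg.lstsq` (or a QR/SVD factorization) is preferred over explicitly forming $\Ab^T\Ab$, which squares the condition number of the problem.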

With this equation we find that $\est{\wb} = [\widehat{a},~\widehat{b}]^T \approx [0.89,~0.03]$, which is not far from the true solution $\wb = [1,~0]^T$.

I won’t cover how to solve this equation, but know that matrix inversion is often expensive in both time and memory. Instead, we often rely on methods like gradient descent to find a solution.
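As a sketch of that alternative, the gradient of $\norm{\yb - \Ab\wb}_2^2$ with respect to $\wb$ is $-2\Ab^T(\yb - \Ab\wb)$, so gradient descent repeatedly steps against it. The step size and iteration count below are arbitrary choices of mine, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = x + 0.1 * rng.standard_normal(x.size)   # simulated noisy line, a=1, b=0
A = np.column_stack([x, np.ones_like(x)])

# Minimize (1/n) * ||y - A w||_2^2 by gradient descent.
# Its gradient with respect to w is (-2/n) * A^T (y - A w).
w = np.zeros(2)
step = 0.1
for _ in range(5000):
    grad = -2 * A.T @ (y - A @ w) / len(y)
    w -= step * grad

# For comparison: the closed-form (normal equation) solution
w_closed = np.linalg.solve(A.T @ A, A.T @ y)
```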

This method works and is (and should be!) widely used in similar use cases, but it’s also old. What other solutions are out there, and where does it fail?

## Noisy blurry signals

Let’s say we are given noisy observations of a blurry signal. How can we reconstruct the values of the original signal from our observations? Similar situations arise all the time in the real world: we frequently see input data that has been averaged together.

Let’s say that we make $n$ observations of a signal $\xb$ and put the observations into a vector $\yb$. We know that the averaging of $k$ values can be represented by a matrix $\Hb_k$ because blurring is a linear transformation. When we represent noise with standard deviation $\sigma$ as $\nb_\sigma$,

$$\yb = \Hb_k \cdot \xb + \nb_\sigma$$
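For concreteness, here's one way to build such an $\Hb_k$ in NumPy. The boundary handling is an assumption on my part: circular (wrap-around) averaging keeps $\Hb_k$ square, and invertible when $k$ and $n$ share no common factor. The sine signal is likewise a hypothetical stand-in:

```python
import numpy as np

def averaging_matrix(n, k):
    """H_k: each output sample is the mean of k consecutive input samples.

    Circular (wrap-around) boundaries are an assumption here; they keep
    H_k square, and invertible when k and n are coprime.
    """
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(k):
            H[i, (i + j) % n] = 1.0 / k
    return H

rng = np.random.default_rng(0)
n, k, sigma = 100, 3, 0.05
x = np.sin(2 * np.pi * np.arange(n) / n)    # a hypothetical smooth signal
H = averaging_matrix(n, k)
y = H @ x + sigma * rng.standard_normal(n)  # y = H_k x + n_sigma
```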

The naive approach would be to apply the inverse transform, the inverse filter, which is possible because $\Hb_k$ is a square matrix. It would seem natural to simply reverse the operation, forming our estimate $\est{\xb}$ by

$$\est{\xb} = \Hb_k^{-1}\cdot\yb = \xb + \Hb_k^{-1}\cdot\nb_\sigma$$

But the noise ruins this approximation, and similar approximations (such as the least squares approximation). In general, $\Hb_k^{-1}$ adds or subtracts many noisy values. Even worse, it is common for $\Hb_k^{-1}$ to be ill-conditioned and include large values. This means that our estimate $\est{\xb}$ quickly accumulates error, as (typically) many noisy values are multiplied by large values and then summed.
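We can watch this happen numerically. The sketch below uses a circulant moving-average matrix and a sine signal, both assumptions of mine (the post doesn't specify the blur's boundary handling or the signal). A wider blur makes $\Hb_k$ poorly conditioned, so even small noise is amplified by the inverse filter:

```python
import numpy as np

def averaging_matrix(n, k):
    # Circular moving-average matrix (an assumed construction)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(k):
            H[i, (i + j) % n] = 1.0 / k
    return H

rng = np.random.default_rng(0)
n, k, sigma = 100, 13, 0.01
x = np.sin(2 * np.pi * np.arange(n) / n)    # hypothetical original signal
H = averaging_matrix(n, k)

y_clean = H @ x
noise = sigma * rng.standard_normal(n)
y_noisy = y_clean + noise

x_hat_clean = np.linalg.solve(H, y_clean)   # recovers x almost exactly
x_hat_noisy = np.linalg.solve(H, y_noisy)   # H^{-1} amplifies the noise

err_clean = np.linalg.norm(x_hat_clean - x)
err_noisy = np.linalg.norm(x_hat_noisy - x)
```

For this circulant construction, every eigenvalue of the averaging matrix has magnitude at most 1, so $\Hb_k^{-1}$ expands every noise component: the reconstruction error is never smaller than the noise itself, and is usually far larger.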

Of course, in the noise-free case this method works perfectly. But in noisy cases, the estimate does not perform well… and I’ve made an interactive widget to help explore that fact!