# Applying eigenvalues to the Fibonacci problem

The Fibonacci problem is a well-known mathematical problem that models population growth and was conceived in the 1200s. Leonardo of Pisa, aka Fibonacci, described it with the recursive equation $x_{n} = x_{n-1} + x_{n-2}$ and the seed values $x_0 = 0$ and $x_1 = 1$. Implementing this recursive function is straightforward:
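The code block that belongs here appears to be missing; a minimal Python sketch of the naive recursion (the function name `fib` is my choice) might look like:

```python
def fib(n):
    # Seed values: x_0 = 0 and x_1 = 1
    if n < 2:
        return n
    # Each call spawns two more calls, so the call tree grows exponentially
    return fib(n - 1) + fib(n - 2)
```

For example, `fib(10)` returns 55.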

Since the Fibonacci sequence was conceived to model population growth, it seems
there should be a simple equation that grows roughly exponentially. Moreover,
the recursive calls are expensive in both time and memory.^{1} The cost of this
implementation hardly seems worthwhile. To see the surprising formula that we
end up with, we need to restate the Fibonacci problem in the language of
matrices.^{2}
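The display equation seems to have been lost here; this reconstruction assumes the standard Fibonacci matrix, which pairs consecutive terms so that one step of the recursion becomes a matrix-vector product:

$$
\begin{bmatrix} x_n \\ x_{n-1} \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x_{n-1} \\ x_{n-2} \end{bmatrix}
$$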

Giving each of those matrices and vectors a variable name, and recognizing the fact that $\mathbf{x}_{n-1}$ follows the same formula as $\mathbf{x}_n$, allows us to write
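The equation here appears to be missing; given the definitions above, it was presumably the repeated application of $\mathbf{A}$:

$$
\mathbf{x}_n = \mathbf{A}\mathbf{x}_{n-1} = \mathbf{A}\left(\mathbf{A}\mathbf{x}_{n-2}\right) = \cdots = \mathbf{A}^n\mathbf{x}_0
$$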

where we have used $\mathbf{A}^n$ to mean $n$ matrix multiplications. The corresponding implementation looks something like this:
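The implementation is also missing; a pure-Python sketch (the names `mat_mult` and `fib_matrix` are my own) that repeatedly multiplies by the Fibonacci matrix could read:

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

def fib_matrix(n):
    # A is the Fibonacci matrix; A^n x_0 with x_0 = [1; 0] contains x_n
    A = [[1, 1], [1, 0]]
    result = [[1, 0], [0, 1]]  # 2x2 identity
    for _ in range(n):  # n matrix multiplications -- still O(n) work
        result = mat_mult(result, A)
    return result[0][1]  # the second component of A^n x_0, i.e. x_n
```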

While this isn’t recursive, there are still $n-1$ unnecessary matrix multiplications. These are expensive time-wise, and it seems like there should be a simple formula involving only $n$. Since populations grow exponentially, we would expect this formula to involve scalars raised to the $n$th power. A simple equation like this could be evaluated many times faster than the recursive implementation!

The trick to doing this rests on the mysterious and intimidating eigenvalues and eigenvectors. These are just a nice way to view the same data, but they have a lot of mystery behind them. Most simply, for a matrix $\mathbf{A}$ they obey the equation
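The eigenvalue equation itself seems to have dropped out; it is presumably the usual

$$
\mathbf{A}\mathbf{x} = \lambda\mathbf{x}
$$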

for different eigenvalues $\lambda$ and eigenvectors $\mathbf{x}$. Through the way matrix multiplication is defined, we can represent all of these cases at once. This rests on the fact that right-multiplying by the diagonal matrix $\mathbf{\Lambda}$ scales each column $\mathbf{x}_i$ by $\lambda_i$. The column-wise definition of matrix multiplication makes it clear that this represents every case where the equation above occurs.
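A reconstruction of the missing display, assuming the standard column-stacking argument for the two eigenpairs $(\lambda_i, \mathbf{x}_i)$:

$$
\mathbf{A}\begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 \end{bmatrix}
= \begin{bmatrix} \lambda_1\mathbf{x}_1 & \lambda_2\mathbf{x}_2 \end{bmatrix}
= \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
$$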

Or, compacting the vectors $\mathbf{x}_i$ into a matrix called $\mathbf{X}$ and the $\lambda_i$’s into the diagonal matrix $\mathbf{\Lambda}$, we find that
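The compact form that presumably appeared here is

$$
\mathbf{A}\mathbf{X} = \mathbf{X}\mathbf{\Lambda}
$$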

Because the Fibonacci matrix is diagonalizable
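The missing display presumably solved for $\mathbf{A}$ and raised it to the $n$th power:

$$
\mathbf{A} = \mathbf{X}\mathbf{\Lambda}\mathbf{X}^{-1}
\quad\Longrightarrow\quad
\mathbf{A}^n = \left(\mathbf{X}\mathbf{\Lambda}\mathbf{X}^{-1}\right)\left(\mathbf{X}\mathbf{\Lambda}\mathbf{X}^{-1}\right)\cdots\left(\mathbf{X}\mathbf{\Lambda}\mathbf{X}^{-1}\right)
$$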

And then, because a matrix and its inverse cancel,
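The result of the cancellation, reconstructed from the surrounding derivation:

$$
\mathbf{A}^n = \mathbf{X}\mathbf{\Lambda}^n\mathbf{X}^{-1}
$$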

$\mathbf{\Lambda}^n$ is a simple computation because $\mathbf{\Lambda}$ is a diagonal matrix: each diagonal element is just raised to the $n$th power. That means the expensive matrix multiplication only happens twice now. This is a powerful speed boost, and we can calculate the result by substituting for $\mathbf{A}^n$:
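The substitution that presumably appeared here is

$$
\mathbf{x}_n = \mathbf{A}^n\mathbf{x}_0 = \mathbf{X}\mathbf{\Lambda}^n\mathbf{X}^{-1}\mathbf{x}_0
$$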

For this Fibonacci matrix, we find that $\mathbf{\Lambda} = \textrm{diag}\left(\frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}\right)= \textrm{diag}\left(\lambda_1, \lambda_2\right)$. We could define our Fibonacci function to carry out this matrix multiplication, but these matrices are simple: $\mathbf{\Lambda}$ is diagonal and $\mathbf{x}_0 = \left[1; 0\right]$. So, carrying out this fairly simple computation gives
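The closed form that presumably appeared here is Binet’s formula, written in terms of the $\lambda_1$ and $\lambda_2$ above:

$$
x_n = \frac{\lambda_1^n - \lambda_2^n}{\sqrt{5}}
= \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right)
$$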

We would not expect this equation to give an integer. It involves powers of two irrational numbers, a division by another irrational number, and even the golden ratio $\phi \approx 1.618$! However, it gives exactly the Fibonacci numbers; you can check for yourself!

This means we can define our function rather simply:
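The final code block is missing as well; a minimal Python sketch of the closed form (the name `fib_closed_form` is mine, and the `round` guards against floating-point error) could be:

```python
import math

def fib_closed_form(n):
    # Binet's formula: x_n = (lambda_1^n - lambda_2^n) / sqrt(5)
    sqrt5 = math.sqrt(5)
    lam1 = (1 + sqrt5) / 2  # the golden ratio, phi
    lam2 = (1 - sqrt5) / 2
    # Floating-point arithmetic gives only approximately an integer, so round
    return round((lam1**n - lam2**n) / sqrt5)
```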

As one would expect, this implementation is *fast*. We see speedups of roughly
$1000\times$ for $n=25$: milliseconds versus microseconds. This is fairly typical
when mathematics is applied to a seemingly straightforward problem. Making the
implementation slightly more cryptic often pays large dividends!

I’ve found that mathematics^{3} becomes fascinating, especially in higher
level college courses, and can often yield surprising results. I mean, look at
this blog post: we went from an expensive recursive equation to a simple and
fast equation that only involves scalars. This derivation is one I enjoy, and I
especially enjoy the simplicity of the final result. This is part of the reason
why I’m going to grad school for highly mathematical signal processing. Real
world benefits $+$ neat theory $=$ <3.