  • Finding sparse solutions to linear systems


    This post is part 2 of a 3-part series: Part I, Part II, Part III

    We often have fewer measurements than unknowns, a situation that arises all the time in genomics and medical imaging. For example, we might collect 8,000 gene measurements from 300 patients and want to determine which genes matter most in cancer.

    This means that we typically have an underdetermined system, because we’re collecting fewer measurements than unknowns. This is an unfavorable situation – there are infinitely many solutions to this problem. However, in the case of breast cancer, biological intuition might tell us that most of the 8,000 genes aren’t important and have zero influence on cancer expression.

    How do we enforce that most of the variables are 0? This post will try to give intuition for the problem formulation and dig into the algorithm that solves the posed problem. I’ll use a real-world cancer dataset1 to predict which genes are important for cancer expression. It should be noted that we’re more concerned with the type of solution we obtain than with how well it performs.

    1. This data set is detailed in the section titled Predicting Breast Cancer
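    This kind of problem is commonly posed as ℓ1-regularized least squares. As a rough illustration of how a sparse solution can emerge from an underdetermined system – not necessarily the formulation or algorithm the full post uses – here is a minimal iterative soft-thresholding (ISTA) sketch in NumPy on a toy problem of my own:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Underdetermined toy system: 30 measurements, 100 unknowns,
    # but only 5 of the unknowns are actually nonzero.
    n_meas, n_vars, n_nonzero = 30, 100, 5
    A = rng.standard_normal((n_meas, n_vars))
    x_true = np.zeros(n_vars)
    x_true[rng.choice(n_vars, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
    b = A @ x_true

    def ista(A, b, lam=1.0, n_iter=500):
        """Iterative soft-thresholding for min ||Ax - b||^2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            z = x - grad / L                     # gradient step on the least-squares term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold
        return x

    x_hat = ista(A, b)
    # The l1 penalty drives most entries to (or very near) zero
    print(np.count_nonzero(np.abs(x_hat) > 1e-3), "of", n_vars, "entries are far from zero")
    ```

    The soft-thresholding step is what produces exact zeros: a plain gradient step would leave every coordinate slightly nonzero.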

    Read on →

  • Stepping from Matlab to Python


    It’s not a big leap; it’s one small step. There’s only a little to pick up, and there’s not a huge difference in use or functionality. The differences are small enough that you can switch and just google any conversion issues you hit: you’ll have no trouble finding the appropriate functions and syntax.

    There is a wrapper package in Python with the aim of providing a Matlab-like interface that is well suited for numerical linear algebra. This package is called pylab and wraps NumPy, SciPy and matplotlib. When I use pylab, this is how similar my Python and Matlab code is:

    # PYTHON                        | % MATLAB
    from pylab import *             | clc; clear all; close all
    # matrix multiplication         | % matrix multiplication
    A = rand(3, 3)                  | A = rand(3, 3);
    A[0:2, 1] = 4                   | A(1:2, 2) = 4;
    I = A @ inv(A)                  | I = A * inv(A);
    I = A.dot(inv(A))               |
    # vector manipulations          | % vector manipulations
    t = linspace(0, 4, num=1000)    | t = linspace(0, 4, 1e3);
    y1 = cos(t/2) * exp(-t)         | y1 = cos(t/2) .* exp(-t);
    y2 = cos(t/2) * exp(-5*t)       | y2 = cos(t/2) .* exp(-5*t);
    # plotting                      | % plotting
    figure()                        | figure; hold on
    plot(t, y1, label='Slow decay') | plot(t, y1)
    plot(t, y2, label='Fast decay') | plot(t, y2)
    legend(loc='best')              | legend('Slow decay', 'Fast decay')
    show()                          |

    Python even has a matrix multiplication operator! Python 3.5 introduced the @ operator, detailed in PEP 465. Python is remarkably well suited for developing numerical algorithms – what else does Python offer?
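    One pitfall worth flagging from the table above: NumPy’s `*` is elementwise (Matlab’s `.*`), while `@` and `.dot` are matrix multiplication (Matlab’s `*`). A tiny sketch, using plain NumPy imports rather than pylab’s star-import:

    ```python
    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])

    elementwise = A * B    # like Matlab's .* (elementwise product)
    matmul = A @ B         # like Matlab's *; identical to A.dot(B)

    print(elementwise)     # [[0. 2.] [3. 0.]]
    print(matmul)          # [[2. 1.] [4. 3.]]
    ```

    Coming from Matlab, forgetting this is the single most common source of silently wrong results, since both expressions run without error.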

    Read on →

  • Computer color is only kinda broken


    When we blur red and green, we get this:

    Why? We would not expect this brownish color.
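    The full post explains why; a common culprit in this kind of result is that blurs average gamma-encoded sRGB values instead of converting to linear light first. A rough numeric sketch under that assumption, using the common gamma-2.2 approximation of the true sRGB transfer curve:

    ```python
    # Averaging gamma-encoded sRGB values vs. averaging in linear light.
    # The 2.2 exponent approximates the real sRGB curve (which has a
    # small linear segment near black).
    def srgb_to_linear(c):
        return (c / 255.0) ** 2.2

    def linear_to_srgb(c):
        return 255.0 * c ** (1 / 2.2)

    # Blurring a pure-red pixel into a pure-green one mixes a channel at
    # 255 with a channel at 0. Naive blur: average the encoded values.
    naive = (255.0 + 0.0) / 2    # 127.5 -> a dark, muddy channel value

    # Gamma-aware blur: decode, average in linear light, re-encode.
    correct = linear_to_srgb((srgb_to_linear(255.0) + srgb_to_linear(0.0)) / 2)

    print(round(naive), round(correct))  # 128 vs 186
    ```

    The naively blurred channels come out much darker than they should, which is one way red and green can smear into brown.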

    Read on →

  • Common mathematical misconceptions


    When I heard the names of higher-level math courses during high school and even part of college, it sounded as if they taught something simple that I had learned back in middle school. I knew that couldn’t be the case, and three years of college have shown me just how much more there is to them.

    Read on →

  • Fourier transforms and optical lenses


    The Fourier transform and its closely related cousin the discrete Fourier transform (computed by the FFT) are powerful mathematical tools. They break an input signal down into its frequency components. The best example is lifted from Wikipedia.
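    As a small concrete example of that decomposition (my own toy signal, not the Wikipedia one), the FFT of a two-tone signal puts its energy at exactly the component frequencies:

    ```python
    import numpy as np

    fs = 1000                      # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)    # 1 second of samples
    # Signal with components at 50 Hz and 120 Hz
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    X = np.fft.rfft(x)                        # spectrum of the real signal
    freqs = np.fft.rfftfreq(len(x), d=1 / fs) # frequency of each FFT bin

    # The two largest-magnitude bins sit at the component frequencies
    peaks = freqs[np.argsort(np.abs(X))[-2:]]
    print(sorted(peaks.tolist()))  # [50.0, 120.0]
    ```

    With a full second of samples, each tone falls exactly on an FFT bin, so the spectrum is essentially zero everywhere else.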

    Read on →