I recently had to compute many inner products $\xb_i^T \Ab \xb_i$ with a given matrix $\Ab$ for many different vectors $\xb_i$. Each vector $\xb_i \in \R^{1000}$ represents a shoe from Zappos, and there are 50k of them. This computation took place behind a user-facing web interface and, during testing, had a delay of 5 minutes. That's clearly unacceptable; how can we make it faster?

edit, 2018-03-17: Looking for the libraries? Check out the libraries section

I spent a couple of hours trying to get the best possible performance from my functions… and through this, I found a speed optimization¹ that put most of the computation on NumPy's shoulders. After I made this change, the naïve for-loop and NumPy were only about a factor of 2 apart, not enough to write a blog post about.
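
Here's a minimal sketch of that optimization, with smaller, hypothetical sizes so the naïve loop finishes quickly:

import numpy as np

n, d = 5_000, 100  # hypothetical; the post uses 50k vectors in R^1000
X = np.random.randn(n, d)  # one vector per row
A = np.random.randn(d, d)

# naive: one matrix-vector product per vector
slow = np.array([x @ A @ x for x in X])

# optimized: compute A @ X.T once, outside the loop, then read off
# each x_i^T A x_i as a row-wise inner product
AXt = A @ X.T
fast = np.einsum('ij,ji->i', X, AXt)

assert np.allclose(slow, fast)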

Use of an NVIDIA GPU, however, significantly outperformed NumPy. Given that most of the optimization centered on a single matrix multiplication, let's focus on the speed of matrix multiplication.

We know that multiplying two $n\times n$ matrices has computational complexity of something like $\bigO{n^{2.8074}}$², and in theory as low as $\bigO{n^{2.375477}}$³, though practical implementations very likely run slower than that. We can't get around these exponents without diving into theory, but we can change the constant that dictates exactly how fast these algorithms run.
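
To make that exponent concrete, here is a sketch of one level of Strassen's recursion: seven half-size multiplies instead of eight, which is where $\bigO{n^{\log_2 7}} \approx \bigO{n^{2.8074}}$ comes from. This is an illustration, not a fast implementation:

import numpy as np

def strassen_one_level(A, B):
    # One level of Strassen's recursion: 7 half-size multiplies instead
    # of 8. Assumes square matrices with even dimension.
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

A = np.random.randn(256, 256)
B = np.random.randn(256, 256)
assert np.allclose(strassen_one_level(A, B), A @ B)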

The tools I’ll test are

  • the default NumPy install, with no MKL (even though MKL now ships by default with Anaconda; the snippet after this list shows how to check)
  • Intel MKL, a tool that provides acceleration for BLAS/LAPACK
  • the GPU. To do this, I'll need an Amazon AWS machine and the NVIDIA CUDA Toolkit. An easy interface is available through cudamat, but scikit-cuda and Accelerate also have nice interfaces and provide more access.
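
To see which BLAS your NumPy is linked against, NumPy can print its own build configuration:

import numpy as np

# lists the BLAS/LAPACK libraries NumPy was built against;
# with MKL, sections like "blas_mkl_info" appear
np.show_config()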

I had planned to test other tools, but those tests didn't pan out for reasons given in the appendix. My test script is summarized in the appendix as well; the following table shows that the GPU offered a significant speedup:

Environment    | NumPy, no MKL | NumPy + MKL | cudamat
Time (seconds) | 7.18          | 4.057       | 0.2898


Under the default Anaconda environment (i.e., with MKL), we see that our script runs about 80% slower without MKL and about 14x faster under cudamat!

begin edits on 2018-03-17

Libraries

This simple test shows that using the GPU is powerful. However, it is a simple test with only one library, cudamat. Many more libraries exist, with wider use and better support, including:

  • CuPy, which has a NumPy interface for arrays allocated on the GPU. The transition from NumPy should be one line.
  • Numba, which allows defining functions (in Python!) that can be used as GPU kernels through numba.cuda.jit and numba.hsa.jit.
  • PyTorch, which supports arrays allocated on the GPU. It has other useful features, including optimizers, loss functions, and multiprocessing, to support its use in machine learning.

CuPy tries to copy NumPy's API, which means that transitioning should be very easy. I mean, they even have a page on "CuPy and NumPy Differences". But they also offer some low-level CUDA support, which could be convenient. It looks like Numba support is coming for CuPy (numba/numba#2786, relevant tweet).
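
As a sketch of that one-line transition (I haven't benchmarked this; the sizes mirror the timing script in the appendix):

import numpy as np
import cupy as cp

n, p = int(2e3), int(40e3)
A = np.random.randn(n, p)

A_gpu = cp.asarray(A)          # copy an existing NumPy array to the GPU
B_gpu = cp.random.randn(p, n)  # or allocate directly on the GPU
C_gpu = A_gpu @ B_gpu          # the NumPy API, executed on the GPU
C = cp.asnumpy(C_gpu)          # copy the result back to the host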

Numba supports defining GPU kernels in Python, and then compiles them to native GPU code. This is a powerful feature (JIT-compiling Python for the GPU!): Numba is designed for high-performance Python and has shown powerful speedups. More advanced use cases (large arrays, etc.) may benefit from some of their memory management. Numba also supports other lower-level details (e.g., calling the kernel with different thread/block sizes).
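
Here's a minimal sketch of a Numba GPU kernel, an elementwise add with an arbitrarily chosen block size:

import numpy as np
from numba import cuda

@cuda.jit  # compiled for the GPU on first call
def add_kernel(x, y, out):
    i = cuda.grid(1)  # global thread index
    if i < x.size:    # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 100_000
x = np.random.randn(n).astype(np.float32)
y = np.random.randn(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # Numba copies the arrays to/from the GPU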

PyTorch is useful in machine learning, and has a small core development team of four sponsored by Facebook. It's what I (a machine learning researcher) use every day, and it has inspired another blog post, "PyTorch: fast and simple". Its API does not exactly conform to NumPy's, but the library has pretty good support (easy debugging, nice NumPy/SciPy integration, etc.).
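
For comparison, a sketch of the same matrix multiply in PyTorch (sizes mirror the timing script in the appendix):

import torch

n, p = int(2e3), int(40e3)
A = torch.randn(n, p).cuda()  # .cuda() moves the tensor to the GPU
B = torch.randn(p, n).cuda()
C = A.mm(B)                   # matrix multiply, executed on the GPU
torch.cuda.synchronize()      # kernels launch asynchronously; wait before timing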

end edits on 2018-03-17

Accelerate and scikit-cuda are fairly similar. In choosing between them, there are two obvious tradeoffs:

  • scikit-cuda has access to higher-level linear algebra functions (e.g., eig) and Accelerate does not. However, access to these functions comes through CULA, another framework that requires a license (free academic licenses are available).
  • Accelerate can accept raw NumPy ndarrays, while scikit-cuda needs GPUArrays passed in (meaning more setup/cleanup).

Whichever is chosen, large speed enhancements exist. I have timed a common function (fft) over different values of $n$; there is some overhead in moving data to the GPU, and I wanted to see at what sizes it pays off. I provide a summary of my testing script in the appendix.

CULA has benchmarks for a few higher-level mathematical functions (source: the CULA Dense homepage).

Appendix

Other GPU libraries

Anaconda has published a good overview titled "Getting started with GPU computing". I think I would start with Numba: it has good debugging support and supports some notion of kernels. [updated 2017-11]

  • Numba has numba.cuda.jit and numba.hsa.jit. It has good debugging and looks like a wrapper around CUDA kernels.
  • Anaconda has developed pyculib, which provides access to cuBLAS, cuFFT, cuRAND, and CUDA sorting.
  • PyCUDA and PyOpenCL were not tested because they require writing C++ code (PyCUDA example, PyOpenCL example).
  • gnumpy was not tested because it doesn't support Python 3 and hasn't been touched in 4 years.
  • I tried to install cudarray but ran into install difficulties.
  • Theano supports the GPU (see "Using the GPU") but was not tested – it seems to be primarily a machine learning library.

…and of course I didn't optimize any loop-based functions. To optimize loop speed, I would look at Numba first and then possibly Cython; a sketch follows.
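
For example, the original loop could be JIT-compiled with Numba's njit (a sketch with hypothetical sizes; I haven't timed it):

import numpy as np
from numba import njit

@njit  # compiles the Python loop to machine code on first call
def quad_forms(A, X):
    out = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        out[i] = X[i] @ A @ X[i]  # x_i^T A x_i, one vector at a time
    return out

X = np.random.randn(1000, 100)
A = np.random.randn(100, 100)
assert np.allclose(quad_forms(A, X), np.einsum('ij,jk,ik->i', X, A, X))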

Matrix multiplication timing script

import numpy as np
import cudamat as cm

n, p = int(2e3), int(40e3)

# CPU: multiply with NumPy (BLAS)
A = np.random.randn(n, p)
B = np.random.randn(p, n)
%timeit A @ B

# GPU: the same multiply with cudamat (cuBLAS)
cm.cublas_init()
cm.CUDAMatrix.init_random()  # initialize the GPU random number generator
A_cm = cm.empty((n, p)).fill_with_randn()  # allocated and filled on the GPU
B_cm = cm.empty((p, n)).fill_with_randn()
%timeit A_cm.dot(B_cm)
cm.cublas_shutdown()

FFT timing script summary

In this script, I show preparing for the FFT and preparing for linear algebra functions (e.g., culinalg.init()). I found it useful to look at the scikit-cuda demos.

import numpy as np
from accelerate.cuda.blas import Blas
import accelerate.cuda.fft as acc_fft
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import skcuda.fft as cu_fft
import skcuda.linalg as culinalg
import skcuda.misc as cumisc

# for scikit-cuda
culinalg.init()

# for accelerate when calling wrapped BLAS functions (e.g., blas.dot)
blas = Blas()


def fft_accelerate(x, y):
    f = acc_fft.FFTPlan(shape=x.shape, itype=x.dtype, otype=y.dtype)
    f.forward(x, out=y)  # note: we're passing np.ndarrays
    return y


def fft_scikit(x, y):
    plan_forward = cu_fft.Plan(x.shape, np.float32, np.complex64)
    cu_fft.fft(x, y, plan_forward)
    return y.get()

n = int(40e4)
x = np.random.randn(n).astype('float32')
y = np.zeros(n).astype('complex64')  # needed because fft has complex output
%timeit fft_accelerate(x, y)

x = gpuarray.to_gpu(x)
y = gpuarray.empty(n//2 + 1, np.complex64)
%timeit fft_scikit(x, y)

  1. which was calculating $\Ab \Xb^T$ outside the loop

  2. using the Strassen algorithm

  3. using the Coppersmith-Winograd algorithm