Saturday, April 1, 2017

Running Numba Example of Matrix Multiplication

Quoted from Numba's Documentation:
"Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool). Numba supports compilation of Python to run on either CPU or GPU hardware, and is designed to integrate with the Python scientific software stack."

Check the Numba version with the following Python code:
import numba
print(numba.__version__)

WinPython-64bit-2.7.10.3 ships Numba 0.20.0, which is quite old; the latest stable Numba release is version 0.33.0 (May 2017). So we follow the official suggestion of the Numba site and use the Anaconda Distribution. Anaconda2-4.3.1-Windows-x86_64 is used in this test.





Setting Up CUDA


Before using Numba's CUDA GPU programming, the CUDA Toolkit must be downloaded and installed. Numba supports CUDA-enabled GPUs with compute capability 2.0 or above and an up-to-date NVIDIA driver. The compute capability of a CUDA-enabled GPU can be checked on NVIDIA's CUDA Legacy GPUs page.
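
The compute capability can also be queried from Python through Numba itself; a minimal sketch (the printed value depends on your card):

from numba import cuda

# List the CUDA devices Numba can see, then print the compute capability
# of the currently selected device, e.g. (5, 2) for a Maxwell card.
cuda.detect()
print cuda.get_current_device().compute_capability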

According to the Numba information on the Python Package Index, you might have to set the following environment variables to override the standard search paths:
NUMBAPRO_CUDA_DRIVER: path to the CUDA driver shared library
NUMBAPRO_NVVM: path to the CUDA libNVVM shared library file
NUMBAPRO_LIBDEVICE: path to the CUDA libNVVM libdevice directory, which contains .bc files
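
A minimal sketch of overriding these search paths from Python before Numba's CUDA support is used; the paths below assume a default CUDA Toolkit 8.0 installation on Windows and must be adjusted to your own setup (the scripts below use the same approach):

import os

# Example paths for a default CUDA Toolkit 8.0 install on Windows; adjust to your setup
os.environ['NUMBAPRO_NVVM']      = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\bin\nvvm64_31_0.dll'
os.environ['NUMBAPRO_LIBDEVICE'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\libdevice'

from numba import cuda   # import CUDA support only after the variables are set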






Matrix Multiplication


In this test, the matrix multiplication code from the Numba CUDA example is used; the @jit, @guvectorize, and @cuda.jit decorators are tested.

The content of matrix_multiply2.py is:
from numba import guvectorize, jit, float64, void
import numpy as np
from numba import cuda

import os
os.environ['NUMBAPRO_NVVM']      = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\bin\nvvm64_31_0.dll'
os.environ['NUMBAPRO_LIBDEVICE'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\libdevice'



#
# numba.guvectorize matrix multiplication
#
@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cpu')
def matmul_gu(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp

@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cpu')
def matmul_gu1(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp

@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='parallel')
def matmul_gu2(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp



#
# numba.jit matrix multiplication
#
@jit
def matmul_jit(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit
def matmul_jit_(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])])
def matmul_jit1(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], cache=True)
def matmul_jit2(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], cache=True)
def matmul_jit3(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], nopython=True)
def matmul_jit4(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], nopython=True)
def matmul_jit5(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out



#
# CUDA matrix multiplication
#
@cuda.jit
def matmul(A, B, C):
    """Perform square matrix multiplication of C = A * B
    """
    i, j = cuda.grid(2)
    if i < C.shape[0] and j < C.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float64)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float64)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp

@guvectorize([void(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cuda')
def matmul_gu3(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    i, j = cuda.grid(2)
    if i < out.shape[0] and j < out.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        out[i, j] = tmp

matmul_gu3.max_blocksize = 32



#
# Python matrix multiplication
#
def matmul_py(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

def matmul_py1(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out



#
# Main Program
#
N = 500
A = np.random.rand(N,N)
B = np.random.rand(N,N)

print "matrix multiplication"
print "A array size", A.shape[0], "by", A.shape[1]
print "B array size", B.shape[0], "by", B.shape[1]
print "A array type", A.dtype
print "B array type", B.dtype
print

import timeit



#
# Loop Body
#
np_loop = 300
np_dot = np.empty(np_loop)
A_dot_B = np.empty(np_loop)
np_matmul = np.empty(np_loop)

print "NumPy matrix multiplication..."
for i in range(np_loop):
    t_start = timeit.default_timer()
    out = np.dot(A, B)
    t_end = timeit.default_timer()
    np_dot[i] = t_end - t_start
    # print "np.dot(A, B) takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = A.dot(B)
    t_end = timeit.default_timer()
    A_dot_B[i] = t_end - t_start
    # print "A.dot(B) takes {:.5f} second".format((t_end - t_start))

    # This function is included in NumPy 1.10.0 and higher
    t_start = timeit.default_timer()
    out = np.matmul(A, B)
    t_end = timeit.default_timer()
    np_matmul[i] = t_end - t_start
    # print "np.matmul(A, B) takes {:.5f} second".format((t_end - t_start))



nb_loop = 10
nb_gu = np.empty(nb_loop)
nb_gu1 = np.empty(nb_loop)
nb_gu2 = np.empty(nb_loop)
nb_jit = np.empty(nb_loop)
nb_jit_ = np.empty(nb_loop)
nb_jit1 = np.empty(nb_loop)
nb_jit2 = np.empty(nb_loop)
nb_jit3 = np.empty(nb_loop)
nb_jit4 = np.empty(nb_loop)
nb_jit5 = np.empty(nb_loop)

print "Numba matrix multiplication..."
for i in range(nb_loop):
    t_start = timeit.default_timer()
    out = matmul_gu(A, B)
    t_end = timeit.default_timer()
    nb_gu[i] = t_end - t_start
    # print "matmul_gu takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_gu1(A, B)
    t_end = timeit.default_timer()
    nb_gu1[i] = t_end - t_start
    # print "matmul_gu1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_gu2(A, B)
    t_end = timeit.default_timer()
    nb_gu2[i] = t_end - t_start
    # print "matmul_gu1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit(A, B)
    t_end = timeit.default_timer()
    nb_jit[i] = t_end - t_start
    # print "matmul_jit takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit_(A, B)
    t_end = timeit.default_timer()
    nb_jit_[i] = t_end - t_start
    # print "matmul_jit_ takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit1(A, B)
    t_end = timeit.default_timer()
    nb_jit1[i] = t_end - t_start
    # print "matmul_jit1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit2(A, B)
    t_end = timeit.default_timer()
    nb_jit2[i] = t_end - t_start
    # print "matmul_jit2 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit3(A, B)
    t_end = timeit.default_timer()
    nb_jit3[i] = t_end - t_start
    # print "matmul_jit3 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit4(A, B)
    t_end = timeit.default_timer()
    nb_jit4[i] = t_end - t_start
    # print "matmul_jit4 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit5(A, B)
    t_end = timeit.default_timer()
    nb_jit5[i] = t_end - t_start
    # print "matmul_jit5 takes {:.5f} second".format((t_end - t_start))



# matrix multiplication of out = A * B
# out.shape == (A.shape[0], B.shape[1])
import math
threadsperblock = (TPB, TPB)
# Use float division so the grid covers the whole output (in Python 2, "/" on ints truncates)
blockspergrid_x = int(math.ceil(out.shape[0] / float(threadsperblock[0])))
blockspergrid_y = int(math.ceil(out.shape[1] / float(threadsperblock[1])))
blockspergrid = (blockspergrid_x, blockspergrid_y)

cuda_loop = 10
cuda_matmul = np.empty(cuda_loop)
cuda_fast_matmul = np.empty(cuda_loop)
cuda_matmul_gu3 = np.empty(cuda_loop)

stream = cuda.stream()

print "CUDA matrix multiplication..."
for i in range(cuda_loop):
    d_A = cuda.to_device(A, stream=stream)
    d_B = cuda.to_device(B, stream=stream)
    t_start = timeit.default_timer()
    matmul[blockspergrid, threadsperblock, stream](d_A, d_B, out)
    t_end = timeit.default_timer()
    cuda_matmul[i] = t_end - t_start
    # print "matmul takes {:.5f} second".format((t_end -t_start))
    d_A.copy_to_host(A, stream=stream)
    d_B.copy_to_host(B, stream=stream)
    # the asynchronous copies may not have completed yet
    stream.synchronize()
    # the host arrays are now up to date

    d_A = cuda.to_device(A, stream=stream)
    d_B = cuda.to_device(B, stream=stream)
    t_start = timeit.default_timer()
    fast_matmul[blockspergrid, threadsperblock, stream](d_A, d_B, out)
    t_end = timeit.default_timer()
    cuda_fast_matmul[i] = t_end - t_start
    # print "fast_matmul takes {:.5f} second".format((t_end -t_start))
    d_A.copy_to_host(A, stream=stream)
    d_B.copy_to_host(B, stream=stream)
    # the asynchronous copies may not have completed yet
    stream.synchronize()
    # the host arrays are now up to date

    t_start = timeit.default_timer()
    out = matmul_gu3(A, B)
    t_end = timeit.default_timer()
    cuda_matmul_gu3[i] = t_end - t_start
    # print "matmul_gu3 takes {:.5f} second".format((t_end -t_start))



py_loop = 5
py_py = np.empty(py_loop)
py_py1 = np.empty(py_loop)

print "Python matrix multiplication..."
for i in range(py_loop):
 py_start = timeit.default_timer()
 out = matmul_py(A, B)
 py_end = timeit.default_timer()
 py_py[i] = py_end - py_start
 # print "matmul_py takes {:.5f} second".format((t_end - t_start))

 py1_start = timeit.default_timer()
 out = matmul_py1(A, B)
 py1_end = timeit.default_timer()
 py_py1[i] = py1_end - py1_start
 # print "matmul_py1 takes {:.5f} second".format((t_end - t_start))



#
# Output Results
#
print
print
print

print "NumPy matrix multiplication"
record = np_dot
print "np.dot(A, B) takes average {:.5f} second (except 1st run)".format(np_dot[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = A_dot_B
print "A.dot(B) takes average {:.5f} second (except 1st run)".format(A_dot_B[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = np_matmul
print "np.matmul(A, B) takes average {:.5f} second (except 1st run)".format(np_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

print "Numba matrix multiplication"
record = nb_gu
print "matmul_gu takes average {:.5f} second (except 1st run)".format(nb_gu[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_gu1
print "matmul_gu1 takes average {:.5f} second (except 1st run)".format(nb_gu1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_gu2
print "matmul_gu2 takes average {:.5f} second (except 1st run)".format(nb_gu2[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit
print "matmul_jit takes average {:.5f} second (except 1st run)".format(nb_jit[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit_
print "matmul_jit_ takes average {:.5f} second (except 1st run)".format(nb_jit_[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit1
print "matmul_jit1 takes average {:.5f} second (except 1st run)".format(nb_jit1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit2
print "matmul_jit2 takes average {:.5f} second (except 1st run)".format(nb_jit2[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit3
print "matmul_jit3 takes average {:.5f} second (except 1st run)".format(nb_jit3[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit4
print "matmul_jit4 takes average {:.5f} second (except 1st run)".format(nb_jit4[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit5
print "matmul_jit5 takes average {:.5f} second (except 1st run)".format(nb_jit5[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

print "CUDA matrix multiplication"
record = cuda_matmul
print "matmul takes average {:.5f} second (except 1st run)".format(cuda_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = cuda_fast_matmul
print "fast_matmul takes average {:.5f} second (except 1st run)".format(cuda_fast_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = cuda_matmul_gu3
print "matmul_gu3 takes average {:.5f} second (except 1st run)".format(cuda_matmul_gu3[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

print "Python matrix multiplication"
record = py_py
print "matmul_py takes average {:.5f} second (except 1st run)".format(py_py[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = py_py1
print "matmul_py1 takes average {:.5f} second (except 1st run)".format(py_py1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

The @guvectorize decorator is tested in functions matmul_gu, matmul_gu1, and matmul_gu2.
The @jit decorator is tested in functions matmul_jit, matmul_jit_, matmul_jit1, matmul_jit2, matmul_jit3, matmul_jit4, and matmul_jit5.
CUDA GPU programming is tested in the cuda.jit functions matmul and fast_matmul, and in the guvectorize function matmul_gu3.
Functions matmul_py and matmul_py1 are pure Python.
Three forms of NumPy matrix multiplication (out = np.dot(A, B), out = A.dot(B), and out = np.matmul(A, B)) are also tested.

In the guvectorize() function matmul_gu, the loop body is the same as the matrix multiplication code in the Numba CUDA example and the target argument is set to "cpu". Function matmul_gu1 is the same as matmul_gu except that the matrix row A[i,:] is cached in a local array. Function matmul_gu2 is the same as matmul_gu except that the target argument is set to "parallel".
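
As a quick illustration of the layout string '(m,l),(l,n)->(m,n)': Numba allocates the (m,n) output automatically, so the gufunc is called like an ordinary function. A small sketch, assuming matmul_gu is defined as above:

A_small = np.random.rand(3, 4)
B_small = np.random.rand(4, 2)
out_small = matmul_gu(A_small, B_small)                 # Numba allocates the (3, 2) output
print out_small.shape                                   # (3, 2)
print np.allclose(out_small, np.dot(A_small, B_small))  # True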

In the jit() function matmul_jit, the loop body is the same as the matrix multiplication code in the Numba CUDA example. Function matmul_jit_ is the same as matmul_jit except that the output shape is passed to np.empty as a list instead of a tuple. Function matmul_jit1 is the same as matmul_jit except that an explicit signature is given. Function matmul_jit2 is the same as matmul_jit1 except that the cache argument is set to True. Function matmul_jit3 is the same as matmul_jit2 except that the matrix row A[i,:] is cached in a local array. Function matmul_jit4 is the same as matmul_jit1 except that the nopython argument is set to True. Function matmul_jit5 is the same as matmul_jit4 except that the matrix row A[i,:] is cached in a local array.
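
A quick sanity check (a sketch, assuming the functions above are defined) that the jit() versions agree with NumPy:

A_test = np.random.rand(64, 64)
B_test = np.random.rand(64, 64)
reference = np.dot(A_test, B_test)
for name, f in [("matmul_jit", matmul_jit), ("matmul_jit1", matmul_jit1), ("matmul_jit4", matmul_jit4)]:
    # each function should reproduce the NumPy result to floating-point accuracy
    print name, np.allclose(f(A_test, B_test), reference)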

The cuda.jit functions matmul and fast_matmul are copied from the matrix multiplication code in the Numba CUDA example. Function matmul_gu3 is a guvectorize() function; its body is exactly the same as the cuda.jit function matmul, and the target argument is set to "cuda". Among the CUDA GPU versions, only the guvectorize() function with target set to "cuda" (namely matmul_gu3) returns a NumPy array directly; the cuda.jit kernels write their result into an output argument instead. Finally, it would be appreciated if anyone could show me how to change fast_matmul from a cuda.jit function to a guvectorize() function.
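
Because the cuda.jit kernels write into an output argument, the result must be copied back from the device explicitly. A minimal launch sketch, assuming matmul, blockspergrid, and threadsperblock are defined as above:

d_A = cuda.to_device(A)                             # copy the inputs to the GPU
d_B = cuda.to_device(B)
d_C = cuda.device_array((A.shape[0], B.shape[1]))   # allocate the output on the GPU
matmul[blockspergrid, threadsperblock](d_A, d_B, d_C)
C = d_C.copy_to_host()                              # copy the result back to a NumPy array
print np.allclose(C, np.dot(A, B))                  # True when the grid covers the whole output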

Function matmul_py is pure Python, and its loop body is exactly the same as that of matmul_jit; function matmul_py1 is the same as matmul_py except that the matrix row A[i,:] is cached in a local array.


Running Matrix Multiplication Code

The computer hardware and software specification is:
Operating system: Windows 10, 64bit
Power plan: High performance
Distribution: Anaconda2-4.3.1-Windows-x86_64
Hardware: Computer 1

For the NumPy arrays A and B, the dtype is float64, and np.dtype('float64').itemsize = 8 (bytes) on my computer 1. Hence A and B each occupy 500 * 500 * 8 bytes = 2,000,000 bytes, which is smaller than the CPU's L3 cache.
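
This can be checked directly from the arrays; a small sketch:

print A.dtype, A.dtype.itemsize    # float64 8
print A.nbytes                     # 500 * 500 * 8 = 2000000 bytes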

The running time is:

Function                    Running Time (second)    Speed-up
NumPy
  np.dot(A, B)                      0.01113           6540.5
  A.dot(B)                          0.01105           6587.9
  np.matmul(A, B)                   0.01110           6558.2
@guvectorize decorator
  matmul_gu                         0.22610            322.0
  matmul_gu1                        0.23270            312.8
  matmul_gu2                        0.23209            313.7
@jit decorator
  matmul_jit                        0.23277            312.7
  matmul_jit_                       0.22887            318.1
  matmul_jit1                       0.22839            318.7
  matmul_jit2                       0.22731            320.3
  matmul_jit3                       0.23245            313.2
  matmul_jit4                       0.22776            319.6
  matmul_jit5                       0.23097            315.2
CUDA GPU programming
  matmul                            0.10635            684.5
  fast_matmul                       0.07739            940.6
  matmul_gu3                        0.00937           7769.1
Python function
  matmul_py                        72.79624              1.0
  matmul_py1                       65.29154              1.1

For some functions, the first running time is much longer than the others. Hence the running time in the above table is the average of all running times except the first one.
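
The long first call is dominated by Numba's JIT compilation. A minimal warm-up sketch that keeps compilation out of the measurement (assuming matmul_jit and the arrays above are defined):

import timeit

matmul_jit(A, B)                          # warm-up call: triggers compilation, not timed

t_start = timeit.default_timer()
out = matmul_jit(A, B)                    # steady-state run
t_end = timeit.default_timer()
print "matmul_jit takes {:.5f} second after warm-up".format(t_end - t_start)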

In this test, NumPy matrix multiplication outperforms every Numba version except the CUDA guvectorize function matmul_gu3. The running times of the guvectorize() and jit() functions are essentially the same, regardless of the decorator arguments and of whether the row A[i,:] is cached. The CUDA GPU versions run faster than the guvectorize() and jit() CPU versions.
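
The Speed-up column is the pure-Python baseline time divided by each function's average time, excluding the first run. A sketch using the arrays recorded by the script:

baseline = py_py[1:].mean()                        # pure-Python matmul_py average
for name, record in [("np.dot(A, B)", np_dot), ("matmul_jit", nb_jit), ("matmul_gu3", cuda_matmul_gu3)]:
    print "{:<15} speed-up: {:.1f}".format(name, baseline / record[1:].mean())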

Multiplication of Large Matrices


The content of matrix_multiply2_large.py is:
from numba import guvectorize, jit, float64, void
import numpy as np
from numba import cuda

import os
os.environ['NUMBAPRO_NVVM']      = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\bin\nvvm64_31_0.dll'
os.environ['NUMBAPRO_LIBDEVICE'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\libdevice'



#
# numba.guvectorize matrix multiplication
#
@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cpu')
def matmul_gu(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp

@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cpu')
def matmul_gu1(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp

@guvectorize([(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='parallel')
def matmul_gu2(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp



#
# numba.jit matrix multiplication
#
@jit
def matmul_jit(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit
def matmul_jit_(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])])
def matmul_jit1(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], cache=True)
def matmul_jit2(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], cache=True)
def matmul_jit3(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], nopython=True)
def matmul_jit4(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

@jit([float64[:,:](float64[:,:], float64[:,:])], nopython=True)
def matmul_jit5(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty((A.shape[0],B.shape[1]), A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out



#
# CUDA matrix multiplication
#
@cuda.jit
def matmul(A, B, C):
    """Perform square matrix multiplication of C = A * B
    """
    i, j = cuda.grid(2)
    if i < C.shape[0] and j < C.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float64)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float64)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp

@guvectorize([void(float64[:,:], float64[:,:], float64[:,:])], '(m,l),(l,n)->(m,n)', target='cuda')
def matmul_gu3(A, B, out):
    """Perform square matrix multiplication of out = A * B
    """
    i, j = cuda.grid(2)
    if i < out.shape[0] and j < out.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        out[i, j] = tmp

matmul_gu3.max_blocksize = 32



#
# Python matrix multiplication
#
def matmul_py(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A[i, k] * B[k, j]
            out[i, j] = tmp
    return out

def matmul_py1(A, B):
    """Perform square matrix multiplication of out = A * B
    """
    out = np.empty([A.shape[0],B.shape[1]], A.dtype)
    A_i = np.empty(A.shape[1], dtype=np.float64)
    for i in range(out.shape[0]):
        for k in range(A.shape[1]):
            A_i[k] = A[i, k]

        for j in range(out.shape[1]):
            tmp = 0.
            for k in range(A.shape[1]):
                tmp += A_i[k] * B[k, j]
            out[i, j] = tmp
    return out



#
# Main Program
#
N = 1500
A = np.random.rand(N,N)
B = np.random.rand(N,N)

print "matrix multiplication"
print "A array size", A.shape[0], "by", A.shape[1]
print "B array size", B.shape[0], "by", B.shape[1]
print "A array type", A.dtype
print "B array type", B.dtype
print

import timeit



#
# Loop Body
#
np_loop = 30
np_dot = np.empty(np_loop)
A_dot_B = np.empty(np_loop)
np_matmul = np.empty(np_loop)

print "NumPy matrix multiplication..."
for i in range(np_loop):
    t_start = timeit.default_timer()
    out = np.dot(A, B)
    t_end = timeit.default_timer()
    np_dot[i] = t_end - t_start
    # print "np.dot(A, B) takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = A.dot(B)
    t_end = timeit.default_timer()
    A_dot_B[i] = t_end - t_start
    # print "A.dot(B) takes {:.5f} second".format((t_end - t_start))

    # This function is included in NumPy 1.10.0 and higher
    t_start = timeit.default_timer()
    out = np.matmul(A, B)
    t_end = timeit.default_timer()
    np_matmul[i] = t_end - t_start
    # print "np.matmul(A, B) takes {:.5f} second".format((t_end - t_start))



nb_loop = 5
nb_gu = np.empty(nb_loop)
nb_gu1 = np.empty(nb_loop)
nb_gu2 = np.empty(nb_loop)
nb_jit = np.empty(nb_loop)
nb_jit_ = np.empty(nb_loop)
nb_jit1 = np.empty(nb_loop)
nb_jit2 = np.empty(nb_loop)
nb_jit3 = np.empty(nb_loop)
nb_jit4 = np.empty(nb_loop)
nb_jit5 = np.empty(nb_loop)

print "Numba matrix multiplication..."
for i in range(nb_loop):
    t_start = timeit.default_timer()
    out = matmul_gu(A, B)
    t_end = timeit.default_timer()
    nb_gu[i] = t_end - t_start
    # print "matmul_gu takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_gu1(A, B)
    t_end = timeit.default_timer()
    nb_gu1[i] = t_end - t_start
    # print "matmul_gu1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_gu2(A, B)
    t_end = timeit.default_timer()
    nb_gu2[i] = t_end - t_start
    # print "matmul_gu1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit(A, B)
    t_end = timeit.default_timer()
    nb_jit[i] = t_end - t_start
    # print "matmul_jit takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit_(A, B)
    t_end = timeit.default_timer()
    nb_jit_[i] = t_end - t_start
    # print "matmul_jit_ takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit1(A, B)
    t_end = timeit.default_timer()
    nb_jit1[i] = t_end - t_start
    # print "matmul_jit1 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit2(A, B)
    t_end = timeit.default_timer()
    nb_jit2[i] = t_end - t_start
    # print "matmul_jit2 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit3(A, B)
    t_end = timeit.default_timer()
    nb_jit3[i] = t_end - t_start
    # print "matmul_jit3 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit4(A, B)
    t_end = timeit.default_timer()
    nb_jit4[i] = t_end - t_start
    # print "matmul_jit4 takes {:.5f} second".format((t_end - t_start))

    t_start = timeit.default_timer()
    out = matmul_jit5(A, B)
    t_end = timeit.default_timer()
    nb_jit5[i] = t_end - t_start
    # print "matmul_jit5 takes {:.5f} second".format((t_end - t_start))



# matrix multiplication of out = A * B
# out.shape == (A.shape[0], B.shape[1])
import math
threadsperblock = (TPB, TPB)
# Use float division so the grid covers the whole output (in Python 2, "/" on ints truncates)
blockspergrid_x = int(math.ceil(out.shape[0] / float(threadsperblock[0])))
blockspergrid_y = int(math.ceil(out.shape[1] / float(threadsperblock[1])))
blockspergrid = (blockspergrid_x, blockspergrid_y)

cuda_loop = 10
cuda_matmul = np.empty(cuda_loop)
cuda_fast_matmul = np.empty(cuda_loop)
cuda_matmul_gu3 = np.empty(cuda_loop)

stream = cuda.stream()

print "CUDA matrix multiplication..."
for i in range(cuda_loop):
    print i,
    print "\r",

    t_start = timeit.default_timer()
    matmul[blockspergrid, threadsperblock](A, B, out)
    t_end = timeit.default_timer()
    cuda_matmul[i] = t_end - t_start
    # print "matmul takes {:.5f} second".format((t_end -t_start))

    t_start = timeit.default_timer()
    fast_matmul[blockspergrid, threadsperblock](A, B, out)
    t_end = timeit.default_timer()
    cuda_fast_matmul[i] = t_end - t_start
    # print "fast_matmul takes {:.5f} second".format((t_end -t_start))

    t_start = timeit.default_timer()
    out = matmul_gu3(A, B)
    t_end = timeit.default_timer()
    cuda_matmul_gu3[i] = t_end - t_start
    # print "matmul_gu3 takes {:.5f} second".format((t_end -t_start))



py_loop = 0
py_py = np.empty(py_loop)
py_py1 = np.empty(py_loop)

print "Python matrix multiplication..."
for i in range(py_loop):
 py_start = timeit.default_timer()
 out = matmul_py(A, B)
 py_end = timeit.default_timer()
 py_py[i] = py_end - py_start
 # print "matmul_py takes {:.5f} second".format((t_end - t_start))

 py1_start = timeit.default_timer()
 out = matmul_py1(A, B)
 py1_end = timeit.default_timer()
 py_py1[i] = py1_end - py1_start
 # print "matmul_py1 takes {:.5f} second".format((t_end - t_start))



#
# Output Results
#
print
print
print

print "NumPy matrix multiplication"
record = np_dot
print "np.dot(A, B) takes average {:.5f} second (except 1st run)".format(np_dot[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = A_dot_B
print "A.dot(B) takes average {:.5f} second (except 1st run)".format(A_dot_B[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = np_matmul
print "np.matmul(A, B) takes average {:.5f} second (except 1st run)".format(np_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

print "Numba matrix multiplication"
record = nb_gu
print "matmul_gu takes average {:.5f} second (except 1st run)".format(nb_gu[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_gu1
print "matmul_gu1 takes average {:.5f} second (except 1st run)".format(nb_gu1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_gu2
print "matmul_gu2 takes average {:.5f} second (except 1st run)".format(nb_gu2[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit
print "matmul_jit takes average {:.5f} second (except 1st run)".format(nb_jit[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit_
print "matmul_jit_ takes average {:.5f} second (except 1st run)".format(nb_jit_[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit1
print "matmul_jit1 takes average {:.5f} second (except 1st run)".format(nb_jit1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit2
print "matmul_jit2 takes average {:.5f} second (except 1st run)".format(nb_jit2[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit3
print "matmul_jit3 takes average {:.5f} second (except 1st run)".format(nb_jit3[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit4
print "matmul_jit4 takes average {:.5f} second (except 1st run)".format(nb_jit4[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = nb_jit5
print "matmul_jit5 takes average {:.5f} second (except 1st run)".format(nb_jit5[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

print "CUDA matrix multiplication"
record = cuda_matmul
print "matmul takes average {:.5f} second (except 1st run)".format(cuda_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = cuda_fast_matmul
print "fast_matmul takes average {:.5f} second (except 1st run)".format(cuda_fast_matmul[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = cuda_matmul_gu3
print "matmul_gu3 takes average {:.5f} second (except 1st run)".format(cuda_matmul_gu3[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print

exit()

print "Python matrix multiplication"
record = py_py
print "matmul_py takes average {:.5f} second (except 1st run)".format(py_py[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
record = py_py1
print "matmul_py1 takes average {:.5f} second (except 1st run)".format(py_py1[1:].mean())
print "{:<10}{:<10}{:<10}{:<10}".format("mean","max","min","std")
print "{:<10.5f}{:<10.5f}{:<10.5f}{:<10.5f}".format(record.mean(),record.max(),record.min(),record.std())
print "record"
print record
print
print


The computer hardware and software specification is the same as in the previous section, "Running Matrix Multiplication Code".


For the NumPy arrays A and B, the dtype is float64, and np.dtype('float64').itemsize = 8 (bytes) on my computer 1. Hence A and B each occupy 1,500 * 1,500 * 8 bytes = 18,000,000 bytes, which is larger than the CPU's L3 cache.

The running time is:

Function                    Running Time (second)    Speed-up
NumPy
  np.dot(A, B)                      0.27524              -
  A.dot(B)                          0.26752              -
  np.matmul(A, B)                   0.25729              -
@guvectorize decorator
  matmul_gu                        31.55001              -
  matmul_gu1                       31.57751              -
  matmul_gu2                       31.46228              -
@jit decorator
  matmul_jit                       31.47627              -
  matmul_jit_                      31.53829              -
  matmul_jit1                      31.47299              -
  matmul_jit2                      31.47734              -
  matmul_jit3                      31.48846              -
  matmul_jit4                      31.45899              -
  matmul_jit5                      31.50573              -
CUDA GPU programming
  matmul                            2.85543              -
  fast_matmul                       1.94038              -
  matmul_gu3                        0.08086              -
Python function
  matmul_py                            -                 -
  matmul_py1                           -                 -

For some functions, the first running time is much longer than the others. Hence the running time in the above table is the average of all running times except the first one.

The running times of matmul_py and matmul_py1 would be far too long at this size, so they are skipped in this test (py_loop is set to 0), and no speed-up is reported.

In this test, NumPy matrix multiplication again outperforms every Numba version except the CUDA guvectorize function matmul_gu3. The running times of the guvectorize() and jit() functions are essentially the same, regardless of the decorator arguments and of whether the row A[i,:] is cached. The CUDA GPU versions run faster than the guvectorize() and jit() CPU versions.
