Jile Chen, Peimin Zhu

An Alternate GPU-Accelerated Algorithm for Very Large Sparse LU Factorization

  • General Mathematics
  • Engineering (miscellaneous)
  • Computer Science (miscellaneous)

The LU factorization of very large sparse matrices requires a significant amount of computing resources, including memory and communication bandwidth. A hybrid MPI + OpenMP + CUDA algorithm named SuperLU3D can efficiently compute the LU factorization with GPU acceleration. However, this algorithm faces difficulties when dealing with very large sparse matrices under limited GPU resources. Factorizing very large matrices involves a vast amount of nonblocking communication between processes, which can overflow the cluster's communication buffers and interrupt the SuperLU3D computation. In this paper, we present an improved GPU-accelerated algorithm named SuperLU3D_Alternate for the LU factorization of very large sparse matrices with fewer GPU resources. The basic idea is “divide and conquer”: a very large matrix is divided into multiple submatrices, LU factorization is performed on each submatrix, and the factorized results of all submatrices are then assembled into two complete matrices L and U. In detail, according to the number of available GPUs, a very large matrix is first divided into multiple submatrices using the elimination tree. Then, the LU factorization of each submatrix is alternately computed with the limited GPU resources, and the intermediate LU factors are transferred from the GPUs and saved to host memory or the hard disk. Finally, after all submatrices have been factorized, the results are assembled into a complete lower triangular matrix L and a complete upper triangular matrix U. The SuperLU3D_Alternate algorithm is suitable for hybrid CPU/GPU cluster systems, especially for clusters in which only a subset of nodes is equipped with GPUs. To accommodate different hardware resources in various clusters, we designed the algorithm to run in the following three cases: sufficient memory on GPU nodes, insufficient memory on GPU nodes, and insufficient memory across the entire cluster. LU factorization tests on different matrices in these cases show that, for the same GPU memory consumption, the larger the matrix, the more efficient the algorithm. In our numerical experiments, SuperLU3D_Alternate achieves speedups of up to 8× over SuperLU3D (CPU only) and 2.5× over SuperLU3D (CPU + GPU) on a hybrid cluster with six Tesla V100S GPUs. Furthermore, when a matrix is too large for SuperLU3D to handle, SuperLU3D_Alternate can still solve it by utilizing the cluster's host memory or hard disk. By reducing the amount of data exchanged so that the buffer limit of the cluster's MPI nonblocking communication is not exceeded, our algorithm also enhances the stability of the program.
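The divide / factorize-alternately / offload / assemble control flow described in the abstract can be illustrated with a minimal schematic sketch. The sketch below is not the authors' SuperLU3D_Alternate implementation: it assumes, purely for illustration, that the matrix has already been partitioned into independent diagonal blocks (the real algorithm partitions via the elimination tree and handles coupling between submatrices), and it uses SciPy's serial splu as a stand-in for one GPU-accelerated submatrix factorization. The file names and helper functions are hypothetical.

```python
# Schematic sketch only: independent diagonal blocks assumed; splu stands in
# for the GPU-accelerated submatrix factorization used by SuperLU3D_Alternate.
import pickle
import tempfile
from pathlib import Path

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu


def factorize_blocks_alternately(blocks, scratch_dir):
    """Factorize each sparse block in turn, spilling its L/U factors to disk
    so that only one block's factors occupy memory at any time."""
    factor_files = []
    for i, block in enumerate(blocks):
        lu = splu(block.tocsc())            # one submatrix factorization at a time
        path = Path(scratch_dir) / f"factors_{i}.pkl"
        with open(path, "wb") as f:         # offload intermediate factors to host memory / hard disk
            pickle.dump({"L": lu.L, "U": lu.U}, f)
        factor_files.append(path)
    return factor_files


def assemble_global_LU(factor_files):
    """Reload the per-block factors and assemble block-diagonal global L and U.
    This simple assembly is valid only because the blocks are independent."""
    Ls, Us = [], []
    for path in factor_files:
        with open(path, "rb") as f:
            d = pickle.load(f)
        Ls.append(d["L"])
        Us.append(d["U"])
    return sp.block_diag(Ls, format="csr"), sp.block_diag(Us, format="csr")


if __name__ == "__main__":
    # Two small well-conditioned sparse blocks standing in for submatrices.
    blocks = [sp.random(200, 200, density=0.05, random_state=i) + 10 * sp.eye(200)
              for i in range(2)]
    with tempfile.TemporaryDirectory() as scratch:
        files = factorize_blocks_alternately(blocks, scratch)
        L, U = assemble_global_LU(files)
    print("assembled L:", L.shape, " U:", U.shape)
```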

