LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008).[1] The routines handle both real and complex matrices in both single and double precision.
Initial release | 1992 |
---|---|
Stable release | 3.9.0 / 21 November 2019 |
Written in | Fortran 90 |
Type | Software library |
License | BSD-new |
Website | www |
LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures, and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK.[2]
LAPACK is licensed under a three-clause BSD-style license, a permissive free software license with few restrictions.
Naming scheme
Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit.
A LAPACK subroutine name is in the form pmmaaa, where:
- p is a one-letter code denoting the type of numerical constants used. S and D stand for real floating-point arithmetic in single and double precision respectively, while C and Z stand for complex arithmetic in single and double precision respectively. The newer version, LAPACK95, uses generic subroutines in order to overcome the need to explicitly specify the data type.
- mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kinds of matrices are listed in the LAPACK Users' Guide; the actual data are stored in a different format depending on the specific kind. For example, when the code DI is given, the subroutine expects a vector of length n containing the elements on the diagonal, while when the code GE is given, the subroutine expects an n×n array containing the entries of the matrix.
- aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine; e.g., SV denotes a subroutine to solve a linear system, while R denotes a rank-1 update.
For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV.
Details on this scheme can be found in the Naming scheme section of the LAPACK Users' Guide.
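As a concrete illustration of the naming convention, the minimal C sketch below solves a small general double-precision system by calling DGESV through the LAPACKE C interface (LAPACKE_dgesv is the standard C wrapper around the Fortran routine). The matrix values and the build command in the comments are illustrative assumptions, not part of LAPACK itself.

```c
/* Minimal sketch: solve A*x = b with DGESV via the LAPACKE C interface.
 * Assumes LAPACKE is installed; build with something like
 *   cc dgesv_example.c -llapacke -llapack -lblas                      */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    /* D = double-precision real, GE = general matrix, SV = solve. */
    double a[3 * 3] = { 2.0, 1.0, 1.0,
                        1.0, 3.0, 2.0,
                        1.0, 0.0, 0.0 };   /* row-major 3x3 matrix A */
    double b[3]     = { 4.0, 5.0, 6.0 };   /* right-hand side, overwritten by x */
    lapack_int ipiv[3];                    /* pivot indices from the LU factorization */

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, a, 3, ipiv, b, 1);
    if (info != 0) {
        fprintf(stderr, "DGESV failed, info = %d\n", (int)info);
        return 1;
    }
    printf("x = %g %g %g\n", b[0], b[1], b[2]);
    return 0;
}
```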
Use with other programming languages
Many programming environments today support the use of libraries with C bindings. The LAPACK routines can be used like C functions if a few restrictions are observed.
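To give a sense of those restrictions, the hedged sketch below calls DGESV directly through its Fortran symbol rather than through a wrapper: every argument is passed by reference, arrays are laid out in column-major (Fortran) order, and the symbol is assumed to be the lowercase routine name with a trailing underscore and 32-bit integers, a common convention (e.g. with gfortran) that nevertheless depends on the compiler and build.

```c
/* Sketch of calling the Fortran routine DGESV directly from C.
 * The trailing underscore and 32-bit integer arguments are assumptions
 * matching common builds (e.g. gfortran); they are not guaranteed.     */
#include <stdio.h>

/* Fortran passes every argument by reference. */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

int main(void) {
    /* Column-major storage: a[0], a[1] form the first COLUMN of A. */
    double a[2 * 2] = { 4.0, 2.0,     /* column 1 */
                        1.0, 3.0 };   /* column 2 */
    double b[2] = { 5.0, 7.0 };       /* right-hand side, overwritten by x */
    int n = 2, nrhs = 1, lda = 2, ldb = 2, ipiv[2], info;

    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    if (info != 0) {
        fprintf(stderr, "dgesv_ failed, info = %d\n", info);
        return 1;
    }
    printf("x = %g %g\n", b[0], b[1]);
    return 0;
}
```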
Several alternative language bindings are also available.
Implementations
As with BLAS, LAPACK is frequently forked or rewritten to provide better performance on specific systems. Some of the implementations are:
- Accelerate
- Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.[3][4]
- Netlib LAPACK
- The official LAPACK.
- Netlib ScaLAPACK
- Scalable (distributed-memory) LAPACK, built on top of PBLAS.
- Intel MKL
- Intel's Math Kernel Library, which provides tuned BLAS and LAPACK routines for its x86 CPUs.
- OpenBLAS
- Open-source reimplementation of BLAS and LAPACK.
Since LAPACK uses BLAS for the heavy lifting, simply linking to a better-tuned BLAS implementation is usually enough to improve performance substantially. As a result, LAPACK is not reimplemented as often as BLAS is.
Similar projects
These projects provide a similar functionality to LAPACK, but the main interface differs from that of LAPACK:
- Libflame
- A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation.[5]
- Eigen
- A header-only library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility.
- MAGMA
- The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK, but for heterogeneous and hybrid architectures, including multicore systems accelerated with GPGPUs.
- PLASMA
- The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement for LAPACK on multi-core architectures. PLASMA is a software framework for the development of asynchronous operations, featuring out-of-order scheduling with a runtime scheduler called QUARK that can be used for any code that expresses its dependencies as a directed acyclic graph.[6]
See also
- List of numerical libraries
- Math Kernel Library (MKL)
- NAG Numerical Library
- SLATEC, a FORTRAN 77 library of mathematical and statistical routines
- QUADPACK, a FORTRAN 77 library for numerical integration
References
- "LAPACK 3.2 Release Notes". 16 November 2008.
- "PLAPACK: Parallel Linear Algebra Package". www.cs.utexas.edu. University of Texas at Austin. 12 June 2007. Retrieved 20 April 2017.
- "Guides and Sample Code". developer.apple.com. Retrieved 2017-07-07.
- "Guides and Sample Code". developer.apple.com. Retrieved 2017-07-07.
- "amd/libflame: High-performance object-based library for DLA computations". GitHub. AMD. 25 August 2020.
- "ICL". icl.eecs.utk.edu. Retrieved 2017-07-07.
Further reading
- Anderson, E.; Bai, Z.; Bischof, C.; Blackford, S.; Demmel, J.; Dongarra, J.; Du Croz, J.; Greenbaum, A.; Hammarling, S.; McKenney, A.; Sorensen, D. (1999). LAPACK Users' Guide (Third ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. ISBN 0-89871-447-8.