# OpenBLAS

[![Join the chat at https://gitter.im/xianyi/OpenBLAS](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/xianyi/OpenBLAS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Cirrus CI: [![Build Status](https://api.cirrus-ci.com/github/xianyi/OpenBLAS.svg?branch=develop)](https://cirrus-ci.com/github/xianyi/OpenBLAS)

[![Build Status](https://dev.azure.com/xianyi/OpenBLAS/_apis/build/status/xianyi.OpenBLAS?branchName=develop)](https://dev.azure.com/xianyi/OpenBLAS/_build/latest?definitionId=1&branchName=develop)

OSUOSL POWERCI [![Build Status](https://powerci.osuosl.org/buildStatus/icon?job=OpenBLAS_gh%2Fdevelop)](http://powerci.osuosl.org/job/OpenBLAS_gh/job/develop/)

OSUOSL IBMZ-CI [![Build Status](http://ibmz-ci.osuosl.org/buildStatus/icon?job=OpenBLAS-Z%2Fdevelop)](http://ibmz-ci.osuosl.org/job/OpenBLAS-Z/job/develop/)

## Introduction

OpenBLAS is an optimized BLAS (Basic Linear Algebra Subprograms) library based on the GotoBLAS2 1.13 BSD version.

For more information about OpenBLAS, please see:

- The documentation at [openmathlib.org/OpenBLAS/docs/](http://www.openmathlib.org/OpenBLAS/docs),
- The home page at [openmathlib.org/OpenBLAS/](http://www.openmathlib.org/OpenBLAS).

For a general introduction to the BLAS routines, please refer to the extensive documentation of their reference implementation hosted at netlib:
<https://www.netlib.org/blas>. On that site you will likewise find documentation for the reference implementation of the higher-level library LAPACK - the **L**inear **A**lgebra **Pack**age that comes included with OpenBLAS. If you are looking for a general primer or refresher on Linear Algebra, the set of six
20-minute lecture videos by Prof. Gilbert Strang on either MIT OpenCourseWare [here](https://ocw.mit.edu/resources/res-18-010-a-2020-vision-of-linear-algebra-spring-2020/) or YouTube [here](https://www.youtube.com/playlist?list=PLUl4u3cNGP61iQEFiWLE21EJCxwmWvvek) may be helpful.

## Binary Packages

We provide official binary packages for the following platforms:

* Windows x86/x86_64
* Windows arm64 (woa)

You can download them from [file hosting on sourceforge.net](https://sourceforge.net/projects/openblas/files/) or from the [Releases section of the GitHub project page](https://github.com/OpenMathLib/OpenBLAS/releases).

OpenBLAS is also packaged for many package managers - see [the installation section of the docs](http://www.openmathlib.org/OpenBLAS/docs/install/) for details.

## Installation from Source

Obtain the source code from https://github.com/OpenMathLib/OpenBLAS/. Note that the default branch
is `develop` (a `master` branch is still present, but far out of date).

Build-time parameters can be chosen in `Makefile.rule`; see there for a short description of each option.
Most options can also be given directly on the command line as parameters to your `make` or `cmake` invocation.

### Dependencies

Building OpenBLAS requires the following to be installed:

* GNU Make or CMake
* A C compiler, e.g. GCC or Clang
* A Fortran compiler (optional, for LAPACK)

In general, using a recent version of the compiler is strongly recommended.
If a Fortran compiler is not available, it is possible to compile an older version of the included LAPACK
that has been machine-translated to C.

### Normal compile

Simply invoking `make` (or `gmake` on BSD) will detect the CPU automatically.
To set a specific target CPU, use `make TARGET=xxx`, e.g. `make TARGET=NEHALEM`.
The full target list is in the file `TargetList.txt`; other build options are documented in `Makefile.rule` and
can either be set there (typically by removing the comment character from the respective line) or given on the
`make` command line.

Note that when you run `make install` after building, you need to repeat all command line options you provided to `make`
in the build step, as some settings, like the supported maximum number of threads, are automatically derived from the
build host by default, which might not be what you want.
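
For example, a build and its matching install invocation might look like this (the target, thread count and install path are only illustrative):

```sh
make TARGET=NEHALEM NUM_THREADS=16
make TARGET=NEHALEM NUM_THREADS=16 PREFIX=/opt/OpenBLAS install
```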

For building with `cmake`, the usual conventions apply, i.e. create a build directory either underneath the toplevel
OpenBLAS source directory or separate from it, and invoke `cmake` there with the path to the source tree and any
build options you plan to set.
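
A minimal out-of-tree sketch (the chosen target and options are just examples, not requirements):

```sh
mkdir build && cd build
cmake .. -DTARGET=HASWELL -DBUILD_SHARED_LIBS=ON   # configure with the path to the source tree
cmake --build . -j                                 # compile
cmake --install . --prefix /opt/OpenBLAS           # optional: install to a chosen prefix
```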

For more details, see the [Building from source](http://www.openmathlib.org/OpenBLAS/docs/install/#building-from-source)
section in the docs.

### Cross compile

Set `CC` and `FC` to point to the cross toolchains, and if you use `make`, also set `HOSTCC` to your host C compiler.
The target must be specified explicitly when cross compiling.

Examples:

* On a Linux system, cross-compiling to an older MIPS64 router board:

```sh
make BINARY=64 CC=mipsisa64r6el-linux-gnuabi64-gcc FC=mipsisa64r6el-linux-gnuabi64-gfortran HOSTCC=gcc TARGET=P6600
```

* or to a Windows x64 host:

```sh
make CC="x86_64-w64-mingw32-gcc -Bstatic" FC="x86_64-w64-mingw32-gfortran -static-libgfortran" TARGET=HASWELL BINARY=64 CROSS=1 NUM_THREADS=20 CONSISTENT_FPCSR=1 HOSTCC=gcc
```

You can find instructions for other cases both in the "Supported CPUs and Operating Systems" section below and in
the [Building from source docs](http://www.openmathlib.org/OpenBLAS/docs/install).
The `.yml` scripts included with the sources (which contain the build scripts for the continuous integration (CI)
tests run automatically on every proposed change to the sources) may also provide additional hints.

When compiling for a more modern CPU target of the same architecture, e.g. `TARGET=SKYLAKEX` on a `HASWELL` host,
the option `CROSS=1` can be used to suppress the automatic invocation of the tests at the end of the build.
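
For instance, a sketch of such a build on the Haswell host mentioned above:

```sh
# Build Skylake-X kernels on an older x86-64 host; CROSS=1 skips the test run,
# which would otherwise fail on hardware without AVX-512.
make TARGET=SKYLAKEX CROSS=1
```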

### Debug version

A debug version can be built using `make DEBUG=1`.

### Compile with MASS support on Power CPU (optional)

The [IBM MASS](https://www.ibm.com/support/home/product/W511326D80541V01/other_software/mathematical_acceleration_subsystem) library consists of a set of mathematical functions for C, C++, and Fortran applications that are tuned for optimum performance on POWER architectures.
OpenBLAS with MASS requires a 64-bit, little-endian OS on POWER.
The library can be installed as shown:

* On Ubuntu:

```sh
wget -q http://public.dhe.ibm.com/software/server/POWER/Linux/xl-compiler/eval/ppc64le/ubuntu/public.gpg -O- | sudo apt-key add -
echo "deb http://public.dhe.ibm.com/software/server/POWER/Linux/xl-compiler/eval/ppc64le/ubuntu/ trusty main" | sudo tee /etc/apt/sources.list.d/ibm-xl-compiler-eval.list
sudo apt-get update
sudo apt-get install libxlmass-devel.8.1.5
```

* On RHEL/CentOS:

```sh
wget http://public.dhe.ibm.com/software/server/POWER/Linux/xl-compiler/eval/ppc64le/rhel7/repodata/repomd.xml.key
sudo rpm --import repomd.xml.key
wget http://public.dhe.ibm.com/software/server/POWER/Linux/xl-compiler/eval/ppc64le/rhel7/ibm-xl-compiler-eval.repo
sudo cp ibm-xl-compiler-eval.repo /etc/yum.repos.d/
sudo yum install libxlmass-devel.8.1.5
```

After installing the MASS library, compile OpenBLAS with `USE_MASS=1`.
For example, to compile on Power8 with MASS support: `make USE_MASS=1 TARGET=POWER8`.

### Install to a specific directory (optional)

Use `PREFIX=` when invoking `make`, for example:

```sh
make install PREFIX=your_installation_directory
```

(along with all the options you added on the `make` command line in the preceding build step)

The default installation directory is `/opt/OpenBLAS`.

## Supported CPUs and Operating Systems

Please read `GotoBLAS_01Readme.txt` for older CPU models already supported by the 2010 GotoBLAS.

### Additional supported CPUs

#### x86/x86-64

- **Intel Xeon 56xx (Westmere)**: Used GotoBLAS2 Nehalem codes.
- **Intel Sandy Bridge**: Optimized Level-3 and Level-2 BLAS with AVX on x86-64.
- **Intel Haswell**: Optimized Level-3 and Level-2 BLAS with AVX2 and FMA on x86-64.
- **Intel Skylake-X**: Optimized Level-3 and Level-2 BLAS with AVX512 and FMA on x86-64.
- **Intel Cooper Lake**: as Skylake-X with improved BFLOAT16 support.
- **AMD Bobcat**: Used GotoBLAS2 Barcelona codes.
- **AMD Bulldozer**: x86-64 ?GEMM FMA4 kernels. (Thanks to Werner Saar.)
- **AMD Piledriver**: Uses Bulldozer codes with some optimizations.
- **AMD Steamroller**: Uses Bulldozer codes with some optimizations.
- **AMD Zen**: Uses Haswell codes with some optimizations for Zen 2/3 (use the SKYLAKEX target for Zen 4).

#### MIPS32

- **MIPS 1004K**: uses P5600 codes
- **MIPS 24K**: uses P5600 codes

#### MIPS64

- **ICT Loongson 3A**: Optimized Level-3 BLAS and parts of Level-1,2.
- **ICT Loongson 3B**: Experimental

#### ARM

- **ARMv6**: Optimized BLAS for vfpv2 and vfpv3-d16 (e.g. BCM2835, Cortex M0+)
- **ARMv7**: Optimized BLAS for vfpv3-d32 (e.g. Cortex A8, A9 and A15)

#### ARM64

- **ARMv8**: Basic ARMV8 with small caches, optimized Level-3 and Level-2 BLAS
- **Cortex-A53**: same as ARMV8 (different cpu specifications)
- **Cortex-A55**: same as ARMV8 (different cpu specifications)
- **Cortex-A57**: Optimized Level-3 and Level-2 functions
- **Cortex-A72**: same as A57 (different cpu specifications)
- **Cortex-A73**: same as A57 (different cpu specifications)
- **Cortex-A76**: same as A57 (different cpu specifications)
- **Falkor**: same as A57 (different cpu specifications)
- **ThunderX**: Optimized some Level-1 functions
- **ThunderX2T99**: Optimized Level-3 BLAS and parts of Levels 1 and 2
- **ThunderX3T110**
- **TSV110**: Optimized some Level-3 helper functions
- **EMAG 8180**: preliminary support based on A57
- **Neoverse N1**: (AWS Graviton2) preliminary support
- **Neoverse V1**: (AWS Graviton3) optimized Level-3 BLAS
- **Apple Vortex**: preliminary support based on ThunderX2/3
- **A64FX**: preliminary support, optimized Level-3 BLAS
- **ARMV8SVE**: any ARMV8 cpu with SVE extensions

#### PPC/PPC64

- **POWER8**: Optimized BLAS, only for PPC64LE (Little Endian), only with `USE_OPENMP=1`
- **POWER9**: Optimized Level-3 BLAS (real) and some Level-1,2. PPC64LE with OpenMP only.
- **POWER10**: Optimized Level-3 BLAS including SBGEMM and some Level-1,2.
- **AIX**: Dynamic architecture with OpenXL and OpenMP.

```sh
make CC=ibm-clang_r FC=xlf_r TARGET=POWER7 BINARY=64 USE_OPENMP=1 INTERFACE64=1 DYNAMIC_ARCH=1 USE_THREAD=1
```

#### IBM zEnterprise System

- **Z13**: Optimized Level-3 BLAS and Level-1,2
- **Z14**: Optimized Level-3 BLAS and (single precision) Level-1,2

#### RISC-V

- **C910V**: Optimized Level-3 BLAS (real) and Level-1,2 using the RISC-V Vector extension 0.7.1.

```sh
make HOSTCC=gcc TARGET=C910V CC=riscv64-unknown-linux-gnu-gcc FC=riscv64-unknown-linux-gnu-gfortran
```

(also known to work on the C906 as long as you use only single-precision functions - its instruction set support appears to be incomplete in double precision)

- **x280**: Level-3 BLAS and Level-1,2 optimized using the RISC-V Vector extension 1.0.

```sh
make HOSTCC=gcc TARGET=x280 NUM_THREADS=8 CC=riscv64-unknown-linux-gnu-clang FC=riscv64-unknown-linux-gnu-gfortran
```

- **ZVL???B**: Level-3 BLAS and Level-1,2 including vectorised kernels targeting generic RISC-V cores with vector support and registers of at least the corresponding width; ZVL128B and ZVL256B are available. For example:

```sh
make TARGET=RISCV64_ZVL256B CFLAGS="-DTARGET=RISCV64_ZVL256B" \
BINARY=64 ARCH=riscv64 CC='clang -target riscv64-unknown-linux-gnu' \
AR=riscv64-unknown-linux-gnu-ar AS=riscv64-unknown-linux-gnu-gcc \
LD=riscv64-unknown-linux-gnu-gcc FC=riscv64-unknown-linux-gnu-gfortran \
HOSTCC=gcc HOSTFC=gfortran -j
```

#### LOONGARCH64

- **LA64_GENERIC**: Optimized Level-3, Level-2 and Level-1 BLAS with scalar instructions

```sh
make HOSTCC=gcc TARGET=LA64_GENERIC CC=loongarch64-unknown-linux-gnu-gcc FC=loongarch64-unknown-linux-gnu-gfortran USE_SIMPLE_THREADED_LEVEL3=1
```

The old-style `TARGET=LOONGSONGENERIC` is still supported.

- **LA264**: Optimized Level-3, Level-2 and Level-1 BLAS with LSX instructions

```sh
make HOSTCC=gcc TARGET=LA264 CC=loongarch64-unknown-linux-gnu-gcc FC=loongarch64-unknown-linux-gnu-gfortran USE_SIMPLE_THREADED_LEVEL3=1
```

The old-style `TARGET=LOONGSON2K1000` is still supported.

- **LA464**: Optimized Level-3, Level-2 and Level-1 BLAS with LASX instructions

```sh
make HOSTCC=gcc TARGET=LA464 CC=loongarch64-unknown-linux-gnu-gcc FC=loongarch64-unknown-linux-gnu-gfortran USE_SIMPLE_THREADED_LEVEL3=1
```

The old-style `TARGET=LOONGSON3R5` is still supported.

### Support for multiple targets in a single library

OpenBLAS can be built for multiple targets with runtime detection of the target CPU by specifying `DYNAMIC_ARCH=1` in `Makefile.rule`, on the gmake command line, or as `-DDYNAMIC_ARCH=TRUE` in CMake.

For **x86_64**, the list of targets this activates contains Prescott, Core2, Nehalem, Barcelona, Sandybridge, Bulldozer, Piledriver, Steamroller, Excavator, Haswell, Zen, SkylakeX, Cooper Lake and Sapphire Rapids. For CPU generations not included in this list, the corresponding older model is used. If you also specify `DYNAMIC_OLDER=1`, specific support for Penryn, Dunnington, Opteron, Opteron/SSE3, Bobcat, Atom and Nano is added. Finally, there is an option `DYNAMIC_LIST` that allows specifying an individual list of targets to include instead of the default.

`DYNAMIC_ARCH` is also supported on **x86**, where it translates to Katmai, Coppermine, Northwood, Prescott, Banias, Core2, Penryn, Dunnington, Nehalem, Athlon, Opteron, Opteron_SSE3, Barcelona, Bobcat, Atom and Nano.

On **ARMV8**, it enables support for CortexA53, CortexA57, CortexA72, CortexA73, Falkor, ThunderX, ThunderX2T99, TSV110 as well as generic ARMV8 CPUs. If compiler support for SVE is available at build time, support for NeoverseN2, NeoverseV1 as well as generic ArmV8SVE targets is also enabled.

For **POWER**, the list encompasses POWER6, POWER8 and POWER9. POWER10 is additionally available if a sufficiently recent compiler is used for the build.

On **ZARCH**, it comprises Z13 and Z14 as well as generic zarch support.

On **riscv64**, `DYNAMIC_ARCH` enables support for riscv64_zvl128b and riscv64_zvl256b in addition to generic riscv64 support. A compiler that supports RVV 1.0 is required to build OpenBLAS for riscv64 when `DYNAMIC_ARCH` is enabled.

On **LoongArch64**, it comprises LA264 and LA464 as well as generic LoongArch64 support.

The `TARGET` option can - and usually **should** - be used in conjunction with `DYNAMIC_ARCH=1` to specify which CPU model should be assumed for all the common code in the library; usually you will want to set this to the oldest model you expect to encounter.
Failure to specify this may lead to advanced instructions being used by the compiler, just because the build host happens to support them. This is most likely to happen when aggressive optimization options are in effect, and the resulting library may then crash with an illegal instruction error on weaker hardware, before it even reaches the BLAS routines specifically included for that CPU.

Please note that it is not possible to combine support for different architectures in the same library, so there is no combined 32- and 64-bit or x86_64 and arm64 build.
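
As an illustration, a multi-target build with a conservative baseline might look like this (the baseline target is just an example):

```sh
# gmake: runtime CPU detection, with NEHALEM as the oldest model the common code assumes
make DYNAMIC_ARCH=1 TARGET=NEHALEM

# roughly equivalent CMake configuration
cmake .. -DDYNAMIC_ARCH=TRUE -DTARGET=NEHALEM
```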

### Supported OS

- **GNU/Linux**
- **MinGW or Visual Studio (CMake)/Windows**: Please read <https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-in-Microsoft-Visual-Studio>.
- **Darwin/macOS/OSX/iOS**: Experimental. Although GotoBLAS2 already supports Darwin, we are not OSX/iOS experts.
- **FreeBSD**: Supported by the community. We don't actively test the library on this OS.
- **OpenBSD**: Supported by the community. We don't actively test the library on this OS.
- **NetBSD**: Supported by the community. We don't actively test the library on this OS.
- **DragonFly BSD**: Supported by the community. We don't actively test the library on this OS.
- **Android**: Supported by the community. Please read <https://github.com/xianyi/OpenBLAS/wiki/How-to-build-OpenBLAS-for-Android>.
- **AIX**: Supported on PPC up to POWER10.
- **Haiku**: Supported by the community. We don't actively test the library on this OS.
- **SunOS**: Supported by the community. We don't actively test the library on this OS.
- **Cortex-M**: Supported by the community. Please read <https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-on-Cortex-M>.

## Usage

Statically link with `libopenblas.a` or dynamically link with `-lopenblas` if OpenBLAS was
compiled as a shared library.
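
For example, assuming the library and headers are on the default search paths (otherwise add the appropriate `-I`/`-L` flags), a C program could be built like this:

```sh
# dynamic linking against the shared library
gcc myprog.c -o myprog -lopenblas

# static linking against libopenblas.a (a gfortran-built LAPACK may also need -lgfortran)
gcc myprog.c -o myprog /opt/OpenBLAS/lib/libopenblas.a -lpthread -lm
```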

### Setting the number of threads using environment variables

Environment variables are used to specify a maximum number of threads.
For example,

```sh
export OPENBLAS_NUM_THREADS=4
export GOTO_NUM_THREADS=4
export OMP_NUM_THREADS=4
```

The priorities are `OPENBLAS_NUM_THREADS` > `GOTO_NUM_THREADS` > `OMP_NUM_THREADS`.

If you compile this library with `USE_OPENMP=1`, you should set the `OMP_NUM_THREADS`
environment variable; OpenBLAS ignores `OPENBLAS_NUM_THREADS` and `GOTO_NUM_THREADS` when
compiled with `USE_OPENMP=1`.

### Setting the number of threads at runtime

We provide the following functions to control the number of threads at runtime:

```c
void goto_set_num_threads(int num_threads);
void openblas_set_num_threads(int num_threads);
```

Note that these are only used once at library initialization, and are not available for
fine-tuning thread numbers in individual BLAS calls.

If you compile this library with `USE_OPENMP=1`, you should use the above functions too.
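
A minimal usage sketch (assuming the OpenBLAS headers and library are installed; compile and link with `-lopenblas`):

```c
#include <stdio.h>
#include <cblas.h>   /* OpenBLAS's cblas.h also declares openblas_set_num_threads */

int main(void) {
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4] = {0};

    /* Limit OpenBLAS to 2 threads before the first BLAS call. */
    openblas_set_num_threads(2);

    /* C = 1.0 * A * B + 0.0 * C for 2x2 row-major matrices. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);

    printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```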

## Reporting bugs

Please submit an issue at https://github.com/OpenMathLib/OpenBLAS/issues.

## Contact

* GitHub discussions: https://github.com/OpenMathLib/OpenBLAS/discussions
* OpenBLAS users mailing list: https://groups.google.com/forum/#!forum/openblas-users
* OpenBLAS developers mailing list: https://groups.google.com/forum/#!forum/openblas-dev

## Change log

Please see `Changelog.txt`.

## Troubleshooting

* Please read the [FAQ](http://www.openmathlib.org/OpenBLAS/docs/faq) section of the docs first.
* Please use GCC version 4.6 and above to compile Sandy Bridge AVX kernels on Linux/MinGW/BSD.
* Please use Clang version 3.1 and above to compile the library on the Sandy Bridge microarchitecture;
  Clang 3.0 will generate the wrong AVX binary code.
* Please use GCC version 6 or LLVM version 6 and above to compile Skylake/Cooper Lake AVX512 kernels.
* Please use LLVM version 18 and above (version 19 and above on Windows) if you plan to use
  its new flang compiler for Fortran.
* Please use GCC version 11 and above to compile OpenBLAS on the POWER architecture.
* The number of CPUs/cores should be less than or equal to 256. On Linux `x86_64` (`amd64`),
  there is experimental support for up to 1024 CPUs/cores and 128 NUMA nodes if you build
  the library with `BIGNUMA=1`.
* OpenBLAS does not set processor affinity by default.
  On Linux, you can enable processor affinity by commenting out the line `NO_AFFINITY=1` in
  `Makefile.rule`. However, note that this may cause
  [a conflict with R parallel](https://stat.ethz.ch/pipermail/r-sig-hpc/2012-April/001348.html).
* On Loongson 3A, `make test` may fail with a `pthread_create` error (`EAGAIN`).
  However, the same test case will pass when run directly from the shell.

## Contributing

1. [Check for open issues](https://github.com/OpenMathLib/OpenBLAS/issues) or open a fresh issue
   to start a discussion around a feature idea or a bug.
2. Fork the [OpenBLAS](https://github.com/OpenMathLib/OpenBLAS) repository to start making your changes.
3. Write a test which shows that the bug was fixed or that the feature works as expected.
4. Send a pull request. Make sure to add yourself to `CONTRIBUTORS.md`.

## Donation

Please see [the donations section](http://www.openmathlib.org/OpenBLAS/docs/about/#donations) in the docs.