
Use llama instead of libllama in `[DllImport]`

This means Windows users no longer need to rename the DLL, and native llama.cpp builds can be dropped in unmodified, even on Windows.

I also updated the documentation to remove the references to renaming the files, since the names now match.

Fixes #463
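
For illustration, this is what the change amounts to at a call site. A minimal sketch, not LLamaSharp's actual declaration; `llama_print_system_info` is a real llama.cpp export chosen only as an example:

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeSketch
{
    // Previously "libllama": upstream llama.cpp produces llama.dll on Windows,
    // so users had to rename the file before the runtime could find it.
    // With "llama", the .NET loader resolves llama.dll on Windows directly,
    // and still probes the lib-prefixed forms (libllama.so / libllama.dylib)
    // on Linux and macOS, so non-Windows loading keeps working.
    private const string LibraryName = "llama";

    [DllImport(LibraryName, CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr llama_print_system_info();

    public static string? SystemInfo()
        => Marshal.PtrToStringAnsi(llama_print_system_info());
}
```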
tags/v0.10.0
Jason Couture, 1 year ago
commit db7e1e88f8
15 changed files with 28 additions and 28 deletions
  1. .github/workflows/compile.yml (+6, -6)
  2. CONTRIBUTING.md (+1, -1)
  3. LLama/LLamaSharp.Runtime.targets (+12, -12)
  4. LLama/Native/NativeApi.Load.cs (+1, -1)
  5. LLama/runtimes/build/LLamaSharp.Backend.Cpu.nuspec (+4, -4)
  6. LLama/runtimes/build/LLamaSharp.Backend.Cuda11.nuspec (+1, -1)
  7. LLama/runtimes/build/LLamaSharp.Backend.Cuda12.nuspec (+1, -1)
  8. LLama/runtimes/deps/avx/llama.dll (+0, -0)
  9. LLama/runtimes/deps/avx2/llama.dll (+0, -0)
  10. LLama/runtimes/deps/avx512/llama.dll (+0, -0)
  11. LLama/runtimes/deps/cu11.7.1/llama.dll (+0, -0)
  12. LLama/runtimes/deps/cu12.1.0/llama.dll (+0, -0)
  13. LLama/runtimes/deps/llama.dll (+0, -0)
  14. docs/ContributingGuide.md (+1, -1)
  15. docs/index.md (+1, -1)

.github/workflows/compile.yml (+6, -6)

@@ -204,18 +204,18 @@ jobs:
   cp artifacts/llama-bin-linux-avx2-x64.so/libllama.so deps/avx2/libllama.so
   cp artifacts/llama-bin-linux-avx512-x64.so/libllama.so deps/avx512/libllama.so

-  cp artifacts/llama-bin-win-noavx-x64.dll/llama.dll deps/libllama.dll
-  cp artifacts/llama-bin-win-avx-x64.dll/llama.dll deps/avx/libllama.dll
-  cp artifacts/llama-bin-win-avx2-x64.dll/llama.dll deps/avx2/libllama.dll
-  cp artifacts/llama-bin-win-avx512-x64.dll/llama.dll deps/avx512/libllama.dll
+  cp artifacts/llama-bin-win-noavx-x64.dll/llama.dll deps/llama.dll
+  cp artifacts/llama-bin-win-avx-x64.dll/llama.dll deps/avx/llama.dll
+  cp artifacts/llama-bin-win-avx2-x64.dll/llama.dll deps/avx2/llama.dll
+  cp artifacts/llama-bin-win-avx512-x64.dll/llama.dll deps/avx512/llama.dll

   cp artifacts/llama-bin-osx-arm64.dylib/libllama.dylib deps/osx-arm64/libllama.dylib
   cp artifacts/ggml-metal.metal/ggml-metal.metal deps/osx-arm64/ggml-metal.metal
   cp artifacts/llama-bin-osx-x64.dylib/libllama.dylib deps/osx-x64/libllama.dylib

-  cp artifacts/llama-bin-win-cublas-cu11.7.1-x64.dll/llama.dll deps/cu11.7.1/libllama.dll
+  cp artifacts/llama-bin-win-cublas-cu11.7.1-x64.dll/llama.dll deps/cu11.7.1/llama.dll
   cp artifacts/llama-bin-linux-cublas-cu11.7.1-x64.so/libllama.so deps/cu11.7.1/libllama.so
-  cp artifacts/llama-bin-win-cublas-cu12.1.0-x64.dll/llama.dll deps/cu12.1.0/libllama.dll
+  cp artifacts/llama-bin-win-cublas-cu12.1.0-x64.dll/llama.dll deps/cu12.1.0/llama.dll
   cp artifacts/llama-bin-linux-cublas-cu12.1.0-x64.so/libllama.so deps/cu12.1.0/libllama.so

   - name: Upload artifacts


CONTRIBUTING.md (+1, -1)

@@ -16,7 +16,7 @@ When building from source, please add `-DBUILD_SHARED_LIBS=ON` to the cmake inst
 cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON
 ```

-After running `cmake --build . --config Release`, you could find the `llama.dll`, `llama.so` or `llama.dylib` in your build directory. After pasting it to `LLamaSharp/LLama/runtimes` and renaming it to `libllama.dll`, `libllama.so` or `libllama.dylib`, you can use it as the native library in LLamaSharp.
+After running `cmake --build . --config Release`, you could find the `llama.dll`, `llama.so` or `llama.dylib` in your build directory. After pasting it to `LLamaSharp/LLama/runtimes` you can use it as the native library in LLamaSharp.


 ## Add a new feature to LLamaSharp
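
The updated paragraph above describes a plain copy with no rename step. As a quick sanity check that a drop-in build resolves under the bare name, something like the following works (an illustrative sketch, not part of this commit):

```csharp
using System;
using System.Runtime.InteropServices;

class LoadCheck
{
    static void Main()
    {
        // "llama" resolves to llama.dll on Windows; on Linux/macOS the runtime
        // also probes the lib-prefixed forms (libllama.so / libllama.dylib).
        if (NativeLibrary.TryLoad("llama", typeof(LoadCheck).Assembly,
                DllImportSearchPath.AssemblyDirectory, out IntPtr handle))
        {
            Console.WriteLine("Native llama library loaded.");
            NativeLibrary.Free(handle);
        }
        else
        {
            Console.WriteLine("Native llama library not found.");
        }
    }
}
```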


LLama/LLamaSharp.Runtime.targets (+12, -12)

@@ -4,29 +4,29 @@
   </PropertyGroup>
   <ItemGroup Condition="'$(IncludeBuiltInRuntimes)' == 'true'">

-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/noavx/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/noavx/llama.dll</Link>
     </None>
-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/avx/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/avx/llama.dll</Link>
     </None>
-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx2/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx2/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/avx2/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/avx2/llama.dll</Link>
     </None>
-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx512/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/avx512/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/avx512/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/avx512/llama.dll</Link>
     </None>
-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/cu11.7.1/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/cu11.7.1/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/cuda11/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/cuda11/llama.dll</Link>
     </None>
-    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/cu12.1.0/libllama.dll">
+    <None Include="$(MSBuildThisFileDirectory)runtimes/deps/cu12.1.0/llama.dll">
       <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
-      <Link>runtimes/win-x64/native/cuda12/libllama.dll</Link>
+      <Link>runtimes/win-x64/native/cuda12/llama.dll</Link>
     </None>

     <None Include="$(MSBuildThisFileDirectory)runtimes/deps/libllama.so">


LLama/Native/NativeApi.Load.cs (+1, -1)

@@ -329,7 +329,7 @@ namespace LLama.Native
 #endif
 }

-    internal const string libraryName = "libllama";
+    internal const string libraryName = "llama";
     private const string cudaVersionFile = "version.json";
     private const string loggingPrefix = "[LLamaSharp Native]";
     private static bool enableLogging = false;
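
Because `libraryName` is now the bare `llama`, platform differences are handled entirely by the runtime's default probing rules. If explicit control were ever needed, a `DllImportResolver` could pin the exact file name per platform; the following is a hedged sketch of that pattern, not LLamaSharp's actual loader:

```csharp
using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class ResolverSketch
{
    public static void Install(Assembly assembly)
    {
        NativeLibrary.SetDllImportResolver(assembly, (name, asm, searchPath) =>
        {
            if (name != "llama")
                return IntPtr.Zero; // defer to default resolution

            // Pick the platform-specific file name for the bare "llama" name.
            string fileName =
                RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "llama.dll" :
                RuntimeInformation.IsOSPlatform(OSPlatform.OSX) ? "libllama.dylib" :
                "libllama.so";

            // Returning IntPtr.Zero lets the runtime's default probing run.
            return NativeLibrary.TryLoad(fileName, asm, searchPath, out IntPtr handle)
                ? handle
                : IntPtr.Zero;
        });
    }
}
```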


LLama/runtimes/build/LLamaSharp.Backend.Cpu.nuspec (+4, -4)

@@ -18,10 +18,10 @@
   <files>
     <file src="LLamaSharpBackend.props" target="build/netstandard2.0/LLamaSharp.Backend.Cpu.props" />

-    <file src="runtimes/deps/libllama.dll" target="runtimes\win-x64\native\libllama.dll" />
-    <file src="runtimes/deps/avx/libllama.dll" target="runtimes\win-x64\native\avx\libllama.dll" />
-    <file src="runtimes/deps/avx2/libllama.dll" target="runtimes\win-x64\native\avx2\libllama.dll" />
-    <file src="runtimes/deps/avx512/libllama.dll" target="runtimes\win-x64\native\avx512\libllama.dll" />
+    <file src="runtimes/deps/llama.dll" target="runtimes\win-x64\native\llama.dll" />
+    <file src="runtimes/deps/avx/llama.dll" target="runtimes\win-x64\native\avx\llama.dll" />
+    <file src="runtimes/deps/avx2/llama.dll" target="runtimes\win-x64\native\avx2\llama.dll" />
+    <file src="runtimes/deps/avx512/llama.dll" target="runtimes\win-x64\native\avx512\llama.dll" />

     <file src="runtimes/deps/libllama.so" target="runtimes\linux-x64\native\libllama.so" />
     <file src="runtimes/deps/avx/libllama.so" target="runtimes\linux-x64\native\avx\libllama.so" />


LLama/runtimes/build/LLamaSharp.Backend.Cuda11.nuspec (+1, -1)

@@ -18,7 +18,7 @@
   <files>
     <file src="LLamaSharpBackend.props" target="build/netstandard2.0/LLamaSharp.Backend.Cuda11.props" />
-    <file src="runtimes/deps/cu11.7.1/libllama.dll" target="runtimes\win-x64\native\cuda11\libllama.dll" />
+    <file src="runtimes/deps/cu11.7.1/llama.dll" target="runtimes\win-x64\native\cuda11\llama.dll" />
     <file src="runtimes/deps/cu11.7.1/libllama.so" target="runtimes\linux-x64\native\cuda11\libllama.so" />
     <file src="icon512.png" target="icon512.png" />


LLama/runtimes/build/LLamaSharp.Backend.Cuda12.nuspec (+1, -1)

@@ -18,7 +18,7 @@
   <files>
     <file src="LLamaSharpBackend.props" target="build/netstandard2.0/LLamaSharp.Backend.Cuda12.props" />
-    <file src="runtimes/deps/cu12.1.0/libllama.dll" target="runtimes\win-x64\native\cuda12\libllama.dll" />
+    <file src="runtimes/deps/cu12.1.0/llama.dll" target="runtimes\win-x64\native\cuda12\llama.dll" />
     <file src="runtimes/deps/cu12.1.0/libllama.so" target="runtimes\linux-x64\native\cuda12\libllama.so" />
     <file src="icon512.png" target="icon512.png" />


LLama/runtimes/deps/avx/libllama.dll → LLama/runtimes/deps/avx/llama.dll


LLama/runtimes/deps/avx2/libllama.dll → LLama/runtimes/deps/avx2/llama.dll


LLama/runtimes/deps/avx512/libllama.dll → LLama/runtimes/deps/avx512/llama.dll


LLama/runtimes/deps/cu11.7.1/libllama.dll → LLama/runtimes/deps/cu11.7.1/llama.dll


LLama/runtimes/deps/cu12.1.0/libllama.dll → LLama/runtimes/deps/cu12.1.0/llama.dll


LLama/runtimes/deps/libllama.dll → LLama/runtimes/deps/llama.dll


docs/ContributingGuide.md (+1, -1)

@@ -16,7 +16,7 @@ When building from source, please add `-DBUILD_SHARED_LIBS=ON` to the cmake inst
 cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON
 ```

-After running `cmake --build . --config Release`, you could find the `llama.dll`, `llama.so` or `llama.dylib` in your build directory. After pasting it to `LLamaSharp/LLama/runtimes` and renaming it to `libllama.dll`, `libllama.so` or `libllama.dylib`, you can use it as the native library in LLamaSharp.
+After running `cmake --build . --config Release`, you could find the `llama.dll`, `llama.so` or `llama.dylib` in your build directory. After pasting it to `LLamaSharp/LLama/runtimes` , you can use it as the native library in LLamaSharp.


 ## Add a new feature to LLamaSharp


docs/index.md (+1, -1)

@@ -20,7 +20,7 @@ LLamaSharp is the C#/.NET binding of [llama.cpp](https://github.com/ggerganov/ll
 If you are new to LLM, here're some tips for you to help you to get start with `LLamaSharp`. If you are experienced in this field, we'd still recommend you to take a few minutes to read it because some things perform differently compared to cpp/python.

 1. The main ability of LLamaSharp is to provide an efficient way to run inference of LLM (Large Language Model) locally (and fine-tune model in the future). The model weights, however, need to be downloaded from other resources such as [huggingface](https://huggingface.co).
-2. Since LLamaSharp supports multiple platforms, The nuget package is split into `LLamaSharp` and `LLama.Backend`. After installing `LLamaSharp`, please install one of `LLama.Backend.Cpu`, `LLama.Backend.Cuda11` or `LLama.Backend.Cuda12`. If you use the source code, dynamic libraries can be found in `LLama/Runtimes`. Rename the one you want to use to `libllama.dll`.
+2. Since LLamaSharp supports multiple platforms, The nuget package is split into `LLamaSharp` and `LLama.Backend`. After installing `LLamaSharp`, please install one of `LLama.Backend.Cpu`, `LLama.Backend.Cuda11` or `LLama.Backend.Cuda12`. If you use the source code, dynamic libraries can be found in `LLama/Runtimes`.
 3. `LLaMa` originally refers to the weights released by Meta (Facebook Research). After that, many models are fine-tuned based on it, such as `Vicuna`, `GPT4All`, and `Pyglion`. Though all of these models are supported by LLamaSharp, some steps are necessary with different file formats. There're mainly three kinds of files, which are `.pth`, `.bin (ggml)`, `.bin (quantized)`. If you have the `.bin (quantized)` file, it could be used directly by LLamaSharp. If you have the `.bin (ggml)` file, you could use it directly but get higher inference speed after the quantization. If you have the `.pth` file, you need to follow [the instructions in llama.cpp](https://github.com/ggerganov/llama.cpp#prepare-data--run) to convert it to `.bin (ggml)` file at first.
 4. LLamaSharp supports GPU acceleration, but it requires cuda installation. Please install cuda 11 or cuda 12 on your system before using LLamaSharp to enable GPU. If you have another cuda version, you could compile llama.cpp from source to get the dll. For building from source, please refer to [issue #5](https://github.com/SciSharp/LLamaSharp/issues/5).


