@@ -35,9 +35,16 @@ PM> Install-Package LLamaSharp
 Then, search and install one of the following backends:
 ```
-LLamaSharp.Backend.Cpu
-LLamaSharp.Backend.Cuda11
-LLamaSharp.Backend.Cuda12
+LLamaSharp.Backend.Cpu      # cpu for windows, linux and mac
+LLamaSharp.Backend.Cuda11   # cuda11 for windows and linux
+LLamaSharp.Backend.Cuda12   # cuda12 for windows and linux
+LLamaSharp.Backend.MacMetal # metal for mac
+```
+
+If you would like to use it with [microsoft semantic-kernel](https://github.com/microsoft/semantic-kernel), please search and install the following package:
+```
+LLamaSharp.semantic-kernel
 ```
 Here's the mapping between LLamaSharp versions and the corresponding model samples provided by `LLamaSharp`. If you're not sure which model works with a version, please try our sample model.
@@ -49,7 +56,8 @@ Here's the mapping of them and corresponding model samples provided by `LLamaSha
 | v0.2.2 | v0.2.2, v0.2.3 | [WizardLM](https://huggingface.co/TheBloke/wizardLM-7B-GGML/tree/previous_llama_ggmlv2), [Vicuna (filenames without "old")](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/tree/main) | 63d2046 |
 | v0.3.0, v0.3.1 | v0.3.0, v0.4.0 | [LLamaSharpSamples v0.3.0](https://huggingface.co/AsakusaRinne/LLamaSharpSamples/tree/v0.3.0), [WizardLM](https://huggingface.co/TheBloke/wizardLM-7B-GGML/tree/main) | 7e4ea5b |
 | v0.4.1-preview (cpu only) | v0.4.1-preview | [Open llama 3b](https://huggingface.co/SlyEcho/open_llama_3b_ggml), [Open Buddy](https://huggingface.co/OpenBuddy/openbuddy-llama-ggml) | aacdbd4 |
-| v0.4.2-preview (cpu,cuda11) | v0.4.2-preview | [Llama2 7b](https://huggingface.co/TheBloke/llama-2-7B-Guanaco-QLoRA-GGML) | 332311234a0aa2974b2450710e22e09d90dd6b0b |
+| v0.4.2-preview (cpu,cuda11) | v0.4.2-preview | [Llama2 7b](https://huggingface.co/TheBloke/llama-2-7B-Guanaco-QLoRA-GGML) | 3323112 |
+| v0.5.1 | v0.5.1 | | 6b73ef1 |
 Many hands make light work. If you have found any other model resource that works for a version, we'd appreciate it if you open a PR about it! 😊
@@ -150,7 +158,7 @@ Since we are in short of hands, if you're familiar with ASP.NET core, we'll appr
 ✅ ASP.NET core Integration
-⚠️ Semantic-kernel Integration
+✅ Semantic-kernel Integration
 🔳 Fine-tune
@@ -158,8 +166,6 @@ Since we are in short of hands, if you're familiar with ASP.NET core, we'll appr
 🔳 MAUI Integration
-🔳 Follow up llama.cpp and improve performance
 ## Assets
 Some extra model resources could be found below:
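
As a side note on the installation hunk above: it uses the Visual Studio Package Manager console (`Install-Package`). A rough equivalent with the .NET CLI, assuming the `dotnet` tool is on your PATH and you run it inside a project directory, would be:

```shell
# Core library
dotnet add package LLamaSharp

# Exactly one backend, matching your hardware;
# e.g. the CPU backend works on Windows, Linux and macOS:
dotnet add package LLamaSharp.Backend.Cpu

# Optional: the semantic-kernel integration package
dotnet add package LLamaSharp.semantic-kernel
```

Swap `LLamaSharp.Backend.Cpu` for `LLamaSharp.Backend.Cuda11`, `LLamaSharp.Backend.Cuda12`, or `LLamaSharp.Backend.MacMetal` as appropriate.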