
LLama.Unittest.csproj 3.2 kB

📝 Update LLamaSharpChatCompletion and LLama.Unittest

- Updated the LLamaSharpChatCompletion class in LLama.SemanticKernel/ChatCompletion/LLamaSharpChatCompletion.cs:
  - Changed the type of the `_model` field from `StatelessExecutor` to `ILLamaExecutor`
  - Updated the constructor to accept an `ILLamaExecutor` parameter instead of a `StatelessExecutor` parameter
- Updated LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj
- Updated the LLama.Unittest project in LLama.Unittest/LLama.Unittest.csproj:
  - Added a `PackageReference` for `Moq` version 4.20.70
- Added the ExtensionMethodsTests class in LLama.Unittest/SemanticKernel/ExtensionMethodsTests.cs:
  - Added tests for the `ToLLamaSharpChatHistory` and `ToLLamaSharpInferenceParams` extension methods
- Added the LLamaSharpChatCompletionTests class in LLama.Unittest/SemanticKernel/LLamaSharpChatCompletionTests.cs:
  - Added tests for the LLamaSharpChatCompletion class

ℹ️ The LLamaSharpChatCompletion class in the LLama.SemanticKernel project now uses the ILLamaExecutor interface instead of the concrete StatelessExecutor class, allowing for better abstraction and flexibility in its implementation. LLamaSharpChatCompletion provides the chat completion functionality in the LLamaSharp project, and the LLama.Unittest project now includes tests for both the class itself and the extension methods it uses.
1 year ago
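The switch to the interface is what makes the class unit-testable with Moq. A minimal sketch of the idea (the mock setup below is illustrative, not code from the repository, and assumes the constructor's remaining parameters are optional):

```csharp
using LLama.Abstractions;
using LLamaSharp.SemanticKernel.ChatCompletion;
using Moq;

// Because the constructor now takes the ILLamaExecutor interface rather
// than the concrete StatelessExecutor, tests can supply a mock executor
// and never need to load real model weights.
var executor = new Mock<ILLamaExecutor>();
var completion = new LLamaSharpChatCompletion(executor.Object);
```

This is the abstraction the new LLamaSharpChatCompletionTests can lean on: behaviour is verified by setting up expectations on the mock instead of running actual inference.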
April 2024 Binary Update (#662)

* Updated binaries, using [this build](https://github.com/SciSharp/LLamaSharp/actions/runs/8654672719/job/23733195669) for llama.cpp commit `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`:
  - Added all new functions.
  - Moved some functions (e.g. `SafeLlamaModelHandle`-specific functions) into `SafeLlamaModelHandle.cs`.
  - Exposed tokens on `SafeLlamaModelHandle` and `LLamaWeights` through a `Tokens` property. As new special tokens are added in the future they can be added here.
  - Changed all token properties to return nullable tokens, to handle models that lack some tokens.
  - Fixed `DefaultSamplingPipeline` to handle models with no newline token.
* Moved native methods to more specific locations:
  - Context-specific things have been moved into `SafeLLamaContextHandle.cs` and made private; they're already exposed through C# properties and methods.
  - Checking that the GPU layer count is zero if GPU offload is not supported.
  - Moved methods for creating default structs (`llama_model_quantize_default_params` and `llama_context_default_params`) into the relevant structs.
* Removed the exception thrown when `GpuLayerCount > 0` while GPU offload is not supported.
* Per-sequence state load/save:
  - Added low-level wrapper methods for the new per-sequence state load/save in `SafeLLamaContextHandle`.
  - Added high-level wrapper methods (save/load with a `State` object or a memory-mapped file) in `LLamaContext`.
  - Moved native methods for per-sequence state load/save into `SafeLLamaContextHandle`.
* Added update and defrag methods for the KV cache in `SafeLLamaContextHandle`.
* Updated the submodule to `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
* Passing the sequence ID when saving a single sequence state.
1 year ago
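For the per-sequence state save/load described above, a hypothetical usage sketch (the method names and overloads here are assumptions based on the commit message, not verified signatures):

```csharp
using LLama;
using LLama.Common;
using LLama.Native;

// Load a model and create a context ("model.gguf" is a placeholder path).
var modelParams = new ModelParams("model.gguf");
using var weights = LLamaWeights.LoadFromFile(modelParams);
using var context = weights.CreateContext(modelParams);

// The PR adds high-level wrappers in LLamaContext for saving/loading
// state, including a per-sequence variant that takes the sequence ID.
context.SaveState("context-state.bin");              // whole context
context.SaveState("seq0-state.bin", (LLamaSeqId)0);  // single sequence (assumed overload)

// Restore later, e.g. to resume a conversation without re-evaluating the prompt.
context.LoadState("context-state.bin");
```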
<Project Sdk="Microsoft.NET.Sdk">
  <Import Project="..\LLama\LLamaSharp.Runtime.targets" />

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <RootNamespace>LLama.Unittest</RootNamespace>
    <ImplicitUsings>enable</ImplicitUsings>
    <Platforms>AnyCPU;x64</Platforms>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
    <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
    <PackageReference Include="Moq" Version="4.20.70" />
    <PackageReference Include="System.Linq.Async" Version="6.0.1" />
    <PackageReference Include="xunit" Version="2.7.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.5.7">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="coverlet.collector" Version="6.0.1">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <Target Name="DownloadContentFiles" BeforeTargets="Build">
    <DownloadFile SourceUrl="https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q3_K_S.gguf" DestinationFolder="Models" DestinationFileName="llama-2-7b-chat.Q3_K_S.gguf" SkipUnchangedFiles="true" />
    <DownloadFile SourceUrl="https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/llava-v1.6-mistral-7b.Q3_K_XS.gguf" DestinationFolder="Models" DestinationFileName="llava-v1.6-mistral-7b.Q3_K_XS.gguf" SkipUnchangedFiles="true" />
    <DownloadFile SourceUrl="https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/mmproj-model-f16.gguf" DestinationFolder="Models" DestinationFileName="mmproj-model-f16.gguf" SkipUnchangedFiles="true" />
    <DownloadFile SourceUrl="https://huggingface.co/leliuga/all-MiniLM-L12-v2-GGUF/resolve/main/all-MiniLM-L12-v2.Q8_0.gguf" DestinationFolder="Models" DestinationFileName="all-MiniLM-L12-v2.Q8_0.gguf" SkipUnchangedFiles="true" />
  </Target>

  <ItemGroup>
    <ProjectReference Include="..\LLama.SemanticKernel\LLamaSharp.SemanticKernel.csproj" />
    <ProjectReference Include="..\LLama\LLamaSharp.csproj" />
  </ItemGroup>

  <ItemGroup>
    <Folder Include="Models\" />
  </ItemGroup>

  <ItemGroup>
    <None Update="Models\all-MiniLM-L12-v2.Q8_0.gguf">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="Models\llama-2-7b-chat.Q3_K_S.gguf">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="Models\llava-v1.6-mistral-7b.Q3_K_XS.gguf">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="Models\mmproj-model-f16.gguf">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="Models\extreme-ironing-taxi-610x427.jpg">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>
</Project>
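The `DownloadContentFiles` target runs before every build and fetches the GGUF models from Hugging Face (`SkipUnchangedFiles="true"` makes subsequent builds a no-op once the files exist), while the `<None Update=...>` items copy them next to the test binaries. A hypothetical smoke test consuming one of the downloaded files (the test class below is illustrative, not part of the repository):

```csharp
using LLama;
using LLama.Common;
using Xunit;

namespace LLama.Unittest;

public class ModelLoadTests
{
    // "Models/llama-2-7b-chat.Q3_K_S.gguf" is placed next to the test
    // binaries by the DownloadContentFiles target plus the
    // CopyToOutputDirectory items above.
    [Fact]
    public void CanLoadDownloadedModel()
    {
        var @params = new ModelParams("Models/llama-2-7b-chat.Q3_K_S.gguf")
        {
            ContextSize = 128, // keep the context small for a fast smoke test
        };
        using var weights = LLamaWeights.LoadFromFile(@params);
        Assert.NotNull(weights);
    }
}
```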