* Updated binaries, using [this build](https://github.com/SciSharp/LLamaSharp/actions/runs/8654672719/job/23733195669) for llama.cpp commit `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
- Added bindings for all of the new native functions.
- Moved some functions (e.g. `SafeLlamaModelHandle`-specific functions) into `SafeLlamaModelHandle.cs`
- Exposed tokens on `SafeLlamaModelHandle` and `LLamaWeights` through a `Tokens` property. As new special tokens are added in the future they can be added here.
- Changed all token properties to return nullable tokens, to handle some models not having some tokens.
- Fixed `DefaultSamplingPipeline` to handle models that have no newline token.
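Since token properties are now nullable, callers can probe a model for tokens it may not define. A minimal sketch, assuming the `Tokens.Newline` member name and an illustrative model path:

```csharp
using System;
using LLama;
using LLama.Common;
using LLama.Native;

var parameters = new ModelParams("model.gguf"); // illustrative path
using var weights = LLamaWeights.LoadFromFile(parameters);

// Nullable: some models simply do not define this token.
LLamaToken? newline = weights.Tokens.Newline; // member name assumed
if (newline is null)
    Console.WriteLine("Model defines no newline token; samplers must cope without it.");
```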
* Moved native methods to more specific locations.
- Context-specific functions have been moved into `SafeLLamaContextHandle.cs` and made private; they are already exposed through C# properties and methods.
- Added a check that the GPU layer count is zero if GPU offload is not supported.
- Moved methods for creating default structs (`llama_model_quantize_default_params` and `llama_context_default_params`) into relevant structs.
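A hedged sketch of the new call sites; the `Default()` method names are assumptions based on the native functions they wrap, so verify them against the structs.

```csharp
using LLama.Native;

// Default-parameter factories now live on the structs themselves
// rather than in a flat native-methods class (method names assumed).
LLamaContextParams contextParams = LLamaContextParams.Default();              // wraps llama_context_default_params
LLamaModelQuantizeParams quantizeParams = LLamaModelQuantizeParams.Default(); // wraps llama_model_quantize_default_params
```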
* Removed exception if `GpuLayerCount > 0` when GPU is not supported.
* Added low-level wrapper methods for the new per-sequence state load/save in `SafeLLamaContextHandle`
- Added high-level wrapper methods (save/load with a `State` object or a memory-mapped file) in `LLamaContext`; a sketch follows this list
- Moved native methods for per-sequence state load/save into `SafeLLamaContextHandle`
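A minimal sketch of the high-level save/load API; the sequence-specific overloads taking a `LLamaSeqId` are assumptions based on the description above, so check `LLamaContext` for the exact signatures.

```csharp
using LLama;
using LLama.Native;

void SnapshotExample(LLamaContext context)
{
    // Whole-context state, written to and read back from a file.
    context.SaveState("context-state.bin");
    context.LoadState("context-state.bin");

    // Per-sequence state: only the data belonging to one sequence.
    context.SaveState("seq0-state.bin", (LLamaSeqId)0); // overload assumed
    context.LoadState("seq0-state.bin", (LLamaSeqId)0); // overload assumed
}
```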
* Added update and defrag methods for KV cache in `SafeLLamaContextHandle`
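A hedged sketch of the KV cache maintenance calls; the C# method names are assumed from the native `llama_kv_cache_defrag`/`llama_kv_cache_update` pair.

```csharp
using LLama.Native;

void MaintainKvCache(SafeLLamaContextHandle ctx)
{
    ctx.KvCacheDefrag(); // schedule defragmentation of the KV cache (name assumed)
    ctx.KvCacheUpdate(); // apply pending cache operations such as defrag and shifts (name assumed)
}
```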
* Updated submodule to `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`
* Passing the sequence ID when saving a single sequence state
- Updated the `LLamaSharpChatCompletion` class in LLama.SemanticKernel/ChatCompletion/LLamaSharpChatCompletion.cs
  - Changed the type of the `_model` field from `StatelessExecutor` to `ILLamaExecutor`
  - Updated the constructor to accept an `ILLamaExecutor` parameter instead of a `StatelessExecutor` parameter
- Updated the project file LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj
- Updated the LLama.Unittest project in LLama.Unittest/LLama.Unittest.csproj
  - Added a `PackageReference` for `Moq` version 4.20.70
- Added the `ExtensionMethodsTests` class in LLama.Unittest/SemanticKernel/ExtensionMethodsTests.cs
  - Added tests for the `ToLLamaSharpChatHistory` and `ToLLamaSharpInferenceParams` extension methods
- Added the `LLamaSharpChatCompletionTests` class in LLama.Unittest/SemanticKernel/LLamaSharpChatCompletionTests.cs
  - Added tests for the `LLamaSharpChatCompletion` class
ℹ️ The `LLamaSharpChatCompletion` class in the LLama.SemanticKernel project now uses the `ILLamaExecutor` interface instead of the concrete `StatelessExecutor` class, allowing better abstraction and flexibility in how chat completion is implemented. The LLama.Unittest project has been updated with tests for `LLamaSharpChatCompletion` and the extension methods it uses. A sketch of the new usage follows.
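A hedged sketch of what the interface change enables in tests: any `ILLamaExecutor`, including a Moq mock, can now back the completion class. The single-argument constructor call assumes the remaining parameters are optional; verify against the actual signature.

```csharp
using LLama.Abstractions;
using LLamaSharp.SemanticKernel.ChatCompletion;
using Moq;

// A mock executor now satisfies the constructor, where previously a
// concrete StatelessExecutor instance was required.
var executor = new Mock<ILLamaExecutor>();
var completion = new LLamaSharpChatCompletion(executor.Object);
```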
* Add llava_binaries; update all binaries to make the tests pass
* Llava API + LlavaTest (preliminary)
* First prototype of Load + Unit Test
* Temporarily run tests on the LlavaAPI branch
* Disable the Embed test to review the rest of the tests
* Restore Embedding test
* Use BatchThread to eval image embeddings
Test the Threads default value to ensure it doesn't cause problems.
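A hedged sketch of the image-embedding flow; the `LLavaWeights` API names reflect the new Llava wrappers but should be verified, and the file paths are illustrative.

```csharp
using System.IO;
using LLama;

void EvalImage(LLamaContext context)
{
    using var llava = LLavaWeights.LoadFromFile("mmproj.gguf"); // illustrative path
    using var embed = llava.CreateImageEmbeddings(context, File.ReadAllBytes("image.jpg"));

    // Evaluating the embedding runs on the context's batch threads.
    int n_past = 0;
    llava.EvalImageEmbed(context, embed, ref n_past);
}
```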
* Rename test file
* Update action versions
* Test only one method, no release embeddings
* Revert "Test only one method, no release embeddings"
This reverts commit 264e176dcc.
* Correct API call
* Only test llava related functionality
* CUDA and CLBlast binaries
* Restore build policy
* Changes related to code review
* Add SafeHandles
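A generic sketch of the SafeHandle pattern these wrappers follow; the class name matches the new Llava handle but the body is illustrative, not the actual binding.

```csharp
using System;
using System.Runtime.InteropServices;

internal sealed class SafeLlavaImageEmbedHandle : SafeHandle
{
    public SafeLlavaImageEmbedHandle()
        : base(IntPtr.Zero, ownsHandle: true) { }

    public override bool IsInvalid => handle == IntPtr.Zero;

    protected override bool ReleaseHandle()
    {
        // The real implementation would call the native free function
        // (e.g. llava_image_embed_free) exactly once here.
        return true;
    }
}
```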
* Set overwrite to upload-artifact@v4
* Revert to upload-artifact@v3
The PoC of the test is now working. In the end, the error seems to have been related to the process stopping; once I recreated the context with today's llama.cpp binaries, it works OK.