
LLamaContext.cs 25 kB

April 2024 Binary Update (#662)

* Updated binaries, using [this build](https://github.com/SciSharp/LLamaSharp/actions/runs/8654672719/job/23733195669) for llama.cpp commit `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
  - Added all new functions.
  - Moved some functions (e.g. `SafeLlamaModelHandle` specific functions) into `SafeLlamaModelHandle.cs`.
  - Exposed tokens on `SafeLlamaModelHandle` and `LLamaWeights` through a `Tokens` property. As new special tokens are added in the future they can be added here.
  - Changed all token properties to return nullable tokens, to handle some models not having some tokens.
  - Fixed `DefaultSamplingPipeline` to handle a missing newline token in some models.
* Moved native methods to more specific locations.
  - Context specific things have been moved into `SafeLLamaContextHandle.cs` and made private - they're exposed through C# properties and methods already.
  - Checking that the GPU layer count is zero if GPU offload is not supported.
  - Moved methods for creating default structs (`llama_model_quantize_default_params` and `llama_context_default_params`) into the relevant structs.
* Removed the exception thrown if `GpuLayerCount > 0` when GPU is not supported.
* Added low level wrapper methods for the new per-sequence state load/save in `SafeLLamaContextHandle`.
  - Added high level wrapper methods (save/load with a `State` object or a memory mapped file) in `LLamaContext` - see the usage sketch below.
  - Moved native methods for per-sequence state load/save into `SafeLLamaContextHandle`.
* Added update and defrag methods for the KV cache in `SafeLLamaContextHandle`.
* Updated the submodule to `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
* Passing the sequence ID when saving a single sequence state.
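To make the new per-sequence state API concrete, here is a minimal usage sketch under stated assumptions: it follows the usual LLamaSharp loading pattern (`ModelParams` → `LLamaWeights.LoadFromFile` → `CreateContext`), and the model path, state filename, and sequence id are placeholders rather than values from this commit.

```csharp
using LLama;
using LLama.Common;
using LLama.Native;

// Hypothetical model path - substitute a real GGUF file
var parameters = new ModelParams("model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

// ... decode some tokens into sequence 0 here ...

// Save the state of sequence 0 to disk (written through a memory mapped file,
// so the state is never copied into a managed array)
context.SaveState("seq0.state", (LLamaSeqId)0);

// Or capture the same state in memory as an opaque, disposable handle
using var state = context.GetState((LLamaSeqId)0);

// Later, restore the sequence from either copy
context.LoadState("seq0.state", (LLamaSeqId)0);
context.LoadState(state, (LLamaSeqId)0);
```

All four methods shown are the high level wrappers added to `LLamaContext` by this change (see the source below); the low level equivalents live on `SafeLLamaContextHandle`.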
using LLama.Exceptions;
using LLama.Native;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.IO.MemoryMappedFiles;
using LLama.Common;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using LLama.Extensions;
using LLama.Abstractions;
using LLama.Sampling;
using Microsoft.Extensions.Logging;
using System.Threading;
namespace LLama
{
    /// <summary>
    /// A llama_context, which holds all the context required to interact with a model
    /// </summary>
    public sealed class LLamaContext
        : IDisposable
    {
        private readonly ILogger? _logger;

        /// <summary>
        /// Total number of tokens in the vocabulary of this model
        /// </summary>
        public int VocabCount => NativeHandle.VocabCount;

        /// <summary>
        /// Total number of tokens in the context
        /// </summary>
        public uint ContextSize => NativeHandle.ContextSize;

        /// <summary>
        /// Dimension of embedding vectors
        /// </summary>
        public int EmbeddingSize => NativeHandle.EmbeddingSize;

        /// <summary>
        /// The context params set for this context
        /// </summary>
        public IContextParams Params { get; }

        /// <summary>
        /// The native handle, which is passed to the native APIs
        /// </summary>
        /// <remarks>Be careful how you use this!</remarks>
        public SafeLLamaContextHandle NativeHandle { get; }

        /// <summary>
        /// The encoding set for this model to deal with text input.
        /// </summary>
        public Encoding Encoding { get; }

        private uint _generationThreads;
        private uint _batchThreads;

        /// <summary>
        /// Get or set the number of threads to use for generation
        /// </summary>
        public uint GenerationThreads
        {
            get => _generationThreads;
            set
            {
                _generationThreads = value;
                NativeHandle.SetThreads(_generationThreads, _batchThreads);
            }
        }

        /// <summary>
        /// Get or set the number of threads to use for batch processing
        /// </summary>
        public uint BatchThreads
        {
            get => _batchThreads;
            set
            {
                _batchThreads = value;
                NativeHandle.SetThreads(_generationThreads, _batchThreads);
            }
        }

        /// <summary>
        /// Get the maximum batch size for this context
        /// </summary>
        public uint BatchSize => NativeHandle.BatchSize;

        /// <summary>
        /// Create a new LLamaContext for the given LLamaWeights
        /// </summary>
        /// <param name="model"></param>
        /// <param name="params"></param>
        /// <param name="logger"></param>
        /// <exception cref="ObjectDisposedException"></exception>
        public LLamaContext(LLamaWeights model, IContextParams @params, ILogger? logger = null)
        {
            if (model.NativeHandle.IsClosed)
                throw new ObjectDisposedException("Cannot create context, model weights have been disposed");

            Params = @params;
            _logger = logger;
            Encoding = @params.Encoding;

            @params.ToLlamaContextParams(out var lparams);
            NativeHandle = SafeLLamaContextHandle.Create(model.NativeHandle, lparams);

            // It's not possible to get these values back from llama.cpp, so store a copy of them here.
            _generationThreads = lparams.n_threads;
            _batchThreads = lparams.n_threads_batch;
        }
        /// <summary>
        /// Set the seed for the RNG
        /// </summary>
        /// <param name="seed"></param>
        public void SetSeed(uint seed)
        {
            NativeHandle.SetSeed(seed);
        }

        /// <summary>
        /// Tokenize a string.
        /// </summary>
        /// <param name="text"></param>
        /// <param name="addBos">Whether to add a BOS token to the text.</param>
        /// <param name="special">Allow tokenizing special and/or control tokens which otherwise are not exposed and treated as plaintext.</param>
        /// <returns></returns>
        public LLamaToken[] Tokenize(string text, bool addBos = true, bool special = false)
        {
            return NativeHandle.Tokenize(text, addBos, special, Encoding);
        }

        /// <summary>
        /// Detokenize the tokens to text.
        /// </summary>
        /// <param name="tokens"></param>
        /// <returns></returns>
        [Obsolete("Use a `StreamingTokenDecoder` instead")]
        public string DeTokenize(IReadOnlyList<LLamaToken> tokens)
        {
            // Do **not** use this method as an example of how to correctly use the StreamingTokenDecoder!
            // The decoder should be kept around for the entire time you are decoding one stream of tokens.
            var decoder = new StreamingTokenDecoder(this);
            decoder.AddRange(tokens);
            return decoder.Read();
        }

        #region state load/save
        /// <summary>
        /// Save the state to the specified path.
        /// </summary>
        /// <param name="filename"></param>
        public void SaveState(string filename)
        {
            // Delete that file before overwriting it
            if (File.Exists(filename))
                File.Delete(filename);

            // Estimate the size of the state to write to disk; this is always equal to or greater than the actual size
            var estimatedStateSize = checked((long)NativeHandle.GetStateSize());

            // Map the file and write the bytes directly to it. This saves copying the bytes into a C# array
            long writtenBytes;
            using (var file = MemoryMappedFile.CreateFromFile(filename, FileMode.Create, null, estimatedStateSize))
            using (var view = file.CreateViewAccessor(0, estimatedStateSize))
            {
                unsafe
                {
                    byte* ptr = null;
                    view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
                    try
                    {
                        writtenBytes = (long)NativeHandle.GetState(ptr, (ulong)estimatedStateSize);
                    }
                    finally
                    {
                        view.SafeMemoryMappedViewHandle.ReleasePointer();
                    }
                }
            }

            // Truncate the file to the actual size of data that was written
            using (var fileStream = new FileStream(filename, FileMode.Open))
                fileStream.SetLength(writtenBytes);
        }

        /// <summary>
        /// Save the state of a particular sequence to the specified path.
        /// </summary>
        /// <param name="filename"></param>
        /// <param name="sequence"></param>
        public void SaveState(string filename, LLamaSeqId sequence)
        {
            // Delete that file before overwriting it
            if (File.Exists(filename))
                File.Delete(filename);

            // Estimate the size of the state to write to disk; this is always equal to or greater than the actual size
            var estimatedStateSize = checked((long)NativeHandle.GetStateSize(sequence));

            // Map the file and write the bytes directly to it. This saves copying the bytes into a C# array
            long writtenBytes;
            using (var file = MemoryMappedFile.CreateFromFile(filename, FileMode.Create, null, estimatedStateSize))
            using (var view = file.CreateViewAccessor(0, estimatedStateSize))
            {
                unsafe
                {
                    byte* ptr = null;
                    view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
                    try
                    {
                        writtenBytes = (long)NativeHandle.GetState(ptr, (ulong)estimatedStateSize, sequence);
                    }
                    finally
                    {
                        view.SafeMemoryMappedViewHandle.ReleasePointer();
                    }
                }
            }

            // Truncate the file to the actual size of data that was written
            using (var fileStream = new FileStream(filename, FileMode.Open))
                fileStream.SetLength(writtenBytes);
        }

        /// <summary>
        /// Get the state data as an opaque handle, which can be loaded later using <see cref="LoadState(State)"/>
        /// </summary>
        /// <remarks>Use <see cref="SaveState(string)"/> if you intend to save this state to disk.</remarks>
        /// <returns></returns>
        public State GetState()
        {
            var stateSize = NativeHandle.GetStateSize();

            // Allocate a chunk of memory large enough to hold the entire state
            var memory = Marshal.AllocHGlobal((nint)stateSize);
            try
            {
                // Copy the state data into memory, discover the actual size required
                ulong actualSize;
                unsafe
                {
                    actualSize = NativeHandle.GetState((byte*)memory, stateSize);
                }

                // Shrink to size
                memory = Marshal.ReAllocHGlobal(memory, (nint)actualSize);

                // Wrap memory in a "state"
                var state = new State(memory, actualSize);

                // Set memory to zero, to prevent it being freed in the finally block
                memory = IntPtr.Zero;

                return state;
            }
            finally
            {
                if (memory != IntPtr.Zero)
                    Marshal.FreeHGlobal(memory);
            }
        }

        /// <summary>
        /// Get the state data as an opaque handle, which can be loaded later using <see cref="LoadState(SequenceState, LLamaSeqId)"/>
        /// </summary>
        /// <remarks>Use <see cref="SaveState(string, LLamaSeqId)"/> if you intend to save this state to disk.</remarks>
        /// <returns></returns>
        public SequenceState GetState(LLamaSeqId sequence)
        {
            var stateSize = NativeHandle.GetStateSize(sequence);

            // Allocate a chunk of memory large enough to hold the entire state
            var memory = Marshal.AllocHGlobal((nint)stateSize);
            try
            {
                // Copy the state data into memory, discover the actual size required
                ulong actualSize;
                unsafe
                {
                    actualSize = NativeHandle.GetState((byte*)memory, stateSize, sequence);
                }

                // Shrink to size
                memory = Marshal.ReAllocHGlobal(memory, (nint)actualSize);

                // Wrap memory in a "state"
                var state = new SequenceState(memory, actualSize);

                // Set memory to zero, to prevent it being freed in the finally block
                memory = IntPtr.Zero;

                return state;
            }
            finally
            {
                if (memory != IntPtr.Zero)
                    Marshal.FreeHGlobal(memory);
            }
        }

        /// <summary>
        /// Load the state from the specified path.
        /// </summary>
        /// <param name="filename"></param>
        public void LoadState(string filename)
        {
            // Map the state file into memory and pass that pointer directly to `llama_set_state_data` to load from
            using (var file = MemoryMappedFile.CreateFromFile(filename, FileMode.Open, null))
            using (var view = file.CreateViewAccessor())
            {
                unsafe
                {
                    byte* ptr = null;
                    view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
                    try
                    {
                        NativeHandle.SetState(ptr);
                    }
                    finally
                    {
                        view.SafeMemoryMappedViewHandle.ReleasePointer();
                    }
                }
            }
        }

        /// <summary>
        /// Load the state from the specified path into a particular sequence
        /// </summary>
        /// <param name="filename"></param>
        /// <param name="sequence"></param>
        public void LoadState(string filename, LLamaSeqId sequence)
        {
            // Map the state file into memory and pass that pointer directly to `llama_set_state_data` to load from
            using (var file = MemoryMappedFile.CreateFromFile(filename, FileMode.Open, null))
            using (var view = file.CreateViewAccessor())
            {
                unsafe
                {
                    byte* ptr = null;
                    view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
                    try
                    {
                        NativeHandle.SetState(ptr, sequence);
                    }
                    finally
                    {
                        view.SafeMemoryMappedViewHandle.ReleasePointer();
                    }
                }
            }
        }

        /// <summary>
        /// Load the state from memory.
        /// </summary>
        /// <param name="state"></param>
        /// <exception cref="RuntimeError"></exception>
        public void LoadState(State state)
        {
            unsafe
            {
                NativeHandle.SetState((byte*)state.DangerousGetHandle());
            }
        }

        /// <summary>
        /// Load the state from memory into a particular sequence
        /// </summary>
        /// <param name="state"></param>
        /// <param name="sequence"></param>
        /// <exception cref="RuntimeError"></exception>
        public void LoadState(SequenceState state, LLamaSeqId sequence)
        {
            unsafe
            {
                NativeHandle.SetState((byte*)state.DangerousGetHandle(), sequence);
            }
        }
        #endregion
        /// <summary>
        /// Sample a single token from this context, using the given sampling pipeline
        /// </summary>
        /// <param name="pipeline">The pipeline to use to process the logits and to select a token</param>
        /// <param name="lastTokens">The tokens recently returned from the model</param>
        /// <returns>The selected token</returns>
        public LLamaToken Sample(ISamplingPipeline pipeline, ReadOnlySpan<LLamaToken> lastTokens)
        {
            var token = pipeline.Sample(NativeHandle, NativeHandle.GetLogits(), lastTokens);
            pipeline.Accept(NativeHandle, token);
            return token;
        }

        /// <summary>
        /// Perform the sampling. Please don't use it unless you fully know what it does.
        /// </summary>
        /// <param name="candidates"></param>
        /// <param name="mirostat_mu"></param>
        /// <param name="temperature"></param>
        /// <param name="mirostat"></param>
        /// <param name="mirostatTau"></param>
        /// <param name="mirostatEta"></param>
        /// <param name="topK"></param>
        /// <param name="topP"></param>
        /// <param name="tfsZ"></param>
        /// <param name="typicalP"></param>
        /// <param name="grammar"></param>
        /// <param name="minP"></param>
        /// <returns></returns>
        public LLamaToken Sample(LLamaTokenDataArray candidates, ref float? mirostat_mu, float temperature, MirostatType mirostat,
                                 float mirostatTau, float mirostatEta, int topK, float topP, float tfsZ, float typicalP,
                                 SafeLLamaGrammarHandle? grammar, float minP)
        {
            LLamaToken id;

            // Constrain candidates to tokens the grammar currently allows
            if (grammar != null)
            {
                candidates.ApplyGrammar(NativeHandle, grammar);
            }

            if (temperature <= 0)
            {
                // Greedy sampling
                id = candidates.SampleTokenGreedy(NativeHandle);
            }
            else
            {
                var mu = mirostat_mu ?? (2 * mirostatTau);
                {
                    if (mirostat == MirostatType.Mirostat)
                    {
                        // Mirostat (v1) sampling
                        const int mirostat_m = 100;
                        candidates.Temperature(NativeHandle, temperature);
                        id = candidates.SampleTokenMirostat(NativeHandle, mirostatTau, mirostatEta, mirostat_m, ref mu);
                    }
                    else if (mirostat == MirostatType.Mirostat2)
                    {
                        // Mirostat v2 sampling
                        candidates.Temperature(NativeHandle, temperature);
                        id = candidates.SampleTokenMirostat2(NativeHandle, mirostatTau, mirostatEta, ref mu);
                    }
                    else
                    {
                        // Standard sampling chain: top-k, tail-free, locally typical, top-p, min-p, temperature
                        candidates.TopK(NativeHandle, topK);
                        candidates.TailFree(NativeHandle, tfsZ);
                        candidates.LocallyTypical(NativeHandle, typicalP);
                        candidates.TopP(NativeHandle, topP);
                        candidates.MinP(NativeHandle, minP);
                        candidates.Temperature(NativeHandle, temperature);
                        id = candidates.SampleToken(NativeHandle);
                    }
                }
                mirostat_mu = mu;
            }

            grammar?.AcceptToken(NativeHandle, id);
            return id;
        }

        /// <summary>
        /// Apply the penalty for the tokens. Please don't use it unless you fully know what it does.
        /// </summary>
        /// <param name="logits_i"></param>
        /// <param name="lastTokens"></param>
        /// <param name="logitBias"></param>
        /// <param name="repeatLastTokensCount"></param>
        /// <param name="repeatPenalty"></param>
        /// <param name="alphaFrequency"></param>
        /// <param name="alphaPresence"></param>
        /// <param name="penalizeNL"></param>
        /// <returns></returns>
        public LLamaTokenDataArray ApplyPenalty(int logits_i, IEnumerable<LLamaToken> lastTokens, Dictionary<LLamaToken, float>? logitBias = null,
                                                int repeatLastTokensCount = 64, float repeatPenalty = 1.1f, float alphaFrequency = .0f, float alphaPresence = .0f,
                                                bool penalizeNL = true)
        {
            var logits = NativeHandle.GetLogitsIth(logits_i);

            // Apply params.logit_bias map
            if (logitBias is not null)
            {
                foreach (var (key, value) in logitBias)
                    logits[(int)key] += value;
            }

            // Save the newline logit value
            var nl_token = NativeHandle.ModelHandle.Tokens.Newline;
            var nl_logit = logits[(int?)nl_token ?? 0];

            // Convert logits into token candidates
            var candidates_p = LLamaTokenDataArray.Create(logits);

            // Extract most recently returned tokens
            var last_n_repeat = Math.Min((int)ContextSize, repeatLastTokensCount);
            var last_n_array = lastTokens.TakeLast(last_n_repeat).ToArray();

            // Apply penalties to candidates
            candidates_p.RepetitionPenalty(NativeHandle, last_n_array, repeatPenalty, alphaFrequency, alphaPresence);

            // Restore the newline token logit value if necessary
            if (!penalizeNL && nl_token.HasValue)
            {
                var candidatesSpan = candidates_p.data.Span;
                for (var i = 0; i < candidates_p.data.Length; i++)
                {
                    ref var item = ref candidatesSpan[i];
                    if (item.id == nl_token)
                        item.logit = nl_logit;
                }
                candidates_p.sorted = false;
            }

            return candidates_p;
        }

        /// <summary>
        /// Gets whether or not the BOS token should be added.
        /// From common.cpp https://github.com/ggerganov/llama.cpp/blob/60325fa56f61c228464c9f065db3aa6a61f2156e/common/common.cpp#L2417
        /// </summary>
        /// <returns></returns>
        public bool ShouldAddBosToken()
        {
            var addBos = NativeApi.llama_add_bos_token(NativeHandle.ModelHandle);
            return addBos != -1 ? Convert.ToBoolean(addBos) : NativeHandle.LLamaVocabType == LLamaVocabType.SentencePiece;
        }

        #region eval overloads
        /// <summary>
        /// </summary>
        /// <param name="batch"></param>
        public DecodeResult Decode(LLamaBatch batch)
        {
            if (batch.TokenCount == 0)
                return 0;
            if (batch.TokenCount > Params.BatchSize)
                throw new ArgumentException("Input contains more tokens than configured batch size", nameof(batch));

            return (DecodeResult)NativeHandle.Decode(batch);
        }

        /// <summary>
        /// </summary>
        /// <param name="batch"></param>
        /// <param name="cancellationToken"></param>
        public Task<DecodeResult> DecodeAsync(LLamaBatch batch, CancellationToken cancellationToken = default)
        {
            return Task.Run(() => Decode(batch), cancellationToken);
        }
        #endregion

        /// <inheritdoc />
        public void Dispose()
        {
            NativeHandle.Dispose();
        }

        /// <summary>
        /// The state of this context, which can be reloaded later
        /// </summary>
        public class State
            : SafeLLamaHandleBase
        {
            private readonly ulong _size;

            /// <summary>
            /// Get the size in bytes of this state object
            /// </summary>
            public ulong Size => _size;

            internal State(IntPtr memory, ulong size)
                : base(memory, true)
            {
                _size = size;
            }

            /// <inheritdoc />
            protected override bool ReleaseHandle()
            {
                Marshal.FreeHGlobal(handle);
                return true;
            }

            /// <summary>
            /// Convert this state to a byte array
            /// </summary>
            /// <returns></returns>
            [Obsolete("It is not generally safe to convert a state into a byte array - it will fail if the state is very large")]
            public byte[] ToByteArray()
            {
                var bytes = new byte[_size];
                Marshal.Copy(handle, bytes, 0, (int)_size);
                return bytes;
            }

            /// <summary>
            /// Load state from a byte array
            /// </summary>
            /// <param name="bytes"></param>
            /// <returns></returns>
            [Obsolete("It is not generally safe to convert a state into a byte array - it will fail if the state is very large")]
            public static State FromByteArray(byte[] bytes)
            {
                var memory = Marshal.AllocHGlobal(bytes.Length);
                Marshal.Copy(bytes, 0, memory, bytes.Length);
                return new State(memory, (ulong)bytes.Length);
            }
        }

        /// <summary>
        /// The state of a single sequence, which can be reloaded later
        /// </summary>
        public class SequenceState
            : SafeLLamaHandleBase
        {
            private readonly ulong _size;

            /// <summary>
            /// Get the size in bytes of this state object
            /// </summary>
            public ulong Size => _size;

            internal SequenceState(IntPtr memory, ulong size)
                : base(memory, true)
            {
                _size = size;
            }

            /// <inheritdoc />
            protected override bool ReleaseHandle()
            {
                Marshal.FreeHGlobal(handle);
                return true;
            }

            /// <summary>
            /// Copy bytes to a destination pointer.
            /// </summary>
            /// <param name="dst">Destination to write to</param>
            /// <param name="length">Length of the destination buffer</param>
            /// <param name="offset">Offset from start of src to start copying from</param>
            /// <returns>Number of bytes written to destination</returns>
            public unsafe ulong CopyTo(byte* dst, ulong length, ulong offset = 0)
            {
                var copy = Math.Min(length, _size - offset);

                var src = (byte*)DangerousGetHandle();
                src += offset;

                Buffer.MemoryCopy(src, dst, length, copy);

                return copy;
            }
        }
    }
}