
SafeLlamaModelHandle.cs 26 kB

April 2024 Binary Update (#662)

* Updated binaries, using [this build](https://github.com/SciSharp/LLamaSharp/actions/runs/8654672719/job/23733195669) for llama.cpp commit `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
  - Added all new functions.
  - Moved some functions (e.g. `SafeLlamaModelHandle` specific functions) into `SafeLlamaModelHandle.cs`.
  - Exposed tokens on `SafeLlamaModelHandle` and `LLamaWeights` through a `Tokens` property. As new special tokens are added in the future they can be added here.
  - Changed all token properties to return nullable tokens, to handle models which do not define some tokens.
  - Fixed `DefaultSamplingPipeline` to handle models with no newline token.
* Moved native methods to more specific locations.
  - Context specific things have been moved into `SafeLLamaContextHandle.cs` and made private; they're already exposed through C# properties and methods.
  - Checking that GPU layer count is zero if GPU offload is not supported.
  - Moved methods for creating default structs (`llama_model_quantize_default_params` and `llama_context_default_params`) into the relevant structs.
* Removed the exception thrown when `GpuLayerCount > 0` and GPU is not supported.
* Added low level wrapper methods for the new per-sequence state load/save in `SafeLLamaContextHandle`, added high level wrapper methods (save/load with a `State` object or a memory mapped file) in `LLamaContext`, and moved the native methods for per-sequence state load/save into `SafeLLamaContextHandle`.
* Added update and defrag methods for the KV cache in `SafeLLamaContextHandle`.
* Updated submodule to `f7001ccc5aa359fcf41bba19d1c99c3d25c9bcc7`.
* Now passing the sequence ID when saving a single sequence state.
1 year ago
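The nullable token properties described in this commit can be consumed as in the following minimal sketch. It assumes the wider LLamaSharp API surface (`LLamaWeights.LoadFromFile` and `ModelParams` are not part of this file, so their exact signatures are assumptions here):

    // Sketch only: special tokens are now nullable, since some models do not define them.
    using var weights = LLamaWeights.LoadFromFile(new ModelParams("model.gguf"));

    LLamaToken? newline = weights.Tokens.Newline;
    if (newline is null)
    {
        // e.g. DefaultSamplingPipeline must now cope with a missing newline token
        Console.WriteLine("This model has no newline token.");
    }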
using System;
using System.Buffers;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using LLama.Exceptions;
using LLama.Extensions;

namespace LLama.Native
{
    /// <summary>
    /// A reference to a set of llama model weights
    /// </summary>
    // ReSharper disable once ClassNeverInstantiated.Global (used implicitly in native API)
    public sealed class SafeLlamaModelHandle
        : SafeLLamaHandleBase
    {
        /// <summary>
        /// Total number of tokens in the vocabulary of this model
        /// </summary>
        public int VocabCount => llama_n_vocab(this);

        /// <summary>
        /// Get the vocabulary type for this model
        /// </summary>
        public LLamaVocabType VocabType => llama_vocab_type(this);

        /// <summary>
        /// Get the RoPE (positional embedding) type for this model
        /// </summary>
        public LLamaRopeType RopeType => llama_rope_type(this);
        /// <summary>
        /// Get the context size (in tokens) this model was trained on
        /// </summary>
        public int ContextSize => llama_n_ctx_train(this);

        /// <summary>
        /// Get the RoPE frequency scaling factor this model was trained with
        /// </summary>
        public float RopeFrequency => llama_rope_freq_scale_train(this);
        /// <summary>
        /// Dimension of embedding vectors
        /// </summary>
        public int EmbeddingSize => llama_n_embd(this);

        /// <summary>
        /// Get the size of this model in bytes
        /// </summary>
        public ulong SizeInBytes => llama_model_size(this);

        /// <summary>
        /// Get the number of parameters in this model
        /// </summary>
        public ulong ParameterCount => llama_model_n_params(this);
        /// <summary>
        /// Get the number of layers in this model
        /// </summary>
        public int LayerCount => llama_n_layers(this);
        /// <summary>
        /// Get a description of this model
        /// </summary>
        public string Description
        {
            get
            {
                unsafe
                {
                    // Get description length
                    var size = llama_model_desc(this, null, 0);
                    var buf = new byte[size + 1];
                    fixed (byte* bufPtr = buf)
                    {
                        size = llama_model_desc(this, bufPtr, buf.Length);
                        return Encoding.UTF8.GetString(buf, 0, size);
                    }
                }
            }
        }
        /// <summary>
        /// Get the number of metadata key/value pairs
        /// </summary>
        public int MetadataCount => llama_model_meta_count(this);

        private ModelTokens? _tokens;

        /// <summary>
        /// Get the special tokens of this model
        /// </summary>
        public ModelTokens Tokens => _tokens ??= new ModelTokens(this);

        /// <inheritdoc />
        protected override bool ReleaseHandle()
        {
            llama_free_model(handle);
            return true;
        }
        /// <summary>
        /// Load a model from the given file path into memory
        /// </summary>
        /// <param name="modelPath"></param>
        /// <param name="lparams"></param>
        /// <returns></returns>
        /// <exception cref="LoadWeightsFailedException"></exception>
        public static SafeLlamaModelHandle LoadFromFile(string modelPath, LLamaModelParams lparams)
        {
            // Try to open the model file. This checks:
            // - File exists (automatically throws FileNotFoundException)
            // - File is readable (explicit check)
            // This provides better error messages than llama.cpp, which would throw an access violation exception in both cases.
            using (var fs = new FileStream(modelPath, FileMode.Open))
                if (!fs.CanRead)
                    throw new InvalidOperationException($"Model file '{modelPath}' is not readable");

            var handle = llama_load_model_from_file(modelPath, lparams);
            if (handle.IsInvalid)
                throw new LoadWeightsFailedException(modelPath);

            return handle;
        }
        #region native API
        static SafeLlamaModelHandle()
        {
            // Ensure that `NativeApi` has been loaded
            NativeApi.llama_empty_call();
        }

        /// <summary>
        /// Load all of the weights of a model into memory.
        /// </summary>
        /// <param name="path_model"></param>
        /// <param name="params"></param>
        /// <returns>The loaded model, or null on failure.</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern SafeLlamaModelHandle llama_load_model_from_file(string path_model, LLamaModelParams @params);
        /// <summary>
        /// Apply a LoRA adapter to a loaded model.
        /// path_base_model is the path to a higher quality model to use as a base for
        /// the layers modified by the adapter. Can be NULL to use the current loaded model.
        /// The model needs to be reloaded before applying a new adapter, otherwise the adapter
        /// will be applied on top of the previous one.
        /// </summary>
        /// <param name="model_ptr"></param>
        /// <param name="path_lora"></param>
        /// <param name="scale"></param>
        /// <param name="path_base_model"></param>
        /// <param name="n_threads"></param>
        /// <returns>Returns 0 on success</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_model_apply_lora_from_file(SafeLlamaModelHandle model_ptr, string path_lora, float scale, string? path_base_model, int n_threads);
        /// <summary>
        /// Frees all allocated memory associated with a model
        /// </summary>
        /// <param name="model"></param>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern void llama_free_model(IntPtr model);

        /// <summary>
        /// Get the number of metadata key/value pairs
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_model_meta_count(SafeLlamaModelHandle model);
        /// <summary>
        /// Get metadata key name by index
        /// </summary>
        /// <param name="model">Model to fetch from</param>
        /// <param name="index">Index of key to fetch</param>
        /// <param name="dest">Buffer to write result into</param>
        /// <returns>The length of the string on success (even if the buffer is too small). -1 if the key does not exist.</returns>
        private static int llama_model_meta_key_by_index(SafeLlamaModelHandle model, int index, Span<byte> dest)
        {
            unsafe
            {
                fixed (byte* destPtr = dest)
                {
                    return llama_model_meta_key_by_index_native(model, index, destPtr, dest.Length);
                }
            }

            [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl, EntryPoint = "llama_model_meta_key_by_index")]
            static extern unsafe int llama_model_meta_key_by_index_native(SafeLlamaModelHandle model, int index, byte* buf, long buf_size);
        }
        /// <summary>
        /// Get metadata value as a string by index
        /// </summary>
        /// <param name="model">Model to fetch from</param>
        /// <param name="index">Index of value to fetch</param>
        /// <param name="dest">Buffer to write result into</param>
        /// <returns>The length of the string on success (even if the buffer is too small). -1 if the key does not exist.</returns>
        private static int llama_model_meta_val_str_by_index(SafeLlamaModelHandle model, int index, Span<byte> dest)
        {
            unsafe
            {
                fixed (byte* destPtr = dest)
                {
                    return llama_model_meta_val_str_by_index_native(model, index, destPtr, dest.Length);
                }
            }

            [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl, EntryPoint = "llama_model_meta_val_str_by_index")]
            static extern unsafe int llama_model_meta_val_str_by_index_native(SafeLlamaModelHandle model, int index, byte* buf, long buf_size);
        }
        /// <summary>
        /// Get metadata value as a string by key name
        /// </summary>
        /// <param name="model"></param>
        /// <param name="key"></param>
        /// <param name="buf"></param>
        /// <param name="buf_size"></param>
        /// <returns>The length of the string on success, or -1 on failure</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        public static extern unsafe int llama_model_meta_val_str(SafeLlamaModelHandle model, byte* key, byte* buf, long buf_size);

        /// <summary>
        /// Get the number of tokens in the model vocabulary
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_n_vocab(SafeLlamaModelHandle model);

        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaVocabType llama_vocab_type(SafeLlamaModelHandle model);

        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaRopeType llama_rope_type(SafeLlamaModelHandle model);
        /// <summary>
        /// Get the size of the context window for the model
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_n_ctx_train(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the dimension of embedding vectors from this model
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_n_embd(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the number of layers in this model
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_n_layers(SafeLlamaModelHandle model);
        /// <summary>
        /// Get a string describing the model type
        /// </summary>
        /// <param name="model"></param>
        /// <param name="buf"></param>
        /// <param name="buf_size"></param>
        /// <returns>The length of the string on success (even if the buffer is too small), or -1 on failure</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern unsafe int llama_model_desc(SafeLlamaModelHandle model, byte* buf, long buf_size);

        /// <summary>
        /// Get the size of the model in bytes
        /// </summary>
        /// <param name="model"></param>
        /// <returns>The size of the model</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern ulong llama_model_size(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the number of parameters in this model
        /// </summary>
        /// <param name="model"></param>
        /// <returns>The number of parameters in the model</returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern ulong llama_model_n_params(SafeLlamaModelHandle model);
        /// <summary>
        /// Get the model's RoPE frequency scaling factor
        /// </summary>
        /// <param name="model"></param>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern float llama_rope_freq_scale_train(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the "Beginning of sentence" token
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaToken llama_token_bos(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the "End of sentence" token
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaToken llama_token_eos(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the "classification" token
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaToken llama_token_cls(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the "sentence separator" token
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaToken llama_token_sep(SafeLlamaModelHandle model);

        /// <summary>
        /// Get the "new line" token
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern LLamaToken llama_token_nl(SafeLlamaModelHandle model);

        /// <summary>
        /// codellama infill tokens: beginning of infill prefix
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_token_prefix(SafeLlamaModelHandle model);

        /// <summary>
        /// codellama infill tokens: beginning of infill middle
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_token_middle(SafeLlamaModelHandle model);

        /// <summary>
        /// codellama infill tokens: beginning of infill suffix
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_token_suffix(SafeLlamaModelHandle model);

        /// <summary>
        /// codellama infill tokens: end of infill middle
        /// </summary>
        /// <returns></returns>
        [DllImport(NativeApi.libraryName, CallingConvention = CallingConvention.Cdecl)]
        private static extern int llama_token_eot(SafeLlamaModelHandle model);
        #endregion
        #region LoRA
        /// <summary>
        /// Apply a LoRA adapter to a loaded model
        /// </summary>
        /// <param name="lora"></param>
        /// <param name="scale"></param>
        /// <param name="modelBase">A path to a higher quality model to use as a base for the layers modified by the
        /// adapter. Can be NULL to use the current loaded model.</param>
        /// <param name="threads"></param>
        /// <exception cref="RuntimeError"></exception>
        public void ApplyLoraFromFile(string lora, float scale, string? modelBase = null, int? threads = null)
        {
            // Try to open the LoRA file. This checks:
            // - File exists (automatically throws FileNotFoundException)
            // - File is readable (explicit check)
            // This provides better error messages than llama.cpp, which would throw an access violation exception in both cases.
            using (var fs = new FileStream(lora, FileMode.Open))
                if (!fs.CanRead)
                    throw new InvalidOperationException($"LoRA file '{lora}' is not readable");

            var err = llama_model_apply_lora_from_file(
                this,
                lora,
                scale,
                string.IsNullOrEmpty(modelBase) ? null : modelBase,
                threads ?? Math.Max(1, Environment.ProcessorCount / 2)
            );

            if (err != 0)
                throw new RuntimeError($"Failed to apply LoRA adapter (err={err}).");
        }
        #endregion
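        // Example usage (a hypothetical sketch, not part of the original file): applying a
        // LoRA adapter to a loaded handle `model`, using the method above:
        //
        //     model.ApplyLoraFromFile("adapter.bin", scale: 1.0f);
        //
        // Per the native doc comment earlier in this file, the model must be reloaded before
        // applying a different adapter, otherwise adapters stack on top of each other.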
        #region tokenize
        /// <summary>
        /// Convert a single llama token into bytes
        /// </summary>
        /// <param name="token">Token to decode</param>
        /// <param name="dest">A span to attempt to write into. If this is too small nothing will be written</param>
        /// <returns>The size of this token. **nothing will be written** if this is larger than `dest`</returns>
        public uint TokenToSpan(LLamaToken token, Span<byte> dest)
        {
            var length = NativeApi.llama_token_to_piece(this, token, dest);
            return (uint)Math.Abs(length);
        }
        /// <summary>
        /// Convert a sequence of tokens into characters.
        /// </summary>
        /// <param name="tokens"></param>
        /// <param name="dest"></param>
        /// <param name="encoding"></param>
        /// <returns>The section of the span which has valid data in it.
        /// If there was insufficient space in the output span this will be
        /// filled with as many characters as possible, starting from the _last_ token.
        /// </returns>
        [Obsolete("Use a StreamingTokenDecoder instead")]
        internal Span<char> TokensToSpan(IReadOnlyList<LLamaToken> tokens, Span<char> dest, Encoding encoding)
        {
            var decoder = new StreamingTokenDecoder(encoding, this);
            decoder.AddRange(tokens);
            var str = decoder.Read();

            if (str.Length < dest.Length)
            {
                str.AsSpan().CopyTo(dest);
                return dest.Slice(0, str.Length);
            }
            else
            {
                str.AsSpan().Slice(str.Length - dest.Length).CopyTo(dest);
                return dest;
            }
        }
        /// <summary>
        /// Convert a string of text into tokens
        /// </summary>
        /// <param name="text"></param>
        /// <param name="add_bos"></param>
        /// <param name="encoding"></param>
        /// <param name="special">Allow tokenizing special and/or control tokens which otherwise are not exposed and treated as plaintext.</param>
        /// <returns></returns>
        public LLamaToken[] Tokenize(string text, bool add_bos, bool special, Encoding encoding)
        {
            // Early exit if there's no work to do
            if (text == "" && !add_bos)
                return Array.Empty<LLamaToken>();

            // Convert string to bytes, adding one extra byte to the end (null terminator)
            var bytesCount = encoding.GetByteCount(text);
            var bytes = ArrayPool<byte>.Shared.Rent(bytesCount + 1);
            try
            {
                unsafe
                {
                    fixed (char* textPtr = text)
                    fixed (byte* bytesPtr = bytes)
                    {
                        // Convert text into bytes
                        encoding.GetBytes(textPtr, text.Length, bytesPtr, bytes.Length);

                        // Tokenize once with no output, to get the token count. Output will be negative (indicating that there was insufficient space)
                        var count = -NativeApi.llama_tokenize(this, bytesPtr, bytesCount, (LLamaToken*)IntPtr.Zero, 0, add_bos, special);

                        // Tokenize again, this time outputting into an array of exactly the right size
                        var tokens = new LLamaToken[count];
                        fixed (LLamaToken* tokensPtr = tokens)
                        {
                            NativeApi.llama_tokenize(this, bytesPtr, bytesCount, tokensPtr, count, add_bos, special);
                            return tokens;
                        }
                    }
                }
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(bytes, true);
            }
        }
        #endregion
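        // Example usage (a hypothetical sketch, not part of the original file): round-tripping
        // text through Tokenize and the StreamingTokenDecoder used by TokensToSpan above,
        // assuming a loaded SafeLlamaModelHandle `model`:
        //
        //     var tokens = model.Tokenize("Hello world", add_bos: true, special: false, Encoding.UTF8);
        //     var decoder = new StreamingTokenDecoder(Encoding.UTF8, model);
        //     decoder.AddRange(tokens);
        //     var text = decoder.Read();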
        #region context
        /// <summary>
        /// Create a new context for this model
        /// </summary>
        /// <param name="params"></param>
        /// <returns></returns>
        public SafeLLamaContextHandle CreateContext(LLamaContextParams @params)
        {
            return SafeLLamaContextHandle.Create(this, @params);
        }
        #endregion
        #region metadata
        /// <summary>
        /// Get the metadata key for the given index
        /// </summary>
        /// <param name="index">The index to get</param>
        /// <returns>The key, null if there is no such key or if the buffer was too small</returns>
        public Memory<byte>? MetadataKeyByIndex(int index)
        {
            // Check if the key exists, without getting any bytes of data
            var keyLength = llama_model_meta_key_by_index(this, index, Array.Empty<byte>());
            if (keyLength < 0)
                return null;

            // Get a buffer large enough to hold it
            var buffer = new byte[keyLength + 1];
            keyLength = llama_model_meta_key_by_index(this, index, buffer);
            Debug.Assert(keyLength >= 0);

            return buffer.AsMemory().Slice(0, keyLength);
        }

        /// <summary>
        /// Get the metadata value for the given index
        /// </summary>
        /// <param name="index">The index to get</param>
        /// <returns>The value, null if there is no such value or if the buffer was too small</returns>
        public Memory<byte>? MetadataValueByIndex(int index)
        {
            // Check if the value exists, without getting any bytes of data
            var valueLength = llama_model_meta_val_str_by_index(this, index, Array.Empty<byte>());
            if (valueLength < 0)
                return null;

            // Get a buffer large enough to hold it
            var buffer = new byte[valueLength + 1];
            valueLength = llama_model_meta_val_str_by_index(this, index, buffer);
            Debug.Assert(valueLength >= 0);

            return buffer.AsMemory().Slice(0, valueLength);
        }

        internal IReadOnlyDictionary<string, string> ReadMetadata()
        {
            var result = new Dictionary<string, string>();

            for (var i = 0; i < MetadataCount; i++)
            {
                var keyBytes = MetadataKeyByIndex(i);
                if (keyBytes == null)
                    continue;
                var key = Encoding.UTF8.GetStringFromSpan(keyBytes.Value.Span);

                var valBytes = MetadataValueByIndex(i);
                if (valBytes == null)
                    continue;
                var val = Encoding.UTF8.GetStringFromSpan(valBytes.Value.Span);

                result[key] = val;
            }

            return result;
        }
        #endregion
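        // Example usage (a hypothetical sketch, not part of the original file): dumping all
        // metadata from a loaded SafeLlamaModelHandle `model` via the per-index methods above
        // (ReadMetadata is internal, so external callers would use these instead):
        //
        //     for (var i = 0; i < model.MetadataCount; i++)
        //     {
        //         var key = model.MetadataKeyByIndex(i);
        //         var val = model.MetadataValueByIndex(i);
        //         if (key != null && val != null)
        //             Console.WriteLine($"{Encoding.UTF8.GetString(key.Value.ToArray())} = {Encoding.UTF8.GetString(val.Value.ToArray())}");
        //     }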
        /// <summary>
        /// Get tokens for a model
        /// </summary>
        public class ModelTokens
        {
            private readonly SafeLlamaModelHandle _model;

            internal ModelTokens(SafeLlamaModelHandle model)
            {
                _model = model;
            }

            private static LLamaToken? Normalize(LLamaToken token)
            {
                // A value of -1 indicates that the model does not define this token
                return token == -1 ? null : token;
            }

            /// <summary>
            /// Get the Beginning of Sentence token for this model
            /// </summary>
            public LLamaToken? BOS => Normalize(llama_token_bos(_model));

            /// <summary>
            /// Get the End of Sentence token for this model
            /// </summary>
            public LLamaToken? EOS => Normalize(llama_token_eos(_model));

            /// <summary>
            /// Get the newline token for this model
            /// </summary>
            public LLamaToken? Newline => Normalize(llama_token_nl(_model));

            /// <summary>
            /// Get the classification token for this model
            /// </summary>
            public LLamaToken? CLS => Normalize(llama_token_cls(_model));

            /// <summary>
            /// Get the sentence separator token for this model
            /// </summary>
            public LLamaToken? SEP => Normalize(llama_token_sep(_model));

            /// <summary>
            /// Codellama beginning of infill prefix
            /// </summary>
            public LLamaToken? InfillPrefix => Normalize(llama_token_prefix(_model));

            /// <summary>
            /// Codellama beginning of infill middle
            /// </summary>
            public LLamaToken? InfillMiddle => Normalize(llama_token_middle(_model));

            /// <summary>
            /// Codellama beginning of infill suffix
            /// </summary>
            public LLamaToken? InfillSuffix => Normalize(llama_token_suffix(_model));

            /// <summary>
            /// Codellama end of infill middle
            /// </summary>
            public LLamaToken? EOT => Normalize(llama_token_eot(_model));
        }
    }
}
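Taken together, typical low-level usage of this handle looks roughly like the following. This is a hedged sketch: `LLamaModelParams.Default()` and `LLamaContextParams.Default()` are assumed helpers (the commit notes above mention moving default-struct creation into the relevant structs), and the model path is a placeholder.

    // Sketch only: load weights, inspect the model, create a context, tokenize.
    var modelParams = LLamaModelParams.Default();                             // assumed helper
    using var model = SafeLlamaModelHandle.LoadFromFile("model.gguf", modelParams);

    Console.WriteLine($"{model.Description}: {model.ParameterCount} parameters, vocab size {model.VocabCount}");

    using var context = model.CreateContext(LLamaContextParams.Default());   // assumed helper
    var tokens = model.Tokenize("Hello", add_bos: true, special: false, Encoding.UTF8);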