# SamplingApi

Namespace: LLama.Native

Direct translation of the llama.cpp sampling API

```csharp
public class SamplingApi
```

Inheritance [Object](https://docs.microsoft.com/en-us/dotnet/api/system.object) → [SamplingApi](./llama.native.samplingapi.md)

## Constructors

### **SamplingApi()**

```csharp
public SamplingApi()
```

## Methods
### **llama_sample_grammar(SafeLLamaContextHandle, LLamaTokenDataArray, SafeLLamaGrammarHandle)**

Apply grammar rules to candidate tokens

```csharp
public static void llama_sample_grammar(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, SafeLLamaGrammarHandle grammar)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>

`grammar` [SafeLLamaGrammarHandle](./llama.native.safellamagrammarhandle.md)<br>
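As a hedged usage sketch (the context, candidates, and grammar handle are assumed to be created elsewhere; their construction is not part of this API):

```csharp
using LLama.Native;

static class GrammarSamplingExample
{
    // Sketch: constrain the candidates with a grammar before final selection.
    // ctx, candidates and grammar are assumed to be prepared by the caller.
    public static void ApplyGrammar(SafeLLamaContextHandle ctx,
                                    LLamaTokenDataArray candidates,
                                    SafeLLamaGrammarHandle grammar)
    {
        // Suppresses candidates the grammar forbids, so a later sampling
        // call can only pick tokens that keep the output well-formed.
        SamplingApi.llama_sample_grammar(ctx, candidates, grammar);
    }
}
```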
### **llama_sample_repetition_penalty(SafeLLamaContextHandle, LLamaTokenDataArray, Memory&lt;Int32&gt;, UInt64, Single)**

#### Caution

last_tokens_size parameter is no longer needed

---

Repetition penalty described in the CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.

```csharp
public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float penalty)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`last_tokens` [Memory&lt;Int32&gt;](https://docs.microsoft.com/en-us/dotnet/api/system.memory-1)<br>

`last_tokens_size` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>

`penalty` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
### **llama_sample_repetition_penalty(SafeLLamaContextHandle, LLamaTokenDataArray, Memory&lt;Int32&gt;, Single)**

Repetition penalty described in the CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.

```csharp
public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float penalty)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`last_tokens` [Memory&lt;Int32&gt;](https://docs.microsoft.com/en-us/dotnet/api/system.memory-1)<br>

`penalty` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
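A hedged sketch of calling the current (non-deprecated) overload; the penalty value and token window are illustrative, and `ctx`/`candidates` are assumed to be prepared by the caller:

```csharp
using System;
using LLama.Native;

static class RepetitionPenaltyExample
{
    // Sketch: push down the logits of tokens seen in the recent window.
    public static void PenaliseRepeats(SafeLLamaContextHandle ctx,
                                       LLamaTokenDataArray candidates,
                                       int[] recentTokens)
    {
        // penalty == 1.0f is a no-op; values above 1.0f discourage repeats.
        SamplingApi.llama_sample_repetition_penalty(
            ctx, candidates, recentTokens.AsMemory(), penalty: 1.1f);
    }
}
```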
### **llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle, LLamaTokenDataArray, Memory&lt;Int32&gt;, UInt64, Single, Single)**

#### Caution

last_tokens_size parameter is no longer needed

---

Frequency and presence penalties described in the OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.

```csharp
public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float alpha_frequency, float alpha_presence)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`last_tokens` [Memory&lt;Int32&gt;](https://docs.microsoft.com/en-us/dotnet/api/system.memory-1)<br>

`last_tokens_size` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>

`alpha_frequency` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>

`alpha_presence` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
### **llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle, LLamaTokenDataArray, Memory&lt;Int32&gt;, Single, Single)**

Frequency and presence penalties described in the OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.

```csharp
public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float alpha_frequency, float alpha_presence)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`last_tokens` [Memory&lt;Int32&gt;](https://docs.microsoft.com/en-us/dotnet/api/system.memory-1)<br>

`alpha_frequency` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>

`alpha_presence` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
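A hedged sketch of the current overload; the alpha values are illustrative (0.0f disables each penalty), and `ctx`/`candidates` are assumed to be prepared by the caller:

```csharp
using System;
using LLama.Native;

static class PresencePenaltyExample
{
    // Sketch: OpenAI-style penalties over the recent-token window.
    public static void Penalise(SafeLLamaContextHandle ctx,
                                LLamaTokenDataArray candidates,
                                int[] recentTokens)
    {
        SamplingApi.llama_sample_frequency_and_presence_penalties(
            ctx, candidates, recentTokens.AsMemory(),
            alpha_frequency: 0.1f,  // grows with how often a token occurred
            alpha_presence: 0.1f);  // flat cost once a token has occurred
    }
}
```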
### **llama_sample_softmax(SafeLLamaContextHandle, LLamaTokenDataArray)**

Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.

```csharp
public static void llama_sample_softmax(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray
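For orientation, a minimal sketch; the note that each candidate's `p` field is populated follows the llama.cpp `llama_token_data` layout and is an assumption about this wrapper:

```csharp
using LLama.Native;

static class SoftmaxExample
{
    // Sketch: normalise logits into probabilities. Afterwards the candidates
    // are sorted by logit (descending) and, assuming the llama.cpp
    // llama_token_data layout, each entry's `p` field holds a probability.
    public static void Normalise(SafeLLamaContextHandle ctx,
                                 LLamaTokenDataArray candidates)
    {
        SamplingApi.llama_sample_softmax(ctx, candidates);
    }
}
```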
### **llama_sample_top_k(SafeLLamaContextHandle, LLamaTokenDataArray, Int32, UInt64)**

Top-K sampling described in the academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751

```csharp
public static void llama_sample_top_k(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, int k, ulong min_keep)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`k` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>

`min_keep` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>
### **llama_sample_top_p(SafeLLamaContextHandle, LLamaTokenDataArray, Single, UInt64)**

Nucleus sampling described in the academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751

```csharp
public static void llama_sample_top_p(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`p` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>

`min_keep` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>
### **llama_sample_tail_free(SafeLLamaContextHandle, LLamaTokenDataArray, Single, UInt64)**

Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.

```csharp
public static void llama_sample_tail_free(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float z, ulong min_keep)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`z` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>

`min_keep` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>
### **llama_sample_typical(SafeLLamaContextHandle, LLamaTokenDataArray, Single, UInt64)**

Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.

```csharp
public static void llama_sample_typical(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

`p` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>

`min_keep` [UInt64](https://docs.microsoft.com/en-us/dotnet/api/system.uint64)<br>
### **llama_sample_temperature(SafeLLamaContextHandle, LLamaTokenDataArray, Single)**

Sample with temperature. As the temperature increases, the prediction becomes more diverse but also more vulnerable to hallucination, i.e. generating tokens that are sensible but not factual.

```csharp
public static void llama_sample_temperature(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float temp)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>

`temp` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
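The filtering methods above (top-k, top-p, tail-free, locally typical, temperature) are usually chained before a final sampling call. A hedged sketch of the conventional llama.cpp ordering, with illustrative cutoff values and using `llama_sample_token` (documented below):

```csharp
using LLama.Native;

static class SamplingPipelineExample
{
    // Sketch: conventional llama.cpp filter chain, then a random draw.
    // All values are illustrative, not recommended defaults.
    public static int SampleNextToken(SafeLLamaContextHandle ctx,
                                      LLamaTokenDataArray candidates)
    {
        SamplingApi.llama_sample_top_k(ctx, candidates, k: 40, min_keep: 1);
        SamplingApi.llama_sample_tail_free(ctx, candidates, z: 1.0f, min_keep: 1); // 1.0f = off
        SamplingApi.llama_sample_typical(ctx, candidates, p: 1.0f, min_keep: 1);   // 1.0f = off
        SamplingApi.llama_sample_top_p(ctx, candidates, p: 0.95f, min_keep: 1);
        SamplingApi.llama_sample_temperature(ctx, candidates, temp: 0.8f);

        // Randomly select from whatever survived the filters.
        return SamplingApi.llama_sample_token(ctx, candidates);
    }
}
```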
### **llama_sample_token_mirostat(SafeLLamaContextHandle, LLamaTokenDataArray, Single, Single, Int32, Single&)**

Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.

```csharp
public static int llama_sample_token_mirostat(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, int m, ref float mu)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
A vector of `LLamaTokenData` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.

`tau` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.

`eta` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.

`m` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm.

`mu` [Single&](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.

#### Returns

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
The sampled token id.
### **llama_sample_token_mirostat_v2(SafeLLamaContextHandle, LLamaTokenDataArray, Single, Single, Single&)**

Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.

```csharp
public static int llama_sample_token_mirostat_v2(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, ref float mu)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
A vector of `LLamaTokenData` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.

`tau` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.

`eta` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.

`mu` [Single&](https://docs.microsoft.com/en-us/dotnet/api/system.single)<br>
Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.

#### Returns

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
The sampled token id.
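Putting the parameter notes together: `mu` persists across calls and starts at `2 * tau`. A hedged sketch (the tau and eta values are illustrative):

```csharp
using LLama.Native;

static class MirostatExample
{
    const float Tau = 5.0f;  // illustrative target surprise
    const float Eta = 0.1f;  // illustrative learning rate

    // Sketch: mu must live outside the per-token call so that each
    // invocation can update it in place via the ref parameter.
    public static int SampleV2(SafeLLamaContextHandle ctx,
                               LLamaTokenDataArray candidates,
                               ref float mu)
    {
        return SamplingApi.llama_sample_token_mirostat_v2(ctx, candidates, Tau, Eta, ref mu);
    }

    // Caller initialises once, before the generation loop:
    //     float mu = 2.0f * Tau;
}
```

Mirostat 1.0 is used the same way, with the extra `m` argument (the paper uses `m = 100`).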
### **llama_sample_token_greedy(SafeLLamaContextHandle, LLamaTokenDataArray)**

Selects the token with the highest probability.

```csharp
public static int llama_sample_token_greedy(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

#### Returns

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
The id of the selected token.
### **llama_sample_token(SafeLLamaContextHandle, LLamaTokenDataArray)**

Randomly selects a token from the candidates based on their probabilities.

```csharp
public static int llama_sample_token(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
```

#### Parameters

`ctx` [SafeLLamaContextHandle](./llama.native.safellamacontexthandle.md)<br>

`candidates` [LLamaTokenDataArray](./llama.native.llamatokendataarray.md)<br>
Pointer to LLamaTokenDataArray

#### Returns

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
The id of the sampled token.
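A minimal sketch contrasting the two terminal selection calls; run the filtering chain shown earlier first if you want anything other than a draw over the raw distribution:

```csharp
using LLama.Native;

static class FinalSelectionExample
{
    // Sketch: greedy is deterministic; llama_sample_token draws randomly
    // according to the candidates' probabilities.
    public static int PickToken(SafeLLamaContextHandle ctx,
                                LLamaTokenDataArray candidates,
                                bool greedy)
    {
        return greedy
            ? SamplingApi.llama_sample_token_greedy(ctx, candidates)
            : SamplingApi.llama_sample_token(ctx, candidates);
    }
}
```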