diff --git a/0.5/404.html b/0.5/404.html new file mode 100755 index 00000000..8a544c66 --- /dev/null +++ b/0.5/404.html @@ -0,0 +1,2009 @@ + + + +
+The figure below shows the core framework structure, which is separated into four levels.
+- LLamaContext, LLamaEmbedder and LLamaQuantizer.
+- InteractiveExecutor, InstructExecutor and StatelessExecutor.
+- ChatSession, a wrapper of InteractiveExecutor and LLamaContext, which supports interactive tasks and saving/re-loading sessions. It also provides a flexible way to customize the text processing via IHistoryTransform, ITextTransform and ITextStreamTransform.
+- High-level applications, such as BotSharp.
Since LLamaContext interacts with the native library, it's not recommended to use its methods directly unless you know what you are doing. The same applies to NativeApi, which is not included in the architecture figure above.
ChatSession is recommended when you want to build an application similar to ChatGPT or a chat bot, because it works best with InteractiveExecutor. Though other executors are also allowed to be passed as a parameter to initialize a ChatSession, it's not encouraged if you are new to LLamaSharp and LLMs.
High-level applications, such as BotSharp, are supposed to be used when you want to concentrate on the parts not related to the LLM. For example, if you want to deploy a chat bot to help you remember your schedules, using BotSharp may be a good choice.
+Note that the APIs of the high-level applications may not be stable yet. Please take this into account when using them.
+ChatSession is a higher-level abstraction than the executors. In the context of a chat application like ChatGPT, a "chat session" refers to an interactive conversation or exchange of messages between the user and the chatbot. It represents a continuous flow of communication where the user enters input or asks questions, and the chatbot responds accordingly. A chat session typically starts when the user initiates a conversation with the chatbot and continues until the interaction comes to a natural end or is explicitly terminated by either the user or the system. During a chat session, the chatbot maintains the context of the conversation, remembers previous messages, and generates appropriate responses based on the user's inputs and the ongoing dialogue.
Currently, the only parameter that is accepted is an ILLamaExecutor, because that is the only parameter we're sure will exist in all future versions. Since it's a high-level abstraction, we're conservative about the API design. In the future, more kinds of constructors may be added.
InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath)));
+ChatSession session = new ChatSession(ex);
+
+The Chat API accepts two kinds of input: ChatHistory and String. The API with a string is quite similar to that of the executors, while the API with ChatHistory aims to provide more flexible usage. For example, suppose you had a chat with the bot in session A before you opened session B. Session B has no memory of what you said before, so you can feed the history of A into B.
string prompt = "What is C#?";
+
+foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } })) // the inference params should be changed depending on your scenario
+{
+ Console.Write(text);
+}
+
+Currently History is a property of ChatSession.
foreach(var rec in session.History.Messages)
+{
+ Console.WriteLine($"{rec.AuthorRole}: {rec.Content}");
+}
+
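+For example, you can use this property to carry a conversation into a new session. Below is a minimal sketch, assuming the ChatHistory overload of Chat described above (sessionA is an illustrative name for an earlier session; ex is the executor from before):
+ChatHistory history = sessionA.History;
+
+// Create a new session and feed the old history into it to continue the conversation.
+ChatSession sessionB = new ChatSession(ex);
+foreach (var text in sessionB.Chat(history, new InferenceParams() { AntiPrompts = new List<string> { "User:" } }))
+{
+    Console.Write(text);
+}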
+
+
+
+
+
+
+Generally, an application may need to switch between chat sessions, which requires the ability to load and save sessions.
+When building a chat bot app, it's NOT encouraged to initialize many chat sessions and keep them all in memory waiting to be switched to, because the memory consumption of both CPU and GPU is high. It's recommended to save the current session before switching to a new one, and to load the file when switching back.
+The API is quite simple: the files will be saved into a directory you specify. If the path does not exist, a new directory will be created.
+string savePath = "<save dir>";
+session.SaveSession(savePath);
+
+session.LoadSession(savePath);
+
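+Putting the two calls together, a sketch of switching between sessions could look like this (the directory names are placeholders):
+// Save the current session before switching away from it.
+session.SaveSession("<session A dir>");
+
+// ... work in another session ...
+
+// Restore session A when switching back.
+session.LoadSession("<session A dir>");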
+
+
+
+
+
+
+There are three important elements in ChatSession: input, output and history. Besides, there are some conversions between them. Since the way they are processed varies under different conditions, LLamaSharp hands this part of the power over to the users.
Currently, there are three kinds of processing that can be customized, as introduced below.
+In general, the input of the chat API is a piece of text (not a stream), so ChatSession processes it in a pipeline. If you want to use your customized transform, you need to define a transform that implements ITextTransform and add it to the pipeline of ChatSession.
public interface ITextTransform
+{
+ string Transform(string text);
+}
+
+public class MyInputTransform1 : ITextTransform
+{
+ public string Transform(string text)
+ {
+ return $"Question: {text}\n";
+ }
+}
+
+public class MyInputTransform2 : ITextTransform
+{
+ public string Transform(string text)
+ {
+ return text + "Answer: ";
+ }
+}
+
+session.AddInputTransform(new MyInputTransform1()).AddInputTransform(new MyInputTransform2());
+
+Different from the input, the output of the chat API is a text stream. Therefore, you need to process it word by word instead of getting the full text at once.
+The interface takes an IEnumerable<string> as input, which is actually a lazily-yielded sequence.
public interface ITextStreamTransform
+{
+ IEnumerable<string> Transform(IEnumerable<string> tokens);
+ IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens);
+}
+
+When implementing it, you could throw a NotImplementedException in one of them if you only need to use the chat API synchronously or asynchronously.
+Different from the input transform pipeline, the output transform only supports one transform.
+session.WithOutputTransform(new MyOutputTransform());
+
+Here's an example of how to implement the interface. In this example, the transform detects whether there are keywords in the response and removes them.
+/// <summary>
+/// A text output transform that removes the keywords from the response.
+/// </summary>
+public class KeywordTextOutputStreamTransform : ITextStreamTransform
+{
+ HashSet<string> _keywords;
+ int _maxKeywordLength;
+ bool _removeAllMatchedTokens;
+
+ /// <summary>
+ ///
+ /// </summary>
+ /// <param name="keywords">Keywords that you want to remove from the response.</param>
+    /// <param name="redundancyLength">The extra length allowed when searching for a keyword. For example, if your only keyword is "highlight" (9 characters)
+    /// and the tokens arrive as "\r\nhigh", "lig", "ht": with redundancyLength=0, the buffered text "\r\nhighlig" (9 characters) already reaches the maximum
+    /// keyword length (9) and gets flushed before the keyword has fully arrived, so the keyword can never be matched. On the contrary, setting redundancyLength >= 2
+    /// leaves room for the leading "\r\n" and leads to a successful match.
+    /// The larger redundancyLength is, the lower the processing speed, but in practice it introduces little performance impact as long as redundancyLength <= 5.</param>
+    /// <param name="removeAllMatchedTokens">If set to true, all the tokens related to a matched keyword will be removed. Otherwise, only the keyword itself will be removed.</param>
+ public KeywordTextOutputStreamTransform(IEnumerable<string> keywords, int redundancyLength = 3, bool removeAllMatchedTokens = false)
+ {
+ _keywords = new(keywords);
+ _maxKeywordLength = keywords.Select(x => x.Length).Max() + redundancyLength;
+ _removeAllMatchedTokens = removeAllMatchedTokens;
+ }
+ /// <inheritdoc />
+    public IEnumerable<string> Transform(IEnumerable<string> tokens)
+    {
+        // A sliding window of recent tokens that may still form a keyword.
+        var window = new Queue<string>();
+
+        foreach (var s in tokens)
+        {
+            window.Enqueue(s);
+            var current = string.Join("", window);
+            if (_keywords.Any(x => current.Contains(x)))
+            {
+                // A keyword appeared: drop the buffered tokens and, unless all
+                // matched tokens should be removed, yield the text with the keyword stripped.
+                var matchedKeyword = _keywords.First(x => current.Contains(x));
+                int total = window.Count;
+                for (int i = 0; i < total; i++)
+                {
+                    window.Dequeue();
+                }
+                if (!_removeAllMatchedTokens)
+                {
+                    yield return current.Replace(matchedKeyword, "");
+                }
+            }
+            else if (current.Length >= _maxKeywordLength)
+            {
+                // The buffer is already longer than any keyword (plus redundancy),
+                // so no keyword can be hiding in it anymore: flush the window.
+                int total = window.Count;
+                for (int i = 0; i < total; i++)
+                {
+                    yield return window.Dequeue();
+                }
+            }
+        }
+        // Flush whatever remains at the end of the stream.
+        int totalCount = window.Count;
+        for (int i = 0; i < totalCount; i++)
+        {
+            yield return window.Dequeue();
+        }
+    }
+ /// <inheritdoc />
+ public async IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens)
+ {
+ throw new NotImplementedException(); // This is implemented in `LLamaTransforms` but we ignore it here.
+ }
+}
+
+The chat history can be converted to or from plain text, which is exactly what its interface describes.
+public interface IHistoryTransform
+{
+ string HistoryToText(ChatHistory history);
+ ChatHistory TextToHistory(AuthorRole role, string text);
+}
+
+Similar to the output transform, the history transform is added in the following way:
+session.WithHistoryTransform(new MyHistoryTransform());
+
+The implementation is quite flexible, depending on what you want the history messages to look like. Here's an example, which is the default history transform in LLamaSharp.
+/// <summary>
+/// The default history transform.
+/// Uses plain text with the following format:
+/// [Author]: [Message]
+/// </summary>
+public class DefaultHistoryTransform : IHistoryTransform
+{
+ private readonly string defaultUserName = "User";
+ private readonly string defaultAssistantName = "Assistant";
+ private readonly string defaultSystemName = "System";
+ private readonly string defaultUnknownName = "??";
+
+ string _userName;
+ string _assistantName;
+ string _systemName;
+ string _unknownName;
+ bool _isInstructMode;
+ public DefaultHistoryTransform(string? userName = null, string? assistantName = null,
+ string? systemName = null, string? unknownName = null, bool isInstructMode = false)
+ {
+ _userName = userName ?? defaultUserName;
+ _assistantName = assistantName ?? defaultAssistantName;
+ _systemName = systemName ?? defaultSystemName;
+ _unknownName = unknownName ?? defaultUnknownName;
+ _isInstructMode = isInstructMode;
+ }
+
+ public virtual string HistoryToText(ChatHistory history)
+ {
+ StringBuilder sb = new();
+ foreach (var message in history.Messages)
+ {
+ if (message.AuthorRole == AuthorRole.User)
+ {
+ sb.AppendLine($"{_userName}: {message.Content}");
+ }
+ else if (message.AuthorRole == AuthorRole.System)
+ {
+ sb.AppendLine($"{_systemName}: {message.Content}");
+ }
+ else if (message.AuthorRole == AuthorRole.Unknown)
+ {
+ sb.AppendLine($"{_unknownName}: {message.Content}");
+ }
+ else if (message.AuthorRole == AuthorRole.Assistant)
+ {
+ sb.AppendLine($"{_assistantName}: {message.Content}");
+ }
+ }
+ return sb.ToString();
+ }
+
+ public virtual ChatHistory TextToHistory(AuthorRole role, string text)
+ {
+ ChatHistory history = new ChatHistory();
+ history.AddMessage(role, TrimNamesFromText(text, role));
+ return history;
+ }
+
+ public virtual string TrimNamesFromText(string text, AuthorRole role)
+ {
+ if (role == AuthorRole.User && text.StartsWith($"{_userName}:"))
+ {
+ text = text.Substring($"{_userName}:".Length).TrimStart();
+ }
+ else if (role == AuthorRole.Assistant && text.EndsWith($"{_assistantName}:"))
+ {
+ text = text.Substring(0, text.Length - $"{_assistantName}:".Length).TrimEnd();
+ }
+ if (_isInstructMode && role == AuthorRole.Assistant && text.EndsWith("\n> "))
+ {
+ text = text.Substring(0, text.Length - "\n> ".Length).TrimEnd();
+ }
+ return text;
+ }
+}
+
+
+
+
+
+
+
+Hi, welcome to developing LLamaSharp together with us! We are always open to every contributor and any form of contribution! If you want to actively help maintain this library, please contact us to get write access after some PRs. (Email: AsakusaRinne@gmail.com)
+In this page, we'd like to introduce how to make contributions here easily. 😊
+Firstly, please clone the llama.cpp repository and follow the instructions in the llama.cpp readme to configure your local environment.
+If you want to support cublas in the compilation, please make sure that you've installed CUDA.
+When building from source, please add -DBUILD_SHARED_LIBS=ON to the cmake command. For example, when building with cublas but without openblas, use the following command:
cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON
+
+After running cmake --build . --config Release, you can find the llama.dll, llama.so or llama.dylib in your build directory. After pasting it to LLamaSharp/LLama/runtimes and renaming it to libllama.dll, libllama.so or libllama.dylib, you can use it as the native library in LLamaSharp.
After the refactoring of the framework in v0.4.0, LLamaSharp will try to maintain backward compatibility. However, in the following cases a breaking change may be required:
If a new feature can be added without introducing any breaking change, please open a PR rather than an issue first. We will never refuse a PR, but rather help to improve it, unless it's malicious.
+When adding the feature, please take care of the namespace and the naming convention. For example, if you are adding an integration for WPF, please put the code under the namespace LLama.WPF or LLama.Integration.WPF instead of the root namespace. The naming convention of LLamaSharp follows the Pascal naming convention, but in some parts that are invisible to users, you can do whatever you want.
If the issue is related to the LLM's internal behaviour, such as endlessly generating the response, the best way to find the problem is to run a comparison test between llama.cpp and LLamaSharp.
+You could use exactly the same prompt, the same model and the same parameters to run the inference in llama.cpp and LLamaSharp respectively to see if it's really a problem caused by the implementation in LLamaSharp.
+If the experiment shows that it works well in llama.cpp but not in LLamaSharp, the search for the problem can begin. While the causes can vary, the best way in my opinion is to add log prints to the llama.cpp code and use the recompiled library in LLamaSharp. Thus, when running LLamaSharp, you can see what happened inside the native library.
+After finding out the reason, a painful but happy process comes. When working on the bug fix, there's only one rule to follow: keep the examples working well. If the modification fixes the bug but impacts other functions, it is not a good fix.
+During the bug fix process, please don't hesitate to start a discussion when you get stuck on something.
+All kinds of integrations are welcome here! Currently, the following integrations are being worked on or are on our schedule:
+Besides, for some other integrations, like ASP.NET Core, SQL, Blazor and so on, we'd appreciate it if you could help with them. If your time is limited, providing an example also means a lot!
There are mainly two ways to add an example:
+- Add the example to LLama.Examples of the repository.
+LLamaSharp uses mkdocs to build the documentation. Please follow the mkdocs tutorial to add or modify documents in LLamaSharp.
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class ChatSessionStripRoleName
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/chat-with-bob.txt").Trim();
+ InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));
+ ChatSession session = new ChatSession(ex).WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(new string[] { "User:", "Bob:" }, redundancyLength: 8));
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("The chat session has started. The role names won't be printed.");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ while (true)
+ {
+ foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } }))
+ {
+ Console.Write(text);
+ }
+
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class ChatSessionWithRoleName
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/chat-with-bob.txt").Trim();
+ InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));
+ ChatSession session = new ChatSession(ex); // The only change is to remove the transform for the output text stream.
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("The chat session has started. In this example, the prompt is printed for better visual result.");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ // show the prompt
+ Console.Write(prompt);
+ while (true)
+ {
+ foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } }))
+ {
+ Console.Write(text);
+ }
+
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class GetEmbeddings
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var embedder = new LLamaEmbedder(new ModelParams(modelPath));
+
+ while (true)
+ {
+ Console.Write("Please input your text: ");
+ Console.ForegroundColor = ConsoleColor.Green;
+ var text = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+
+ Console.WriteLine(string.Join(", ", embedder.GetEmbeddings(text)));
+ Console.WriteLine();
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class InstructModeExecute
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/dan.txt").Trim();
+
+ InstructExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024)));
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+        Console.WriteLine("The executor has been enabled. In this example, the LLM will follow your instructions. For example, you can input \"Write a story about a fox who wants to " +
+            "make friends with humans, no less than 200 words.\"");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ var inferenceParams = new InferenceParams() { Temperature = 0.8f, MaxTokens = 300 };
+
+ while (true)
+ {
+ foreach (var text in ex.Infer(prompt, inferenceParams))
+ {
+ Console.Write(text);
+ }
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class InteractiveModeExecute
+{
+ public async static Task Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/chat-with-bob.txt").Trim();
+
+ InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to 64 and the context size is 256. (an example for small scale usage)");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ Console.Write(prompt);
+
+ var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" }, MaxTokens = 64 };
+
+ while (true)
+ {
+ await foreach (var text in ex.InferAsync(prompt, inferenceParams))
+ {
+ Console.Write(text);
+ }
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class SaveAndLoadSession
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/chat-with-bob.txt").Trim();
+ InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));
+ ChatSession session = new ChatSession(ex); // The only change is to remove the transform for the output text stream.
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("The chat session has started. In this example, the prompt is printed for better visual result. Input \"save\" to save and reload the session.");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ // show the prompt
+ Console.Write(prompt);
+ while (true)
+ {
+ foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } }))
+ {
+ Console.Write(text);
+ }
+
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ if (prompt == "save")
+ {
+ Console.Write("Preparing to save the state, please input the path you want to save it: ");
+ Console.ForegroundColor = ConsoleColor.Green;
+ var statePath = Console.ReadLine();
+ session.SaveSession(statePath);
+ Console.ForegroundColor = ConsoleColor.White;
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("Saved session!");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ ex.Model.Dispose();
+ ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));
+ session = new ChatSession(ex).WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(new string[] { "User:", "Bob:" }, redundancyLength: 8));
+ session.LoadSession(statePath);
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("Loaded session!");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ Console.Write("Now you can continue your session: ");
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System.IO;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class LoadAndSaveState
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+ var prompt = File.ReadAllText("Assets/chat-with-bob.txt").Trim();
+
+ InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to 64 and the context size is 256. (an example for small scale usage)");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ Console.Write(prompt);
+
+ var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } };
+
+ while (true)
+ {
+ foreach (var text in ex.Infer(prompt, inferenceParams))
+ {
+ Console.Write(text);
+ }
+
+ prompt = Console.ReadLine();
+ if (prompt == "save")
+ {
+ Console.Write("Your path to save model state: ");
+ string modelStatePath = Console.ReadLine();
+ ex.Model.SaveState(modelStatePath);
+
+ Console.Write("Your path to save executor state: ");
+ string executorStatePath = Console.ReadLine();
+ ex.SaveState(executorStatePath);
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("All states saved!");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ var model = ex.Model;
+ model.LoadState(modelStatePath);
+ ex = new InteractiveExecutor(model);
+ ex.LoadState(executorStatePath);
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine("Loaded state!");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ Console.Write("Now you can continue your session: ");
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ }
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading;
+using System.Threading.Tasks;
+
+public class QuantizeModel
+{
+ public static void Run()
+ {
+ Console.Write("Please input your original model path: ");
+ var inputPath = Console.ReadLine();
+ Console.Write("Please input your output model path: ");
+ var outputPath = Console.ReadLine();
+ Console.Write("Please input the quantize type (one of q4_0, q4_1, q5_0, q5_1, q8_0): ");
+ var quantizeType = Console.ReadLine();
+ if (LLamaQuantizer.Quantize(inputPath, outputPath, quantizeType))
+ {
+            Console.WriteLine("Quantization succeeded!");
+ }
+ else
+ {
+ Console.WriteLine("Quantization failed!");
+ }
+ }
+}
+
+
+
+
+
+
+
+using LLama;
+using LLama.Common;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+
+public class StatelessModeExecute
+{
+ public static void Run()
+ {
+ Console.Write("Please input your model path: ");
+ string modelPath = Console.ReadLine();
+
+ StatelessExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));
+
+ Console.ForegroundColor = ConsoleColor.Yellow;
+        Console.WriteLine("The executor has been enabled. In this example, the inference is a one-time job. That is, the previous input and response have " +
+            "no impact on the current response. Now you can ask it questions. Note that in this example, no prompt was set for the LLM and the maximum response tokens is 50. " +
+            "It may not perform well because of the lack of a prompt. This is also an example that shows the importance of the prompt in LLMs. To improve it, you can add " +
+            "a prompt for it yourself!");
+ Console.ForegroundColor = ConsoleColor.White;
+
+ var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "Question:", "#", "Question: ", ".\n" }, MaxTokens = 50 };
+
+ while (true)
+ {
+ Console.Write("\nQuestion: ");
+ Console.ForegroundColor = ConsoleColor.Green;
+ string prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+ Console.Write("Answer: ");
+ prompt = $"Question: {prompt.Trim()} Answer: ";
+ foreach (var text in ex.Infer(prompt, inferenceParams))
+ {
+ Console.Write(text);
+ }
+ }
+ }
+}
+
+
+
+
+
+
+
+Firstly, search for LLamaSharp in the NuGet package manager and install it.
PM> Install-Package LLamaSharp
+
+Then, search and install one of the following backends:
+LLamaSharp.Backend.Cpu
+LLamaSharp.Backend.Cuda11
+LLamaSharp.Backend.Cuda12
+
+Here's the mapping between them and the corresponding model samples provided by LLamaSharp. If you're not sure which model is available for a version, please try our sample models.
| LLamaSharp.Backend | LLamaSharp | Verified Model Resources | llama.cpp commit id |
|---|---|---|---|
| - | v0.2.0 | This version is not recommended to use. | - |
| - | v0.2.1 | WizardLM, Vicuna (filenames with "old") | - |
| v0.2.2 | v0.2.2, v0.2.3 | WizardLM, Vicuna (filenames without "old") | 63d2046 |
| v0.3.0 | v0.3.0 | LLamaSharpSamples v0.3.0, WizardLM | 7e4ea5b |
One of the following models could be okay:
+Note that because llama.cpp is under fast development and often introduces breaking changes, some model weights on HuggingFace that work with one version may be invalid with another. If it's your first time configuring LLamaSharp, we suggest using the verified model weights listed in the table above.
Please create a console program targeting .NET >= netstandard2.0 (net6.0 or higher is recommended). Then, paste the following code into Program.cs:
using LLama.Common;
+using LLama;
+
+string modelPath = "<Your model path>"; // change it to your own model path
+var prompt = "Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.\r\n\r\nUser: Hello, Bob.\r\nBob: Hello. How may I help you today?\r\nUser: Please tell me the largest city in Europe.\r\nBob: Sure. The largest city in Europe is Moscow, the capital of Russia.\r\nUser:"; // use the "chat-with-bob" prompt here.
+
+// Load model
+var parameters = new ModelParams(modelPath)
+{
+ ContextSize = 1024
+};
+using var model = LLamaWeights.LoadFromFile(parameters);
+
+// Initialize a chat session
+using var context = model.CreateContext(parameters);
+var ex = new InteractiveExecutor(context);
+ChatSession session = new ChatSession(ex);
+
+// show the prompt
+Console.WriteLine();
+Console.Write(prompt);
+
+// run the inference in a loop to chat with LLM
+while (true)
+{
+ foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } }))
+ {
+ Console.Write(text);
+ }
+
+ Console.ForegroundColor = ConsoleColor.Green;
+ prompt = Console.ReadLine();
+ Console.ForegroundColor = ConsoleColor.White;
+}
+
+After starting it, you'll see the following output.
+Please input your model path: D:\development\llama\weights\wizard-vicuna-13B.ggmlv3.q4_1.bin
+llama.cpp: loading model from D:\development\llama\weights\wizard-vicuna-13B.ggmlv3.q4_1.bin
+llama_model_load_internal: format = ggjt v3 (latest)
+llama_model_load_internal: n_vocab = 32000
+llama_model_load_internal: n_ctx = 1024
+llama_model_load_internal: n_embd = 5120
+llama_model_load_internal: n_mult = 256
+llama_model_load_internal: n_head = 40
+llama_model_load_internal: n_layer = 40
+llama_model_load_internal: n_rot = 128
+llama_model_load_internal: ftype = 3 (mostly Q4_1)
+llama_model_load_internal: n_ff = 13824
+llama_model_load_internal: n_parts = 1
+llama_model_load_internal: model size = 13B
+llama_model_load_internal: ggml ctx size = 7759.48 MB
+llama_model_load_internal: mem required = 9807.48 MB (+ 1608.00 MB per state)
+....................................................................................................
+llama_init_from_file: kv self size = 800.00 MB
+
+Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
+
+User: Hello, Bob.
+Bob: Hello. How may I help you today?
+User: Please tell me the largest city in Europe.
+Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
+User:
+
+Now, enjoy chatting with LLM!
+This document is under work, please wait for a while. Thank you for your support! :)
+Please see this doc.
+ + + + + + +There're currently three kinds of executors provided, which are InteractiveExecutor, InstructExecutor and StatelessExecutor.
In a word, InteractiveExecutor is suitable for continuously getting answers to your questions from the LLM. InstructExecutor lets the LLM execute your instructions, such as "continue writing". StatelessExecutor is best for one-time jobs because the previous inference has no impact on the current one.
All of them take "completing the prompt" as the goal of generating the response. For example, if you input Long long ago, there was a fox who wanted to make friends with humans. One day, then the LLM will continue to write the story.
Under interactive mode, you serve the role of the user and the LLM serves the role of an assistant. It will then help you with your questions or requests.
+Under instruct mode, you give the LLM instructions and it follows them.
+Though their behaviors sound similar, the differences can be significant depending on your prompt. For example, "chat-with-bob" performs well under interactive mode and alpaca does well under instruct mode.
// chat-with-bob
+
+Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
+
+User: Hello, Bob.
+Bob: Hello. How may I help you today?
+User: Please tell me the largest city in Europe.
+Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
+User:
+
+// alpaca
+
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+Therefore, please modify the prompt correspondingly when switching from one mode to the other.
+Despite the differences between interactive mode and instruct mode, both of them are stateful modes. That is, your previous question/instruction will impact the current response from the LLM. On the contrary, the stateless executor does not have such a "memory". No matter how many times you talk to it, it will only concentrate on what you say this time.
+Since the stateless executor has no memory of previous conversations, you need to input your question together with the whole prompt to get a better answer.
+For example, if you feed Q: Who is Trump? A: to the stateless executor, it may give the following answer with the anti-prompt Q:.
Donald J. Trump, born June 14, 1946, is an American businessman, television personality, politician and the 45th President of the United States (2017-2021). # Anexo:Torneo de Hamburgo 2022 (individual masculino)
+
+## Presentación previa
+
+* Defensor del título: Daniil Medvédev
+
+It seems that things went well at first. However, after answering the question itself, the LLM began to talk about other things until the answer reached the token count limit. The reason for this strange behavior is that the anti-prompt cannot be matched. With this input, the LLM cannot decide whether to append the string "A: " at the end of the response.
+As an improvement, let's take the following text as the input:
+Q: What is the capital of the USA? A: Washington. Q: What is the sum of 1 and 2? A: 3. Q: Who is Trump? A:
+
+Then, I got the following answer with the anti-prompt Q:.
45th president of the United States.
+
+This time, by repeating the same pattern of Q: xxx? A: xxx., the LLM outputs the anti-prompt we want, which helps decide where to stop the generation.
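+Putting this together, a minimal sketch of running the stateless executor with such a few-shot prompt and the Q: anti-prompt (the model path is a placeholder):
+StatelessExecutor ex = new(new LLamaModel(new ModelParams("<modelPath>")));
+var inferenceParams = new InferenceParams() { AntiPrompts = new List<string> { "Q:" }, MaxTokens = 50 };
+string prompt = "Q: What is the capital of the USA? A: Washington. Q: What is the sum of 1 and 2? A: 3. Q: Who is Trump? A: ";
+foreach (var text in ex.Infer(prompt, inferenceParams))
+{
+    Console.Write(text);
+}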
Different from LLamaModel, when using an executor, InferenceParams is passed to the Infer method instead of the constructor. This is because executors only define the way to run the model, so you can change the settings for each inference run.
Namespace: LLama.Common
+public class InferenceParams
+
+Inheritance Object → InferenceParams
+number of tokens to keep from initial prompt
+public int TokensKeep { get; set; }
+
+how many new tokens to predict (n_predict); set to -1 to generate responses infinitely until completion.
+public int MaxTokens { get; set; }
+
+logit bias for specific tokens
+public Dictionary<int, float> LogitBias { get; set; }
+
+Sequences where the model will stop generating further tokens.
+public IEnumerable<string> AntiPrompts { get; set; }
+
+path to file for saving/loading model eval state
+public string PathSession { get; set; }
+
+string to suffix user inputs with
+public string InputSuffix { get; set; }
+
+string to prefix user inputs with
+public string InputPrefix { get; set; }
+
+0 or lower to use vocab size
+public int TopK { get; set; }
+
+1.0 = disabled
+public float TopP { get; set; }
+
+1.0 = disabled
+public float TfsZ { get; set; }
+
+1.0 = disabled
+public float TypicalP { get; set; }
+
+1.0 = disabled
+public float Temperature { get; set; }
+
+1.0 = disabled
+public float RepeatPenalty { get; set; }
+
+last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
+public int RepeatLastTokensCount { get; set; }
+
+frequency penalty coefficient; 0.0 = disabled
+public float FrequencyPenalty { get; set; }
+
+presence penalty coefficient; 0.0 = disabled
+public float PresencePenalty { get; set; }
+
+Mirostat uses tokens instead of words; the algorithm is described in the paper https://arxiv.org/abs/2007.14966. 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
+public MiroStateType Mirostat { get; set; }
+
+target entropy
+public float MirostatTau { get; set; }
+
+learning rate
+public float MirostatEta { get; set; }
+
+consider newlines as a repeatable token (penalize_nl)
+public bool PenalizeNL { get; set; }
+
+Similar to LLamaModel, an executor also has its state, which can be saved and loaded. Note that in most cases, the state of the executor and the state of the model should be loaded and saved at the same time.
To decouple the model and the executor, we provide APIs to save/load the state of the model and the executor respectively. However, during inference, the processed information leaves a footprint in LLamaModel's native context. Therefore, if you just load a state from another executor but keep the model unmodified, some strange things may happen. The same applies to loading only the model state.
+Is there a case that requires loading only one of them? The answer is YES. For example, after resetting the model state, if you don't want the inference to start from the new position, leaving the executor unmodified is okay. But anyway, this flexible usage may cause some unexpected behaviors, so please ensure you know what you're doing before using it this way.
+In the future version, we'll open the access for some variables inside the executor to support more flexible usages.
+The APIs to load/save the state of the executors are similar to those of LLamaModel. However, note that StatelessExecutor doesn't have such APIs because it's stateless itself. Besides, the output of GetStateData is an object of type ExecutorBaseState.
LLamaModel model = new LLamaModel(new ModelParams("<modelPath>"));
+InteractiveExecutor executor = new InteractiveExecutor(model);
+// do some things...
+executor.SaveState("executor.st");
+var stateData = executor.GetStateData();
+
+InteractiveExecutor executor2 = new InteractiveExecutor(model);
+executor2.LoadState(stateData);
+// do some things...
+
+InteractiveExecutor executor3 = new InteractiveExecutor(model);
+executor3.LoadState("executor.st");
+// do some things...
+
+
+
+
+
+
+
+All the executors implement the interface ILLamaExecutor, which provides two APIs to execute text-to-text tasks.
public interface ILLamaExecutor
+{
+ public LLamaModel Model { get; }
+
+ IEnumerable<string> Infer(string text, InferenceParams? inferenceParams = null, CancellationToken token = default);
+
+ IAsyncEnumerable<string> InferAsync(string text, InferenceParams? inferenceParams = null, CancellationToken token = default);
+}
+
+Just pass the text to the executor with the inference parameters. For the inference parameters, please refer to executor inference parameters doc.
+The output of both APIs is a yielded enumerable. Therefore, when receiving the output, you can directly use foreach to take actions on each piece of text as it is generated, instead of waiting for the whole process to complete.
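+For example, a minimal sketch of consuming both APIs (the executor and parameters are assumed to be created as in the other docs):
+// Synchronous: print each piece of text as soon as it is generated.
+foreach (var text in executor.Infer(prompt, inferenceParams))
+{
+    Console.Write(text);
+}
+
+// Asynchronous: the same, without blocking the calling thread.
+await foreach (var text in executor.InferAsync(prompt, inferenceParams))
+{
+    Console.Write(text);
+}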
+Getting the embeddings of a text from the LLM is sometimes useful, for example, to train other MLP models.
+To get the embeddings, please initialize a LLamaEmbedder and then call GetEmbeddings.
var embedder = new LLamaEmbedder(new ModelParams("<modelPath>"));
+string text = "hello, LLM.";
+float[] embeddings = embedder.GetEmbeddings(text);
+
+The output is a float array. Note that the length of the array is related to the model you load. If you want a smaller embedding size, please consider changing to another model.
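+For example, a common use of embeddings is measuring how similar two texts are. The cosine-similarity code below is not part of LLamaSharp, just a sketch of what can be built on top of GetEmbeddings:
+float[] a = embedder.GetEmbeddings("The cat sat on the mat.");
+float[] b = embedder.GetEmbeddings("A kitten rested on the rug.");
+
+// Cosine similarity: dot(a, b) / (|a| * |b|); closer to 1 means more similar.
+double dot = 0, normA = 0, normB = 0;
+for (int i = 0; i < a.Length; i++)
+{
+    dot += a[i] * b[i];
+    normA += a[i] * a[i];
+    normB += b[i] * b[i];
+}
+Console.WriteLine(dot / (Math.Sqrt(normA) * Math.Sqrt(normB)));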
+When initializing a LLamaModel object, there are three parameters: ModelParams Params, string encoding = "UTF-8", and ILLamaLogger? logger = null.
The usage of logger will be further introduced in logger doc. The encoding is the encoding you want to use when dealing with text via this model.
+Most important of all is the ModelParams, which is defined below. We'll explain the parameters step by step in this document.
public class ModelParams
+{
+ public int ContextSize { get; set; } = 512;
+ public int GpuLayerCount { get; set; } = 20;
+ public int Seed { get; set; } = 1686349486;
+ public bool UseFp16Memory { get; set; } = true;
+ public bool UseMemorymap { get; set; } = true;
+ public bool UseMemoryLock { get; set; } = false;
+ public bool Perplexity { get; set; } = false;
+ public string ModelPath { get; set; }
+ public string LoraAdapter { get; set; } = string.Empty;
+ public string LoraBase { get; set; } = string.Empty;
+ public int Threads { get; set; } = Math.Max(Environment.ProcessorCount / 2, 1);
+ public int BatchSize { get; set; } = 512;
+ public bool ConvertEosToNewLine { get; set; } = false;
+}
+
+Namespace: LLama.Common
+public class ModelParams
+
+Inheritance Object → ModelParams
+Model context size (n_ctx)
+public int ContextSize { get; set; }
+
+Number of layers to run in VRAM / GPU memory (n_gpu_layers)
+public int GpuLayerCount { get; set; }
+
+Seed for the random number generator (seed)
+public int Seed { get; set; }
+
+Use f16 instead of f32 for memory kv (memory_f16)
+public bool UseFp16Memory { get; set; }
+
+Use mmap for faster loads (use_mmap)
+public bool UseMemorymap { get; set; }
+
+Use mlock to keep model in memory (use_mlock)
+public bool UseMemoryLock { get; set; }
+
+Compute perplexity over the prompt (perplexity)
+public bool Perplexity { get; set; }
+
+Model path (model)
+public string ModelPath { get; set; }
+
+lora adapter path (lora_adapter)
+public string LoraAdapter { get; set; }
+
+base model path for the lora adapter (lora_base)
+public string LoraBase { get; set; }
+
+Number of threads (-1 = autodetect) (n_threads)
+public int Threads { get; set; }
+
+batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
+public int BatchSize { get; set; }
+
+Whether to convert eos to newline during the inference.
+public bool ConvertEosToNewLine { get; set; }
+
+Whether to use embedding mode (embedding). Note that if this is set to true, the LLamaModel won't produce text responses anymore.
+public bool EmbeddingMode { get; set; }
+
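+Putting the parameters together, a minimal sketch of loading a model with customized parameters (the model path is a placeholder):
+var parameters = new ModelParams("<modelPath>")
+{
+    ContextSize = 1024,   // n_ctx
+    GpuLayerCount = 20,   // layers to offload to GPU
+    Seed = 1337
+};
+var model = new LLamaModel(parameters);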
+Quantization is significant for accelerating model inference. Since there's little accuracy (performance) reduction when quantizing the model, don't hesitate to quantize it!
+To quantize the model, please call Quantize from LLamaQuantizer, which is a static method.
string srcPath = "<model.bin>";
+string dstPath = "<model_q4_0.bin>";
+LLamaQuantizer.Quantize(srcPath, dstPath, "q4_0");
+// The following overload is also okay.
+// LLamaQuantizer.Quantize(srcPath, dstPath, LLamaFtype.LLAMA_FTYPE_MOSTLY_Q4_0);
+
+After calling it, a quantized model file will be saved.
+There are currently 5 types of quantization supported: q4_0, q4_1, q5_0, q5_1 and q8_0.
+There are two ways to load state: loading from a path and loading from a byte array. Correspondingly, state data can be extracted as a byte array or saved to a file.
+LLamaModel model = new LLamaModel(new ModelParams("<modelPath>"));
+// do some things...
+model.SaveState("model.st");
+var stateData = model.GetStateData();
+model.Dispose();
+
+LLamaModel model2 = new LLamaModel(new ModelParams("<modelPath>"));
+model2.LoadState(stateData);
+// do some things...
+
+LLamaModel model3 = new LLamaModel(new ModelParams("<modelPath>"));
+model3.LoadState("model.st");
+// do some things...
+
+
+
+
+
+
+
+ A pair of APIs to make conversion between text and tokens.
+The basic usage is to call Tokenize after initializing the model.
LLamaModel model = new LLamaModel(new ModelParams("<modelPath>"));
+string text = "hello";
+int[] tokens = model.Tokenize(text).ToArray();
+
+The output varies depending on the model (or vocab) you use.
+Similar to tokenization, just pass an IEnumerable<int> to the Detokenize method.
LLamaModel model = new LLamaModel(new ModelParams("<modelPath>"));
+int[] tokens = new int[] {125, 2568, 13245};
+string text = model.Detokenize(tokens);
+
+
+
+
+
+
+
+LLamaSharp supports customized loggers because it can be used in many kinds of applications, like WinForms/WPF, WebAPI and Blazor, where the preferred logger varies.
+What you need to do is implement the ILLamaLogger interface.
public interface ILLamaLogger
+{
+ public enum LogLevel
+ {
+ Info,
+ Debug,
+ Warning,
+ Error
+ }
+ void Log(string source, string message, LogLevel level);
+}
+
+The source specifies where the log message comes from, which could be a function, a class, etc.
The message is the log message itself.
The level is the level of the information in the log. As shown above, there're four levels, which are info, debug, warning and error respectively.
The following is a simple example of the logger implementation:
+public sealed class LLamaDefaultLogger : ILLamaLogger
+{
+ private static readonly Lazy<LLamaDefaultLogger> _instance = new Lazy<LLamaDefaultLogger>(() => new LLamaDefaultLogger());
+
+ private bool _toConsole = true;
+ private bool _toFile = false;
+
+ private FileStream? _fileStream = null;
+    private StreamWriter? _fileWriter = null;
+
+ public static LLamaDefaultLogger Default => _instance.Value;
+
+ private LLamaDefaultLogger()
+ {
+
+ }
+
+ public LLamaDefaultLogger EnableConsole()
+ {
+ _toConsole = true;
+ return this;
+ }
+
+ public LLamaDefaultLogger DisableConsole()
+ {
+ _toConsole = false;
+ return this;
+ }
+
+ public LLamaDefaultLogger EnableFile(string filename, FileMode mode = FileMode.Append)
+ {
+ _fileStream = new FileStream(filename, mode, FileAccess.Write);
+ _fileWriter = new StreamWriter(_fileStream);
+ _toFile = true;
+ return this;
+ }
+
+ public LLamaDefaultLogger DisableFile(string filename)
+ {
+ if (_fileWriter is not null)
+ {
+ _fileWriter.Close();
+ _fileWriter = null;
+ }
+ if (_fileStream is not null)
+ {
+ _fileStream.Close();
+ _fileStream = null;
+ }
+ _toFile = false;
+ return this;
+ }
+
+ public void Log(string source, string message, LogLevel level)
+ {
+ if (level == LogLevel.Info)
+ {
+ Info(message);
+ }
+ else if (level == LogLevel.Debug)
+ {
+
+ }
+ else if (level == LogLevel.Warning)
+ {
+ Warn(message);
+ }
+ else if (level == LogLevel.Error)
+ {
+ Error(message);
+ }
+ }
+
+ public void Info(string message)
+ {
+ message = MessageFormat("info", message);
+ if (_toConsole)
+ {
+ Console.ForegroundColor = ConsoleColor.White;
+ Console.WriteLine(message);
+ Console.ResetColor();
+ }
+ if (_toFile)
+ {
+ Debug.Assert(_fileStream is not null);
+ Debug.Assert(_fileWriter is not null);
+ _fileWriter.WriteLine(message);
+ }
+ }
+
+ public void Warn(string message)
+ {
+ message = MessageFormat("warn", message);
+ if (_toConsole)
+ {
+ Console.ForegroundColor = ConsoleColor.Yellow;
+ Console.WriteLine(message);
+ Console.ResetColor();
+ }
+ if (_toFile)
+ {
+ Debug.Assert(_fileStream is not null);
+ Debug.Assert(_fileWriter is not null);
+ _fileWriter.WriteLine(message);
+ }
+ }
+
+ public void Error(string message)
+ {
+ message = MessageFormat("error", message);
+ if (_toConsole)
+ {
+ Console.ForegroundColor = ConsoleColor.Red;
+ Console.WriteLine(message);
+ Console.ResetColor();
+ }
+ if (_toFile)
+ {
+ Debug.Assert(_fileStream is not null);
+ Debug.Assert(_fileWriter is not null);
+ _fileWriter.WriteLine(message);
+ }
+ }
+
+ private string MessageFormat(string level, string message)
+ {
+ DateTime now = DateTime.Now;
+ string formattedDate = now.ToString("yyyy.MM.dd HH:mm:ss");
+ return $"[{formattedDate}][{level}]: {message}";
+ }
+}
+
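+To use it, pass the logger to the LLamaModel constructor described in the model parameters doc. A minimal sketch (the model path is a placeholder; the encoding keeps its default value):
+var logger = LLamaDefaultLogger.Default.EnableConsole();
+var model = new LLamaModel(new ModelParams("<modelPath>"), "UTF-8", logger);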
+
+
+
+
+
+
+It's supported now, but the document is still being written. Please wait for some time. Thank you for your support! :)
+Sometimes, your application using an LLM with LLamaSharp may behave strangely. Before opening an issue to report a bug, the following tricks may be worth a try.
+The anti-prompt can also be called the "stop keyword"; it decides when to stop the response generation. Under interactive mode, the maximum token count is usually not set, which lets the LLM generate responses infinitely. Therefore, setting the anti-prompt correctly helps a lot to avoid strange behaviours. For example, the prompt file chat-with-bob.txt has the following content:
Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
+
+User: Hello, Bob.
+Bob: Hello. How may I help you today?
+User: Please tell me the largest city in Europe.
+Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
+User:
+
+Therefore, the anti-prompt should be set to "User:". If the last line of the prompt is removed, the LLM will automatically generate a question (user) and a response (bob) once when running the chat session. Therefore, it's suggested to append the anti-prompt to the prompt when starting a chat session.
+What if an extra line is appended? The string "User:" in the prompt will be followed by a "\n" character. Thus, when running the model, the automatic generation of a question-and-response pair may appear, because the anti-prompt is "User:" but the last token is "User:\n". Whether it appears is undefined behaviour, depending on the implementation inside the LLamaExecutor. Anyway, since it may lead to unexpected behaviors, it's recommended to trim your prompt and carefully keep it consistent with your anti-prompt.
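+In code, this means the anti-prompt should match exactly how the prompt ends. A minimal sketch:
+// The prompt above ends with "User:" (no trailing newline),
+// so the anti-prompt is set to exactly the same string.
+var inferenceParams = new InferenceParams()
+{
+    AntiPrompts = new List<string> { "User:" }
+};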
Sometimes we want to input a long prompt to execute a task. However, the context size may limit the inference of the LLaMA model. Please ensure the inequality below holds:
+$$ len(prompt) + len(response) < len(context) $$
+In this inequality, len(response) refers to the expected number of tokens for the LLM to generate.
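+Since the limit is measured in tokens rather than characters, you can check the prompt side of the inequality with Tokenize (introduced in the tokenization doc). A minimal sketch, where prompt holds your input text and the expected response length is an estimate you choose:
+var model = new LLamaModel(new ModelParams("<modelPath>", contextSize: 1024));
+int promptTokens = model.Tokenize(prompt).ToArray().Length;
+int expectedResponseTokens = 256; // your own estimate
+if (promptTokens + expectedResponseTokens >= 1024)
+{
+    Console.WriteLine("The prompt is probably too long for the context size.");
+}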
Some prompts work well under interactive mode, such as chat-with-bob, while others may work well under instruct mode, such as alpaca. Besides, if your input is quite simple and a one-time job, such as "Q: what is the satellite of the earth? A: ", stateless mode will be a good choice.
If your chat bot performs badly, trying a different executor may make it work well.
+The differences between models may lead to very different behaviours under the same task. For example, if you're building a chat bot for a non-English language, a model specially fine-tuned for that language will have a huge effect on the performance.
+Currently, the GpuLayerCount parameter, which decides the number of layers loaded into the GPU, is set to 20 by default. However, if you have a powerful GPU, setting it to a larger number will attain faster inference.

LLamaSharp is the C#/.NET binding of llama.cpp. It provides APIs to run inference on LLaMA models and deploy them in native environments or on the Web. It helps C# developers deploy LLMs (Large Language Models) locally and integrate them with C# apps.
+If you are new to LLMs, here are some tips to help you get started with LLamaSharp. If you are experienced in this field, we'd still recommend taking a few minutes to read them, because some things behave differently compared to cpp/python.
- You need to install two packages: LLamaSharp and LLama.Backend. After installing LLamaSharp, please install one of LLama.Backend.Cpu, LLama.Backend.Cuda11 or LLama.Backend.Cuda12. If you use the source code, dynamic libraries can be found in LLama/Runtimes; rename the one you want to use to libllama.dll.
- LLaMa originally refers to the weights released by Meta (Facebook Research). After that, many models were fine-tuned based on it, such as Vicuna, GPT4All, and Pygmalion. Though all of these models are supported by LLamaSharp, some steps are necessary for different file formats. There are mainly three kinds of files: .pth, .bin (ggml), and .bin (quantized). If you have the .bin (quantized) file, it can be used directly by LLamaSharp. If you have the .bin (ggml) file, you can also use it directly, but you'll get higher inference speed after quantization. If you have the .pth file, you need to follow the instructions in llama.cpp to convert it to a .bin (ggml) file first.
Community effort is always one of the most important things in open-source projects. Any contribution in any way is welcome. For example, the following things mean a lot for LLamaSharp:
+If you'd like to get deeply involved in development, please reach us in the Discord channel or send an email to AsakusaRinne@gmail.com. :)
LLamaSharp is the C#/.NET binding of llama.cpp. It provides APIs to inference the LLaMa Models and deploy it on native environment or Web. It could help C# developers to deploy the LLM (Large Language Model) locally and integrate with C# apps.
"},{"location":"#main-features","title":"Main features","text":"If you are new to LLM, here're some tips for you to help you to get start with LLamaSharp. If you are experienced in this field, we'd still recommend you to take a few minutes to read it because some things perform differently compared to cpp/python.
LLamaSharp and LLama.Backend. After installing LLamaSharp, please install one of LLama.Backend.Cpu, LLama.Backend.Cuda11 or LLama.Backend.Cuda12. If you use the source code, dynamic libraries can be found in LLama/Runtimes. Rename the one you want to use to libllama.dll.LLaMa originally refers to the weights released by Meta (Facebook Research). After that, many models are fine-tuned based on it, such as Vicuna, GPT4All, and Pyglion. Though all of these models are supported by LLamaSharp, some steps are necessary with different file formats. There're mainly three kinds of files, which are .pth, .bin (ggml), .bin (quantized). If you have the .bin (quantized) file, it could be used directly by LLamaSharp. If you have the .bin (ggml) file, you could use it directly but get higher inference speed after the quantization. If you have the .pth file, you need to follow the instructions in llama.cpp to convert it to .bin (ggml) file at first.Community effort is always one of the most important things in open-source projects. Any contribution in any way is welcomed here. For example, the following things mean a lot for LLamaSharp:
If you'd like to get deeply involved in development, please touch us in discord channel or send email to AsakusaRinne@gmail.com. :)
The figure below shows the core framework structure, which is separated to four levels.
LLamaContext, LLamaEmbedder and LLamaQuantizer.InteractiveExecutor, InstructuExecutor and StatelessExecutor.InteractiveExecutor and LLamaContext, which supports interactive tasks and saving/re-loading sessions. It also provides a flexible way to customize the text process by IHistoryTransform, ITextTransform and ITextStreamTransform.Since LLamaContext interact with native library, it's not recommended to use the methods of it directly unless you know what you are doing. So does the NativeApi, which is not included in the architecture figure above.
ChatSession is recommended to be used when you want to build an application similar to ChatGPT, or the ChatBot, because it works best with InteractiveExecutor. Though other executors are also allowed to passed as a parameter to initialize a ChatSession, it's not encouraged if you are new to LLamaSharp and LLM.
High-level applications, such as BotSharp, are supposed to be used when you concentrate on the part not related with LLM. For example, if you want to deploy a chat bot to help you remember your schedules, using BotSharp may be a good choice.
Note that the APIs of the high-level applications may not be stable now. Please take it into account when using them.
"},{"location":"ContributingGuide/","title":"LLamaSharp Contributing Guide","text":"Hi, welcome to develop LLamaSharp with us together! We are always open for every contributor and any format of contributions! If you want to maintain this library actively together, please contact us to get the write access after some PRs. (Email: AsakusaRinne@gmail.com)
In this page, we'd like to introduce how to make contributions here easily. \ud83d\ude0a
"},{"location":"ContributingGuide/#compile-the-native-library-from-source","title":"Compile the native library from source","text":"Firstly, please clone the llama.cpp repository and following the instructions in llama.cpp readme to configure your local environment.
If you want to support cublas in the compilation, please make sure that you've installed CUDA.
When building from source, please add -DBUILD_SHARED_LIBS=ON to the cmake command. For example, when building with cublas but without openblas, use the following command:
cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON\n After running cmake --build . --config Release, you could find the llama.dll, llama.so or llama.dylib in your build directory. After pasting it to LLamaSharp/LLama/runtimes and renaming it to libllama.dll, libllama.so or libllama.dylib, you can use it as the native library in LLamaSharp.
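For reference, a typical full build flow starting from the root of the llama.cpp repository looks like the following; the build directory name is just a common convention:
mkdir build\ncd build\ncmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON\ncmake --build . --config Release\n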
After refactoring the framework in v0.4.0, LLamaSharp will try to maintain backward compatibility. However, in the following cases a breaking change will be required:
If a new feature can be added without introducing any breaking change, please open a PR directly rather than opening an issue first. We will never refuse a PR and will instead help to improve it, unless it's malicious.
When adding the feature, please take care of the namespace and the naming convention. For example, if you are adding an integration for WPF, please put the code under the namespace LLama.WPF or LLama.Integration.WPF instead of putting it under the root namespace, as shown in the sketch below. The naming convention of LLamaSharp follows PascalCase, but in parts that are invisible to users, you can do whatever you want.
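A minimal sketch of such a layout (the class name here is hypothetical):
namespace LLama.Integration.WPF\n{\n // hypothetical class name, just to illustrate the namespace convention\n public class WpfIntegrationExample\n {\n }\n}\n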
If the issue is related to the LLM's internal behaviour, such as endlessly generating the response, the best way to find the problem is to run a comparison test between llama.cpp and LLamaSharp.
You could use exactly the same prompt, the same model and the same parameters to run the inference in llama.cpp and LLamaSharp respectively to see if it's really a problem caused by the implementation in LLamaSharp.
If the experiment shows that it works well in llama.cpp but not in LLamaSharp, you can start searching for the problem. While the cause of the problem could vary, in my opinion the best way is to add log prints to the code of llama.cpp and use the recompiled library in LLamaSharp. Thus, when running LLamaSharp, you can see what happens inside the native library.
After finding out the reason, a painful but happy process comes. When working on the bug fix, there's only one rule to follow: keep the examples working well. If the modification fixes the bug but impacts other functions, it is not a good fix.
During the bug fix process, please don't hesitate to discuss it together when you get stuck on something.
"},{"location":"ContributingGuide/#add-integrations","title":"Add integrations","text":"All kinds of integration are welcomed here! Currently the following integrations are under work or on our schedule:
Besides, for some other integrations, like ASP.NET Core, SQL, Blazor and so on, we'd appreciate your help with them. If your time is limited, providing an example also means a lot!
There are mainly two ways to add an example:
LLama.Examples of the repository. LLamaSharp uses mkdocs to build the documentation, please follow the tutorial of mkdocs to add or modify documents in LLamaSharp.
"},{"location":"GetStarted/","title":"Get Started","text":""},{"location":"GetStarted/#install-packages","title":"Install packages","text":"Firstly, search LLamaSharp in nuget package manager and install it.
PM> Install-Package LLamaSharp\n Then, search and install one of the following backends:
LLamaSharp.Backend.Cpu\nLLamaSharp.Backend.Cuda11\nLLamaSharp.Backend.Cuda12\n Here's the mapping between them and the corresponding model samples provided by LLamaSharp. If you're not sure which model is available for a version, please try our sample model.
One of the following models could be okay:
Note that because llama.cpp is under rapid development and often introduces breaking changes, some model weights on Hugging Face that work with one version may be invalid with another version. If it's your first time configuring LLamaSharp, we suggest using the verified model weights in the table above.
Please create a console program with a dotnet runtime >= netstandard 2.0 (>= net6.0 is recommended). Then, paste the following code into Program.cs:
using LLama.Common;\nusing LLama;\n\nstring modelPath = \"<Your model path>\"; // change it to your own model path\nvar prompt = \"Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.\\r\\n\\r\\nUser: Hello, Bob.\\r\\nBob: Hello. How may I help you today?\\r\\nUser: Please tell me the largest city in Europe.\\r\\nBob: Sure. The largest city in Europe is Moscow, the capital of Russia.\\r\\nUser:\"; // use the \"chat-with-bob\" prompt here.\n\n// Load model\nvar parameters = new ModelParams(modelPath)\n{\n ContextSize = 1024\n};\nusing var model = LLamaWeights.LoadFromFile(parameters);\n\n// Initialize a chat session\nusing var context = model.CreateContext(parameters);\nvar ex = new InteractiveExecutor(context);\nChatSession session = new ChatSession(ex);\n\n// show the prompt\nConsole.WriteLine();\nConsole.Write(prompt);\n\n// run the inference in a loop to chat with LLM\nwhile (true)\n{\n foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } }))\n {\n Console.Write(text);\n }\n\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n}\n After starting it, you'll see the following outputs.
Please input your model path: D:\\development\\llama\\weights\\wizard-vicuna-13B.ggmlv3.q4_1.bin\nllama.cpp: loading model from D:\\development\\llama\\weights\\wizard-vicuna-13B.ggmlv3.q4_1.bin\nllama_model_load_internal: format = ggjt v3 (latest)\nllama_model_load_internal: n_vocab = 32000\nllama_model_load_internal: n_ctx = 1024\nllama_model_load_internal: n_embd = 5120\nllama_model_load_internal: n_mult = 256\nllama_model_load_internal: n_head = 40\nllama_model_load_internal: n_layer = 40\nllama_model_load_internal: n_rot = 128\nllama_model_load_internal: ftype = 3 (mostly Q4_1)\nllama_model_load_internal: n_ff = 13824\nllama_model_load_internal: n_parts = 1\nllama_model_load_internal: model size = 13B\nllama_model_load_internal: ggml ctx size = 7759.48 MB\nllama_model_load_internal: mem required = 9807.48 MB (+ 1608.00 MB per state)\n....................................................................................................\nllama_init_from_file: kv self size = 800.00 MB\n\nTranscript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.\n\nUser: Hello, Bob.\nBob: Hello. How may I help you today?\nUser: Please tell me the largest city in Europe.\nBob: Sure. The largest city in Europe is Moscow, the capital of Russia.\nUser:\n Now, enjoy chatting with LLM!
"},{"location":"Tricks/","title":"Tricks for FAQ","text":"Sometimes, your application with LLM and LLamaSharp may have strange behaviours. Before opening an issue to report the BUG, the following tricks may worth a try.
"},{"location":"Tricks/#carefully-set-the-anti-prompts","title":"Carefully set the anti-prompts","text":"Anti-prompt can also be called as \"Stop-keyword\", which decides when to stop the response generation. Under interactive mode, the maximum tokens count is always not set, which makes the LLM generates responses infinitively. Therefore, setting anti-prompt correctly helps a lot to avoid the strange behaviours. For example, the prompt file chat-with-bob.txt has the following content:
Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.\n\nUser: Hello, Bob.\nBob: Hello. How may I help you today?\nUser: Please tell me the largest city in Europe.\nBob: Sure. The largest city in Europe is Moscow, the capital of Russia.\nUser:\n Therefore, the anti-prompt should be set as \"User:\". If the last line of the prompt is removed, the LLM will automatically generate a question (user) and a response (bob) once when running the chat session. Therefore, it's suggested to append the anti-prompt to the end of the prompt when starting a chat session.
What if an extra line is appended? The string \"User:\" in the prompt will then be followed by the character \"\\n\". Thus, when running the model, the automatic generation of a pair of question and response may appear, because the anti-prompt is \"User:\" but the last token is \"User:\\n\". Whether it appears is an undefined behaviour, which depends on the implementation inside the LLamaExecutor. Anyway, since it may lead to unexpected behaviours, it's recommended to trim your prompt and keep it consistent with your anti-prompt, as shown in the sketch below.
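A minimal sketch of this advice, assuming the chat-with-bob prompt file shown above:
string antiPrompt = \"User:\";\nstring prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim(); // remove the trailing \"\\n\" or spaces\nif (!prompt.EndsWith(antiPrompt))\n{\n prompt += antiPrompt; // keep the end of the prompt consistent with the anti-prompt\n}\n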
Sometimes we want to input a long prompt to execute a task. However, the context size may limit the inference of the LLaMA model. Please ensure the inequality below holds.
$$ len(prompt) + len(response) < len(context) $$
In this inequality, len(response) refers to the expected number of tokens for the LLM to generate.
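If you're not sure whether the inequality holds, you could roughly check it with the Tokenize API introduced later in this document. This is only a sketch; here prompt is assumed to hold your input text and maxTokens stands for len(response):
var parameters = new ModelParams(\"<modelPath>\") { ContextSize = 1024 };\nLLamaModel model = new LLamaModel(parameters);\nint promptTokens = model.Tokenize(prompt).ToArray().Length;\nint maxTokens = 256; // the expected number of tokens of the response\nif (promptTokens + maxTokens >= parameters.ContextSize)\n{\n Console.WriteLine(\"The prompt is probably too long for the context size!\");\n}\n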
Some prompts work well under interactive mode, such as chat-with-bob, while others may work well under instruct mode, such as alpaca. Besides, if your input is quite simple and a one-time job, such as \"Q: what is the satellite of the earth? A: \", stateless mode will be a good choice.
If your chat bot performs badly, trying a different executor may improve it.
"},{"location":"Tricks/#choose-models-weight-depending-on-you-task","title":"Choose models weight depending on you task","text":"The differences between modes may lead to much different behaviours under the same task. For example, if you're building a chat bot with non-English, a fine-tuned model specially for the language you want to use will have huge effect on the performance.
"},{"location":"Tricks/#set-the-layer-count-you-want-to-offload-to-gpu","title":"Set the layer count you want to offload to GPU","text":"Currently, the GpuLayerCount parameter, which decides the number of layer loaded into GPU, is set to 20 by default. However, if you have some efficient GPUs, setting it as a larger number will attain faster inference.
ChatSession is a higher-level abstraction than the executors. In the context of a chat application like ChatGPT, a \"chat session\" refers to an interactive conversation or exchange of messages between the user and the chatbot. It represents a continuous flow of communication where the user enters input or asks questions, and the chatbot responds accordingly. A chat session typically starts when the user initiates a conversation with the chatbot and continues until the interaction comes to a natural end or is explicitly terminated by either the user or the system. During a chat session, the chatbot maintains the context of the conversation, remembers previous messages, and generates appropriate responses based on the user's inputs and the ongoing dialogue.
Currently, the only parameter that is accepted is an ILLamaExecutor, because this is the only parameter that we're sure will exist in all future versions. Since it's a high-level abstraction, we're conservative about the API design. In the future, more kinds of constructors may be added.
InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath)));\nChatSession session = new ChatSession(ex);\n"},{"location":"ChatSession/basic-usages/#chat-with-the-bot","title":"Chat with the bot","text":"There are two kinds of input accepted by the Chat API: ChatHistory and String. The API with a string is quite similar to that of the executors, while the API with ChatHistory aims to provide more flexible usage. For example, suppose you had a chat with the bot in session A before you opened session B. Session B has no memory of what you said in A; therefore, you can feed the history of A to B.
string prompt = \"What is C#?\";\n\nforeach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } })) // the inference params should be changed depending on your statement\n{\n Console.Write(text);\n}\n"},{"location":"ChatSession/basic-usages/#get-the-history","title":"Get the history","text":"Currently History is a property of ChatSession.
foreach(var rec in session.History.Messages)\n{\n Console.WriteLine($\"{rec.AuthorRole}: {rec.Content}\");\n}\n"},{"location":"ChatSession/save-load-session/","title":"Save/Load Chat Session","text":"Generally, chat sessions can be switched, which requires the ability to load and save sessions.
When building a chat bot app, it's NOT encouraged to initialize many chat sessions and keep them all in memory waiting to be switched to, because the memory consumption of both CPU and GPU is expensive. It's recommended to save the current session before switching to a new one, and to load the file when switching back to the session.
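A minimal sketch of this pattern, using the save/load APIs introduced below (the directory names are just examples):
// before switching from session A to session B\nsessionA.SaveSession(\"sessions/A\");\n// ... chat in session B ...\n// when switching back, restore session A from disk\nsessionA.LoadSession(\"sessions/A\");\n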
The API is also quite simple: the files will be saved into a directory you specify. If the path does not exist, a new directory will be created.
string savePath = \"<save dir>\";\nsession.SaveSession(savePath);\n\nsession.LoadSession(savePath);\n"},{"location":"ChatSession/transforms/","title":"Transforms in Chat Session","text":"There's three important elements in ChatSession, which are input, output and history. Besides, there're some conversions between them. Since the process of them under different conditions varies, LLamaSharp hands over this part of the power to the users.
Currently, there are three kinds of processes that can be customized, as introduced below.
"},{"location":"ChatSession/transforms/#input-transform","title":"Input transform","text":"In general, the input of the chat API is a text (without stream), therefore ChatSession processes it in a pipeline. If you want to use your customized transform, you need to define a transform that implements ITextTransform and add it to the pipeline of ChatSession.
public interface ITextTransform\n{\n string Transform(string text);\n}\n public class MyInputTransform1 : ITextTransform\n{\n public string Transform(string text)\n {\n return $\"Question: {text}\\n\";\n }\n}\n\npublic class MyInputTransform2 : ITextTransform\n{\n public string Transform(string text)\n {\n return text + \"Answer: \";\n }\n}\n\nsession.AddInputTransform(new MyInputTransform1()).AddInputTransform(new MyInputTransform2());\n"},{"location":"ChatSession/transforms/#output-transform","title":"Output transform","text":"Different from the input, the output of chat API is a text stream. Therefore you need to process it word by word, instead of getting the full text at once.
Its interface takes an IEnumerable<string> as input, which is actually a lazily yielded sequence.
public interface ITextStreamTransform\n{\n IEnumerable<string> Transform(IEnumerable<string> tokens);\n IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens);\n}\n When implementing it, you could throw a NotImplementedException in one of them if you only need to use the chat API synchronously or asynchronously.
Different from the input transform pipeline, the output transform only supports one transform.
session.WithOutputTransform(new MyOutputTransform());\n Here's an example of how to implement the interface. In this example, the transform detects whether there are any keywords in the response and removes them.
/// <summary>\n/// A text output transform that removes the keywords from the response.\n/// </summary>\npublic class KeywordTextOutputStreamTransform : ITextStreamTransform\n{\n HashSet<string> _keywords;\n int _maxKeywordLength;\n bool _removeAllMatchedTokens;\n\n /// <summary>\n /// \n /// </summary>\n /// <param name=\"keywords\">Keywords that you want to remove from the response.</param>\n /// <param name=\"redundancyLength\">The extra length when searching for the keyword. For example, if your only keyword is \"highlight\", \n /// maybe the token you get is \"\\r\\nhighlight\". In this condition, if redundancyLength=0, the token cannot be successfully matched because the length of \"\\r\\nhighlight\" (11)\n /// has already exceeded the maximum length of the keywords (9). On the contrary, setting redundancyLength >= 2 leads to a successful match.\n /// The larger the redundancyLength is, the lower the processing speed. But in our experience, it won't introduce too much performance impact when redundancyLength <= 5 </param>\n /// <param name=\"removeAllMatchedTokens\">If set to true, when a keyword is matched, all the related tokens will be removed. Otherwise only the keyword itself will be removed.</param>\n public KeywordTextOutputStreamTransform(IEnumerable<string> keywords, int redundancyLength = 3, bool removeAllMatchedTokens = false)\n {\n _keywords = new(keywords);\n _maxKeywordLength = keywords.Select(x => x.Length).Max() + redundancyLength;\n _removeAllMatchedTokens = removeAllMatchedTokens;\n }\n /// <inheritdoc />\n public IEnumerable<string> Transform(IEnumerable<string> tokens)\n {\n var window = new Queue<string>();\n\n foreach (var s in tokens)\n {\n window.Enqueue(s);\n var current = string.Join(\"\", window);\n if (_keywords.Any(x => current.Contains(x)))\n {\n var matchedKeyword = _keywords.First(x => current.Contains(x));\n int total = window.Count;\n for (int i = 0; i < total; i++)\n {\n window.Dequeue();\n }\n if (!_removeAllMatchedTokens)\n {\n yield return current.Replace(matchedKeyword, \"\");\n }\n }\n if (current.Length >= _maxKeywordLength)\n {\n if (_keywords.Any(x => current.Contains(x)))\n {\n var matchedKeyword = _keywords.First(x => current.Contains(x));\n int total = window.Count;\n for (int i = 0; i < total; i++)\n {\n window.Dequeue();\n }\n if (!_removeAllMatchedTokens)\n {\n yield return current.Replace(matchedKeyword, \"\");\n }\n }\n else\n {\n int total = window.Count;\n for (int i = 0; i < total; i++)\n {\n yield return window.Dequeue();\n }\n }\n }\n }\n int totalCount = window.Count;\n for (int i = 0; i < totalCount; i++)\n {\n yield return window.Dequeue();\n }\n }\n /// <inheritdoc />\n public async IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens)\n {\n throw new NotImplementedException(); // This is implemented in `LLamaTransforms` but we ignore it here.\n }\n}\n"},{"location":"ChatSession/transforms/#history-transform","title":"History transform","text":"The chat history can be converted to or from a text, which is exactly what its interface does.
public interface IHistoryTransform\n{\n string HistoryToText(ChatHistory history);\n ChatHistory TextToHistory(AuthorRole role, string text);\n}\n Similar to the output transform, the history transform is added in the following way:
session.WithHistoryTransform(new MyHistoryTransform());\n The implementation is quite flexible, depending on what you want the history message to be like. Here's an example, which is the default history transform in LLamaSharp.
/// <summary>\n/// The default history transform.\n/// Uses plain text with the following format:\n/// [Author]: [Message]\n/// </summary>\npublic class DefaultHistoryTransform : IHistoryTransform\n{\n private readonly string defaultUserName = \"User\";\n private readonly string defaultAssistantName = \"Assistant\";\n private readonly string defaultSystemName = \"System\";\n private readonly string defaultUnknownName = \"??\";\n\n string _userName;\n string _assistantName;\n string _systemName;\n string _unknownName;\n bool _isInstructMode;\n public DefaultHistoryTransform(string? userName = null, string? assistantName = null, \n string? systemName = null, string? unknownName = null, bool isInstructMode = false)\n {\n _userName = userName ?? defaultUserName;\n _assistantName = assistantName ?? defaultAssistantName;\n _systemName = systemName ?? defaultSystemName;\n _unknownName = unknownName ?? defaultUnknownName;\n _isInstructMode = isInstructMode;\n }\n\n public virtual string HistoryToText(ChatHistory history)\n {\n StringBuilder sb = new();\n foreach (var message in history.Messages)\n {\n if (message.AuthorRole == AuthorRole.User)\n {\n sb.AppendLine($\"{_userName}: {message.Content}\");\n }\n else if (message.AuthorRole == AuthorRole.System)\n {\n sb.AppendLine($\"{_systemName}: {message.Content}\");\n }\n else if (message.AuthorRole == AuthorRole.Unknown)\n {\n sb.AppendLine($\"{_unknownName}: {message.Content}\");\n }\n else if (message.AuthorRole == AuthorRole.Assistant)\n {\n sb.AppendLine($\"{_assistantName}: {message.Content}\");\n }\n }\n return sb.ToString();\n }\n\n public virtual ChatHistory TextToHistory(AuthorRole role, string text)\n {\n ChatHistory history = new ChatHistory();\n history.AddMessage(role, TrimNamesFromText(text, role));\n return history;\n }\n\n public virtual string TrimNamesFromText(string text, AuthorRole role)\n {\n if (role == AuthorRole.User && text.StartsWith($\"{_userName}:\"))\n {\n text = text.Substring($\"{_userName}:\".Length).TrimStart();\n }\n else if (role == AuthorRole.Assistant && text.EndsWith($\"{_assistantName}:\"))\n {\n text = text.Substring(0, text.Length - $\"{_assistantName}:\".Length).TrimEnd();\n }\n if (_isInstructMode && role == AuthorRole.Assistant && text.EndsWith(\"\\n> \"))\n {\n text = text.Substring(0, text.Length - \"\\n> \".Length).TrimEnd();\n }\n return text;\n }\n}\n"},{"location":"Examples/ChatSessionStripRoleName/","title":"Use chat session and strip role names","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class ChatSessionStripRoleName\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim();\n InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));\n ChatSession session = new ChatSession(ex).WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(new string[] { \"User:\", \"Bob:\" }, redundancyLength: 8));\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The chat session has started. The role names won't be printed.\");\n Console.ForegroundColor = ConsoleColor.White;\n\n while (true)\n {\n foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } }))\n {\n Console.Write(text);\n }\n\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n}\n"},{"location":"Examples/ChatSessionWithRoleName/","title":"Use chat session without removing role names","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class ChatSessionWithRoleName\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim();\n InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));\n ChatSession session = new ChatSession(ex); // The only change is to remove the transform for the output text stream.\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The chat session has started. In this example, the prompt is printed for better visual result.\");\n Console.ForegroundColor = ConsoleColor.White;\n\n // show the prompt\n Console.Write(prompt);\n while (true)\n {\n foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } }))\n {\n Console.Write(text);\n }\n\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n}\n"},{"location":"Examples/GetEmbeddings/","title":"Get embeddings","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class GetEmbeddings\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var embedder = new LLamaEmbedder(new ModelParams(modelPath));\n\n while (true)\n {\n Console.Write(\"Please input your text: \");\n Console.ForegroundColor = ConsoleColor.Green;\n var text = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n\n Console.WriteLine(string.Join(\", \", embedder.GetEmbeddings(text)));\n Console.WriteLine();\n }\n }\n}\n"},{"location":"Examples/InstructModeExecute/","title":"Use instruct executor","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class InstructModeExecute\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/dan.txt\").Trim();\n\n InstructExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024)));\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The executor has been enabled. In this example, the LLM will follow your instructions. For example, you can input \\\"Write a story about a fox who wants to \" +\n \"make friends with humans, no less than 200 words.\\\"\");\n Console.ForegroundColor = ConsoleColor.White;\n\n var inferenceParams = new InferenceParams() { Temperature = 0.8f, MaxTokens = 300 };\n\n while (true)\n {\n foreach (var text in ex.Infer(prompt, inferenceParams))\n {\n Console.Write(text);\n }\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n}\n"},{"location":"Examples/InteractiveModeExecute/","title":"Use interactive executor","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class InteractiveModeExecute\n{\n public async static Task Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim();\n\n InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to 64 and the context size is 256. (an example for small scale usage)\");\n Console.ForegroundColor = ConsoleColor.White;\n\n Console.Write(prompt);\n\n var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" }, MaxTokens = 64 };\n\n while (true)\n {\n await foreach (var text in ex.InferAsync(prompt, inferenceParams))\n {\n Console.Write(text);\n }\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n}\n"},{"location":"Examples/LoadAndSaveSession/","title":"Load and save chat session","text":"using LLama.Common;\nusing LLama.OldVersion;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class SaveAndLoadSession\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim();\n InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));\n ChatSession session = new ChatSession(ex); // The only change is to remove the transform for the output text stream.\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The chat session has started. In this example, the prompt is printed for better visual result. Input \\\"save\\\" to save and reload the session.\");\n Console.ForegroundColor = ConsoleColor.White;\n\n // show the prompt\n Console.Write(prompt);\n while (true)\n {\n foreach (var text in session.Chat(prompt, new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } }))\n {\n Console.Write(text);\n }\n\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n if (prompt == \"save\")\n {\n Console.Write(\"Preparing to save the state, please input the path you want to save it: \");\n Console.ForegroundColor = ConsoleColor.Green;\n var statePath = Console.ReadLine();\n session.SaveSession(statePath);\n Console.ForegroundColor = ConsoleColor.White;\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"Saved session!\");\n Console.ForegroundColor = ConsoleColor.White;\n\n ex.Model.Dispose();\n ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 1024, seed: 1337, gpuLayerCount: 5)));\n session = new ChatSession(ex).WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(new string[] { \"User:\", \"Bob:\" }, redundancyLength: 8));\n session.LoadSession(statePath);\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"Loaded session!\");\n Console.ForegroundColor = ConsoleColor.White;\n\n Console.Write(\"Now you can continue your session: \");\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n }\n}\n"},{"location":"Examples/LoadAndSaveState/","title":"Load and save model/executor state","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class LoadAndSaveState\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n var prompt = File.ReadAllText(\"Assets/chat-with-bob.txt\").Trim();\n\n InteractiveExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to 64 and the context size is 256. (an example for small scale usage)\");\n Console.ForegroundColor = ConsoleColor.White;\n\n Console.Write(prompt);\n\n var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"User:\" } };\n\n while (true)\n {\n foreach (var text in ex.Infer(prompt, inferenceParams))\n {\n Console.Write(text);\n }\n\n prompt = Console.ReadLine();\n if (prompt == \"save\")\n {\n Console.Write(\"Your path to save model state: \");\n string modelStatePath = Console.ReadLine();\n ex.Model.SaveState(modelStatePath);\n\n Console.Write(\"Your path to save executor state: \");\n string executorStatePath = Console.ReadLine();\n ex.SaveState(executorStatePath);\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"All states saved!\");\n Console.ForegroundColor = ConsoleColor.White;\n\n var model = ex.Model;\n model.LoadState(modelStatePath);\n ex = new InteractiveExecutor(model);\n ex.LoadState(executorStatePath);\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"Loaded state!\");\n Console.ForegroundColor = ConsoleColor.White;\n\n Console.Write(\"Now you can continue your session: \");\n Console.ForegroundColor = ConsoleColor.Green;\n prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White;\n }\n }\n }\n}\n"},{"location":"Examples/QuantizeModel/","title":"Quantize model","text":"using System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading;\nusing System.Threading.Tasks;\n\npublic class QuantizeModel\n{\n public static void Run()\n {\n Console.Write(\"Please input your original model path: \");\n var inputPath = Console.ReadLine();\n Console.Write(\"Please input your output model path: \");\n var outputPath = Console.ReadLine();\n Console.Write(\"Please input the quantize type (one of q4_0, q4_1, q5_0, q5_1, q8_0): \");\n var quantizeType = Console.ReadLine();\n if (LLamaQuantizer.Quantize(inputPath, outputPath, quantizeType))\n {\n Console.WriteLine(\"Quantization succeeded!\");\n }\n else\n {\n Console.WriteLine(\"Quantization failed!\");\n }\n }\n}\n"},{"location":"Examples/StatelessModeExecute/","title":"Use stateless executor","text":"using LLama.Common;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\npublic class StatelessModeExecute\n{\n public static void Run()\n {\n Console.Write(\"Please input your model path: \");\n string modelPath = Console.ReadLine();\n\n StatelessExecutor ex = new(new LLamaModel(new ModelParams(modelPath, contextSize: 256)));\n\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(\"The executor has been enabled. In this example, the inference is a one-time job. That is to say, the previous input and response have \" +\n \"no impact on the current response. Now you can ask it questions. Note that in this example, no prompt was set for LLM and the maximum response tokens is 50. \" +\n \"It may not perform well because of lack of prompt. This is also an example that could indicate the importance of prompt in LLM. To improve it, you can add \" +\n \"a prompt for it yourself!\");\n Console.ForegroundColor = ConsoleColor.White;\n\n var inferenceParams = new InferenceParams() { Temperature = 0.6f, AntiPrompts = new List<string> { \"Question:\", \"#\", \"Question: \", \".\\n\" }, MaxTokens = 50 };\n\n while (true)\n {\n Console.Write(\"\\nQuestion: \");\n Console.ForegroundColor = ConsoleColor.Green;\n string prompt = Console.ReadLine();\n Console.ForegroundColor = ConsoleColor.White; \n Console.Write(\"Answer: \");\n prompt = $\"Question: {prompt.Trim()} Answer: \";\n foreach (var text in ex.Infer(prompt, inferenceParams))\n {\n Console.Write(text);\n }\n }\n }\n}\n"},{"location":"HighLevelApps/bot-sharp/","title":"The Usage of BotSharp Integration","text":"The document is under work, so please wait for a while. Thank you for your support! :)
"},{"location":"HighLevelApps/semantic-kernel/","title":"The Usage of semantic-kernel Integration","text":"Please see this doc
"},{"location":"LLamaExecutors/differences/","title":"Differences of Executors","text":""},{"location":"LLamaExecutors/differences/#differences-between-the-executors","title":"Differences between the executors","text":"There're currently three kinds of executors provided, which are InteractiveExecutor, InstructExecutor and StatelessExecutor.
In a word, InteractiveExecutor is suitable for getting answer of your questions from LLM continuously. InstructExecutor let LLM execute your instructions, such as \"continue writing\". StatelessExecutor is best for one-time job because the previous inference has no impact on the current inference.
Both of them are taking \"completing the prompt\" as the goal to generate the response. For example, if you input Long long ago, there was a fox who wanted to make friend with humen. One day, then the LLM will continue to write the story.
Under interactive mode, you serve a role of user and the LLM serves the role of assistant. Then it will help you with your question or request.
Under instruct mode, you give LLM some instructions and it follows.
Though the behaviors of them sounds similar, it could introduce many differences depending on your prompt. For example, \"chat-with-bob\" has good performance under interactive mode and alpaca does well with instruct mode.
// chat-with-bob\n\nTranscript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.\n\nUser: Hello, Bob.\nBob: Hello. How may I help you today?\nUser: Please tell me the largest city in Europe.\nBob: Sure. The largest city in Europe is Moscow, the capital of Russia.\nUser:\n // alpaca\n\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n Therefore, please modify the prompt correspondingly when switching from one mode to the other.
"},{"location":"LLamaExecutors/differences/#stateful-mode-and-stateless-mode","title":"Stateful mode and Stateless mode.","text":"Despite the differences between interactive mode and instruct mode, both of them are stateful mode. That is, your previous question/instruction will impact on the current response from LLM. On the contrary, the stateless executor does not have such a \"memory\". No matter how many times you talk to it, it will only concentrate on what you say in this time.
Since the stateless executor has no memory of conversations before, you need to input your question with the whole prompt into it to get the better answer.
For example, if you feed Q: Who is Trump? A: to the stateless executor, it may give the following answer with the antiprompt Q:.
Donald J. Trump, born June 14, 1946, is an American businessman, television personality, politician and the 45th President of the United States (2017-2021). # Anexo:Torneo de Hamburgo 2022 (individual masculino)\n\n## Presentaci\u00f3n previa\n\n* Defensor del t\u00edtulo: Daniil Medv\u00e9dev\n It seems that things went well at first. However, after answering the question itself, LLM began to talk about some other things until the answer reached the token count limit. The reason of this strange behavior is the anti-prompt cannot be match. With the input, LLM cannot decide whether to append a string \"A: \" at the end of the response.
As an improvement, let's take the following text as the input:
Q: What is the capital of the USA? A: Washington. Q: What is the sum of 1 and 2? A: 3. Q: Who is Trump? A: \n Then, I got the following answer with the anti-prompt Q:.
45th president of the United States.\n At this time, by repeating the same mode of Q: xxx? A: xxx., LLM outputs the anti-prompt we want to help to decide where to stop the generation.
Different from LLamaModel, when using an executor, InferenceParams is passed to the Infer method instead of the constructor. This is because executors only define the way to run the model; therefore, you can change the settings for each inference run.
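For example, the same executor could be run twice with different settings (a minimal sketch, assuming ex is an existing executor):
var creative = new InferenceParams() { Temperature = 0.9f };\nvar precise = new InferenceParams() { Temperature = 0.1f };\nforeach (var text in ex.Infer(\"Write a story about a fox.\", creative)) { Console.Write(text); }\nforeach (var text in ex.Infer(\"What is the sum of 1 and 2?\", precise)) { Console.Write(text); }\n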
Namespace: LLama.Common
public class InferenceParams\n Inheritance Object \u2192 InferenceParams
"},{"location":"LLamaExecutors/parameters/#properties","title":"Properties","text":""},{"location":"LLamaExecutors/parameters/#tokenskeep","title":"TokensKeep","text":"number of tokens to keep from initial prompt
public int TokensKeep { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value","title":"Property Value","text":"Int32
"},{"location":"LLamaExecutors/parameters/#maxtokens","title":"MaxTokens","text":"how many new tokens to predict (n_predict), set to -1 to infinitely generate response until it complete.
public int MaxTokens { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"LLamaExecutors/parameters/#logitbias","title":"LogitBias","text":"logit bias for specific tokens
public Dictionary<int, float> LogitBias { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_2","title":"Property Value","text":"Dictionary<Int32, Single>
"},{"location":"LLamaExecutors/parameters/#antiprompts","title":"AntiPrompts","text":"Sequences where the model will stop generating further tokens.
public IEnumerable<string> AntiPrompts { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_3","title":"Property Value","text":"IEnumerable<String>
"},{"location":"LLamaExecutors/parameters/#pathsession","title":"PathSession","text":"path to file for saving/loading model eval state
public string PathSession { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_4","title":"Property Value","text":"String
"},{"location":"LLamaExecutors/parameters/#inputsuffix","title":"InputSuffix","text":"string to suffix user inputs with
public string InputSuffix { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_5","title":"Property Value","text":"String
"},{"location":"LLamaExecutors/parameters/#inputprefix","title":"InputPrefix","text":"string to prefix user inputs with
public string InputPrefix { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_6","title":"Property Value","text":"String
"},{"location":"LLamaExecutors/parameters/#topk","title":"TopK","text":"0 or lower to use vocab size
public int TopK { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_7","title":"Property Value","text":"Int32
"},{"location":"LLamaExecutors/parameters/#topp","title":"TopP","text":"1.0 = disabled
public float TopP { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_8","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#tfsz","title":"TfsZ","text":"1.0 = disabled
public float TfsZ { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_9","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#typicalp","title":"TypicalP","text":"1.0 = disabled
public float TypicalP { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_10","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#temperature","title":"Temperature","text":"1.0 = disabled
public float Temperature { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_11","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#repeatpenalty","title":"RepeatPenalty","text":"1.0 = disabled
public float RepeatPenalty { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_12","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#repeatlasttokenscount","title":"RepeatLastTokensCount","text":"last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
public int RepeatLastTokensCount { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_13","title":"Property Value","text":"Int32
"},{"location":"LLamaExecutors/parameters/#frequencypenalty","title":"FrequencyPenalty","text":"frequency penalty coefficient 0.0 = disabled
public float FrequencyPenalty { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_14","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#presencepenalty","title":"PresencePenalty","text":"presence penalty coefficient 0.0 = disabled
public float PresencePenalty { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_15","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#mirostat","title":"Mirostat","text":"Mirostat uses tokens instead of words. algorithm described in the paper https://arxiv.org/abs/2007.14966. 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
public MiroStateType Mirostat { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_16","title":"Property Value","text":"MiroStateType
"},{"location":"LLamaExecutors/parameters/#mirostattau","title":"MirostatTau","text":"target entropy
public float MirostatTau { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_17","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#mirostateta","title":"MirostatEta","text":"learning rate
public float MirostatEta { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_18","title":"Property Value","text":"Single
"},{"location":"LLamaExecutors/parameters/#penalizenl","title":"PenalizeNL","text":"consider newlines as a repeatable token (penalize_nl)
public bool PenalizeNL { get; set; }\n"},{"location":"LLamaExecutors/parameters/#property-value_19","title":"Property Value","text":"Boolean
"},{"location":"LLamaExecutors/save-load-state/","title":"Save/Load State of Executor","text":"Similar to LLamaModel, an executor also has its state, which can be saved and loaded. Note that in most of cases, the state of executor and the state of the model should be loaded and saved at the same time.
To decouple the model and the executor, we provide APIs to save/load the state of the model and the executor respectively. However, during the inference, the processed information will leave a footprint in the LLamaModel's native context. Therefore, if you just load a state from another executor but keep the model unmodified, some strange things may happen. The same applies to loading the model state only.
Is there a case that requires loading only one of them? The answer is YES. For example, after resetting the model state, if you don't want the inference to start from the new position, leaving the executor unmodified is okay. But anyway, this flexible usage may cause some unexpected behaviours; therefore, please make sure you know what you're doing before using it in this way.
In future versions, we'll open access to some variables inside the executor to support more flexible usage.
The APIs to load/save the state of the executors are similar to those of LLamaModel. However, note that StatelessExecutor doesn't have such APIs, because it's stateless itself. Besides, the output of GetStateData is an object of type ExecutorBaseState.
LLamaModel model = new LLamaModel(new ModelParams(\"<modelPath>\"));\nInteractiveExecutor executor = new InteractiveExecutor(model);\n// do some things...\nexecutor.SaveState(\"executor.st\");\nvar stateData = executor.GetStateData();\n\nInteractiveExecutor executor2 = new InteractiveExecutor(model);\nexecutor2.LoadState(stateData);\n// do some things...\n\nInteractiveExecutor executor3 = new InteractiveExecutor(model);\nexecutor3.LoadState(\"executor.st\");\n// do some things...\n"},{"location":"LLamaExecutors/text-to-text-apis/","title":"Text-to-Text APIs of the executors","text":"All the executors implement the interface ILLamaExecutor, which provides two APIs to execute text-to-text tasks.
public interface ILLamaExecutor\n{\n public LLamaModel Model { get; }\n\n IEnumerable<string> Infer(string text, InferenceParams? inferenceParams = null, CancellationToken token = default);\n\n IAsyncEnumerable<string> InferAsync(string text, InferenceParams? inferenceParams = null, CancellationToken token = default);\n}\n Just pass the text to the executor with the inference parameters. For the inference parameters, please refer to executor inference parameters doc.
The outputs of both APIs are yielded enumerables. Therefore, when receiving the output, you can directly use foreach to act on each piece of text in order, instead of waiting for the whole process to complete.
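For example, here is a sketch of consuming the stream and cancelling it with a timeout; the executor and the 30-second limit are just illustrations:
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)); // cancel the inference after 30 seconds\nforeach (var text in executor.Infer(\"What is C#?\", inferenceParams, cts.Token))\n{\n Console.Write(text); // each piece of text is handled as soon as it is generated\n}\n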
Getting the embeddings of a text in LLM is sometimes useful, for example, to train other MLP models.
To get the embeddings, please initialize a LLamaEmbedder and then call GetEmbeddings.
var embedder = new LLamaEmbedder(new ModelParams(\"<modelPath>\"));\nstring text = \"hello, LLM.\";\nfloat[] embeddings = embedder.GetEmbeddings(text);\n The output is a float array. Note that the length of the array is related to the model you load. If you just want a smaller embedding size, please consider using a different model.
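For example, to compare two texts you could compute the cosine similarity of their embeddings. This is plain vector math rather than a LLamaSharp API:
float[] a = embedder.GetEmbeddings(\"hello, LLM.\");\nfloat[] b = embedder.GetEmbeddings(\"hi, large language model.\");\ndouble dot = 0, normA = 0, normB = 0;\nfor (int i = 0; i < a.Length; i++)\n{\n dot += a[i] * b[i];\n normA += a[i] * a[i];\n normB += b[i] * b[i];\n}\ndouble cosine = dot / (Math.Sqrt(normA) * Math.Sqrt(normB)); // closer to 1 means more similar\n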
"},{"location":"LLamaModel/parameters/","title":"LLamaModel Parameters","text":"When initializing a LLamaModel object, there're three parameters, ModelParams Params, string encoding = \"UTF-8\", ILLamaLogger? logger = null.
The usage of logger will be further introduced in logger doc. The encoding is the encoding you want to use when dealing with text via this model.
The most important of all is the ModelParams, which is defined below. We'll explain the parameters step by step in this document.
public class ModelParams\n{\n public int ContextSize { get; set; } = 512;\n public int GpuLayerCount { get; set; } = 20;\n public int Seed { get; set; } = 1686349486;\n public bool UseFp16Memory { get; set; } = true;\n public bool UseMemorymap { get; set; } = true;\n public bool UseMemoryLock { get; set; } = false;\n public bool Perplexity { get; set; } = false;\n public string ModelPath { get; set; }\n public string LoraAdapter { get; set; } = string.Empty;\n public string LoraBase { get; set; } = string.Empty;\n public int Threads { get; set; } = Math.Max(Environment.ProcessorCount / 2, 1);\n public int BatchSize { get; set; } = 512;\n public bool ConvertEosToNewLine { get; set; } = false;\n}\n"},{"location":"LLamaModel/parameters/#modelparams","title":"ModelParams","text":"Namespace: LLama.Common
public class ModelParams\n Inheritance Object \u2192 ModelParams
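Before going through each property, here is a typical initialization; the values are illustrative and only the model path is required:
var parameters = new ModelParams(\"<modelPath>\")\n{\n ContextSize = 1024,\n GpuLayerCount = 20,\n Seed = 1337\n};\nLLamaModel model = new LLamaModel(parameters); // uses the default UTF-8 encoding and no logger\n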
"},{"location":"LLamaModel/parameters/#properties","title":"Properties","text":""},{"location":"LLamaModel/parameters/#contextsize","title":"ContextSize","text":"Model context size (n_ctx)
public int ContextSize { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value","title":"Property Value","text":"Int32
"},{"location":"LLamaModel/parameters/#gpulayercount","title":"GpuLayerCount","text":"Number of layers to run in VRAM / GPU memory (n_gpu_layers)
public int GpuLayerCount { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"LLamaModel/parameters/#seed","title":"Seed","text":"Seed for the random number generator (seed)
public int Seed { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"LLamaModel/parameters/#usefp16memory","title":"UseFp16Memory","text":"Use f16 instead of f32 for memory kv (memory_f16)
public bool UseFp16Memory { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_3","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/parameters/#usememorymap","title":"UseMemorymap","text":"Use mmap for faster loads (use_mmap)
public bool UseMemorymap { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_4","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/parameters/#usememorylock","title":"UseMemoryLock","text":"Use mlock to keep model in memory (use_mlock)
public bool UseMemoryLock { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_5","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/parameters/#perplexity","title":"Perplexity","text":"Compute perplexity over the prompt (perplexity)
public bool Perplexity { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_6","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/parameters/#modelpath","title":"ModelPath","text":"Model path (model)
public string ModelPath { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_7","title":"Property Value","text":"String
"},{"location":"LLamaModel/parameters/#loraadapter","title":"LoraAdapter","text":"lora adapter path (lora_adapter)
public string LoraAdapter { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_8","title":"Property Value","text":"String
"},{"location":"LLamaModel/parameters/#lorabase","title":"LoraBase","text":"base model path for the lora adapter (lora_base)
public string LoraBase { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_9","title":"Property Value","text":"String
"},{"location":"LLamaModel/parameters/#threads","title":"Threads","text":"Number of threads (-1 = autodetect) (n_threads)
public int Threads { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_10","title":"Property Value","text":"Int32
"},{"location":"LLamaModel/parameters/#batchsize","title":"BatchSize","text":"batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
public int BatchSize { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_11","title":"Property Value","text":"Int32
"},{"location":"LLamaModel/parameters/#converteostonewline","title":"ConvertEosToNewLine","text":"Whether to convert eos to newline during the inference.
public bool ConvertEosToNewLine { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_12","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/parameters/#embeddingmode","title":"EmbeddingMode","text":"Whether to use embedding mode. (embedding) Note that if this is set to true, The LLamaModel won't produce text response anymore.
public bool EmbeddingMode { get; set; }\n"},{"location":"LLamaModel/parameters/#property-value_13","title":"Property Value","text":"Boolean
"},{"location":"LLamaModel/quantization/","title":"Quantization","text":"Quantization is significant to accelerate the model inference. Since there's little accuracy (performance) reduction when quantizing the model, get it easy to quantize it!
To quantize the model, please call Quantize from LLamaQuantizer, which is a static method.
string srcPath = \"<model.bin>\";\nstring dstPath = \"<model_q4_0.bin>\";\nLLamaQuantizer.Quantize(srcPath, dstPath, \"q4_0\");\n// The following overload is also okay.\n// LLamaQuantizer.Quantize(srcPath, dstPath, LLamaFtype.LLAMA_FTYPE_MOSTLY_Q4_0);\n After calling it, a quantized model file will be saved.
There're currently 5 types of quantization supported:
There're two ways to load state: loading from path and loading from bite array. Therefore, correspondingly, state data can be extracted as byte array or saved to a file.
LLamaModel model = new LLamaModel(new ModelParams(\"<modelPath>\"));\n// do some things...\nmodel.SaveState(\"model.st\");\nvar stateData = model.GetStateData();\nmodel.Dispose();\n\nLLamaModel model2 = new LLamaModel(new ModelParams(\"<modelPath>\"));\nmodel2.LoadState(stateData);\n// do some things...\n\nLLamaModel model3 = new LLamaModel(new ModelParams(\"<modelPath>\"));\nmodel3.LoadState(\"model.st\");\n// do some things...\n"},{"location":"LLamaModel/tokenization/","title":"Tokenization/Detokenization","text":"A pair of APIs to make conversion between text and tokens.
"},{"location":"LLamaModel/tokenization/#tokenization","title":"Tokenization","text":"The basic usage is to call Tokenize after initializing the model.
LLamaModel model = new LLamaModel(new ModelParams(\"<modelPath>\"));\nstring text = \"hello\";\nint[] tokens = model.Tokenize(text).ToArray();\n Depending on the model (or vocab), the output will vary.
"},{"location":"LLamaModel/tokenization/#detokenization","title":"Detokenization","text":"Similar to tokenization, just pass an IEnumerable<int> to Detokenize method.
LLamaModel model = new LLamaModel(new ModelParams(\"<modelPath>\"));\nint[] tokens = new int[] {125, 2568, 13245};\nstring text = model.Detokenize(tokens);\n"},{"location":"More/log/","title":"The Logger in LLamaSharp","text":"LLamaSharp supports customized logger because it could be used in many kinds of applications, like Winform/WPF, WebAPI and Blazor, so that the preference of logger varies.
"},{"location":"More/log/#define-customized-logger","title":"Define customized logger","text":"What you need to do is to implement the ILogger interface.
public interface ILLamaLogger\n{\n public enum LogLevel\n {\n Info,\n Debug,\n Warning,\n Error\n }\n void Log(string source, string message, LogLevel level);\n}\n The source specifies where the log message comes from, which could be a function, a class, etc.
The message is the log message itself.
The level is the severity level of the information in the log. As shown above, there are four levels: info, debug, warning and error.
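Based on this interface, the built-in LLamaDefaultLogger (shown in full below) can be enabled like this; the log file name is just an example:
var logger = LLamaDefaultLogger.Default.EnableConsole().EnableFile(\"llama.log\");\nvar model = new LLamaModel(new ModelParams(\"<modelPath>\"), \"UTF-8\", logger);\n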
The following is a simple example of the logger implementation:
public sealed class LLamaDefaultLogger : ILLamaLogger\n{\n private static readonly Lazy<LLamaDefaultLogger> _instance = new Lazy<LLamaDefaultLogger>(() => new LLamaDefaultLogger());\n\n private bool _toConsole = true;\n private bool _toFile = false;\n\n private FileStream? _fileStream = null;\n private StreamWriter _fileWriter = null;\n\n public static LLamaDefaultLogger Default => _instance.Value;\n\n private LLamaDefaultLogger()\n {\n\n }\n\n public LLamaDefaultLogger EnableConsole()\n {\n _toConsole = true;\n return this;\n }\n\n public LLamaDefaultLogger DisableConsole()\n {\n _toConsole = false;\n return this;\n }\n\n public LLamaDefaultLogger EnableFile(string filename, FileMode mode = FileMode.Append)\n {\n _fileStream = new FileStream(filename, mode, FileAccess.Write);\n _fileWriter = new StreamWriter(_fileStream);\n _toFile = true;\n return this;\n }\n\n public LLamaDefaultLogger DisableFile(string filename)\n {\n if (_fileWriter is not null)\n {\n _fileWriter.Close();\n _fileWriter = null;\n }\n if (_fileStream is not null)\n {\n _fileStream.Close();\n _fileStream = null;\n }\n _toFile = false;\n return this;\n }\n\n public void Log(string source, string message, LogLevel level)\n {\n if (level == LogLevel.Info)\n {\n Info(message);\n }\n else if (level == LogLevel.Debug)\n {\n\n }\n else if (level == LogLevel.Warning)\n {\n Warn(message);\n }\n else if (level == LogLevel.Error)\n {\n Error(message);\n }\n }\n\n public void Info(string message)\n {\n message = MessageFormat(\"info\", message);\n if (_toConsole)\n {\n Console.ForegroundColor = ConsoleColor.White;\n Console.WriteLine(message);\n Console.ResetColor();\n }\n if (_toFile)\n {\n Debug.Assert(_fileStream is not null);\n Debug.Assert(_fileWriter is not null);\n _fileWriter.WriteLine(message);\n }\n }\n\n public void Warn(string message)\n {\n message = MessageFormat(\"warn\", message);\n if (_toConsole)\n {\n Console.ForegroundColor = ConsoleColor.Yellow;\n Console.WriteLine(message);\n Console.ResetColor();\n }\n if (_toFile)\n {\n Debug.Assert(_fileStream is not null);\n Debug.Assert(_fileWriter is not null);\n _fileWriter.WriteLine(message);\n }\n }\n\n public void Error(string message)\n {\n message = MessageFormat(\"error\", message);\n if (_toConsole)\n {\n Console.ForegroundColor = ConsoleColor.Red;\n Console.WriteLine(message);\n Console.ResetColor();\n }\n if (_toFile)\n {\n Debug.Assert(_fileStream is not null);\n Debug.Assert(_fileWriter is not null);\n _fileWriter.WriteLine(message);\n }\n }\n\n private string MessageFormat(string level, string message)\n {\n DateTime now = DateTime.Now;\n string formattedDate = now.ToString(\"yyyy.MM.dd HH:mm:ss\");\n return $\"[{formattedDate}][{level}]: {message}\";\n }\n}\n"},{"location":"NonEnglishUsage/Chinese/","title":"Use LLamaSharp with Chinese","text":"It's supported now but the document is under work. Please wait for some time. Thank you for your support! :)
"},{"location":"xmldocs/","title":"LLamaSharp","text":""},{"location":"xmldocs/#llama","title":"LLama","text":"ChatSession
InstructExecutor
InteractiveExecutor
LLamaContext
LLamaEmbedder
LLamaQuantizer
LLamaTransforms
LLamaWeights
StatefulExecutorBase
StatelessExecutor
Utils
"},{"location":"xmldocs/#llamaabstractions","title":"LLama.Abstractions","text":"IHistoryTransform
IInferenceParams
ILLamaExecutor
IModelParams
ITextStreamTransform
ITextTransform
"},{"location":"xmldocs/#llamacommon","title":"LLama.Common","text":"AuthorRole
ChatHistory
FixedSizeQueue<T>
ILLamaLogger
InferenceParams
LLamaDefaultLogger
MirostatType
ModelParams
"},{"location":"xmldocs/#llamaexceptions","title":"LLama.Exceptions","text":"GrammarExpectedName
GrammarExpectedNext
GrammarExpectedPrevious
GrammarFormatException
GrammarUnexpectedCharAltElement
GrammarUnexpectedCharRngElement
GrammarUnexpectedEndElement
GrammarUnexpectedEndOfInput
GrammarUnexpectedHexCharsCount
GrammarUnknownEscapeCharacter
RuntimeError
"},{"location":"xmldocs/#llamaextensions","title":"LLama.Extensions","text":"IModelParamsExtensions
KeyValuePairExtensions
"},{"location":"xmldocs/#llamagrammars","title":"LLama.Grammars","text":"Grammar
GrammarRule
"},{"location":"xmldocs/#llamanative","title":"LLama.Native","text":"LLamaContextParams
LLamaFtype
LLamaGrammarElement
LLamaGrammarElementType
LLamaModelQuantizeParams
LLamaTokenData
LLamaTokenDataArray
LLamaTokenDataArrayNative
NativeApi
SafeLLamaContextHandle
SafeLLamaGrammarHandle
SafeLLamaHandleBase
SafeLlamaModelHandle
SamplingApi
"},{"location":"xmldocs/#llamaoldversion","title":"LLama.OldVersion","text":"ChatCompletion
ChatCompletionChoice
ChatCompletionChunk
ChatCompletionChunkChoice
ChatCompletionChunkDelta
ChatCompletionMessage
ChatMessageRecord
ChatRole
ChatSession<T>
Completion
CompletionChoice
CompletionChunk
CompletionLogprobs
CompletionUsage
Embedding
EmbeddingData
EmbeddingUsage
IChatModel
LLamaEmbedder
LLamaModel
LLamaParams
"},{"location":"xmldocs/llama.abstractions.ihistorytransform/","title":"IHistoryTransform","text":"Namespace: LLama.Abstractions
Transform history to plain text and vice versa.
public interface IHistoryTransform\n"},{"location":"xmldocs/llama.abstractions.ihistorytransform/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.abstractions.ihistorytransform/#historytotextchathistory","title":"HistoryToText(ChatHistory)","text":"Convert a ChatHistory instance to plain text.
string HistoryToText(ChatHistory history)\n"},{"location":"xmldocs/llama.abstractions.ihistorytransform/#parameters","title":"Parameters","text":"history ChatHistory The ChatHistory instance
String
"},{"location":"xmldocs/llama.abstractions.ihistorytransform/#texttohistoryauthorrole-string","title":"TextToHistory(AuthorRole, String)","text":"Converts plain text to a ChatHistory instance.
ChatHistory TextToHistory(AuthorRole role, string text)\n"},{"location":"xmldocs/llama.abstractions.ihistorytransform/#parameters_1","title":"Parameters","text":"role AuthorRole The role for the author.
text String The chat history as plain text.
ChatHistory The updated history.
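For illustration, a minimal custom implementation is sketched below; the class name and the message format are assumptions for this example, not part of the library:

using System.Text;
using LLama.Abstractions;
using LLama.Common;

public class PlainTextHistoryTransform : IHistoryTransform
{
    // Render each message as "Role: content" on its own line.
    public string HistoryToText(ChatHistory history)
    {
        var builder = new StringBuilder();
        foreach (var message in history.Messages)
        {
            builder.AppendLine($"{message.AuthorRole}: {message.Content}");
        }
        return builder.ToString();
    }

    // Treat the whole text as a single message written by the given role.
    public ChatHistory TextToHistory(AuthorRole role, string text)
    {
        var history = new ChatHistory();
        history.AddMessage(role, text);
        return history;
    }
}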
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/","title":"IInferenceParams","text":"Namespace: LLama.Abstractions
The parameters used for inference.
public interface IInferenceParams\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.abstractions.iinferenceparams/#tokenskeep","title":"TokensKeep","text":"number of tokens to keep from initial prompt
public abstract int TokensKeep { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#maxtokens","title":"MaxTokens","text":"how many new tokens to predict (n_predict), set to -1 to inifinitely generate response until it complete.
public abstract int MaxTokens { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#logitbias","title":"LogitBias","text":"logit bias for specific tokens
public abstract Dictionary<int, float> LogitBias { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_2","title":"Property Value","text":"Dictionary<Int32, Single>
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#antiprompts","title":"AntiPrompts","text":"Sequences where the model will stop generating further tokens.
public abstract IEnumerable<string> AntiPrompts { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_3","title":"Property Value","text":"IEnumerable<String>
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#pathsession","title":"PathSession","text":"path to file for saving/loading model eval state
public abstract string PathSession { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#inputsuffix","title":"InputSuffix","text":"string to suffix user inputs with
public abstract string InputSuffix { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#inputprefix","title":"InputPrefix","text":"string to prefix user inputs with
public abstract string InputPrefix { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_6","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#topk","title":"TopK","text":"0 or lower to use vocab size
public abstract int TopK { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_7","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#topp","title":"TopP","text":"1.0 = disabled
public abstract float TopP { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_8","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#tfsz","title":"TfsZ","text":"1.0 = disabled
public abstract float TfsZ { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_9","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#typicalp","title":"TypicalP","text":"1.0 = disabled
public abstract float TypicalP { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_10","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#temperature","title":"Temperature","text":"1.0 = disabled
public abstract float Temperature { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_11","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#repeatpenalty","title":"RepeatPenalty","text":"1.0 = disabled
public abstract float RepeatPenalty { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_12","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#repeatlasttokenscount","title":"RepeatLastTokensCount","text":"last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
public abstract int RepeatLastTokensCount { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_13","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#frequencypenalty","title":"FrequencyPenalty","text":"frequency penalty coefficient 0.0 = disabled
public abstract float FrequencyPenalty { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_14","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#presencepenalty","title":"PresencePenalty","text":"presence penalty coefficient 0.0 = disabled
public abstract float PresencePenalty { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_15","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#mirostat","title":"Mirostat","text":"Mirostat uses tokens instead of words. algorithm described in the paper https://arxiv.org/abs/2007.14966. 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
public abstract MirostatType Mirostat { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_16","title":"Property Value","text":"MirostatType
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#mirostattau","title":"MirostatTau","text":"target entropy
public abstract float MirostatTau { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_17","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#mirostateta","title":"MirostatEta","text":"learning rate
public abstract float MirostatEta { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_18","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#penalizenl","title":"PenalizeNL","text":"consider newlines as a repeatable token (penalize_nl)
public abstract bool PenalizeNL { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_19","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#grammar","title":"Grammar","text":"Grammar to constrain possible tokens
public abstract SafeLLamaGrammarHandle Grammar { get; set; }\n"},{"location":"xmldocs/llama.abstractions.iinferenceparams/#property-value_20","title":"Property Value","text":"SafeLLamaGrammarHandle
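To tie these properties together, here is a hedged example of configuring the concrete InferenceParams class (documented under LLama.Common below); the specific values are illustrative only:

using System.Collections.Generic;
using LLama.Common;

var inferenceParams = new InferenceParams
{
    Temperature = 0.8f,                         // 1.0 = disabled
    TopP = 0.95f,                               // 1.0 = disabled
    RepeatPenalty = 1.1f,                       // 1.0 = disabled
    MaxTokens = 256,                            // -1 = generate until complete
    AntiPrompts = new List<string> { "User:" }  // stop generating at this sequence
};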
"},{"location":"xmldocs/llama.abstractions.illamaexecutor/","title":"ILLamaExecutor","text":"Namespace: LLama.Abstractions
A high level interface for LLama models.
public interface ILLamaExecutor\n"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.abstractions.illamaexecutor/#context","title":"Context","text":"The loaded context for this executor.
public abstract LLamaContext Context { get; }\n"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#property-value","title":"Property Value","text":"LLamaContext
"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.abstractions.illamaexecutor/#inferstring-iinferenceparams-cancellationtoken","title":"Infer(String, IInferenceParams, CancellationToken)","text":"Infers a response from the model.
IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken token)\n"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#parameters","title":"Parameters","text":"text String Your prompt
inferenceParams IInferenceParams Any additional parameters
token CancellationToken A cancellation token.
IEnumerable<String>
"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#inferasyncstring-iinferenceparams-cancellationtoken","title":"InferAsync(String, IInferenceParams, CancellationToken)","text":"Asynchronously infers a response from the model.
IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken token)\n"},{"location":"xmldocs/llama.abstractions.illamaexecutor/#parameters_1","title":"Parameters","text":"text String Your prompt
inferenceParams IInferenceParams Any additional parameters
token CancellationToken A cancellation token.
IAsyncEnumerable<String>
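A short consumption sketch, assuming executor is an already-constructed ILLamaExecutor and inferenceParams an IInferenceParams instance (both placeholders here); this must run inside an async method:

// Stream the generated text piece by piece as it is inferred.
await foreach (var text in executor.InferAsync("What is C#?", inferenceParams, CancellationToken.None))
{
    Console.Write(text);
}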
"},{"location":"xmldocs/llama.abstractions.imodelparams/","title":"IModelParams","text":"Namespace: LLama.Abstractions
The parameters for initializing a LLama model.
public interface IModelParams\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.abstractions.imodelparams/#contextsize","title":"ContextSize","text":"Model context size (n_ctx)
public abstract int ContextSize { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#maingpu","title":"MainGpu","text":"the GPU that is used for scratch and small tensors
public abstract int MainGpu { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#lowvram","title":"LowVram","text":"if true, reduce VRAM usage at the cost of performance
public abstract bool LowVram { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_2","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#gpulayercount","title":"GpuLayerCount","text":"Number of layers to run in VRAM / GPU memory (n_gpu_layers)
public abstract int GpuLayerCount { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_3","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#seed","title":"Seed","text":"Seed for the random number generator (seed)
public abstract int Seed { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_4","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#usefp16memory","title":"UseFp16Memory","text":"Use f16 instead of f32 for memory kv (memory_f16)
public abstract bool UseFp16Memory { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_5","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#usememorymap","title":"UseMemorymap","text":"Use mmap for faster loads (use_mmap)
public abstract bool UseMemorymap { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_6","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#usememorylock","title":"UseMemoryLock","text":"Use mlock to keep model in memory (use_mlock)
public abstract bool UseMemoryLock { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_7","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#perplexity","title":"Perplexity","text":"Compute perplexity over the prompt (perplexity)
public abstract bool Perplexity { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_8","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#modelpath","title":"ModelPath","text":"Model path (model)
public abstract string ModelPath { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_9","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.imodelparams/#modelalias","title":"ModelAlias","text":"model alias
public abstract string ModelAlias { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_10","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.imodelparams/#loraadapter","title":"LoraAdapter","text":"lora adapter path (lora_adapter)
public abstract string LoraAdapter { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_11","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.imodelparams/#lorabase","title":"LoraBase","text":"base model path for the lora adapter (lora_base)
public abstract string LoraBase { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_12","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.abstractions.imodelparams/#threads","title":"Threads","text":"Number of threads (-1 = autodetect) (n_threads)
public abstract int Threads { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_13","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#batchsize","title":"BatchSize","text":"batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
public abstract int BatchSize { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_14","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.abstractions.imodelparams/#converteostonewline","title":"ConvertEosToNewLine","text":"Whether to convert eos to newline during the inference.
public abstract bool ConvertEosToNewLine { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_15","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#embeddingmode","title":"EmbeddingMode","text":"Whether to use embedding mode. (embedding) Note that if this is set to true, The LLamaModel won't produce text response anymore.
public abstract bool EmbeddingMode { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_16","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#tensorsplits","title":"TensorSplits","text":"how split tensors should be distributed across GPUs
public abstract Single[] TensorSplits { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_17","title":"Property Value","text":"Single[]
"},{"location":"xmldocs/llama.abstractions.imodelparams/#ropefrequencybase","title":"RopeFrequencyBase","text":"RoPE base frequency
public abstract float RopeFrequencyBase { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_18","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.imodelparams/#ropefrequencyscale","title":"RopeFrequencyScale","text":"RoPE frequency scaling factor
public abstract float RopeFrequencyScale { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_19","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.abstractions.imodelparams/#mulmatq","title":"MulMatQ","text":"Use experimental mul_mat_q kernels
public abstract bool MulMatQ { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_20","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.abstractions.imodelparams/#encoding","title":"Encoding","text":"The encoding to use for models
public abstract Encoding Encoding { get; set; }\n"},{"location":"xmldocs/llama.abstractions.imodelparams/#property-value_21","title":"Property Value","text":"Encoding
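A hedged sketch of setting these options through the concrete ModelParams class (documented under LLama.Common below); modelPath is a placeholder and the values are illustrative:

var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,   // n_ctx
    GpuLayerCount = 20,   // n_gpu_layers; 0 keeps everything on the CPU
    Seed = 1337,          // seed for the random number generator
    UseMemorymap = true   // use_mmap for faster loads
};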
"},{"location":"xmldocs/llama.abstractions.itextstreamtransform/","title":"ITextStreamTransform","text":"Namespace: LLama.Abstractions
Takes a stream of tokens and transforms them.
public interface ITextStreamTransform\n"},{"location":"xmldocs/llama.abstractions.itextstreamtransform/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.abstractions.itextstreamtransform/#transformienumerablestring","title":"Transform(IEnumerable<String>)","text":"Takes a stream of tokens and transforms them, returning a new stream of tokens.
IEnumerable<string> Transform(IEnumerable<string> tokens)\n"},{"location":"xmldocs/llama.abstractions.itextstreamtransform/#parameters","title":"Parameters","text":"tokens IEnumerable<String>
IEnumerable<String>
"},{"location":"xmldocs/llama.abstractions.itextstreamtransform/#transformasynciasyncenumerablestring","title":"TransformAsync(IAsyncEnumerable<String>)","text":"Takes a stream of tokens and transforms them, returning a new stream of tokens asynchronously.
IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens)\n"},{"location":"xmldocs/llama.abstractions.itextstreamtransform/#parameters_1","title":"Parameters","text":"tokens IAsyncEnumerable<String>
IAsyncEnumerable<String>
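For illustration, a minimal implementation that upper-cases every token in the stream; the class name is a hypothetical example, not a library type:

using System.Collections.Generic;
using LLama.Abstractions;

public class UppercaseStreamTransform : ITextStreamTransform
{
    // Synchronous variant: transform each token as it passes through.
    public IEnumerable<string> Transform(IEnumerable<string> tokens)
    {
        foreach (var token in tokens)
            yield return token.ToUpperInvariant();
    }

    // Asynchronous variant with the same behavior.
    public async IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens)
    {
        await foreach (var token in tokens)
            yield return token.ToUpperInvariant();
    }
}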
"},{"location":"xmldocs/llama.abstractions.itexttransform/","title":"ITextTransform","text":"Namespace: LLama.Abstractions
An interface for text transformations. These can be used to compose a pipeline of text transformations, such as: - Tokenization - Lowercasing - Punctuation removal - Trimming - etc.
public interface ITextTransform\n"},{"location":"xmldocs/llama.abstractions.itexttransform/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.abstractions.itexttransform/#transformstring","title":"Transform(String)","text":"Takes a string and transforms it.
string Transform(string text)\n"},{"location":"xmldocs/llama.abstractions.itexttransform/#parameters","title":"Parameters","text":"text String
String
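As a hypothetical example (not a library type), a transform that trims whitespace from the input before it reaches the model:

public class TrimTextTransform : ITextTransform
{
    // Strip leading and trailing whitespace from user input.
    public string Transform(string text) => text.Trim();
}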
"},{"location":"xmldocs/llama.chatsession/","title":"ChatSession","text":"Namespace: LLama
The main chat session class.
public class ChatSession\n Inheritance Object \u2192 ChatSession
"},{"location":"xmldocs/llama.chatsession/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.chatsession/#outputtransform","title":"OutputTransform","text":"The output transform used in this session.
public ITextStreamTransform OutputTransform;\n"},{"location":"xmldocs/llama.chatsession/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.chatsession/#executor","title":"Executor","text":"The executor for this session.
public ILLamaExecutor Executor { get; }\n"},{"location":"xmldocs/llama.chatsession/#property-value","title":"Property Value","text":"ILLamaExecutor
"},{"location":"xmldocs/llama.chatsession/#history","title":"History","text":"The chat history for this session.
public ChatHistory History { get; }\n"},{"location":"xmldocs/llama.chatsession/#property-value_1","title":"Property Value","text":"ChatHistory
"},{"location":"xmldocs/llama.chatsession/#historytransform","title":"HistoryTransform","text":"The history transform used in this session.
public IHistoryTransform HistoryTransform { get; set; }\n"},{"location":"xmldocs/llama.chatsession/#property-value_2","title":"Property Value","text":"IHistoryTransform
"},{"location":"xmldocs/llama.chatsession/#inputtransformpipeline","title":"InputTransformPipeline","text":"The input transform pipeline used in this session.
public List<ITextTransform> InputTransformPipeline { get; set; }\n"},{"location":"xmldocs/llama.chatsession/#property-value_3","title":"Property Value","text":"List<ITextTransform>
"},{"location":"xmldocs/llama.chatsession/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.chatsession/#chatsessionillamaexecutor","title":"ChatSession(ILLamaExecutor)","text":"public ChatSession(ILLamaExecutor executor)\n"},{"location":"xmldocs/llama.chatsession/#parameters","title":"Parameters","text":"executor ILLamaExecutor The executor for this session
Use a custom history transform.
public ChatSession WithHistoryTransform(IHistoryTransform transform)\n"},{"location":"xmldocs/llama.chatsession/#parameters_1","title":"Parameters","text":"transform IHistoryTransform
ChatSession
"},{"location":"xmldocs/llama.chatsession/#addinputtransformitexttransform","title":"AddInputTransform(ITextTransform)","text":"Add a text transform to the input transform pipeline.
public ChatSession AddInputTransform(ITextTransform transform)\n"},{"location":"xmldocs/llama.chatsession/#parameters_2","title":"Parameters","text":"transform ITextTransform
ChatSession
"},{"location":"xmldocs/llama.chatsession/#withoutputtransformitextstreamtransform","title":"WithOutputTransform(ITextStreamTransform)","text":"Use a custom output transform.
public ChatSession WithOutputTransform(ITextStreamTransform transform)\n"},{"location":"xmldocs/llama.chatsession/#parameters_3","title":"Parameters","text":"transform ITextStreamTransform
ChatSession
"},{"location":"xmldocs/llama.chatsession/#savesessionstring","title":"SaveSession(String)","text":"public void SaveSession(string path)\n"},{"location":"xmldocs/llama.chatsession/#parameters_4","title":"Parameters","text":"path String The directory name to save the session. If the directory does not exist, a new directory will be created.
public void LoadSession(string path)\n"},{"location":"xmldocs/llama.chatsession/#parameters_5","title":"Parameters","text":"path String The directory name to load the session.
Get the response from the LLama model with chat histories.
public IEnumerable<string> Chat(ChatHistory history, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.chatsession/#parameters_6","title":"Parameters","text":"history ChatHistory
inferenceParams IInferenceParams
cancellationToken CancellationToken
IEnumerable<String>
"},{"location":"xmldocs/llama.chatsession/#chatstring-iinferenceparams-cancellationtoken","title":"Chat(String, IInferenceParams, CancellationToken)","text":"Get the response from the LLama model. Note that prompt could not only be the preset words, but also the question you want to ask.
public IEnumerable<string> Chat(string prompt, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.chatsession/#parameters_7","title":"Parameters","text":"prompt String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IEnumerable<String>
"},{"location":"xmldocs/llama.chatsession/#chatasyncchathistory-iinferenceparams-cancellationtoken","title":"ChatAsync(ChatHistory, IInferenceParams, CancellationToken)","text":"Get the response from the LLama model with chat histories.
public IAsyncEnumerable<string> ChatAsync(ChatHistory history, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.chatsession/#parameters_8","title":"Parameters","text":"history ChatHistory
inferenceParams IInferenceParams
cancellationToken CancellationToken
IAsyncEnumerable<String>
"},{"location":"xmldocs/llama.chatsession/#chatasyncstring-iinferenceparams-cancellationtoken","title":"ChatAsync(String, IInferenceParams, CancellationToken)","text":"Get the response from the LLama model with chat histories asynchronously.
public IAsyncEnumerable<string> ChatAsync(string prompt, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.chatsession/#parameters_9","title":"Parameters","text":"prompt String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IAsyncEnumerable<String>
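A brief usage sketch, assuming session is a ChatSession built as shown in the constructor section above and inferenceParams is an IInferenceParams instance; this must run inside an async method:

await foreach (var text in session.ChatAsync("Hello, who are you?", inferenceParams, CancellationToken.None))
{
    Console.Write(text); // print the streamed response as it arrives
}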
"},{"location":"xmldocs/llama.common.authorrole/","title":"AuthorRole","text":"Namespace: LLama.Common
Role of the message author, e.g. user/assistant/system
public enum AuthorRole\n Inheritance Object \u2192 ValueType \u2192 Enum \u2192 AuthorRole Implements IComparable, IFormattable, IConvertible
"},{"location":"xmldocs/llama.common.authorrole/#fields","title":"Fields","text":"Name Value Description Unknown -1 Role is unknown System 0 Message comes from a \"system\" prompt, not written by a user or language model User 1 Message comes from the user Assistant 2 Messages was generated by the language model"},{"location":"xmldocs/llama.common.chathistory/","title":"ChatHistory","text":"Namespace: LLama.Common
The chat history class
public class ChatHistory\n Inheritance Object \u2192 ChatHistory
"},{"location":"xmldocs/llama.common.chathistory/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.common.chathistory/#messages","title":"Messages","text":"List of messages in the chat
public List<Message> Messages { get; }\n"},{"location":"xmldocs/llama.common.chathistory/#property-value","title":"Property Value","text":"List<Message>
"},{"location":"xmldocs/llama.common.chathistory/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.common.chathistory/#chathistory_1","title":"ChatHistory()","text":"Create a new instance of the chat content class
public ChatHistory()\n"},{"location":"xmldocs/llama.common.chathistory/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.common.chathistory/#addmessageauthorrole-string","title":"AddMessage(AuthorRole, String)","text":"Add a message to the chat history
public void AddMessage(AuthorRole authorRole, string content)\n"},{"location":"xmldocs/llama.common.chathistory/#parameters","title":"Parameters","text":"authorRole AuthorRole Role of the message author
content String Message content
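A small usage sketch of building and inspecting a history (the message contents are placeholders):

var history = new ChatHistory();
history.AddMessage(AuthorRole.System, "You are a helpful assistant.");
history.AddMessage(AuthorRole.User, "What is C#?");
foreach (var message in history.Messages)
{
    Console.WriteLine($"{message.AuthorRole}: {message.Content}");
}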
Namespace: LLama.Common
A queue with fixed storage size. Currently it's only a naive implementation and needs to be further optimized in the future.
public class FixedSizeQueue<T> : IEnumerable<T>, System.Collections.IEnumerable\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#type-parameters","title":"Type Parameters","text":"T
Inheritance Object \u2192 FixedSizeQueue<T> Implements IEnumerable<T>, IEnumerable
"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.common.fixedsizequeue-1/#count","title":"Count","text":"Number of items in this queue
public int Count { get; }\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#capacity","title":"Capacity","text":"Maximum number of items allowed in this queue
public int Capacity { get; }\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.common.fixedsizequeue-1/#fixedsizequeueint32","title":"FixedSizeQueue(Int32)","text":"Create a new queue
public FixedSizeQueue(int size)\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#parameters","title":"Parameters","text":"size Int32 the maximum number of items to store in this queue
Fill the queue with the data. Please ensure that data.Count <= size
public FixedSizeQueue(int size, IEnumerable<T> data)\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#parameters_1","title":"Parameters","text":"size Int32
data IEnumerable<T>
Replace every item in the queue with the given value
public FixedSizeQueue<T> FillWith(T value)\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#parameters_2","title":"Parameters","text":"value T The value to replace all items with
FixedSizeQueue<T> returns this
"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#enqueuet","title":"Enqueue(T)","text":"Enquene an element.
public void Enqueue(T item)\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#parameters_3","title":"Parameters","text":"item T
public IEnumerator<T> GetEnumerator()\n"},{"location":"xmldocs/llama.common.fixedsizequeue-1/#returns_1","title":"Returns","text":"IEnumerator<T>
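A short usage sketch, assuming the oldest items are discarded once Capacity is exceeded:

var queue = new FixedSizeQueue<int>(3);
for (int i = 0; i < 5; i++)
    queue.Enqueue(i);
Console.WriteLine(string.Join(", ", queue)); // expected: 2, 3, 4
Console.WriteLine(queue.Count);              // 3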
"},{"location":"xmldocs/llama.common.illamalogger/","title":"ILLamaLogger","text":"Namespace: LLama.Common
receives log messages from LLamaSharp
public interface ILLamaLogger\n"},{"location":"xmldocs/llama.common.illamalogger/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.common.illamalogger/#logstring-string-loglevel","title":"Log(String, String, LogLevel)","text":"Write the log in customized way
void Log(string source, string message, LogLevel level)\n"},{"location":"xmldocs/llama.common.illamalogger/#parameters","title":"Parameters","text":"source String The source of the log. It may be a method name or class name.
message String The message.
level LogLevel The log level.
Namespace: LLama.Common
The parameters used for inference.
public class InferenceParams : LLama.Abstractions.IInferenceParams\n Inheritance Object \u2192 InferenceParams Implements IInferenceParams
"},{"location":"xmldocs/llama.common.inferenceparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.common.inferenceparams/#tokenskeep","title":"TokensKeep","text":"number of tokens to keep from initial prompt
public int TokensKeep { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.inferenceparams/#maxtokens","title":"MaxTokens","text":"how many new tokens to predict (n_predict), set to -1 to inifinitely generate response until it complete.
public int MaxTokens { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.inferenceparams/#logitbias","title":"LogitBias","text":"logit bias for specific tokens
public Dictionary<int, float> LogitBias { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_2","title":"Property Value","text":"Dictionary<Int32, Single>
"},{"location":"xmldocs/llama.common.inferenceparams/#antiprompts","title":"AntiPrompts","text":"Sequences where the model will stop generating further tokens.
public IEnumerable<string> AntiPrompts { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_3","title":"Property Value","text":"IEnumerable<String>
"},{"location":"xmldocs/llama.common.inferenceparams/#pathsession","title":"PathSession","text":"path to file for saving/loading model eval state
public string PathSession { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.inferenceparams/#inputsuffix","title":"InputSuffix","text":"string to suffix user inputs with
public string InputSuffix { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.inferenceparams/#inputprefix","title":"InputPrefix","text":"string to prefix user inputs with
public string InputPrefix { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_6","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.inferenceparams/#topk","title":"TopK","text":"0 or lower to use vocab size
public int TopK { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_7","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.inferenceparams/#topp","title":"TopP","text":"1.0 = disabled
public float TopP { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_8","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#tfsz","title":"TfsZ","text":"1.0 = disabled
public float TfsZ { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_9","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#typicalp","title":"TypicalP","text":"1.0 = disabled
public float TypicalP { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_10","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#temperature","title":"Temperature","text":"1.0 = disabled
public float Temperature { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_11","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#repeatpenalty","title":"RepeatPenalty","text":"1.0 = disabled
public float RepeatPenalty { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_12","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#repeatlasttokenscount","title":"RepeatLastTokensCount","text":"last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
public int RepeatLastTokensCount { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_13","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.inferenceparams/#frequencypenalty","title":"FrequencyPenalty","text":"frequency penalty coefficient 0.0 = disabled
public float FrequencyPenalty { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_14","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#presencepenalty","title":"PresencePenalty","text":"presence penalty coefficient 0.0 = disabled
public float PresencePenalty { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_15","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#mirostat","title":"Mirostat","text":"Mirostat uses tokens instead of words. algorithm described in the paper https://arxiv.org/abs/2007.14966. 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
public MirostatType Mirostat { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_16","title":"Property Value","text":"MirostatType
"},{"location":"xmldocs/llama.common.inferenceparams/#mirostattau","title":"MirostatTau","text":"target entropy
public float MirostatTau { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_17","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#mirostateta","title":"MirostatEta","text":"learning rate
public float MirostatEta { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_18","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.inferenceparams/#penalizenl","title":"PenalizeNL","text":"consider newlines as a repeatable token (penalize_nl)
public bool PenalizeNL { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_19","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.inferenceparams/#grammar","title":"Grammar","text":"A grammar to constrain the possible tokens
public SafeLLamaGrammarHandle Grammar { get; set; }\n"},{"location":"xmldocs/llama.common.inferenceparams/#property-value_20","title":"Property Value","text":"SafeLLamaGrammarHandle
"},{"location":"xmldocs/llama.common.inferenceparams/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.common.inferenceparams/#inferenceparams_1","title":"InferenceParams()","text":"public InferenceParams()\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/","title":"LLamaDefaultLogger","text":"Namespace: LLama.Common
The default logger of LLamaSharp. By default it writes to the console. Use the methods of LLamaDefaultLogger.Default to change the behavior. It's recommended to implement ILLamaLogger to customize the behavior.
public sealed class LLamaDefaultLogger : ILLamaLogger\n Inheritance Object \u2192 LLamaDefaultLogger Implements ILLamaLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.common.llamadefaultlogger/#default","title":"Default","text":"Get the default logger instance
public static LLamaDefaultLogger Default { get; }\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#property-value","title":"Property Value","text":"LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.common.llamadefaultlogger/#enablenative","title":"EnableNative()","text":"Enable logging output from llama.cpp
public LLamaDefaultLogger EnableNative()\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#returns","title":"Returns","text":"LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#enableconsole","title":"EnableConsole()","text":"Enable writing log messages to console
public LLamaDefaultLogger EnableConsole()\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#returns_1","title":"Returns","text":"LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#disableconsole","title":"DisableConsole()","text":"Disable writing messages to console
public LLamaDefaultLogger DisableConsole()\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#returns_2","title":"Returns","text":"LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#enablefilestring-filemode","title":"EnableFile(String, FileMode)","text":"Enable writing log messages to file
public LLamaDefaultLogger EnableFile(string filename, FileMode mode)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters","title":"Parameters","text":"filename String
mode FileMode
LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#disablefilestring","title":"DisableFile(String)","text":""},{"location":"xmldocs/llama.common.llamadefaultlogger/#caution","title":"Caution","text":"Use DisableFile method without 'filename' parameter
Disable writing log messages to file
public LLamaDefaultLogger DisableFile(string filename)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters_1","title":"Parameters","text":"filename String unused!
LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#disablefile","title":"DisableFile()","text":"Disable writing log messages to file
public LLamaDefaultLogger DisableFile()\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#returns_5","title":"Returns","text":"LLamaDefaultLogger
"},{"location":"xmldocs/llama.common.llamadefaultlogger/#logstring-string-loglevel","title":"Log(String, String, LogLevel)","text":"Log a message
public void Log(string source, string message, LogLevel level)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters_2","title":"Parameters","text":"source String The source of this message (e.g. class name)
message String The message to log
level LogLevel Severity level of this message
Write a log message with \"Info\" severity
public void Info(string message)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters_3","title":"Parameters","text":"message String
Write a log message with \"Warn\" severity
public void Warn(string message)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters_4","title":"Parameters","text":"message String
Write a log message with \"Error\" severity
public void Error(string message)\n"},{"location":"xmldocs/llama.common.llamadefaultlogger/#parameters_5","title":"Parameters","text":"message String
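Putting the methods above together, a hedged configuration sketch (the file name is a placeholder; FileMode comes from System.IO):

LLamaDefaultLogger.Default
    .EnableConsole()
    .EnableFile("llama.log", FileMode.Append);

LLamaDefaultLogger.Default.Info("model loaded"); // written to both console and file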
Namespace: LLama.Common
Type of \"mirostat\" sampling to use. https://github.com/basusourya/mirostat
public enum MirostatType\n Inheritance Object \u2192 ValueType \u2192 Enum \u2192 MirostatType Implements IComparable, IFormattable, IConvertible
"},{"location":"xmldocs/llama.common.mirostattype/#fields","title":"Fields","text":"Name Value Description Disable 0 Disable Mirostat sampling Mirostat 1 Original mirostat algorithm Mirostat2 2 Mirostat 2.0 algorithm"},{"location":"xmldocs/llama.common.modelparams/","title":"ModelParams","text":"Namespace: LLama.Common
The parameters for initializing a LLama model.
public class ModelParams : LLama.Abstractions.IModelParams, System.IEquatable`1[[LLama.Common.ModelParams, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ModelParams Implements IModelParams, IEquatable<ModelParams>
"},{"location":"xmldocs/llama.common.modelparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.common.modelparams/#contextsize","title":"ContextSize","text":"Model context size (n_ctx)
public int ContextSize { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#maingpu","title":"MainGpu","text":"the GPU that is used for scratch and small tensors
public int MainGpu { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#lowvram","title":"LowVram","text":"if true, reduce VRAM usage at the cost of performance
public bool LowVram { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_2","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#gpulayercount","title":"GpuLayerCount","text":"Number of layers to run in VRAM / GPU memory (n_gpu_layers)
public int GpuLayerCount { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_3","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#seed","title":"Seed","text":"Seed for the random number generator (seed)
public int Seed { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_4","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#usefp16memory","title":"UseFp16Memory","text":"Use f16 instead of f32 for memory kv (memory_f16)
public bool UseFp16Memory { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_5","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#usememorymap","title":"UseMemorymap","text":"Use mmap for faster loads (use_mmap)
public bool UseMemorymap { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_6","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#usememorylock","title":"UseMemoryLock","text":"Use mlock to keep model in memory (use_mlock)
public bool UseMemoryLock { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_7","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#perplexity","title":"Perplexity","text":"Compute perplexity over the prompt (perplexity)
public bool Perplexity { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_8","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#modelpath","title":"ModelPath","text":"Model path (model)
public string ModelPath { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_9","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.modelparams/#modelalias","title":"ModelAlias","text":"model alias
public string ModelAlias { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_10","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.modelparams/#loraadapter","title":"LoraAdapter","text":"lora adapter path (lora_adapter)
public string LoraAdapter { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_11","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.modelparams/#lorabase","title":"LoraBase","text":"base model path for the lora adapter (lora_base)
public string LoraBase { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_12","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.common.modelparams/#threads","title":"Threads","text":"Number of threads (-1 = autodetect) (n_threads)
public int Threads { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_13","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#batchsize","title":"BatchSize","text":"batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
public int BatchSize { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_14","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#converteostonewline","title":"ConvertEosToNewLine","text":"Whether to convert eos to newline during the inference.
public bool ConvertEosToNewLine { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_15","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#embeddingmode","title":"EmbeddingMode","text":"Whether to use embedding mode. (embedding) Note that if this is set to true, The LLamaModel won't produce text response anymore.
public bool EmbeddingMode { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_16","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#tensorsplits","title":"TensorSplits","text":"how split tensors should be distributed across GPUs
public Single[] TensorSplits { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_17","title":"Property Value","text":"Single[]
"},{"location":"xmldocs/llama.common.modelparams/#ropefrequencybase","title":"RopeFrequencyBase","text":"RoPE base frequency
public float RopeFrequencyBase { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_18","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.modelparams/#ropefrequencyscale","title":"RopeFrequencyScale","text":"RoPE frequency scaling factor
public float RopeFrequencyScale { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_19","title":"Property Value","text":"Single
"},{"location":"xmldocs/llama.common.modelparams/#mulmatq","title":"MulMatQ","text":"Use experimental mul_mat_q kernels
public bool MulMatQ { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_20","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.common.modelparams/#encoding","title":"Encoding","text":"The encoding to use to convert text for the model
public Encoding Encoding { get; set; }\n"},{"location":"xmldocs/llama.common.modelparams/#property-value_21","title":"Property Value","text":"Encoding
"},{"location":"xmldocs/llama.common.modelparams/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.common.modelparams/#modelparamsstring","title":"ModelParams(String)","text":"public ModelParams(string modelPath)\n"},{"location":"xmldocs/llama.common.modelparams/#parameters","title":"Parameters","text":"modelPath String The model path.
Use object initializer to set all optional parameters
public ModelParams(string modelPath, int contextSize, int gpuLayerCount, int seed, bool useFp16Memory, bool useMemorymap, bool useMemoryLock, bool perplexity, string loraAdapter, string loraBase, int threads, int batchSize, bool convertEosToNewLine, bool embeddingMode, float ropeFrequencyBase, float ropeFrequencyScale, bool mulMatQ, string encoding)\n"},{"location":"xmldocs/llama.common.modelparams/#parameters_1","title":"Parameters","text":"modelPath String The model path.
contextSize Int32 Model context size (n_ctx)
gpuLayerCount Int32 Number of layers to run in VRAM / GPU memory (n_gpu_layers)
seed Int32 Seed for the random number generator (seed)
useFp16Memory Boolean Whether to use f16 instead of f32 for memory kv (memory_f16)
useMemorymap Boolean Whether to use mmap for faster loads (use_mmap)
useMemoryLock Boolean Whether to use mlock to keep model in memory (use_mlock)
perplexity Boolean Whether to compute perplexity over the prompt (perplexity)
loraAdapter String Lora adapter path (lora_adapter)
loraBase String Base model path for the lora adapter (lora_base)
threads Int32 Number of threads (-1 = autodetect) (n_threads)
batchSize Int32 Batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
convertEosToNewLine Boolean Whether to convert eos to newline during the inference.
embeddingMode Boolean Whether to use embedding mode. (embedding) Note that if this is set to true, the LLamaModel won't produce text responses anymore.
ropeFrequencyBase Single RoPE base frequency.
ropeFrequencyScale Single RoPE frequency scaling factor
mulMatQ Boolean Use experimental mul_mat_q kernels
encoding String The encoding to use to convert text for the model
public string ToString()\n"},{"location":"xmldocs/llama.common.modelparams/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.common.modelparams/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.common.modelparams/#parameters_2","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.common.modelparams/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.common.modelparams/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.common.modelparams/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.common.modelparams/#parameters_3","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.common.modelparams/#equalsmodelparams","title":"Equals(ModelParams)","text":"public bool Equals(ModelParams other)\n"},{"location":"xmldocs/llama.common.modelparams/#parameters_4","title":"Parameters","text":"other ModelParams
Boolean
"},{"location":"xmldocs/llama.common.modelparams/#clone","title":"<Clone>$()","text":"public ModelParams <Clone>$()\n"},{"location":"xmldocs/llama.common.modelparams/#returns_5","title":"Returns","text":"ModelParams
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/","title":"GrammarExpectedName","text":"Namespace: LLama.Exceptions
Failed to parse a \"name\" element when one was expected
public class GrammarExpectedName : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarExpectedName Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedname/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/","title":"GrammarExpectedNext","text":"Namespace: LLama.Exceptions
A specified string was expected when parsing
public class GrammarExpectedNext : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarExpectedNext Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectednext/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/","title":"GrammarExpectedPrevious","text":"Namespace: LLama.Exceptions
A specified character was expected to precede another when parsing
public class GrammarExpectedPrevious : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarExpectedPrevious Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarexpectedprevious/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/","title":"GrammarFormatException","text":"Namespace: LLama.Exceptions
Base class for all grammar exceptions
public abstract class GrammarFormatException : System.Exception, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarformatexception/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarformatexception/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/","title":"GrammarUnexpectedCharAltElement","text":"Namespace: LLama.Exceptions
A CHAR_ALT was created without a preceding CHAR element
public class GrammarUnexpectedCharAltElement : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnexpectedCharAltElement Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharaltelement/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/","title":"GrammarUnexpectedCharRngElement","text":"Namespace: LLama.Exceptions
A CHAR_RNG was created without a preceding CHAR element
public class GrammarUnexpectedCharRngElement : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnexpectedCharRngElement Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedcharrngelement/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/","title":"GrammarUnexpectedEndElement","text":"Namespace: LLama.Exceptions
An END was encountered before the last element
public class GrammarUnexpectedEndElement : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnexpectedEndElement Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendelement/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/","title":"GrammarUnexpectedEndOfInput","text":"Namespace: LLama.Exceptions
End-of-file was encountered while parsing
public class GrammarUnexpectedEndOfInput : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnexpectedEndOfInput Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedendofinput/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/","title":"GrammarUnexpectedHexCharsCount","text":"Namespace: LLama.Exceptions
An incorrect number of characters was encountered while parsing a hex literal
public class GrammarUnexpectedHexCharsCount : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnexpectedHexCharsCount Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunexpectedhexcharscount/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/","title":"GrammarUnknownEscapeCharacter","text":"Namespace: LLama.Exceptions
An unexpected character was encountered after an escape sequence
public class GrammarUnknownEscapeCharacter : GrammarFormatException, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 GrammarFormatException \u2192 GrammarUnknownEscapeCharacter Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.grammarunknownescapecharacter/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.runtimeerror/","title":"RuntimeError","text":"Namespace: LLama.Exceptions
public class RuntimeError : System.Exception, System.Runtime.Serialization.ISerializable\n Inheritance Object \u2192 Exception \u2192 RuntimeError Implements ISerializable
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.exceptions.runtimeerror/#targetsite","title":"TargetSite","text":"public MethodBase TargetSite { get; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value","title":"Property Value","text":"MethodBase
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#message","title":"Message","text":"public string Message { get; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#data","title":"Data","text":"public IDictionary Data { get; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_2","title":"Property Value","text":"IDictionary
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#innerexception","title":"InnerException","text":"public Exception InnerException { get; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_3","title":"Property Value","text":"Exception
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#helplink","title":"HelpLink","text":"public string HelpLink { get; set; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_4","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#source","title":"Source","text":"public string Source { get; set; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_5","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#hresult","title":"HResult","text":"public int HResult { get; set; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#stacktrace","title":"StackTrace","text":"public string StackTrace { get; }\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#property-value_7","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.exceptions.runtimeerror/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.exceptions.runtimeerror/#runtimeerror_1","title":"RuntimeError()","text":"public RuntimeError()\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#runtimeerrorstring","title":"RuntimeError(String)","text":"public RuntimeError(string message)\n"},{"location":"xmldocs/llama.exceptions.runtimeerror/#parameters","title":"Parameters","text":"message String
Namespace: LLama.Extensions
Extension methods for the IModelParams interface
public static class IModelParamsExtensions\n Inheritance Object \u2192 IModelParamsExtensions
"},{"location":"xmldocs/llama.extensions.imodelparamsextensions/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.extensions.imodelparamsextensions/#tollamacontextparamsimodelparams-llamacontextparams","title":"ToLlamaContextParams(IModelParams, LLamaContextParams&)","text":"Convert the given IModelParams into a LLamaContextParams
public static MemoryHandle ToLlamaContextParams(IModelParams params, LLamaContextParams& result)\n"},{"location":"xmldocs/llama.extensions.imodelparamsextensions/#parameters","title":"Parameters","text":"params IModelParams
result LLamaContextParams&
MemoryHandle
"},{"location":"xmldocs/llama.extensions.imodelparamsextensions/#exceptions","title":"Exceptions","text":"FileNotFoundException
ArgumentException
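A minimal sketch of calling this extension, assuming the by-ref parameter behaves as an out parameter; the model path is illustrative:
IModelParams modelParams = new ModelParams("model.gguf"); // hypothetical path
using (modelParams.ToLlamaContextParams(out LLamaContextParams native))
{
    // Keep the returned MemoryHandle alive while `native` is in use,
    // since it pins memory the struct references (e.g. tensor_split).
}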
"},{"location":"xmldocs/llama.extensions.keyvaluepairextensions/","title":"KeyValuePairExtensions","text":"Namespace: LLama.Extensions
Extensions to the KeyValuePair struct
public static class KeyValuePairExtensions\n Inheritance Object \u2192 KeyValuePairExtensions
"},{"location":"xmldocs/llama.extensions.keyvaluepairextensions/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.extensions.keyvaluepairextensions/#deconstructtkey-tvaluekeyvaluepairtkey-tvalue-tkey-tvalue","title":"Deconstruct<TKey, TValue>(KeyValuePair<TKey, TValue>, TKey&, TValue&)","text":"Deconstruct a KeyValuePair into it's constituent parts.
public static void Deconstruct<TKey, TValue>(KeyValuePair<TKey, TValue> pair, TKey& first, TValue& second)\n"},{"location":"xmldocs/llama.extensions.keyvaluepairextensions/#type-parameters","title":"Type Parameters","text":"TKey Type of the Key
TValue Type of the Value
pair KeyValuePair<TKey, TValue> The KeyValuePair to deconstruct
first TKey& First element, the Key
second TValue& Second element, the Value
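For example, this extension enables tuple-style deconstruction of dictionary entries on frameworks where it is not built in:
var counts = new Dictionary<string, int> { ["alpha"] = 1, ["beta"] = 2 };
foreach (var (key, value) in counts) // Deconstruct is invoked implicitly
{
    Console.WriteLine($"{key} = {value}");
}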
Namespace: LLama.Grammars
A grammar is a set of GrammarRules for deciding which characters are valid next. It can be used to constrain output to certain formats, e.g. forcing the model to output JSON
public sealed class Grammar\n Inheritance Object \u2192 Grammar
"},{"location":"xmldocs/llama.grammars.grammar/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.grammars.grammar/#startruleindex","title":"StartRuleIndex","text":"Index of the initial rule to start from
public ulong StartRuleIndex { get; set; }\n"},{"location":"xmldocs/llama.grammars.grammar/#property-value","title":"Property Value","text":"UInt64
"},{"location":"xmldocs/llama.grammars.grammar/#rules","title":"Rules","text":"The rules which make up this grammar
public IReadOnlyList<GrammarRule> Rules { get; }\n"},{"location":"xmldocs/llama.grammars.grammar/#property-value_1","title":"Property Value","text":"IReadOnlyList<GrammarRule>
"},{"location":"xmldocs/llama.grammars.grammar/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.grammars.grammar/#grammarireadonlylistgrammarrule-uint64","title":"Grammar(IReadOnlyList<GrammarRule>, UInt64)","text":"Create a new grammar from a set of rules
public Grammar(IReadOnlyList<GrammarRule> rules, ulong startRuleIndex)\n"},{"location":"xmldocs/llama.grammars.grammar/#parameters","title":"Parameters","text":"rules IReadOnlyList<GrammarRule> The rules which make up this grammar
startRuleIndex UInt64 Index of the initial rule to start from
ArgumentOutOfRangeException
"},{"location":"xmldocs/llama.grammars.grammar/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.grammars.grammar/#createinstance","title":"CreateInstance()","text":"Create a SafeLLamaGrammarHandle instance to use for parsing
public SafeLLamaGrammarHandle CreateInstance()\n"},{"location":"xmldocs/llama.grammars.grammar/#returns","title":"Returns","text":"SafeLLamaGrammarHandle
"},{"location":"xmldocs/llama.grammars.grammar/#parsestring-string","title":"Parse(String, String)","text":"Parse a string of GGML BNF into a Grammar
public static Grammar Parse(string gbnf, string startRule)\n"},{"location":"xmldocs/llama.grammars.grammar/#parameters_1","title":"Parameters","text":"gbnf String The string to parse
startRule String Name of the start rule of this grammar
Grammar A Grammar which can be converted into a SafeLLamaGrammarHandle for sampling
"},{"location":"xmldocs/llama.grammars.grammar/#exceptions_1","title":"Exceptions","text":"GrammarFormatException Thrown if input is malformed
"},{"location":"xmldocs/llama.grammars.grammar/#tostring","title":"ToString()","text":"public string ToString()\n"},{"location":"xmldocs/llama.grammars.grammar/#returns_2","title":"Returns","text":"String
"},{"location":"xmldocs/llama.grammars.grammarrule/","title":"GrammarRule","text":"Namespace: LLama.Grammars
A single rule in a Grammar
public sealed class GrammarRule : System.IEquatable`1[[LLama.Grammars.GrammarRule, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 GrammarRule Implements IEquatable<GrammarRule>
"},{"location":"xmldocs/llama.grammars.grammarrule/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.grammars.grammarrule/#name","title":"Name","text":"Name of this rule
public string Name { get; }\n"},{"location":"xmldocs/llama.grammars.grammarrule/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.grammars.grammarrule/#elements","title":"Elements","text":"The elements of this grammar rule
public IReadOnlyList<LLamaGrammarElement> Elements { get; }\n"},{"location":"xmldocs/llama.grammars.grammarrule/#property-value_1","title":"Property Value","text":"IReadOnlyList<LLamaGrammarElement>
"},{"location":"xmldocs/llama.grammars.grammarrule/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.grammars.grammarrule/#grammarrulestring-ireadonlylistllamagrammarelement","title":"GrammarRule(String, IReadOnlyList<LLamaGrammarElement>)","text":"Create a new GrammarRule containing the given elements
public GrammarRule(string name, IReadOnlyList<LLamaGrammarElement> elements)\n"},{"location":"xmldocs/llama.grammars.grammarrule/#parameters","title":"Parameters","text":"name String
elements IReadOnlyList<LLamaGrammarElement>
ArgumentException
"},{"location":"xmldocs/llama.grammars.grammarrule/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.grammars.grammarrule/#tostring","title":"ToString()","text":"public string ToString()\n"},{"location":"xmldocs/llama.grammars.grammarrule/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.grammars.grammarrule/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.grammars.grammarrule/#returns_1","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.grammars.grammarrule/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.grammars.grammarrule/#parameters_1","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.grammars.grammarrule/#equalsgrammarrule","title":"Equals(GrammarRule)","text":"public bool Equals(GrammarRule other)\n"},{"location":"xmldocs/llama.grammars.grammarrule/#parameters_2","title":"Parameters","text":"other GrammarRule
Boolean
"},{"location":"xmldocs/llama.grammars.grammarrule/#clone","title":"<Clone>$()","text":"public GrammarRule <Clone>$()\n"},{"location":"xmldocs/llama.grammars.grammarrule/#returns_4","title":"Returns","text":"GrammarRule
"},{"location":"xmldocs/llama.instructexecutor/","title":"InstructExecutor","text":"Namespace: LLama
The LLama executor for instruct mode.
public class InstructExecutor : StatefulExecutorBase, LLama.Abstractions.ILLamaExecutor\n Inheritance Object \u2192 StatefulExecutorBase \u2192 InstructExecutor Implements ILLamaExecutor
"},{"location":"xmldocs/llama.instructexecutor/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.instructexecutor/#context","title":"Context","text":"The context used by the executor.
public LLamaContext Context { get; }\n"},{"location":"xmldocs/llama.instructexecutor/#property-value","title":"Property Value","text":"LLamaContext
"},{"location":"xmldocs/llama.instructexecutor/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.instructexecutor/#instructexecutorllamacontext-string-string","title":"InstructExecutor(LLamaContext, String, String)","text":"public InstructExecutor(LLamaContext context, string instructionPrefix, string instructionSuffix)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters","title":"Parameters","text":"context LLamaContext
instructionPrefix String
instructionSuffix String
public ExecutorBaseState GetStateData()\n"},{"location":"xmldocs/llama.instructexecutor/#returns","title":"Returns","text":"ExecutorBaseState
"},{"location":"xmldocs/llama.instructexecutor/#loadstateexecutorbasestate","title":"LoadState(ExecutorBaseState)","text":"public void LoadState(ExecutorBaseState data)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_1","title":"Parameters","text":"data ExecutorBaseState
public void SaveState(string filename)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_2","title":"Parameters","text":"filename String
public void LoadState(string filename)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_3","title":"Parameters","text":"filename String
protected bool GetLoopCondition(InferStateArgs args)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_4","title":"Parameters","text":"args InferStateArgs
Boolean
"},{"location":"xmldocs/llama.instructexecutor/#preprocessinputsstring-inferstateargs","title":"PreprocessInputs(String, InferStateArgs)","text":"protected void PreprocessInputs(string text, InferStateArgs args)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_5","title":"Parameters","text":"text String
args InferStateArgs
protected bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable`1& extraOutputs)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_6","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable`1&
Boolean
"},{"location":"xmldocs/llama.instructexecutor/#inferinternaliinferenceparams-inferstateargs","title":"InferInternal(IInferenceParams, InferStateArgs)","text":"protected void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)\n"},{"location":"xmldocs/llama.instructexecutor/#parameters_7","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
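A hedged sketch of constructing an InstructExecutor; the model path and the prefix/suffix strings are illustrative assumptions:
var modelParams = new ModelParams("model.gguf"); // hypothetical path
using var weights = LLamaWeights.LoadFromFile(modelParams);
using var context = weights.CreateContext(modelParams);
var executor = new InstructExecutor(context, "### Instruction:\n", "\n### Response:\n");
executor.SaveState("instruct.state"); // persist for a later session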
Namespace: LLama
The LLama executor for interactive mode.
public class InteractiveExecutor : StatefulExecutorBase, LLama.Abstractions.ILLamaExecutor\n Inheritance Object \u2192 StatefulExecutorBase \u2192 InteractiveExecutor Implements ILLamaExecutor
"},{"location":"xmldocs/llama.interactiveexecutor/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.interactiveexecutor/#context","title":"Context","text":"The context used by the executor.
public LLamaContext Context { get; }\n"},{"location":"xmldocs/llama.interactiveexecutor/#property-value","title":"Property Value","text":"LLamaContext
"},{"location":"xmldocs/llama.interactiveexecutor/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.interactiveexecutor/#interactiveexecutorllamacontext","title":"InteractiveExecutor(LLamaContext)","text":"public InteractiveExecutor(LLamaContext context)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters","title":"Parameters","text":"context LLamaContext
public ExecutorBaseState GetStateData()\n"},{"location":"xmldocs/llama.interactiveexecutor/#returns","title":"Returns","text":"ExecutorBaseState
"},{"location":"xmldocs/llama.interactiveexecutor/#loadstateexecutorbasestate","title":"LoadState(ExecutorBaseState)","text":"public void LoadState(ExecutorBaseState data)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_1","title":"Parameters","text":"data ExecutorBaseState
public void SaveState(string filename)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_2","title":"Parameters","text":"filename String
public void LoadState(string filename)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_3","title":"Parameters","text":"filename String
Determines whether the generation loop should continue producing responses.
protected bool GetLoopCondition(InferStateArgs args)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_4","title":"Parameters","text":"args InferStateArgs
Boolean
"},{"location":"xmldocs/llama.interactiveexecutor/#preprocessinputsstring-inferstateargs","title":"PreprocessInputs(String, InferStateArgs)","text":"protected void PreprocessInputs(string text, InferStateArgs args)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_5","title":"Parameters","text":"text String
args InferStateArgs
Returns whether generation should stop.
protected bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable`1& extraOutputs)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_6","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable`1&
Boolean
"},{"location":"xmldocs/llama.interactiveexecutor/#inferinternaliinferenceparams-inferstateargs","title":"InferInternal(IInferenceParams, InferStateArgs)","text":"protected void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)\n"},{"location":"xmldocs/llama.interactiveexecutor/#parameters_7","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
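State can also be captured in memory rather than on disk; a sketch, assuming an InteractiveExecutor named executor:
ExecutorBaseState snapshot = executor.GetStateData(); // snapshot the conversation
// ... infer some more, or reset ...
executor.LoadState(snapshot); // roll back to the snapshot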
Namespace: LLama
A llama_context, which holds all the context required to interact with a model
public sealed class LLamaContext : System.IDisposable\n Inheritance Object \u2192 LLamaContext Implements IDisposable
"},{"location":"xmldocs/llama.llamacontext/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.llamacontext/#vocabcount","title":"VocabCount","text":"Total number of tokens in vocabulary of this model
public int VocabCount { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamacontext/#contextsize","title":"ContextSize","text":"Total number of tokens in the context
public int ContextSize { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamacontext/#embeddingsize","title":"EmbeddingSize","text":"Dimension of embedding vectors
public int EmbeddingSize { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamacontext/#params","title":"Params","text":"The model params set for this model.
public IModelParams Params { get; set; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_3","title":"Property Value","text":"IModelParams
"},{"location":"xmldocs/llama.llamacontext/#nativehandle","title":"NativeHandle","text":"The native handle, which is used to be passed to the native APIs
public SafeLLamaContextHandle NativeHandle { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_4","title":"Property Value","text":"SafeLLamaContextHandle
Remarks:
Be careful how you use this!
"},{"location":"xmldocs/llama.llamacontext/#encoding","title":"Encoding","text":"The encoding set for this model to deal with text input.
public Encoding Encoding { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_5","title":"Property Value","text":"Encoding
"},{"location":"xmldocs/llama.llamacontext/#embeddinglength","title":"EmbeddingLength","text":"The embedding length of the model, also known as n_embed
public int EmbeddingLength { get; }\n"},{"location":"xmldocs/llama.llamacontext/#property-value_6","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamacontext/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.llamacontext/#llamacontextimodelparams-illamalogger","title":"LLamaContext(IModelParams, ILLamaLogger)","text":""},{"location":"xmldocs/llama.llamacontext/#caution","title":"Caution","text":"Use the LLamaWeights.CreateContext instead
public LLamaContext(IModelParams params, ILLamaLogger logger)\n"},{"location":"xmldocs/llama.llamacontext/#parameters","title":"Parameters","text":"params IModelParams Model params.
logger ILLamaLogger The logger.
Create a new LLamaContext for the given LLamaWeights
public LLamaContext(LLamaWeights model, IModelParams params, ILLamaLogger logger)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_1","title":"Parameters","text":"model LLamaWeights
params IModelParams
logger ILLamaLogger
ObjectDisposedException
"},{"location":"xmldocs/llama.llamacontext/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.llamacontext/#clone","title":"Clone()","text":"Create a copy of the current state of this context
public LLamaContext Clone()\n"},{"location":"xmldocs/llama.llamacontext/#returns","title":"Returns","text":"LLamaContext
"},{"location":"xmldocs/llama.llamacontext/#tokenizestring-boolean","title":"Tokenize(String, Boolean)","text":"Tokenize a string.
public Int32[] Tokenize(string text, bool addBos)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_2","title":"Parameters","text":"text String
addBos Boolean Whether to prepend a BOS (beginning-of-sequence) token to the text.
Int32[]
"},{"location":"xmldocs/llama.llamacontext/#detokenizeienumerableint32","title":"DeTokenize(IEnumerable<Int32>)","text":"Detokenize the tokens to text.
public string DeTokenize(IEnumerable<int> tokens)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_3","title":"Parameters","text":"tokens IEnumerable<Int32>
String
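For instance, text can be round-tripped through the tokenizer (assuming a constructed LLamaContext named context):
Int32[] tokens = context.Tokenize("Hello, world!", true); // addBos: true
string text = context.DeTokenize(tokens);
// `text` should closely match the original input, BOS aside.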
"},{"location":"xmldocs/llama.llamacontext/#savestatestring","title":"SaveState(String)","text":"Save the state to specified path.
public void SaveState(string filename)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_4","title":"Parameters","text":"filename String
Use GetState instead; it supports larger states (over 2GB)
Get the state data as a byte array.
public Byte[] GetStateData()\n"},{"location":"xmldocs/llama.llamacontext/#returns_3","title":"Returns","text":"Byte[]
"},{"location":"xmldocs/llama.llamacontext/#getstate","title":"GetState()","text":"Get the state data as an opaque handle
public State GetState()\n"},{"location":"xmldocs/llama.llamacontext/#returns_4","title":"Returns","text":"State
"},{"location":"xmldocs/llama.llamacontext/#loadstatestring","title":"LoadState(String)","text":"Load the state from specified path.
public void LoadState(string filename)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_5","title":"Parameters","text":"filename String
RuntimeError
"},{"location":"xmldocs/llama.llamacontext/#loadstatebyte","title":"LoadState(Byte[])","text":"Load the state from memory.
public void LoadState(Byte[] stateData)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_6","title":"Parameters","text":"stateData Byte[]
RuntimeError
"},{"location":"xmldocs/llama.llamacontext/#loadstatestate","title":"LoadState(State)","text":"Load the state from memory.
public void LoadState(State state)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_7","title":"Parameters","text":"state State
RuntimeError
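A short sketch of the state APIs; the file name is illustrative, and GetState is preferred over GetStateData for states larger than 2GB:
context.SaveState("ctx.state");   // write the context state to disk
context.LoadState("ctx.state");   // restore it later
var state = context.GetState();   // opaque in-memory handle, no 2GB limit
context.LoadState(state);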
"},{"location":"xmldocs/llama.llamacontext/#samplellamatokendataarray-nullable1-single-mirostattype-single-single-int32-single-single-single-safellamagrammarhandle","title":"Sample(LLamaTokenDataArray, Nullable`1&, Single, MirostatType, Single, Single, Int32, Single, Single, Single, SafeLLamaGrammarHandle)","text":"Perform the sampling. Please don't use it unless you fully know what it does.
public int Sample(LLamaTokenDataArray candidates, Nullable`1& mirostat_mu, float temperature, MirostatType mirostat, float mirostatTau, float mirostatEta, int topK, float topP, float tfsZ, float typicalP, SafeLLamaGrammarHandle grammar)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_8","title":"Parameters","text":"candidates LLamaTokenDataArray
mirostat_mu Nullable`1&
temperature Single
mirostat MirostatType
mirostatTau Single
mirostatEta Single
topK Int32
topP Single
tfsZ Single
typicalP Single
grammar SafeLLamaGrammarHandle
Int32
"},{"location":"xmldocs/llama.llamacontext/#applypenaltyienumerableint32-dictionaryint32-single-int32-single-single-single-boolean","title":"ApplyPenalty(IEnumerable<Int32>, Dictionary<Int32, Single>, Int32, Single, Single, Single, Boolean)","text":"Apply the penalty for the tokens. Please don't use it unless you fully know what it does.
public LLamaTokenDataArray ApplyPenalty(IEnumerable<int> lastTokens, Dictionary<int, float> logitBias, int repeatLastTokensCount, float repeatPenalty, float alphaFrequency, float alphaPresence, bool penalizeNL)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_9","title":"Parameters","text":"lastTokens IEnumerable<Int32>
logitBias Dictionary<Int32, Single>
repeatLastTokensCount Int32
repeatPenalty Single
alphaFrequency Single
alphaPresence Single
penalizeNL Boolean
LLamaTokenDataArray
"},{"location":"xmldocs/llama.llamacontext/#evalint32-int32","title":"Eval(Int32[], Int32)","text":"public int Eval(Int32[] tokens, int pastTokensCount)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_10","title":"Parameters","text":"tokens Int32[]
pastTokensCount Int32
Int32 The updated pastTokensCount.
RuntimeError
"},{"location":"xmldocs/llama.llamacontext/#evallistint32-int32","title":"Eval(List<Int32>, Int32)","text":"public int Eval(List<int> tokens, int pastTokensCount)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_11","title":"Parameters","text":"tokens List<Int32>
pastTokensCount Int32
Int32 The updated pastTokensCount.
RuntimeError
"},{"location":"xmldocs/llama.llamacontext/#evalreadonlymemoryint32-int32","title":"Eval(ReadOnlyMemory<Int32>, Int32)","text":"public int Eval(ReadOnlyMemory<int> tokens, int pastTokensCount)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_12","title":"Parameters","text":"tokens ReadOnlyMemory<Int32>
pastTokensCount Int32
Int32 The updated pastTokensCount.
RuntimeError
"},{"location":"xmldocs/llama.llamacontext/#evalreadonlyspanint32-int32","title":"Eval(ReadOnlySpan<Int32>, Int32)","text":"public int Eval(ReadOnlySpan<int> tokens, int pastTokensCount)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_13","title":"Parameters","text":"tokens ReadOnlySpan<Int32>
pastTokensCount Int32
Int32 The updated pastTokensCount.
RuntimeError
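The Eval overloads feed a batch of tokens to the model and return the updated past-token count; a sketch, assuming a LLamaContext named context:
int pastTokensCount = 0;
var tokens = context.Tokenize("Once upon a time", true);
pastTokensCount = context.Eval(tokens, pastTokensCount);
// Logits for the final token are now available for sampling.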
"},{"location":"xmldocs/llama.llamacontext/#generateresultienumerableint32","title":"GenerateResult(IEnumerable<Int32>)","text":"internal IEnumerable<string> GenerateResult(IEnumerable<int> ids)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_14","title":"Parameters","text":"ids IEnumerable<Int32>
IEnumerable<String>
"},{"location":"xmldocs/llama.llamacontext/#tokentostringint32","title":"TokenToString(Int32)","text":"Convert a token into a string
public string TokenToString(int token)\n"},{"location":"xmldocs/llama.llamacontext/#parameters_15","title":"Parameters","text":"token Int32
String
"},{"location":"xmldocs/llama.llamacontext/#dispose","title":"Dispose()","text":"public void Dispose()\n"},{"location":"xmldocs/llama.llamaembedder/","title":"LLamaEmbedder","text":"Namespace: LLama
The embedder for LLama, which generates embeddings from text.
public sealed class LLamaEmbedder : System.IDisposable\n Inheritance Object \u2192 LLamaEmbedder Implements IDisposable
"},{"location":"xmldocs/llama.llamaembedder/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.llamaembedder/#embeddingsize","title":"EmbeddingSize","text":"Dimension of embedding vectors
public int EmbeddingSize { get; }\n"},{"location":"xmldocs/llama.llamaembedder/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamaembedder/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.llamaembedder/#llamaembedderimodelparams","title":"LLamaEmbedder(IModelParams)","text":"public LLamaEmbedder(IModelParams params)\n"},{"location":"xmldocs/llama.llamaembedder/#parameters","title":"Parameters","text":"params IModelParams
public LLamaEmbedder(LLamaWeights weights, IModelParams params)\n"},{"location":"xmldocs/llama.llamaembedder/#parameters_1","title":"Parameters","text":"weights LLamaWeights
params IModelParams
'threads' and 'encoding' parameters are no longer used
Get the embeddings of the text.
public Single[] GetEmbeddings(string text, int threads, bool addBos, string encoding)\n"},{"location":"xmldocs/llama.llamaembedder/#parameters_2","title":"Parameters","text":"text String
threads Int32 unused
addBos Boolean Whether to prepend a BOS token to the text.
encoding String unused
Single[]
"},{"location":"xmldocs/llama.llamaembedder/#exceptions","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.llamaembedder/#getembeddingsstring","title":"GetEmbeddings(String)","text":"Get the embeddings of the text.
public Single[] GetEmbeddings(string text)\n"},{"location":"xmldocs/llama.llamaembedder/#parameters_3","title":"Parameters","text":"text String
Single[]
"},{"location":"xmldocs/llama.llamaembedder/#exceptions_1","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.llamaembedder/#getembeddingsstring-boolean","title":"GetEmbeddings(String, Boolean)","text":"Get the embeddings of the text.
public Single[] GetEmbeddings(string text, bool addBos)\n"},{"location":"xmldocs/llama.llamaembedder/#parameters_4","title":"Parameters","text":"text String
addBos Boolean Whether to prepend a BOS token to the text.
Single[]
"},{"location":"xmldocs/llama.llamaembedder/#exceptions_2","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.llamaembedder/#dispose","title":"Dispose()","text":"public void Dispose()\n"},{"location":"xmldocs/llama.llamaquantizer/","title":"LLamaQuantizer","text":"Namespace: LLama
The quantizer used to quantize models.
public static class LLamaQuantizer\n Inheritance Object \u2192 LLamaQuantizer
"},{"location":"xmldocs/llama.llamaquantizer/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.llamaquantizer/#quantizestring-string-llamaftype-int32-boolean-boolean","title":"Quantize(String, String, LLamaFtype, Int32, Boolean, Boolean)","text":"Quantize the model.
public static bool Quantize(string srcFileName, string dstFilename, LLamaFtype ftype, int nthread, bool allowRequantize, bool quantizeOutputTensor)\n"},{"location":"xmldocs/llama.llamaquantizer/#parameters","title":"Parameters","text":"srcFileName String The model file to be quantized.
dstFilename String The path to save the quantized model.
ftype LLamaFtype The type of quantization.
nthread Int32 Number of threads to use during quantization. Defaults to the number of physical cores.
allowRequantize Boolean
quantizeOutputTensor Boolean
Boolean Whether the quantization is successful.
"},{"location":"xmldocs/llama.llamaquantizer/#exceptions","title":"Exceptions","text":"ArgumentException
"},{"location":"xmldocs/llama.llamaquantizer/#quantizestring-string-string-int32-boolean-boolean","title":"Quantize(String, String, String, Int32, Boolean, Boolean)","text":"Quantize the model.
public static bool Quantize(string srcFileName, string dstFilename, string ftype, int nthread, bool allowRequantize, bool quantizeOutputTensor)\n"},{"location":"xmldocs/llama.llamaquantizer/#parameters_1","title":"Parameters","text":"srcFileName String The model file to be quantized.
dstFilename String The path to save the quantized model.
ftype String The type of quantization.
nthread Int32 Number of threads to use during quantization. Defaults to the number of physical cores.
allowRequantize Boolean
quantizeOutputTensor Boolean
Boolean Whether the quantization is successful.
"},{"location":"xmldocs/llama.llamaquantizer/#exceptions_1","title":"Exceptions","text":"ArgumentException
"},{"location":"xmldocs/llama.llamatransforms/","title":"LLamaTransforms","text":"Namespace: LLama
A class that contains all the transforms provided internally by LLama.
public class LLamaTransforms\n Inheritance Object \u2192 LLamaTransforms
"},{"location":"xmldocs/llama.llamatransforms/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.llamatransforms/#llamatransforms_1","title":"LLamaTransforms()","text":"public LLamaTransforms()\n"},{"location":"xmldocs/llama.llamaweights/","title":"LLamaWeights","text":"Namespace: LLama
A set of model weights, loaded into memory.
public sealed class LLamaWeights : System.IDisposable\n Inheritance Object \u2192 LLamaWeights Implements IDisposable
"},{"location":"xmldocs/llama.llamaweights/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.llamaweights/#nativehandle","title":"NativeHandle","text":"The native handle, which is used in the native APIs
public SafeLlamaModelHandle NativeHandle { get; }\n"},{"location":"xmldocs/llama.llamaweights/#property-value","title":"Property Value","text":"SafeLlamaModelHandle
Remarks:
Be careful how you use this!
"},{"location":"xmldocs/llama.llamaweights/#encoding","title":"Encoding","text":"Encoding to use to convert text into bytes for the model
public Encoding Encoding { get; }\n"},{"location":"xmldocs/llama.llamaweights/#property-value_1","title":"Property Value","text":"Encoding
"},{"location":"xmldocs/llama.llamaweights/#vocabcount","title":"VocabCount","text":"Total number of tokens in vocabulary of this model
public int VocabCount { get; }\n"},{"location":"xmldocs/llama.llamaweights/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamaweights/#contextsize","title":"ContextSize","text":"Total number of tokens in the context
public int ContextSize { get; }\n"},{"location":"xmldocs/llama.llamaweights/#property-value_3","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamaweights/#embeddingsize","title":"EmbeddingSize","text":"Dimension of embedding vectors
public int EmbeddingSize { get; }\n"},{"location":"xmldocs/llama.llamaweights/#property-value_4","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.llamaweights/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.llamaweights/#loadfromfileimodelparams","title":"LoadFromFile(IModelParams)","text":"Load weights into memory
public static LLamaWeights LoadFromFile(IModelParams params)\n"},{"location":"xmldocs/llama.llamaweights/#parameters","title":"Parameters","text":"params IModelParams
LLamaWeights
"},{"location":"xmldocs/llama.llamaweights/#dispose","title":"Dispose()","text":"public void Dispose()\n"},{"location":"xmldocs/llama.llamaweights/#createcontextimodelparams","title":"CreateContext(IModelParams)","text":"Create a llama_context using this model
public LLamaContext CreateContext(IModelParams params)\n"},{"location":"xmldocs/llama.llamaweights/#parameters_1","title":"Parameters","text":"params IModelParams
LLamaContext
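A short sketch of the intended loading flow, assuming ModelParams (from LLama.Common, seen elsewhere in these docs) as the IModelParams implementation and a placeholder model path:

```csharp
using System;
using LLama;
using LLama.Common;

var parameters = new ModelParams("models/7B/ggml-model-q4_0.bin");  // hypothetical path

// Load the weights once, then create a context over them.
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

Console.WriteLine($"vocab={weights.VocabCount} ctx={weights.ContextSize} embd={weights.EmbeddingSize}");
```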
"},{"location":"xmldocs/llama.native.llamacontextparams/","title":"LLamaContextParams","text":"Namespace: LLama.Native
A C# representation of the llama.cpp llama_context_params struct
public struct LLamaContextParams\n Inheritance Object \u2192 ValueType \u2192 LLamaContextParams
"},{"location":"xmldocs/llama.native.llamacontextparams/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamacontextparams/#seed","title":"seed","text":"RNG seed, -1 for random
public int seed;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#n_ctx","title":"n_ctx","text":"text context
public int n_ctx;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#n_batch","title":"n_batch","text":"prompt processing batch size
public int n_batch;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#n_gpu_layers","title":"n_gpu_layers","text":"number of layers to store in VRAM
public int n_gpu_layers;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#main_gpu","title":"main_gpu","text":"the GPU that is used for scratch and small tensors
public int main_gpu;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#tensor_split","title":"tensor_split","text":"how to split layers across multiple GPUs
public IntPtr tensor_split;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#rope_freq_base","title":"rope_freq_base","text":"ref: https://github.com/ggerganov/llama.cpp/pull/2054 RoPE base frequency
public float rope_freq_base;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#rope_freq_scale","title":"rope_freq_scale","text":"ref: https://github.com/ggerganov/llama.cpp/pull/2054 RoPE frequency scaling factor
public float rope_freq_scale;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#progress_callback","title":"progress_callback","text":"called with a progress value between 0 and 1, pass NULL to disable
public IntPtr progress_callback;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#progress_callback_user_data","title":"progress_callback_user_data","text":"context pointer passed to the progress callback
public IntPtr progress_callback_user_data;\n"},{"location":"xmldocs/llama.native.llamacontextparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.llamacontextparams/#low_vram","title":"low_vram","text":"if true, reduce VRAM usage at the cost of performance
public bool low_vram { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#mul_mat_q","title":"mul_mat_q","text":"if true, use experimental mul_mat_q kernels
public bool mul_mat_q { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_1","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#f16_kv","title":"f16_kv","text":"use fp16 for KV cache
public bool f16_kv { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_2","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#logits_all","title":"logits_all","text":"the llama_eval() call computes all logits, not just the last one
public bool logits_all { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_3","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#vocab_only","title":"vocab_only","text":"only load the vocabulary, no weights
public bool vocab_only { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_4","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#use_mmap","title":"use_mmap","text":"use mmap if possible
public bool use_mmap { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_5","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#use_mlock","title":"use_mlock","text":"force system to keep model in RAM
public bool use_mlock { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_6","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamacontextparams/#embedding","title":"embedding","text":"embedding mode only
public bool embedding { get; set; }\n"},{"location":"xmldocs/llama.native.llamacontextparams/#property-value_7","title":"Property Value","text":"Boolean
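Rather than filling the struct by hand, the usual pattern (sketched below with example values) is to start from llama_context_default_params and override individual fields:

```csharp
using LLama.Native;

LLamaContextParams p = NativeApi.llama_context_default_params();
p.seed = -1;        // RNG seed, -1 for random
p.n_ctx = 2048;     // text context
p.n_batch = 512;    // prompt processing batch size
p.use_mmap = true;  // use mmap if possible
p.f16_kv = true;    // use fp16 for the KV cache
```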
"},{"location":"xmldocs/llama.native.llamaftype/","title":"LLamaFtype","text":"Namespace: LLama.Native
Supported model file types
public enum LLamaFtype\n Inheritance Object \u2192 ValueType \u2192 Enum \u2192 LLamaFtype Implements IComparable, IFormattable, IConvertible
"},{"location":"xmldocs/llama.native.llamaftype/#fields","title":"Fields","text":"Name Value Description LLAMA_FTYPE_ALL_F32 0 All f32 LLAMA_FTYPE_MOSTLY_F16 1 Mostly f16 LLAMA_FTYPE_MOSTLY_Q8_0 7 Mostly 8 bit LLAMA_FTYPE_MOSTLY_Q4_0 2 Mostly 4 bit LLAMA_FTYPE_MOSTLY_Q4_1 3 Mostly 4 bit LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 4 Mostly 4 bit, tok_embeddings.weight and output.weight are f16 LLAMA_FTYPE_MOSTLY_Q5_0 8 Mostly 5 bit LLAMA_FTYPE_MOSTLY_Q5_1 9 Mostly 5 bit LLAMA_FTYPE_MOSTLY_Q2_K 10 K-Quant 2 bit LLAMA_FTYPE_MOSTLY_Q3_K_S 11 K-Quant 3 bit (Small) LLAMA_FTYPE_MOSTLY_Q3_K_M 12 K-Quant 3 bit (Medium) LLAMA_FTYPE_MOSTLY_Q3_K_L 13 K-Quant 3 bit (Large) LLAMA_FTYPE_MOSTLY_Q4_K_S 14 K-Quant 4 bit (Small) LLAMA_FTYPE_MOSTLY_Q4_K_M 15 K-Quant 4 bit (Medium) LLAMA_FTYPE_MOSTLY_Q5_K_S 16 K-Quant 5 bit (Small) LLAMA_FTYPE_MOSTLY_Q5_K_M 17 K-Quant 5 bit (Medium) LLAMA_FTYPE_MOSTLY_Q6_K 18 K-Quant 6 bit LLAMA_FTYPE_GUESSED 1024 File type was not specified"},{"location":"xmldocs/llama.native.llamagrammarelement/","title":"LLamaGrammarElement","text":"Namespace: LLama.Native
An element of a grammar
public struct LLamaGrammarElement\n Inheritance Object \u2192 ValueType \u2192 LLamaGrammarElement Implements IEquatable<LLamaGrammarElement>
"},{"location":"xmldocs/llama.native.llamagrammarelement/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamagrammarelement/#type","title":"Type","text":"The type of this element
public LLamaGrammarElementType Type;\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#value","title":"Value","text":"Unicode code point or rule ID
public uint Value;\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.llamagrammarelement/#llamagrammarelementllamagrammarelementtype-uint32","title":"LLamaGrammarElement(LLamaGrammarElementType, UInt32)","text":"Construct a new LLamaGrammarElement
LLamaGrammarElement(LLamaGrammarElementType type, uint value)\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#parameters","title":"Parameters","text":"type LLamaGrammarElementType
value UInt32
bool Equals(LLamaGrammarElement other)\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#parameters_1","title":"Parameters","text":"other LLamaGrammarElement
Boolean
"},{"location":"xmldocs/llama.native.llamagrammarelement/#equalsobject","title":"Equals(Object)","text":"bool Equals(object obj)\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.native.llamagrammarelement/#gethashcode","title":"GetHashCode()","text":"int GetHashCode()\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.native.llamagrammarelement/#ischarelement","title":"IsCharElement()","text":"bool IsCharElement()\n"},{"location":"xmldocs/llama.native.llamagrammarelement/#returns_3","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.llamagrammarelementtype/","title":"LLamaGrammarElementType","text":"Namespace: LLama.Native
grammar element type
public enum LLamaGrammarElementType\n Inheritance Object \u2192 ValueType \u2192 Enum \u2192 LLamaGrammarElementType Implements IComparable, IFormattable, IConvertible
"},{"location":"xmldocs/llama.native.llamagrammarelementtype/#fields","title":"Fields","text":"Name Value Description END 0 end of rule definition ALT 1 start of alternate definition for rule RULE_REF 2 non-terminal element: reference to rule CHAR 3 terminal element: character (code point) CHAR_NOT 4 inverse char(s) ([^a], [^a-b] [^abc]) CHAR_RNG_UPPER 5 modifies a preceding CHAR or CHAR_ALT to be an inclusive range ([a-z]) CHAR_ALT 6 modifies a preceding CHAR or CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA])"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/","title":"LLamaModelQuantizeParams","text":"Namespace: LLama.Native
Quantizer parameters used in the native API
public struct LLamaModelQuantizeParams\n Inheritance Object \u2192 ValueType \u2192 LLamaModelQuantizeParams
"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#nthread","title":"nthread","text":"number of threads to use for quantizing, if <=0 will use std::thread::hardware_concurrency()
public int nthread;\n"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#ftype","title":"ftype","text":"quantize to this llama_ftype
public LLamaFtype ftype;\n"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#allow_requantize","title":"allow_requantize","text":"allow quantizing non-f32/f16 tensors
public bool allow_requantize { get; set; }\n"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#property-value","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#quantize_output_tensor","title":"quantize_output_tensor","text":"quantize output.weight
public bool quantize_output_tensor { get; set; }\n"},{"location":"xmldocs/llama.native.llamamodelquantizeparams/#property-value_1","title":"Property Value","text":"Boolean
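A sketch of preparing these parameters from the native defaults (the chosen ftype is just an example):

```csharp
using LLama.Native;

LLamaModelQuantizeParams qp = NativeApi.llama_model_quantize_default_params();
qp.ftype = LLamaFtype.LLAMA_FTYPE_MOSTLY_Q4_0;  // quantize to this llama_ftype
qp.nthread = 0;                                 // <= 0: std::thread::hardware_concurrency()
qp.allow_requantize = false;                    // don't requantize already-quantized tensors
qp.quantize_output_tensor = true;               // also quantize output.weight
```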
"},{"location":"xmldocs/llama.native.llamatokendata/","title":"LLamaTokenData","text":"Namespace: LLama.Native
public struct LLamaTokenData\n Inheritance Object \u2192 ValueType \u2192 LLamaTokenData
"},{"location":"xmldocs/llama.native.llamatokendata/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamatokendata/#id","title":"id","text":"token id
public int id;\n"},{"location":"xmldocs/llama.native.llamatokendata/#logit","title":"logit","text":"log-odds of the token
public float logit;\n"},{"location":"xmldocs/llama.native.llamatokendata/#p","title":"p","text":"probability of the token
public float p;\n"},{"location":"xmldocs/llama.native.llamatokendata/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.llamatokendata/#llamatokendataint32-single-single","title":"LLamaTokenData(Int32, Single, Single)","text":"LLamaTokenData(int id, float logit, float p)\n"},{"location":"xmldocs/llama.native.llamatokendata/#parameters","title":"Parameters","text":"id Int32
logit Single
p Single
Namespace: LLama.Native
Contains an array of LLamaTokenData, potentially sorted.
public struct LLamaTokenDataArray\n Inheritance Object \u2192 ValueType \u2192 LLamaTokenDataArray
"},{"location":"xmldocs/llama.native.llamatokendataarray/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamatokendataarray/#data","title":"data","text":"The LLamaTokenData
public Memory<LLamaTokenData> data;\n"},{"location":"xmldocs/llama.native.llamatokendataarray/#sorted","title":"sorted","text":"Indicates if data is sorted by logits in descending order. If this is false the token data is in no particular order.
public bool sorted;\n"},{"location":"xmldocs/llama.native.llamatokendataarray/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.llamatokendataarray/#llamatokendataarraymemoryllamatokendata-boolean","title":"LLamaTokenDataArray(Memory<LLamaTokenData>, Boolean)","text":"Create a new LLamaTokenDataArray
LLamaTokenDataArray(Memory<LLamaTokenData> tokens, bool isSorted)\n"},{"location":"xmldocs/llama.native.llamatokendataarray/#parameters","title":"Parameters","text":"tokens Memory<LLamaTokenData>
isSorted Boolean
Create a new LLamaTokenDataArray, copying the data from the given logits
LLamaTokenDataArray Create(ReadOnlySpan<float> logits)\n"},{"location":"xmldocs/llama.native.llamatokendataarray/#parameters_1","title":"Parameters","text":"logits ReadOnlySpan<Single>
LLamaTokenDataArray
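For example (a sketch; the logits buffer here is a stand-in for values obtained from an evaluated context), raw logits can be wrapped like this:

```csharp
using LLama.Native;

float[] logits = new float[32000];   // hypothetical n_vocab-sized logits
LLamaTokenDataArray candidates = LLamaTokenDataArray.Create(logits);
// Create copies the logits in vocabulary order, so the data is not yet sorted.
```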
"},{"location":"xmldocs/llama.native.llamatokendataarraynative/","title":"LLamaTokenDataArrayNative","text":"Namespace: LLama.Native
Contains a pointer to an array of LLamaTokenData which is pinned in memory.
public struct LLamaTokenDataArrayNative\n Inheritance Object \u2192 ValueType \u2192 LLamaTokenDataArrayNative
"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.native.llamatokendataarraynative/#data","title":"data","text":"A pointer to an array of LlamaTokenData
public IntPtr data;\n Remarks:
Memory must be pinned in place for as long as this LLamaTokenDataArrayNative is in use
"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#size","title":"size","text":"Number of LLamaTokenData in the array
public ulong size;\n"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.llamatokendataarraynative/#sorted","title":"sorted","text":"Indicates if the items in the array are sorted
public bool sorted { get; set; }\n"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#property-value","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.llamatokendataarraynative/#createllamatokendataarray-llamatokendataarraynative","title":"Create(LLamaTokenDataArray, LLamaTokenDataArrayNative&)","text":"Create a new LLamaTokenDataArrayNative around the data in the LLamaTokenDataArray
MemoryHandle Create(LLamaTokenDataArray array, LLamaTokenDataArrayNative& native)\n"},{"location":"xmldocs/llama.native.llamatokendataarraynative/#parameters","title":"Parameters","text":"array LLamaTokenDataArray Data source
native LLamaTokenDataArrayNative& Created native array
MemoryHandle A memory handle, pinning the data in place until disposed
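A sketch of the pin-then-use pattern, assuming the native struct is handed back via an out parameter as the signature above suggests:

```csharp
using System.Buffers;
using LLama.Native;

LLamaTokenDataArray candidates = LLamaTokenDataArray.Create(new float[32000]);

using (MemoryHandle pin = LLamaTokenDataArrayNative.Create(candidates, out LLamaTokenDataArrayNative native))
{
    // `native.data` stays valid only while `pin` keeps the memory pinned,
    // so all native sampling calls must happen inside this block.
}
```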
"},{"location":"xmldocs/llama.native.nativeapi/","title":"NativeApi","text":"Namespace: LLama.Native
Direct translation of the llama.cpp API
public class NativeApi\n Inheritance Object \u2192 NativeApi
"},{"location":"xmldocs/llama.native.nativeapi/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.nativeapi/#nativeapi_1","title":"NativeApi()","text":"public NativeApi()\n"},{"location":"xmldocs/llama.native.nativeapi/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.nativeapi/#llama_sample_token_mirostatsafellamacontexthandle-llamatokendataarraynative-single-single-int32-single","title":"llama_sample_token_mirostat(SafeLLamaContextHandle, LLamaTokenDataArrayNative&, Single, Single, Int32, Single&)","text":"Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
public static int llama_sample_token_mirostat(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float tau, float eta, int m, Single& mu)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
m Int32 The number of tokens considered in the estimation of s_hat. This is an arbitrary value that is used to calculate s_hat, which in turn helps to calculate the value of k. In the paper, they use m = 100, but you can experiment with different values to see how it affects the performance of the algorithm.
mu Single& Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Int32
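A sketch of one Mirostat 1.0 sampling step wrapped in a helper; ctx and candidates are assumed to come from an active generation loop, and mu should be initialized to 2 * tau before the first call, as described above:

```csharp
using LLama.Native;

static int SampleMirostat(SafeLLamaContextHandle ctx,
                          ref LLamaTokenDataArrayNative candidates,
                          ref float mu,          // initialize to 2 * tau before the first call
                          float tau = 5.0f,      // target cross-entropy (surprise)
                          float eta = 0.1f)      // learning rate for mu updates
{
    // m = 100 matches the value used in the paper.
    return NativeApi.llama_sample_token_mirostat(ctx, ref candidates, tau, eta, 100, ref mu);
}
```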
"},{"location":"xmldocs/llama.native.nativeapi/#llama_sample_token_mirostat_v2safellamacontexthandle-llamatokendataarraynative-single-single-single","title":"llama_sample_token_mirostat_v2(SafeLLamaContextHandle, LLamaTokenDataArrayNative&, Single, Single, Single&)","text":"Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
public static int llama_sample_token_mirostat_v2(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float tau, float eta, Single& mu)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_1","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
mu Single& Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_sample_token_greedysafellamacontexthandle-llamatokendataarraynative","title":"llama_sample_token_greedy(SafeLLamaContextHandle, LLamaTokenDataArrayNative&)","text":"Selects the token with the highest probability.
public static int llama_sample_token_greedy(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_2","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_sample_tokensafellamacontexthandle-llamatokendataarraynative","title":"llama_sample_token(SafeLLamaContextHandle, LLamaTokenDataArrayNative&)","text":"Randomly selects a token from the candidates based on their probabilities.
public static int llama_sample_token(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_3","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_token_to_strsafellamacontexthandle-int32","title":"llama_token_to_str(SafeLLamaContextHandle, Int32)","text":"Token Id -> String. Uses the vocabulary in the provided context
public static IntPtr llama_token_to_str(SafeLLamaContextHandle ctx, int token)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_4","title":"Parameters","text":"ctx SafeLLamaContextHandle
token Int32
IntPtr Pointer to a string.
"},{"location":"xmldocs/llama.native.nativeapi/#llama_token_bossafellamacontexthandle","title":"llama_token_bos(SafeLLamaContextHandle)","text":"Get the \"Beginning of sentence\" token
public static int llama_token_bos(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_5","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_token_eossafellamacontexthandle","title":"llama_token_eos(SafeLLamaContextHandle)","text":"Get the \"End of sentence\" token
public static int llama_token_eos(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_6","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_token_nlsafellamacontexthandle","title":"llama_token_nl(SafeLLamaContextHandle)","text":"Get the \"new line\" token
public static int llama_token_nl(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_7","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_print_timingssafellamacontexthandle","title":"llama_print_timings(SafeLLamaContextHandle)","text":"Print out timing information for this context
public static void llama_print_timings(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_8","title":"Parameters","text":"ctx SafeLLamaContextHandle
Reset all collected timing information for this context
public static void llama_reset_timings(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_9","title":"Parameters","text":"ctx SafeLLamaContextHandle
Print system information
public static IntPtr llama_print_system_info()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_8","title":"Returns","text":"IntPtr
"},{"location":"xmldocs/llama.native.nativeapi/#llama_model_n_vocabsafellamamodelhandle","title":"llama_model_n_vocab(SafeLlamaModelHandle)","text":"Get the number of tokens in the model vocabulary
public static int llama_model_n_vocab(SafeLlamaModelHandle model)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_10","title":"Parameters","text":"model SafeLlamaModelHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_model_n_ctxsafellamamodelhandle","title":"llama_model_n_ctx(SafeLlamaModelHandle)","text":"Get the size of the context window for the model
public static int llama_model_n_ctx(SafeLlamaModelHandle model)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_11","title":"Parameters","text":"model SafeLlamaModelHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_model_n_embdsafellamamodelhandle","title":"llama_model_n_embd(SafeLlamaModelHandle)","text":"Get the dimension of embedding vectors from this model
public static int llama_model_n_embd(SafeLlamaModelHandle model)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_12","title":"Parameters","text":"model SafeLlamaModelHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_token_to_piece_with_modelsafellamamodelhandle-int32-byte-int32","title":"llama_token_to_piece_with_model(SafeLlamaModelHandle, Int32, Byte*, Int32)","text":"Convert a single token into text
public static int llama_token_to_piece_with_model(SafeLlamaModelHandle model, int llamaToken, Byte* buffer, int length)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_13","title":"Parameters","text":"model SafeLlamaModelHandle
llamaToken Int32
buffer Byte* buffer to write string into
length Int32 size of the buffer
Int32 The length written, or, if the buffer is too small, a negative value that indicates the length required
"},{"location":"xmldocs/llama.native.nativeapi/#llama_tokenize_with_modelsafellamamodelhandle-byte-int32-int32-boolean","title":"llama_tokenize_with_model(SafeLlamaModelHandle, Byte, Int32, Int32, Boolean)","text":"Convert text into tokens
public static int llama_tokenize_with_model(SafeLlamaModelHandle model, Byte* text, Int32* tokens, int n_max_tokens, bool add_bos)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_14","title":"Parameters","text":"model SafeLlamaModelHandle
text Byte*
tokens Int32*
n_max_tokens Int32
add_bos Boolean
Int32 Returns the number of tokens on success, no more than n_max_tokens. Returns a negative number on failure - the number of tokens that would have been returned
"},{"location":"xmldocs/llama.native.nativeapi/#llama_log_setllamalogcallback","title":"llama_log_set(LLamaLogCallback)","text":"Register a callback to receive llama log messages
public static void llama_log_set(LLamaLogCallback logCallback)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_15","title":"Parameters","text":"logCallback LLamaLogCallback
Create a new grammar from the given set of grammar rules
public static IntPtr llama_grammar_init(LLamaGrammarElement** rules, ulong n_rules, ulong start_rule_index)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_16","title":"Parameters","text":"rules LLamaGrammarElement**
n_rules UInt64
start_rule_index UInt64
IntPtr
"},{"location":"xmldocs/llama.native.nativeapi/#llama_grammar_freeintptr","title":"llama_grammar_free(IntPtr)","text":"Free all memory from the given SafeLLamaGrammarHandle
public static void llama_grammar_free(IntPtr grammar)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_17","title":"Parameters","text":"grammar IntPtr
Apply constraints from grammar
public static void llama_sample_grammar(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, SafeLLamaGrammarHandle grammar)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_18","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative&
grammar SafeLLamaGrammarHandle
Accepts the sampled token into the grammar
public static void llama_grammar_accept_token(SafeLLamaContextHandle ctx, SafeLLamaGrammarHandle grammar, int token)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_19","title":"Parameters","text":"ctx SafeLLamaContextHandle
grammar SafeLLamaGrammarHandle
token Int32
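Putting the three grammar calls together, a constrained sampling step might look like this sketch (the handles are assumed to come from an active generation loop):

```csharp
using LLama.Native;

static int SampleWithGrammar(SafeLLamaContextHandle ctx,
                             ref LLamaTokenDataArrayNative candidates,
                             SafeLLamaGrammarHandle grammar)
{
    NativeApi.llama_sample_grammar(ctx, ref candidates, grammar);   // apply constraints from grammar
    int token = NativeApi.llama_sample_token(ctx, ref candidates);  // sample from the remaining candidates
    NativeApi.llama_grammar_accept_token(ctx, grammar, token);      // advance the grammar state
    return token;
}
```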
Quantize a model. Returns 0 on success
public static int llama_model_quantize(string fname_inp, string fname_out, LLamaModelQuantizeParams* param)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_20","title":"Parameters","text":"fname_inp String
fname_out String
param LLamaModelQuantizeParams*
Int32 Returns 0 on success
Remarks:
not great API - very likely to change
"},{"location":"xmldocs/llama.native.nativeapi/#llama_sample_classifier_free_guidancesafellamacontexthandle-llamatokendataarraynative-safellamacontexthandle-single","title":"llama_sample_classifier_free_guidance(SafeLLamaContextHandle, LLamaTokenDataArrayNative, SafeLLamaContextHandle, Single)","text":"Apply classifier-free guidance to the logits as described in academic paper \"Stay on topic with Classifier-Free Guidance\" https://arxiv.org/abs/2306.17806
public static void llama_sample_classifier_free_guidance(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative candidates, SafeLLamaContextHandle guidanceCtx, float scale)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_21","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative A vector of llama_token_data containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
guidanceCtx SafeLLamaContextHandle A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
scale Single Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, Int32* last_tokens, ulong last_tokens_size, float penalty)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_22","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
last_tokens Int32*
last_tokens_size UInt64
penalty Single
Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, Int32* last_tokens, ulong last_tokens_size, float alpha_frequency, float alpha_presence)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_23","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
last_tokens Int32*
last_tokens_size UInt64
alpha_frequency Single
alpha_presence Single
Apply classifier-free guidance to the logits as described in academic paper \"Stay on topic with Classifier-Free Guidance\" https://arxiv.org/abs/2306.17806
public static void llama_sample_classifier_free_guidance(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, SafeLLamaContextHandle guidance_ctx, float scale)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_24","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& A vector of llama_token_data containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
guidance_ctx SafeLLamaContextHandle A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
scale Single Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
public static void llama_sample_softmax(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_25","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
Top-K sampling described in academic paper \"The Curious Case of Neural Text Degeneration\" https://arxiv.org/abs/1904.09751
public static void llama_sample_top_k(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, int k, ulong min_keep)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_26","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
k Int32
min_keep UInt64
Nucleus sampling described in academic paper \"The Curious Case of Neural Text Degeneration\" https://arxiv.org/abs/1904.09751
public static void llama_sample_top_p(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float p, ulong min_keep)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_27","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
public static void llama_sample_tail_free(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float z, ulong min_keep)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_28","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
z Single
min_keep UInt64
Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
public static void llama_sample_typical(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float p, ulong min_keep)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_29","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative& Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Modify logits by temperature
public static void llama_sample_temperature(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float temp)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_30","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArrayNative&
temp Single
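These filters compose into the classic sampling chain; the sketch below (parameter values are just common defaults) applies top-k, then nucleus sampling, then temperature before drawing a token:

```csharp
using LLama.Native;

static int SampleTopKTopP(SafeLLamaContextHandle ctx,
                          ref LLamaTokenDataArrayNative candidates,
                          int k = 40, float topP = 0.95f, float temp = 0.8f)
{
    NativeApi.llama_sample_top_k(ctx, ref candidates, k, 1);        // keep the k best tokens
    NativeApi.llama_sample_top_p(ctx, ref candidates, topP, 1);     // nucleus sampling
    NativeApi.llama_sample_temperature(ctx, ref candidates, temp);  // rescale logits by temperature
    return NativeApi.llama_sample_token(ctx, ref candidates);       // draw from the distribution
}
```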
A method that does nothing. This is a native method; calling it will force the llama native dependencies to be loaded.
public static bool llama_empty_call()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_16","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.nativeapi/#llama_context_default_params","title":"llama_context_default_params()","text":"Create a LLamaContextParams with default values
public static LLamaContextParams llama_context_default_params()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_17","title":"Returns","text":"LLamaContextParams
"},{"location":"xmldocs/llama.native.nativeapi/#llama_model_quantize_default_params","title":"llama_model_quantize_default_params()","text":"Create a LLamaModelQuantizeParams with default values
public static LLamaModelQuantizeParams llama_model_quantize_default_params()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_18","title":"Returns","text":"LLamaModelQuantizeParams
"},{"location":"xmldocs/llama.native.nativeapi/#llama_mmap_supported","title":"llama_mmap_supported()","text":"Check if memory mapping is supported
public static bool llama_mmap_supported()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_19","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.nativeapi/#llama_mlock_supported","title":"llama_mlock_supported()","text":"Check if memory lockingis supported
public static bool llama_mlock_supported()\n"},{"location":"xmldocs/llama.native.nativeapi/#returns_20","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.nativeapi/#llama_eval_exportsafellamacontexthandle-string","title":"llama_eval_export(SafeLLamaContextHandle, String)","text":"Export a static computation graph for context of 511 and batch size of 1 NOTE: since this functionality is mostly for debugging and demonstration purposes, we hardcode these parameters here to keep things simple IMPORTANT: do not use for anything else other than debugging and testing!
public static int llama_eval_export(SafeLLamaContextHandle ctx, string fname)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_31","title":"Parameters","text":"ctx SafeLLamaContextHandle
fname String
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_load_model_from_filestring-llamacontextparams","title":"llama_load_model_from_file(String, LLamaContextParams)","text":"Various functions for loading a ggml llama model. Allocate (almost) all memory needed for the model. Return NULL on failure
public static IntPtr llama_load_model_from_file(string path_model, LLamaContextParams params)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_32","title":"Parameters","text":"path_model String
params LLamaContextParams
IntPtr
"},{"location":"xmldocs/llama.native.nativeapi/#llama_new_context_with_modelsafellamamodelhandle-llamacontextparams","title":"llama_new_context_with_model(SafeLlamaModelHandle, LLamaContextParams)","text":"Create a new llama_context with the given model. Return value should always be wrapped in SafeLLamaContextHandle!
public static IntPtr llama_new_context_with_model(SafeLlamaModelHandle model, LLamaContextParams params)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_33","title":"Parameters","text":"model SafeLlamaModelHandle
params LLamaContextParams
IntPtr
"},{"location":"xmldocs/llama.native.nativeapi/#llama_backend_initboolean","title":"llama_backend_init(Boolean)","text":"not great API - very likely to change. Initialize the llama + ggml backend Call once at the start of the program
public static void llama_backend_init(bool numa)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_34","title":"Parameters","text":"numa Boolean
Frees all allocated memory in the given llama_context
public static void llama_free(IntPtr ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_35","title":"Parameters","text":"ctx IntPtr
Frees all allocated memory associated with a model
public static void llama_free_model(IntPtr model)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_36","title":"Parameters","text":"model IntPtr
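The raw native lifecycle looks roughly like the sketch below; the model path is a placeholder, and in practice the returned pointers should be wrapped in the safe handle types documented later:

```csharp
using System;
using LLama.Native;

NativeApi.llama_backend_init(false);   // initialize llama + ggml once per process

var cparams = NativeApi.llama_context_default_params();
IntPtr model = NativeApi.llama_load_model_from_file("models/7B/ggml-model-q4_0.bin", cparams);
if (model == IntPtr.Zero)
    throw new InvalidOperationException("failed to load model");   // NULL on failure

// ... wrap `model` in SafeLlamaModelHandle and create a context in real code ...

NativeApi.llama_free_model(model);
```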
Apply a LoRA adapter to a loaded model. path_base_model is the path to a higher quality model to use as a base for the layers modified by the adapter; it can be NULL to use the currently loaded model. The model needs to be reloaded before applying a new adapter, otherwise the adapter will be applied on top of the previous one.
public static int llama_model_apply_lora_from_file(SafeLlamaModelHandle model_ptr, string path_lora, string path_base_model, int n_threads)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_37","title":"Parameters","text":"model_ptr SafeLlamaModelHandle
path_lora String
path_base_model String
n_threads Int32
Int32 Returns 0 on success
"},{"location":"xmldocs/llama.native.nativeapi/#llama_get_kv_cache_token_countsafellamacontexthandle","title":"llama_get_kv_cache_token_count(SafeLLamaContextHandle)","text":"Returns the number of tokens in the KV cache
public static int llama_get_kv_cache_token_count(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_38","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_set_rng_seedsafellamacontexthandle-int32","title":"llama_set_rng_seed(SafeLLamaContextHandle, Int32)","text":"Sets the current rng seed.
public static void llama_set_rng_seed(SafeLLamaContextHandle ctx, int seed)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_39","title":"Parameters","text":"ctx SafeLLamaContextHandle
seed Int32
Returns the maximum size in bytes of the state (rng, logits, embedding and kv_cache) - will often be smaller after compacting tokens
public static ulong llama_get_state_size(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_40","title":"Parameters","text":"ctx SafeLLamaContextHandle
UInt64
"},{"location":"xmldocs/llama.native.nativeapi/#llama_copy_state_datasafellamacontexthandle-byte","title":"llama_copy_state_data(SafeLLamaContextHandle, Byte*)","text":"Copies the state to the specified destination address. Destination needs to have allocated enough memory.
public static ulong llama_copy_state_data(SafeLLamaContextHandle ctx, Byte* dest)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_41","title":"Parameters","text":"ctx SafeLLamaContextHandle
dest Byte*
UInt64 the number of bytes copied
"},{"location":"xmldocs/llama.native.nativeapi/#llama_copy_state_datasafellamacontexthandle-byte_1","title":"llama_copy_state_data(SafeLLamaContextHandle, Byte[])","text":"Copies the state to the specified destination address. Destination needs to have allocated enough memory (see llama_get_state_size)
public static ulong llama_copy_state_data(SafeLLamaContextHandle ctx, Byte[] dest)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_42","title":"Parameters","text":"ctx SafeLLamaContextHandle
dest Byte[]
UInt64 the number of bytes copied
"},{"location":"xmldocs/llama.native.nativeapi/#llama_set_state_datasafellamacontexthandle-byte","title":"llama_set_state_data(SafeLLamaContextHandle, Byte*)","text":"Set the state reading from the specified address
public static ulong llama_set_state_data(SafeLLamaContextHandle ctx, Byte* src)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_43","title":"Parameters","text":"ctx SafeLLamaContextHandle
src Byte*
UInt64 the number of bytes read
"},{"location":"xmldocs/llama.native.nativeapi/#llama_set_state_datasafellamacontexthandle-byte_1","title":"llama_set_state_data(SafeLLamaContextHandle, Byte[])","text":"Set the state reading from the specified address
public static ulong llama_set_state_data(SafeLLamaContextHandle ctx, Byte[] src)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_44","title":"Parameters","text":"ctx SafeLLamaContextHandle
src Byte[]
UInt64 the number of bytes read
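A sketch of a full snapshot/restore round-trip using the byte[] overloads above:

```csharp
using LLama.Native;

static byte[] SnapshotState(SafeLLamaContextHandle ctx)
{
    // llama_get_state_size reports the maximum size the state can need.
    var buffer = new byte[NativeApi.llama_get_state_size(ctx)];
    NativeApi.llama_copy_state_data(ctx, buffer);
    return buffer;
}

static void RestoreState(SafeLLamaContextHandle ctx, byte[] state)
{
    NativeApi.llama_set_state_data(ctx, state);
}
```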
"},{"location":"xmldocs/llama.native.nativeapi/#llama_load_session_filesafellamacontexthandle-string-int32-uint64-uint64","title":"llama_load_session_file(SafeLLamaContextHandle, String, Int32[], UInt64, UInt64*)","text":"Load session file
public static bool llama_load_session_file(SafeLLamaContextHandle ctx, string path_session, Int32[] tokens_out, ulong n_token_capacity, UInt64* n_token_count_out)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_45","title":"Parameters","text":"ctx SafeLLamaContextHandle
path_session String
tokens_out Int32[]
n_token_capacity UInt64
n_token_count_out UInt64*
Boolean
"},{"location":"xmldocs/llama.native.nativeapi/#llama_save_session_filesafellamacontexthandle-string-int32-uint64","title":"llama_save_session_file(SafeLLamaContextHandle, String, Int32[], UInt64)","text":"Save session file
public static bool llama_save_session_file(SafeLLamaContextHandle ctx, string path_session, Int32[] tokens, ulong n_token_count)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_46","title":"Parameters","text":"ctx SafeLLamaContextHandle
path_session String
tokens Int32[]
n_token_count UInt64
Boolean
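A sketch pairing the two session-file calls; the load side takes a raw UInt64* out-count, which makes it an unsafe method:

```csharp
using System;
using LLama.Native;

static bool SaveSession(SafeLLamaContextHandle ctx, string path, int[] tokens)
    => NativeApi.llama_save_session_file(ctx, path, tokens, (ulong)tokens.Length);

static unsafe int[] LoadSession(SafeLLamaContextHandle ctx, string path, int capacity)
{
    var tokens = new int[capacity];
    ulong count;
    if (!NativeApi.llama_load_session_file(ctx, path, tokens, (ulong)capacity, &count))
        throw new InvalidOperationException("failed to load session file");
    return tokens[..(int)count];   // only `count` tokens were written
}
```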
"},{"location":"xmldocs/llama.native.nativeapi/#llama_evalsafellamacontexthandle-int32-int32-int32-int32","title":"llama_eval(SafeLLamaContextHandle, Int32[], Int32, Int32, Int32)","text":"Run the llama inference to obtain the logits and probabilities for the next token. tokens + n_tokens is the provided batch of new tokens to process n_past is the number of tokens to use from previous eval calls
public static int llama_eval(SafeLLamaContextHandle ctx, Int32[] tokens, int n_tokens, int n_past, int n_threads)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_47","title":"Parameters","text":"ctx SafeLLamaContextHandle
tokens Int32[]
n_tokens Int32
n_past Int32
n_threads Int32
Int32 Returns 0 on success
"},{"location":"xmldocs/llama.native.nativeapi/#llama_eval_with_pointersafellamacontexthandle-int32-int32-int32-int32","title":"llama_eval_with_pointer(SafeLLamaContextHandle, Int32*, Int32, Int32, Int32)","text":"Run the llama inference to obtain the logits and probabilities for the next token. tokens + n_tokens is the provided batch of new tokens to process n_past is the number of tokens to use from previous eval calls
public static int llama_eval_with_pointer(SafeLLamaContextHandle ctx, Int32* tokens, int n_tokens, int n_past, int n_threads)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_48","title":"Parameters","text":"ctx SafeLLamaContextHandle
tokens Int32*
n_tokens Int32
n_past Int32
n_threads Int32
Int32 Returns 0 on success
"},{"location":"xmldocs/llama.native.nativeapi/#llama_tokenizesafellamacontexthandle-string-encoding-int32-int32-boolean","title":"llama_tokenize(SafeLLamaContextHandle, String, Encoding, Int32[], Int32, Boolean)","text":"Convert the provided text into tokens.
public static int llama_tokenize(SafeLLamaContextHandle ctx, string text, Encoding encoding, Int32[] tokens, int n_max_tokens, bool add_bos)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_49","title":"Parameters","text":"ctx SafeLLamaContextHandle
text String
encoding Encoding
tokens Int32[]
n_max_tokens Int32
add_bos Boolean
Int32 Returns the number of tokens on success, no more than n_max_tokens. Returns a negative number on failure - the number of tokens that would have been returned
"},{"location":"xmldocs/llama.native.nativeapi/#llama_tokenize_nativesafellamacontexthandle-byte-int32-int32-boolean","title":"llama_tokenize_native(SafeLLamaContextHandle, Byte, Int32, Int32, Boolean)","text":"Convert the provided text into tokens.
public static int llama_tokenize_native(SafeLLamaContextHandle ctx, Byte* text, Int32* tokens, int n_max_tokens, bool add_bos)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_50","title":"Parameters","text":"ctx SafeLLamaContextHandle
text Byte*
tokens Int32*
n_max_tokens Int32
add_bos Boolean
Int32 Returns the number of tokens on success, no more than n_max_tokens. Returns a negative number on failure - the number of tokens that would have been returned
"},{"location":"xmldocs/llama.native.nativeapi/#llama_n_vocabsafellamacontexthandle","title":"llama_n_vocab(SafeLLamaContextHandle)","text":"Get the number of tokens in the model vocabulary for this context
public static int llama_n_vocab(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_51","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_n_ctxsafellamacontexthandle","title":"llama_n_ctx(SafeLLamaContextHandle)","text":"Get the size of the context window for the model for this context
public static int llama_n_ctx(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_52","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_n_embdsafellamacontexthandle","title":"llama_n_embd(SafeLLamaContextHandle)","text":"Get the dimension of embedding vectors from the model for this context
public static int llama_n_embd(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_53","title":"Parameters","text":"ctx SafeLLamaContextHandle
Int32
"},{"location":"xmldocs/llama.native.nativeapi/#llama_get_logitssafellamacontexthandle","title":"llama_get_logits(SafeLLamaContextHandle)","text":"Token logits obtained from the last call to llama_eval() The logits for the last token are stored in the last row Can be mutated in order to change the probabilities of the next token. Rows: n_tokens Cols: n_vocab
public static Single* llama_get_logits(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_54","title":"Parameters","text":"ctx SafeLLamaContextHandle
Single*
"},{"location":"xmldocs/llama.native.nativeapi/#llama_get_embeddingssafellamacontexthandle","title":"llama_get_embeddings(SafeLLamaContextHandle)","text":"Get the embeddings for the input shape: [n_embd] (1-dimensional)
public static Single* llama_get_embeddings(SafeLLamaContextHandle ctx)\n"},{"location":"xmldocs/llama.native.nativeapi/#parameters_55","title":"Parameters","text":"ctx SafeLLamaContextHandle
Single*
"},{"location":"xmldocs/llama.native.safellamacontexthandle/","title":"SafeLLamaContextHandle","text":"Namespace: LLama.Native
A safe wrapper around a llama_context
public sealed class SafeLLamaContextHandle : SafeLLamaHandleBase, System.IDisposable\n Inheritance Object \u2192 CriticalFinalizerObject \u2192 SafeHandle \u2192 SafeLLamaHandleBase \u2192 SafeLLamaContextHandle Implements IDisposable
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.safellamacontexthandle/#vocabcount","title":"VocabCount","text":"Total number of tokens in vocabulary of this model
public int VocabCount { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#contextsize","title":"ContextSize","text":"Total number of tokens in the context
public int ContextSize { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#embeddingsize","title":"EmbeddingSize","text":"Dimension of embedding vectors
public int EmbeddingSize { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#modelhandle","title":"ModelHandle","text":"Get the model which this context is using
public SafeLlamaModelHandle ModelHandle { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value_3","title":"Property Value","text":"SafeLlamaModelHandle
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#isinvalid","title":"IsInvalid","text":"public bool IsInvalid { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value_4","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#isclosed","title":"IsClosed","text":"public bool IsClosed { get; }\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#property-value_5","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.safellamacontexthandle/#safellamacontexthandleintptr-safellamamodelhandle","title":"SafeLLamaContextHandle(IntPtr, SafeLlamaModelHandle)","text":"Create a new SafeLLamaContextHandle
public SafeLLamaContextHandle(IntPtr handle, SafeLlamaModelHandle model)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters","title":"Parameters","text":"handle IntPtr pointer to an allocated llama_context
model SafeLlamaModelHandle the model which this context was created from
protected bool ReleaseHandle()\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#returns","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#createsafellamamodelhandle-llamacontextparams","title":"Create(SafeLlamaModelHandle, LLamaContextParams)","text":"Create a new llama_state for the given model
public static SafeLLamaContextHandle Create(SafeLlamaModelHandle model, LLamaContextParams lparams)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_1","title":"Parameters","text":"model SafeLlamaModelHandle
lparams LLamaContextParams
SafeLLamaContextHandle
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#exceptions","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#clonellamacontextparams","title":"Clone(LLamaContextParams)","text":"Create a new llama context with a clone of the current llama context state
public SafeLLamaContextHandle Clone(LLamaContextParams lparams)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_2","title":"Parameters","text":"lparams LLamaContextParams
SafeLLamaContextHandle
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#tokenizestring-boolean-encoding","title":"Tokenize(String, Boolean, Encoding)","text":"Convert the given text into tokens
public Int32[] Tokenize(string text, bool add_bos, Encoding encoding)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_3","title":"Parameters","text":"text String The text to tokenize
add_bos Boolean Whether the \"BOS\" token should be added
encoding Encoding Encoding to use for the text
Int32[]
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#exceptions_1","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#getlogits","title":"GetLogits()","text":"Token logits obtained from the last call to llama_eval() The logits for the last token are stored in the last row Can be mutated in order to change the probabilities of the next token. Rows: n_tokens Cols: n_vocab
public Span<float> GetLogits()\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#returns_4","title":"Returns","text":"Span<Single>
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#tokentostringint32-encoding","title":"TokenToString(Int32, Encoding)","text":"Convert a token into a string
public string TokenToString(int token, Encoding encoding)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_4","title":"Parameters","text":"token Int32 Token to decode into a string
encoding Encoding
String
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#tokentostringint32-encoding-stringbuilder","title":"TokenToString(Int32, Encoding, StringBuilder)","text":"Append a single llama token to a string builder
public void TokenToString(int token, Encoding encoding, StringBuilder dest)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_5","title":"Parameters","text":"token Int32 Token to decode
encoding Encoding
dest StringBuilder string builder to append the result to
Convert a single llama token into bytes
public int TokenToSpan(int token, Span<byte> dest)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_6","title":"Parameters","text":"token Int32 Token to decode
dest Span<Byte> A span to attempt to write into. If this is too small nothing will be written
Int32 The size of this token. Nothing will be written if this is larger than dest.
Run the llama inference to obtain the logits and probabilities for the next token.
public bool Eval(ReadOnlySpan<int> tokens, int n_past, int n_threads)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_7","title":"Parameters","text":"tokens ReadOnlySpan<Int32> The provided batch of new tokens to process
n_past Int32 the number of tokens to use from previous eval calls
n_threads Int32
Boolean Returns true on success
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#getstatesize","title":"GetStateSize()","text":"Get the size of the state, when saved as bytes
public ulong GetStateSize()\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#returns_8","title":"Returns","text":"UInt64
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#getstatebyte-uint64","title":"GetState(Byte*, UInt64)","text":"Get the raw state of this context, encoded as bytes. Data is written into the dest pointer.
public ulong GetState(Byte* dest, ulong size)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_8","title":"Parameters","text":"dest Byte* Destination to write to
size UInt64 Number of bytes available to write to in dest (check required size with GetStateSize())
UInt64 The number of bytes written to dest
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#exceptions_2","title":"Exceptions","text":"ArgumentOutOfRangeException Thrown if dest is too small
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#getstateintptr-uint64","title":"GetState(IntPtr, UInt64)","text":"Get the raw state of this context, encoded as bytes. Data is written into the dest pointer.
public ulong GetState(IntPtr dest, ulong size)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_9","title":"Parameters","text":"dest IntPtr Destination to write to
size UInt64 Number of bytes available to write to in dest (check required size with GetStateSize())
UInt64 The number of bytes written to dest
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#exceptions_3","title":"Exceptions","text":"ArgumentOutOfRangeException Thrown if dest is too small
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#setstatebyte","title":"SetState(Byte*)","text":"Set the raw state of this context
public ulong SetState(Byte* src)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_10","title":"Parameters","text":"src Byte* The pointer to read the state from
UInt64 Number of bytes read from the src pointer
"},{"location":"xmldocs/llama.native.safellamacontexthandle/#setstateintptr","title":"SetState(IntPtr)","text":"Set the raw state of this context
public ulong SetState(IntPtr src)\n"},{"location":"xmldocs/llama.native.safellamacontexthandle/#parameters_11","title":"Parameters","text":"src IntPtr The pointer to read the state from
UInt64 Number of bytes read from the src pointer
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/","title":"SafeLLamaGrammarHandle","text":"Namespace: LLama.Native
A safe reference to a llama_grammar
public class SafeLLamaGrammarHandle : SafeLLamaHandleBase, System.IDisposable\n Inheritance Object \u2192 CriticalFinalizerObject \u2192 SafeHandle \u2192 SafeLLamaHandleBase \u2192 SafeLLamaGrammarHandle Implements IDisposable
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.safellamagrammarhandle/#isinvalid","title":"IsInvalid","text":"public bool IsInvalid { get; }\n"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#property-value","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#isclosed","title":"IsClosed","text":"public bool IsClosed { get; }\n"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#property-value_1","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.safellamagrammarhandle/#releasehandle","title":"ReleaseHandle()","text":"protected bool ReleaseHandle()\n"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#returns","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#createireadonlylistgrammarrule-uint64","title":"Create(IReadOnlyList<GrammarRule>, UInt64)","text":"Create a new llama_grammar
public static SafeLLamaGrammarHandle Create(IReadOnlyList<GrammarRule> rules, ulong start_rule_index)\n"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#parameters","title":"Parameters","text":"rules IReadOnlyList<GrammarRule> A list of grammar rules, where each rule is itself a list of elements
start_rule_index UInt64 The index (in the outer list) of the start rule
SafeLLamaGrammarHandle
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#exceptions","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#createllamagrammarelement-uint64-uint64","title":"Create(LLamaGrammarElement, UInt64, UInt64)**","text":"Create a new llama_grammar
public static SafeLLamaGrammarHandle Create(LLamaGrammarElement** rules, ulong nrules, ulong start_rule_index)\n"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#parameters_1","title":"Parameters","text":"rules LLamaGrammarElement** The rules list; each rule is a list of rule elements (terminated by a LLamaGrammarElementType.END element)
nrules UInt64 Total number of rules
start_rule_index UInt64 Index of the start rule of the grammar
SafeLLamaGrammarHandle
"},{"location":"xmldocs/llama.native.safellamagrammarhandle/#exceptions_1","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.native.safellamahandlebase/","title":"SafeLLamaHandleBase","text":"Namespace: LLama.Native
Base class for all llama handles to native resources
public abstract class SafeLLamaHandleBase : System.Runtime.InteropServices.SafeHandle, System.IDisposable\n Inheritance Object \u2192 CriticalFinalizerObject \u2192 SafeHandle \u2192 SafeLLamaHandleBase Implements IDisposable
"},{"location":"xmldocs/llama.native.safellamahandlebase/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.safellamahandlebase/#isinvalid","title":"IsInvalid","text":"public bool IsInvalid { get; }\n"},{"location":"xmldocs/llama.native.safellamahandlebase/#property-value","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamahandlebase/#isclosed","title":"IsClosed","text":"public bool IsClosed { get; }\n"},{"location":"xmldocs/llama.native.safellamahandlebase/#property-value_1","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamahandlebase/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.safellamahandlebase/#tostring","title":"ToString()","text":"public string ToString()\n"},{"location":"xmldocs/llama.native.safellamahandlebase/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.native.safellamamodelhandle/","title":"SafeLlamaModelHandle","text":"Namespace: LLama.Native
A reference to a set of llama model weights
public sealed class SafeLlamaModelHandle : SafeLLamaHandleBase, System.IDisposable\n Inheritance Object \u2192 CriticalFinalizerObject \u2192 SafeHandle \u2192 SafeLLamaHandleBase \u2192 SafeLlamaModelHandle Implements IDisposable
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.native.safellamamodelhandle/#vocabcount","title":"VocabCount","text":"Total number of tokens in vocabulary of this model
public int VocabCount { get; }\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#contextsize","title":"ContextSize","text":"Total number of tokens in the context
public int ContextSize { get; }\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#embeddingsize","title":"EmbeddingSize","text":"Dimension of embedding vectors
public int EmbeddingSize { get; }\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#isinvalid","title":"IsInvalid","text":"public bool IsInvalid { get; }\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#property-value_3","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#isclosed","title":"IsClosed","text":"public bool IsClosed { get; }\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#property-value_4","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.safellamamodelhandle/#releasehandle","title":"ReleaseHandle()","text":"protected bool ReleaseHandle()\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#returns","title":"Returns","text":"Boolean
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#loadfromfilestring-llamacontextparams","title":"LoadFromFile(String, LLamaContextParams)","text":"Load a model from the given file path into memory
public static SafeLlamaModelHandle LoadFromFile(string modelPath, LLamaContextParams lparams)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters","title":"Parameters","text":"modelPath String
lparams LLamaContextParams
SafeLlamaModelHandle
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#exceptions","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#applylorafromfilestring-string-int32","title":"ApplyLoraFromFile(String, String, Int32)","text":"Apply a LoRA adapter to a loaded model
public void ApplyLoraFromFile(string lora, string modelBase, int threads)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_1","title":"Parameters","text":"lora String
modelBase String A path to a higher quality model to use as a base for the layers modified by the adapter. Can be NULL to use the currently loaded model.
threads Int32
RuntimeError
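For example (file names and the thread count are placeholders):

```csharp
// Hedged sketch: apply a LoRA adapter to the loaded weights, using the
// currently loaded model as the base (modelBase = null).
model.ApplyLoraFromFile("<path to adapter>", null, threads: 4);
```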
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#tokentospanint32-spanbyte","title":"TokenToSpan(Int32, Span<Byte>)","text":"Convert a single llama token into bytes
public int TokenToSpan(int llama_token, Span<byte> dest)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_2","title":"Parameters","text":"llama_token Int32 Token to decode
dest Span<Byte> A span to attempt to write into. If this is too small, nothing will be written
Int32 The size of this token in bytes. Nothing will be written if this is larger than dest
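The return value doubles as a size probe: if it is larger than the span, nothing was written and the call can be retried with a bigger buffer. A hedged sketch, with `model` and `token` assumed to exist:

```csharp
// Hedged sketch: decode one token, growing the buffer if it was too small.
Span<byte> dest = stackalloc byte[8];
int size = model.TokenToSpan(token, dest);
if (size > dest.Length)
{
    dest = new byte[size];                 // nothing was written; retry
    size = model.TokenToSpan(token, dest);
}
ReadOnlySpan<byte> tokenBytes = dest.Slice(0, size);
```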
Convert a single llama token into a string
public string TokenToString(int llama_token, Encoding encoding)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_3","title":"Parameters","text":"llama_token Int32
encoding Encoding Encoding to use to decode the bytes into a string
String
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#tokentostringint32-encoding-stringbuilder","title":"TokenToString(Int32, Encoding, StringBuilder)","text":"Append a single llama token to a string builder
public void TokenToString(int llama_token, Encoding encoding, StringBuilder dest)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_4","title":"Parameters","text":"llama_token Int32 Token to decode
encoding Encoding
dest StringBuilder The string builder to append the result to
Convert a string of text into tokens
public Int32[] Tokenize(string text, bool add_bos, Encoding encoding)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_5","title":"Parameters","text":"text String
add_bos Boolean
encoding Encoding
Int32[]
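Tokenize and the TokenToString overloads form a round trip; a hedged sketch using UTF-8, with `model` assumed to exist:

```csharp
// Hedged sketch: tokenize a string, then decode the tokens back into text.
using System.Text;

int[] tokens = model.Tokenize("Hello, world!", add_bos: true, Encoding.UTF8);

var sb = new StringBuilder();
foreach (int token in tokens)
    model.TokenToString(token, Encoding.UTF8, sb);

// sb now holds roughly the original text (how the BOS token renders
// depends on the model).
```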
"},{"location":"xmldocs/llama.native.safellamamodelhandle/#createcontextllamacontextparams","title":"CreateContext(LLamaContextParams)","text":"Create a new context for this model
public SafeLLamaContextHandle CreateContext(LLamaContextParams params)\n"},{"location":"xmldocs/llama.native.safellamamodelhandle/#parameters_6","title":"Parameters","text":"params LLamaContextParams
SafeLLamaContextHandle
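Contexts are created from the loaded weights; a short sketch continuing from the LoadFromFile example above:

```csharp
// Hedged sketch: create an inference context over the loaded model,
// reusing the LLamaContextParams from the load step.
using SafeLLamaContextHandle ctx = model.CreateContext(lparams);
```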
"},{"location":"xmldocs/llama.native.samplingapi/","title":"SamplingApi","text":"Namespace: LLama.Native
Direct translation of the llama.cpp sampling API
public class SamplingApi\n Inheritance Object \u2192 SamplingApi
"},{"location":"xmldocs/llama.native.samplingapi/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.native.samplingapi/#samplingapi_1","title":"SamplingApi()","text":"public SamplingApi()\n"},{"location":"xmldocs/llama.native.samplingapi/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.native.samplingapi/#llama_sample_grammarsafellamacontexthandle-llamatokendataarray-safellamagrammarhandle","title":"llama_sample_grammar(SafeLLamaContextHandle, LLamaTokenDataArray, SafeLLamaGrammarHandle)","text":"Apply grammar rules to candidate tokens
public static void llama_sample_grammar(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, SafeLLamaGrammarHandle grammar)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray
grammar SafeLLamaGrammarHandle
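A hedged sketch of constraining sampling with a grammar; how the LLamaTokenDataArray of candidates is built is not covered on this page, so `ctx`, `candidates` and `grammar` are assumed to exist:

```csharp
// Hedged sketch: filter the candidates down to grammar-legal tokens,
// then pick one.
SamplingApi.llama_sample_grammar(ctx, candidates, grammar);
int token = SamplingApi.llama_sample_token(ctx, candidates);
```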
last_tokens_size parameter is no longer needed
Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float penalty)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_1","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
last_tokens_size UInt64
penalty Single
Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float penalty)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_2","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
penalty Single
last_tokens_size parameter is no longer needed
Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float alpha_frequency, float alpha_presence)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_3","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
last_tokens_size UInt64
alpha_frequency Single
alpha_presence Single
Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float alpha_frequency, float alpha_presence)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_4","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
alpha_frequency Single
alpha_presence Single
Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
public static void llama_sample_softmax(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_5","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
Top-K sampling described in the academic paper \"The Curious Case of Neural Text Degeneration\" https://arxiv.org/abs/1904.09751
public static void llama_sample_top_k(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, int k, ulong min_keep)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_6","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
k Int32
min_keep UInt64
Nucleus sampling described in the academic paper \"The Curious Case of Neural Text Degeneration\" https://arxiv.org/abs/1904.09751
public static void llama_sample_top_p(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_7","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
public static void llama_sample_tail_free(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float z, ulong min_keep)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_8","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
z Single
min_keep UInt64
Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
public static void llama_sample_typical(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_9","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Sample with temperature. As the temperature increases, the prediction becomes more diverse but also more vulnerable to hallucination, i.e. generating tokens that are sensible but not factual
public static void llama_sample_temperature(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float temp)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_10","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray
temp Single
Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
public static int llama_sample_token_mirostat(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, int m, Single& mu)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_11","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray A vector of LLamaTokenData containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
m Int32 The number of tokens considered in the estimation of s_hat. This is an arbitrary value that is used to calculate s_hat, which in turn helps to calculate the value of k. In the paper, they use m = 100, but you can experiment with different values to see how it affects the performance of the algorithm.
mu Single& Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Int32
"},{"location":"xmldocs/llama.native.samplingapi/#llama_sample_token_mirostat_v2safellamacontexthandle-llamatokendataarray-single-single-single","title":"llama_sample_token_mirostat_v2(SafeLLamaContextHandle, LLamaTokenDataArray, Single, Single, Single&)","text":"Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
public static int llama_sample_token_mirostat_v2(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, Single& mu)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_12","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray A vector of LLamaTokenData containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
mu Single& Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Int32
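Because mu is passed by reference and updated on every call, it must live across the whole generation loop. A hedged sketch with placeholder tau/eta values, assuming `ctx` and `candidates` exist:

```csharp
// Hedged sketch: Mirostat 2.0 sampling. Per the docs above, mu starts at
// 2 * tau and is updated in place on every call.
float tau = 5.0f;   // target cross-entropy (placeholder value)
float eta = 0.1f;   // learning rate (placeholder value)
float mu = 2.0f * tau;

int token = SamplingApi.llama_sample_token_mirostat_v2(ctx, candidates, tau, eta, ref mu);
```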
"},{"location":"xmldocs/llama.native.samplingapi/#llama_sample_token_greedysafellamacontexthandle-llamatokendataarray","title":"llama_sample_token_greedy(SafeLLamaContextHandle, LLamaTokenDataArray)","text":"Selects the token with the highest probability.
public static int llama_sample_token_greedy(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_13","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
Int32
"},{"location":"xmldocs/llama.native.samplingapi/#llama_sample_tokensafellamacontexthandle-llamatokendataarray","title":"llama_sample_token(SafeLLamaContextHandle, LLamaTokenDataArray)","text":"Randomly selects a token from the candidates based on their probabilities.
public static int llama_sample_token(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)\n"},{"location":"xmldocs/llama.native.samplingapi/#parameters_14","title":"Parameters","text":"ctx SafeLLamaContextHandle
candidates LLamaTokenDataArray Pointer to LLamaTokenDataArray
Int32
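The functions above are designed to be chained: penalties first, then the filtering samplers, then temperature, then a final selection. A hedged sketch of one common ordering; all constants are placeholders, and `ctx`, `candidates` and `lastTokens` (recent token history as Memory<int>) are assumed to exist:

```csharp
// Hedged sketch of a typical sampling chain over one logits snapshot.
SamplingApi.llama_sample_repetition_penalty(ctx, candidates, lastTokens, penalty: 1.1f);
SamplingApi.llama_sample_frequency_and_presence_penalties(
    ctx, candidates, lastTokens, alpha_frequency: 0.0f, alpha_presence: 0.0f);

SamplingApi.llama_sample_top_k(ctx, candidates, k: 40, min_keep: 1);
SamplingApi.llama_sample_tail_free(ctx, candidates, z: 1.0f, min_keep: 1);
SamplingApi.llama_sample_typical(ctx, candidates, p: 1.0f, min_keep: 1);
SamplingApi.llama_sample_top_p(ctx, candidates, p: 0.95f, min_keep: 1);
SamplingApi.llama_sample_temperature(ctx, candidates, temp: 0.8f);

int token = SamplingApi.llama_sample_token(ctx, candidates); // probabilistic pick
```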
"},{"location":"xmldocs/llama.oldversion.chatcompletion/","title":"ChatCompletion","text":"Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletion : System.IEquatable`1[[LLama.OldVersion.ChatCompletion, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletion Implements IEquatable<ChatCompletion>
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletion/#id","title":"Id","text":"public string Id { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#created","title":"Created","text":"public int Created { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#model","title":"Model","text":"public string Model { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value_3","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#choices","title":"Choices","text":"public ChatCompletionChoice[] Choices { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value_4","title":"Property Value","text":"ChatCompletionChoice[]
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#usage","title":"Usage","text":"public CompletionUsage Usage { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#property-value_5","title":"Property Value","text":"CompletionUsage
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletion/#chatcompletionstring-string-int32-string-chatcompletionchoice-completionusage","title":"ChatCompletion(String, String, Int32, String, ChatCompletionChoice[], CompletionUsage)","text":"public ChatCompletion(string Id, string Object, int Created, string Model, ChatCompletionChoice[] Choices, CompletionUsage Usage)\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#parameters","title":"Parameters","text":"Id String
Object String
Created Int32
Model String
Choices ChatCompletionChoice[]
Usage CompletionUsage
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#equalschatcompletion","title":"Equals(ChatCompletion)","text":"public bool Equals(ChatCompletion other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#parameters_3","title":"Parameters","text":"other ChatCompletion
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#clone","title":"<Clone>$()","text":"public ChatCompletion <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#returns_5","title":"Returns","text":"ChatCompletion
"},{"location":"xmldocs/llama.oldversion.chatcompletion/#deconstructstring-string-int32-string-chatcompletionchoice-completionusage","title":"Deconstruct(String&, String&, Int32&, String&, ChatCompletionChoice[]&, CompletionUsage&)","text":"public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, ChatCompletionChoice[]& Choices, CompletionUsage& Usage)\n"},{"location":"xmldocs/llama.oldversion.chatcompletion/#parameters_4","title":"Parameters","text":"Id String&
Object String&
Created Int32&
Model String&
Choices ChatCompletionChoice[]&
Usage CompletionUsage&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletionChoice : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletionChoice Implements IEquatable<ChatCompletionChoice>
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#index","title":"Index","text":"public int Index { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#message","title":"Message","text":"public ChatCompletionMessage Message { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#property-value_1","title":"Property Value","text":"ChatCompletionMessage
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#finishreason","title":"FinishReason","text":"public string FinishReason { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#property-value_2","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#chatcompletionchoiceint32-chatcompletionmessage-string","title":"ChatCompletionChoice(Int32, ChatCompletionMessage, String)","text":"public ChatCompletionChoice(int Index, ChatCompletionMessage Message, string FinishReason)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#parameters","title":"Parameters","text":"Index Int32
Message ChatCompletionMessage
FinishReason String
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#equalschatcompletionchoice","title":"Equals(ChatCompletionChoice)","text":"public bool Equals(ChatCompletionChoice other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#parameters_3","title":"Parameters","text":"other ChatCompletionChoice
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#clone","title":"<Clone>$()","text":"public ChatCompletionChoice <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#returns_5","title":"Returns","text":"ChatCompletionChoice
"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#deconstructint32-chatcompletionmessage-string","title":"Deconstruct(Int32&, ChatCompletionMessage&, String&)","text":"public void Deconstruct(Int32& Index, ChatCompletionMessage& Message, String& FinishReason)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchoice/#parameters_4","title":"Parameters","text":"Index Int32&
Message ChatCompletionMessage&
FinishReason String&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletionChunk : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunk, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletionChunk Implements IEquatable<ChatCompletionChunk>
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#id","title":"Id","text":"public string Id { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#model","title":"Model","text":"public string Model { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#property-value_2","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#created","title":"Created","text":"public int Created { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#property-value_3","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#choices","title":"Choices","text":"public ChatCompletionChunkChoice[] Choices { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#property-value_4","title":"Property Value","text":"ChatCompletionChunkChoice[]
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#chatcompletionchunkstring-string-string-int32-chatcompletionchunkchoice","title":"ChatCompletionChunk(String, String, String, Int32, ChatCompletionChunkChoice[])","text":"public ChatCompletionChunk(string Id, string Model, string Object, int Created, ChatCompletionChunkChoice[] Choices)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#parameters","title":"Parameters","text":"Id String
Model String
Object String
Created Int32
Choices ChatCompletionChunkChoice[]
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#equalschatcompletionchunk","title":"Equals(ChatCompletionChunk)","text":"public bool Equals(ChatCompletionChunk other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#parameters_3","title":"Parameters","text":"other ChatCompletionChunk
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#clone","title":"<Clone>$()","text":"public ChatCompletionChunk <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#returns_5","title":"Returns","text":"ChatCompletionChunk
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#deconstructstring-string-string-int32-chatcompletionchunkchoice","title":"Deconstruct(String&, String&, String&, Int32&, ChatCompletionChunkChoice[]&)","text":"public void Deconstruct(String& Id, String& Model, String& Object, Int32& Created, ChatCompletionChunkChoice[]& Choices)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunk/#parameters_4","title":"Parameters","text":"Id String&
Model String&
Object String&
Created Int32&
Choices ChatCompletionChunkChoice[]&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletionChunkChoice : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunkChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletionChunkChoice Implements IEquatable<ChatCompletionChunkChoice>
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#index","title":"Index","text":"public int Index { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#delta","title":"Delta","text":"public ChatCompletionChunkDelta Delta { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#property-value_1","title":"Property Value","text":"ChatCompletionChunkDelta
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#finishreason","title":"FinishReason","text":"public string FinishReason { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#property-value_2","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#chatcompletionchunkchoiceint32-chatcompletionchunkdelta-string","title":"ChatCompletionChunkChoice(Int32, ChatCompletionChunkDelta, String)","text":"public ChatCompletionChunkChoice(int Index, ChatCompletionChunkDelta Delta, string FinishReason)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#parameters","title":"Parameters","text":"Index Int32
Delta ChatCompletionChunkDelta
FinishReason String
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#equalschatcompletionchunkchoice","title":"Equals(ChatCompletionChunkChoice)","text":"public bool Equals(ChatCompletionChunkChoice other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#parameters_3","title":"Parameters","text":"other ChatCompletionChunkChoice
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#clone","title":"<Clone>$()","text":"public ChatCompletionChunkChoice <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#returns_5","title":"Returns","text":"ChatCompletionChunkChoice
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#deconstructint32-chatcompletionchunkdelta-string","title":"Deconstruct(Int32&, ChatCompletionChunkDelta&, String&)","text":"public void Deconstruct(Int32& Index, ChatCompletionChunkDelta& Delta, String& FinishReason)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkchoice/#parameters_4","title":"Parameters","text":"Index Int32&
Delta ChatCompletionChunkDelta&
FinishReason String&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletionChunkDelta : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunkDelta, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletionChunkDelta Implements IEquatable<ChatCompletionChunkDelta>
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#role","title":"Role","text":"public string Role { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#content","title":"Content","text":"public string Content { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#chatcompletionchunkdeltastring-string","title":"ChatCompletionChunkDelta(String, String)","text":"public ChatCompletionChunkDelta(string Role, string Content)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#parameters","title":"Parameters","text":"Role String
Content String
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#equalschatcompletionchunkdelta","title":"Equals(ChatCompletionChunkDelta)","text":"public bool Equals(ChatCompletionChunkDelta other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#parameters_3","title":"Parameters","text":"other ChatCompletionChunkDelta
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#clone","title":"<Clone>$()","text":"public ChatCompletionChunkDelta <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#returns_5","title":"Returns","text":"ChatCompletionChunkDelta
"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#deconstructstring-string","title":"Deconstruct(String&, String&)","text":"public void Deconstruct(String& Role, String& Content)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionchunkdelta/#parameters_4","title":"Parameters","text":"Role String&
Content String&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatCompletionMessage : System.IEquatable`1[[LLama.OldVersion.ChatCompletionMessage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatCompletionMessage Implements IEquatable<ChatCompletionMessage>
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#role","title":"Role","text":"public ChatRole Role { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#property-value","title":"Property Value","text":"ChatRole
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#content","title":"Content","text":"public string Content { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#name","title":"Name","text":"public string Name { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#property-value_2","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#chatcompletionmessagechatrole-string-string","title":"ChatCompletionMessage(ChatRole, String, String)","text":"public ChatCompletionMessage(ChatRole Role, string Content, string Name)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#parameters","title":"Parameters","text":"Role ChatRole
Content String
Name String
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#equalschatcompletionmessage","title":"Equals(ChatCompletionMessage)","text":"public bool Equals(ChatCompletionMessage other)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#parameters_3","title":"Parameters","text":"other ChatCompletionMessage
Boolean
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#clone","title":"<Clone>$()","text":"public ChatCompletionMessage <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#returns_5","title":"Returns","text":"ChatCompletionMessage
"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#deconstructchatrole-string-string","title":"Deconstruct(ChatRole&, String&, String&)","text":"public void Deconstruct(ChatRole& Role, String& Content, String& Name)\n"},{"location":"xmldocs/llama.oldversion.chatcompletionmessage/#parameters_4","title":"Parameters","text":"Role ChatRole&
Content String&
Name String&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatMessageRecord : System.IEquatable`1[[LLama.OldVersion.ChatMessageRecord, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 ChatMessageRecord Implements IEquatable<ChatMessageRecord>
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#message","title":"Message","text":"public ChatCompletionMessage Message { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#property-value","title":"Property Value","text":"ChatCompletionMessage
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#time","title":"Time","text":"public DateTime Time { get; set; }\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#property-value_1","title":"Property Value","text":"DateTime
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#chatmessagerecordchatcompletionmessage-datetime","title":"ChatMessageRecord(ChatCompletionMessage, DateTime)","text":"public ChatMessageRecord(ChatCompletionMessage Message, DateTime Time)\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#parameters","title":"Parameters","text":"Message ChatCompletionMessage
Time DateTime
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#equalschatmessagerecord","title":"Equals(ChatMessageRecord)","text":"public bool Equals(ChatMessageRecord other)\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#parameters_3","title":"Parameters","text":"other ChatMessageRecord
Boolean
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#clone","title":"<Clone>$()","text":"public ChatMessageRecord <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#returns_5","title":"Returns","text":"ChatMessageRecord
"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#deconstructchatcompletionmessage-datetime","title":"Deconstruct(ChatCompletionMessage&, DateTime&)","text":"public void Deconstruct(ChatCompletionMessage& Message, DateTime& Time)\n"},{"location":"xmldocs/llama.oldversion.chatmessagerecord/#parameters_4","title":"Parameters","text":"Message ChatCompletionMessage&
Time DateTime&
Namespace: LLama.OldVersion
public enum ChatRole\n Inheritance Object \u2192 ValueType \u2192 Enum \u2192 ChatRole Implements IComparable, IFormattable, IConvertible
"},{"location":"xmldocs/llama.oldversion.chatrole/#fields","title":"Fields","text":"Name Value Description"},{"location":"xmldocs/llama.oldversion.chatsession-1/","title":"ChatSession<T>","text":"Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.chatsession-1/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class ChatSession<T>\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#type-parameters","title":"Type Parameters","text":"T
Inheritance Object \u2192 ChatSession<T>
"},{"location":"xmldocs/llama.oldversion.chatsession-1/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.chatsession-1/#chatsessiont_1","title":"ChatSession(T)","text":"public ChatSession(T model)\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#parameters","title":"Parameters","text":"model T
public IEnumerable<string> Chat(string text, string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#parameters_1","title":"Parameters","text":"text String
prompt String
encoding String
IEnumerable<String>
"},{"location":"xmldocs/llama.oldversion.chatsession-1/#withpromptstring-string","title":"WithPrompt(String, String)","text":"public ChatSession<T> WithPrompt(string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#parameters_2","title":"Parameters","text":"prompt String
encoding String
ChatSession<T>
"},{"location":"xmldocs/llama.oldversion.chatsession-1/#withpromptfilestring-string","title":"WithPromptFile(String, String)","text":"public ChatSession<T> WithPromptFile(string promptFilename, string encoding)\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#parameters_3","title":"Parameters","text":"promptFilename String
encoding String
ChatSession<T>
"},{"location":"xmldocs/llama.oldversion.chatsession-1/#withantipromptstring","title":"WithAntiprompt(String[])","text":"Set the keywords to split the return value of chat AI.
public ChatSession<T> WithAntiprompt(String[] antiprompt)\n"},{"location":"xmldocs/llama.oldversion.chatsession-1/#parameters_4","title":"Parameters","text":"antiprompt String[]
ChatSession<T>
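Even though the namespace is deprecated, the fluent shape of this API is worth a sketch; the model construction and the exact Chat argument values are assumptions here:

```csharp
// Hedged sketch of the deprecated LLama.OldVersion.ChatSession<T> API.
// `model` is assumed to be an OldVersion model instance; the encoding
// strings, prompt file and prompt are placeholders.
var session = new ChatSession<LLamaModel>(model)
    .WithPromptFile("<prompt file>", "UTF-8")
    .WithAntiprompt(new[] { "User:" });

foreach (string output in session.Chat("Hello", "<prompt>", "UTF-8"))
    Console.Write(output);
```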
"},{"location":"xmldocs/llama.oldversion.completion/","title":"Completion","text":"Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.completion/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class Completion : System.IEquatable`1[[LLama.OldVersion.Completion, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 Completion Implements IEquatable<Completion>
"},{"location":"xmldocs/llama.oldversion.completion/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.completion/#id","title":"Id","text":"public string Id { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completion/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completion/#created","title":"Created","text":"public int Created { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completion/#model","title":"Model","text":"public string Model { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value_3","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completion/#choices","title":"Choices","text":"public CompletionChoice[] Choices { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value_4","title":"Property Value","text":"CompletionChoice[]
"},{"location":"xmldocs/llama.oldversion.completion/#usage","title":"Usage","text":"public CompletionUsage Usage { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completion/#property-value_5","title":"Property Value","text":"CompletionUsage
"},{"location":"xmldocs/llama.oldversion.completion/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.completion/#completionstring-string-int32-string-completionchoice-completionusage","title":"Completion(String, String, Int32, String, CompletionChoice[], CompletionUsage)","text":"public Completion(string Id, string Object, int Created, string Model, CompletionChoice[] Choices, CompletionUsage Usage)\n"},{"location":"xmldocs/llama.oldversion.completion/#parameters","title":"Parameters","text":"Id String
Object String
Created Int32
Model String
Choices CompletionChoice[]
Usage CompletionUsage
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.completion/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.completion/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.completion/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.completion/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.completion/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completion/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.completion/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.completion/#equalscompletion","title":"Equals(Completion)","text":"public bool Equals(Completion other)\n"},{"location":"xmldocs/llama.oldversion.completion/#parameters_3","title":"Parameters","text":"other Completion
Boolean
"},{"location":"xmldocs/llama.oldversion.completion/#clone","title":"<Clone>$()","text":"public Completion <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.completion/#returns_5","title":"Returns","text":"Completion
"},{"location":"xmldocs/llama.oldversion.completion/#deconstructstring-string-int32-string-completionchoice-completionusage","title":"Deconstruct(String&, String&, Int32&, String&, CompletionChoice[]&, CompletionUsage&)","text":"public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, CompletionChoice[]& Choices, CompletionUsage& Usage)\n"},{"location":"xmldocs/llama.oldversion.completion/#parameters_4","title":"Parameters","text":"Id String&
Object String&
Created Int32&
Model String&
Choices CompletionChoice[]&
Usage CompletionUsage&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.completionchoice/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class CompletionChoice : System.IEquatable`1[[LLama.OldVersion.CompletionChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 CompletionChoice Implements IEquatable<CompletionChoice>
"},{"location":"xmldocs/llama.oldversion.completionchoice/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.completionchoice/#text","title":"Text","text":"public string Text { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchoice/#index","title":"Index","text":"public int Index { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionchoice/#logprobs","title":"Logprobs","text":"public CompletionLogprobs Logprobs { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#property-value_2","title":"Property Value","text":"CompletionLogprobs
"},{"location":"xmldocs/llama.oldversion.completionchoice/#finishreason","title":"FinishReason","text":"public string FinishReason { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#property-value_3","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchoice/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.completionchoice/#completionchoicestring-int32-completionlogprobs-string","title":"CompletionChoice(String, Int32, CompletionLogprobs, String)","text":"public CompletionChoice(string Text, int Index, CompletionLogprobs Logprobs, string FinishReason)\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#parameters","title":"Parameters","text":"Text String
Index Int32
Logprobs CompletionLogprobs
FinishReason String
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchoice/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchoice/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionchoice/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchoice/#equalscompletionchoice","title":"Equals(CompletionChoice)","text":"public bool Equals(CompletionChoice other)\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#parameters_3","title":"Parameters","text":"other CompletionChoice
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchoice/#clone","title":"<Clone>$()","text":"public CompletionChoice <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#returns_5","title":"Returns","text":"CompletionChoice
"},{"location":"xmldocs/llama.oldversion.completionchoice/#deconstructstring-int32-completionlogprobs-string","title":"Deconstruct(String&, Int32&, CompletionLogprobs&, String&)","text":"public void Deconstruct(String& Text, Int32& Index, CompletionLogprobs& Logprobs, String& FinishReason)\n"},{"location":"xmldocs/llama.oldversion.completionchoice/#parameters_4","title":"Parameters","text":"Text String&
Index Int32&
Logprobs CompletionLogprobs&
FinishReason String&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.completionchunk/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class CompletionChunk : System.IEquatable`1[[LLama.OldVersion.CompletionChunk, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 CompletionChunk Implements IEquatable<CompletionChunk>
"},{"location":"xmldocs/llama.oldversion.completionchunk/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.completionchunk/#id","title":"Id","text":"public string Id { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchunk/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchunk/#created","title":"Created","text":"public int Created { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionchunk/#model","title":"Model","text":"public string Model { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#property-value_3","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchunk/#choices","title":"Choices","text":"public CompletionChoice[] Choices { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#property-value_4","title":"Property Value","text":"CompletionChoice[]
"},{"location":"xmldocs/llama.oldversion.completionchunk/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.completionchunk/#completionchunkstring-string-int32-string-completionchoice","title":"CompletionChunk(String, String, Int32, String, CompletionChoice[])","text":"public CompletionChunk(string Id, string Object, int Created, string Model, CompletionChoice[] Choices)\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#parameters","title":"Parameters","text":"Id String
Object String
Created Int32
Model String
Choices CompletionChoice[]
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.completionchunk/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchunk/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionchunk/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchunk/#equalscompletionchunk","title":"Equals(CompletionChunk)","text":"public bool Equals(CompletionChunk other)\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#parameters_3","title":"Parameters","text":"other CompletionChunk
Boolean
"},{"location":"xmldocs/llama.oldversion.completionchunk/#clone","title":"<Clone>$()","text":"public CompletionChunk <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#returns_5","title":"Returns","text":"CompletionChunk
"},{"location":"xmldocs/llama.oldversion.completionchunk/#deconstructstring-string-int32-string-completionchoice","title":"Deconstruct(String&, String&, Int32&, String&, CompletionChoice[]&)","text":"public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, CompletionChoice[]& Choices)\n"},{"location":"xmldocs/llama.oldversion.completionchunk/#parameters_4","title":"Parameters","text":"Id String&
Object String&
Created Int32&
Model String&
Choices CompletionChoice[]&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class CompletionLogprobs : System.IEquatable`1[[LLama.OldVersion.CompletionLogprobs, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 CompletionLogprobs Implements IEquatable<CompletionLogprobs>
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.completionlogprobs/#textoffset","title":"TextOffset","text":"public Int32[] TextOffset { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#property-value","title":"Property Value","text":"Int32[]
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#tokenlogprobs","title":"TokenLogProbs","text":"public Single[] TokenLogProbs { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#property-value_1","title":"Property Value","text":"Single[]
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#tokens","title":"Tokens","text":"public String[] Tokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#property-value_2","title":"Property Value","text":"String[]
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#toplogprobs","title":"TopLogprobs","text":"public Dictionary`2[] TopLogprobs { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#property-value_3","title":"Property Value","text":"Dictionary`2[]
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.completionlogprobs/#completionlogprobsint32-single-string-dictionary2","title":"CompletionLogprobs(Int32[], Single[], String[], Dictionary`2[])","text":"public CompletionLogprobs(Int32[] TextOffset, Single[] TokenLogProbs, String[] Tokens, Dictionary`2[] TopLogprobs)\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#parameters","title":"Parameters","text":"TextOffset Int32[]
TokenLogProbs Single[]
Tokens String[]
TopLogprobs Dictionary`2[]
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#equalscompletionlogprobs","title":"Equals(CompletionLogprobs)","text":"public bool Equals(CompletionLogprobs other)\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#parameters_3","title":"Parameters","text":"other CompletionLogprobs
Boolean
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#clone","title":"<Clone>$()","text":"public CompletionLogprobs <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#returns_5","title":"Returns","text":"CompletionLogprobs
"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#deconstructint32-single-string-dictionary2","title":"Deconstruct(Int32[]&, Single[]&, String[]&, Dictionary`2[]&)","text":"public void Deconstruct(Int32[]& TextOffset, Single[]& TokenLogProbs, String[]& Tokens, Dictionary`2[]& TopLogprobs)\n"},{"location":"xmldocs/llama.oldversion.completionlogprobs/#parameters_4","title":"Parameters","text":"TextOffset Int32[]&
TokenLogProbs Single[]&
Tokens String[]&
TopLogprobs Dictionary`2[]&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.completionusage/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class CompletionUsage : System.IEquatable`1[[LLama.OldVersion.CompletionUsage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 CompletionUsage Implements IEquatable<CompletionUsage>
"},{"location":"xmldocs/llama.oldversion.completionusage/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.completionusage/#prompttokens","title":"PromptTokens","text":"public int PromptTokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionusage/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionusage/#completiontokens","title":"CompletionTokens","text":"public int CompletionTokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionusage/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionusage/#totaltokens","title":"TotalTokens","text":"public int TotalTokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.completionusage/#property-value_2","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionusage/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.completionusage/#completionusageint32-int32-int32","title":"CompletionUsage(Int32, Int32, Int32)","text":"public CompletionUsage(int PromptTokens, int CompletionTokens, int TotalTokens)\n"},{"location":"xmldocs/llama.oldversion.completionusage/#parameters","title":"Parameters","text":"PromptTokens Int32
CompletionTokens Int32
TotalTokens Int32
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.completionusage/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.completionusage/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.completionusage/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.completionusage/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.completionusage/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.completionusage/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.completionusage/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.completionusage/#equalscompletionusage","title":"Equals(CompletionUsage)","text":"public bool Equals(CompletionUsage other)\n"},{"location":"xmldocs/llama.oldversion.completionusage/#parameters_3","title":"Parameters","text":"other CompletionUsage
Boolean
"},{"location":"xmldocs/llama.oldversion.completionusage/#clone","title":"<Clone>$()","text":"public CompletionUsage <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.completionusage/#returns_5","title":"Returns","text":"CompletionUsage
"},{"location":"xmldocs/llama.oldversion.completionusage/#deconstructint32-int32-int32","title":"Deconstruct(Int32&, Int32&, Int32&)","text":"public void Deconstruct(Int32& PromptTokens, Int32& CompletionTokens, Int32& TotalTokens)\n"},{"location":"xmldocs/llama.oldversion.completionusage/#parameters_4","title":"Parameters","text":"PromptTokens Int32&
CompletionTokens Int32&
TotalTokens Int32&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.embedding/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class Embedding : System.IEquatable`1[[LLama.OldVersion.Embedding, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 Embedding Implements IEquatable<Embedding>
"},{"location":"xmldocs/llama.oldversion.embedding/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.embedding/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embedding/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.embedding/#model","title":"Model","text":"public string Model { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embedding/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.embedding/#data","title":"Data","text":"public EmbeddingData[] Data { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embedding/#property-value_2","title":"Property Value","text":"EmbeddingData[]
"},{"location":"xmldocs/llama.oldversion.embedding/#usage","title":"Usage","text":"public EmbeddingUsage Usage { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embedding/#property-value_3","title":"Property Value","text":"EmbeddingUsage
"},{"location":"xmldocs/llama.oldversion.embedding/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.embedding/#embeddingstring-string-embeddingdata-embeddingusage","title":"Embedding(String, String, EmbeddingData[], EmbeddingUsage)","text":"public Embedding(string Object, string Model, EmbeddingData[] Data, EmbeddingUsage Usage)\n"},{"location":"xmldocs/llama.oldversion.embedding/#parameters","title":"Parameters","text":"Object String
Model String
Data EmbeddingData[]
Usage EmbeddingUsage
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.embedding/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.embedding/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.embedding/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.embedding/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.embedding/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embedding/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.embedding/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.embedding/#equalsembedding","title":"Equals(Embedding)","text":"public bool Equals(Embedding other)\n"},{"location":"xmldocs/llama.oldversion.embedding/#parameters_3","title":"Parameters","text":"other Embedding
Boolean
"},{"location":"xmldocs/llama.oldversion.embedding/#clone","title":"<Clone>$()","text":"public Embedding <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.embedding/#returns_5","title":"Returns","text":"Embedding
"},{"location":"xmldocs/llama.oldversion.embedding/#deconstructstring-string-embeddingdata-embeddingusage","title":"Deconstruct(String&, String&, EmbeddingData[]&, EmbeddingUsage&)","text":"public void Deconstruct(String& Object, String& Model, EmbeddingData[]& Data, EmbeddingUsage& Usage)\n"},{"location":"xmldocs/llama.oldversion.embedding/#parameters_4","title":"Parameters","text":"Object String&
Model String&
Data EmbeddingData[]&
Usage EmbeddingUsage&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class EmbeddingData : System.IEquatable`1[[LLama.OldVersion.EmbeddingData, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 EmbeddingData Implements IEquatable<EmbeddingData>
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.embeddingdata/#index","title":"Index","text":"public int Index { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#object","title":"Object","text":"public string Object { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#property-value_1","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#embedding","title":"Embedding","text":"public Single[] Embedding { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#property-value_2","title":"Property Value","text":"Single[]
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.embeddingdata/#embeddingdataint32-string-single","title":"EmbeddingData(Int32, String, Single[])","text":"public EmbeddingData(int Index, string Object, Single[] Embedding)\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#parameters","title":"Parameters","text":"Index Int32
Object String
Embedding Single[]
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#equalsembeddingdata","title":"Equals(EmbeddingData)","text":"public bool Equals(EmbeddingData other)\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#parameters_3","title":"Parameters","text":"other EmbeddingData
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#clone","title":"<Clone>$()","text":"public EmbeddingData <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#returns_5","title":"Returns","text":"EmbeddingData
"},{"location":"xmldocs/llama.oldversion.embeddingdata/#deconstructint32-string-single","title":"Deconstruct(Int32&, String&, Single[]&)","text":"public void Deconstruct(Int32& Index, String& Object, Single[]& Embedding)\n"},{"location":"xmldocs/llama.oldversion.embeddingdata/#parameters_4","title":"Parameters","text":"Index Int32&
Object String&
Embedding Single[]&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class EmbeddingUsage : System.IEquatable`1[[LLama.OldVersion.EmbeddingUsage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]\n Inheritance Object \u2192 EmbeddingUsage Implements IEquatable<EmbeddingUsage>
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.embeddingusage/#prompttokens","title":"PromptTokens","text":"public int PromptTokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#property-value","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#totaltokens","title":"TotalTokens","text":"public int TotalTokens { get; set; }\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#property-value_1","title":"Property Value","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.embeddingusage/#embeddingusageint32-int32","title":"EmbeddingUsage(Int32, Int32)","text":"public EmbeddingUsage(int PromptTokens, int TotalTokens)\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#parameters","title":"Parameters","text":"PromptTokens Int32
TotalTokens Int32
public string ToString()\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#returns","title":"Returns","text":"String
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#printmembersstringbuilder","title":"PrintMembers(StringBuilder)","text":"protected bool PrintMembers(StringBuilder builder)\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#parameters_1","title":"Parameters","text":"builder StringBuilder
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#gethashcode","title":"GetHashCode()","text":"public int GetHashCode()\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#returns_2","title":"Returns","text":"Int32
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#equalsobject","title":"Equals(Object)","text":"public bool Equals(object obj)\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#parameters_2","title":"Parameters","text":"obj Object
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#equalsembeddingusage","title":"Equals(EmbeddingUsage)","text":"public bool Equals(EmbeddingUsage other)\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#parameters_3","title":"Parameters","text":"other EmbeddingUsage
Boolean
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#clone","title":"<Clone>$()","text":"public EmbeddingUsage <Clone>$()\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#returns_5","title":"Returns","text":"EmbeddingUsage
"},{"location":"xmldocs/llama.oldversion.embeddingusage/#deconstructint32-int32","title":"Deconstruct(Int32&, Int32&)","text":"public void Deconstruct(Int32& PromptTokens, Int32& TotalTokens)\n"},{"location":"xmldocs/llama.oldversion.embeddingusage/#parameters_4","title":"Parameters","text":"PromptTokens Int32&
TotalTokens Int32&
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.ichatmodel/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public interface IChatModel\n"},{"location":"xmldocs/llama.oldversion.ichatmodel/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.ichatmodel/#name","title":"Name","text":"public abstract string Name { get; }\n"},{"location":"xmldocs/llama.oldversion.ichatmodel/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.ichatmodel/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.oldversion.ichatmodel/#chatstring-string-string","title":"Chat(String, String, String)","text":"IEnumerable<string> Chat(string text, string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.ichatmodel/#parameters","title":"Parameters","text":"text String
prompt String
encoding String
IEnumerable<String>
"},{"location":"xmldocs/llama.oldversion.ichatmodel/#initchatpromptstring-string","title":"InitChatPrompt(String, String)","text":"Init a prompt for chat and automatically produce the next prompt during the chat.
void InitChatPrompt(string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.ichatmodel/#parameters_1","title":"Parameters","text":"prompt String
encoding String
void InitChatAntiprompt(String[] antiprompt)\n"},{"location":"xmldocs/llama.oldversion.ichatmodel/#parameters_2","title":"Parameters","text":"antiprompt String[]
Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.llamaembedder/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class LLamaEmbedder : System.IDisposable\n Inheritance Object \u2192 LLamaEmbedder Implements IDisposable
"},{"location":"xmldocs/llama.oldversion.llamaembedder/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.llamaembedder/#llamaembedderllamaparams","title":"LLamaEmbedder(LLamaParams)","text":"public LLamaEmbedder(LLamaParams params)\n"},{"location":"xmldocs/llama.oldversion.llamaembedder/#parameters","title":"Parameters","text":"params LLamaParams
public Single[] GetEmbeddings(string text, int n_thread, bool add_bos, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamaembedder/#parameters_1","title":"Parameters","text":"text String
n_thread Int32
add_bos Boolean
encoding String
Single[]
"},{"location":"xmldocs/llama.oldversion.llamaembedder/#dispose","title":"Dispose()","text":"public void Dispose()\n"},{"location":"xmldocs/llama.oldversion.llamamodel/","title":"LLamaModel","text":"Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.llamamodel/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public class LLamaModel : IChatModel, System.IDisposable\n Inheritance Object \u2192 LLamaModel Implements IChatModel, IDisposable
"},{"location":"xmldocs/llama.oldversion.llamamodel/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.oldversion.llamamodel/#name","title":"Name","text":"public string Name { get; set; }\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#property-value","title":"Property Value","text":"String
"},{"location":"xmldocs/llama.oldversion.llamamodel/#verbose","title":"Verbose","text":"public bool Verbose { get; set; }\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#property-value_1","title":"Property Value","text":"Boolean
"},{"location":"xmldocs/llama.oldversion.llamamodel/#nativehandle","title":"NativeHandle","text":"public SafeLLamaContextHandle NativeHandle { get; }\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#property-value_2","title":"Property Value","text":"SafeLLamaContextHandle
"},{"location":"xmldocs/llama.oldversion.llamamodel/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.llamamodel/#llamamodelstring-string-boolean-int32-int32-int32-int32-int32-int32-int32-dictionaryint32-single-int32-single-single-single-single-single-int32-single-single-int32-single-single-string-string-string-string-liststring-string-string-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-string","title":"LLamaModel(String, String, Boolean, Int32, Int32, Int32, Int32, Int32, Int32, Int32, Dictionary<Int32, Single>, Int32, Single, Single, Single, Single, Single, Int32, Single, Single, Int32, Single, Single, String, String, String, String, List<String>, String, String, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, String)","text":"Please refer LLamaParams to find the meanings of each arg. Be sure to have set the n_gpu_layers, otherwise it will load 20 layers to gpu by default.
public LLamaModel(string model_path, string model_name, bool verbose, int seed, int n_threads, int n_predict, int n_ctx, int n_batch, int n_keep, int n_gpu_layers, Dictionary<int, float> logit_bias, int top_k, float top_p, float tfs_z, float typical_p, float temp, float repeat_penalty, int repeat_last_n, float frequency_penalty, float presence_penalty, int mirostat, float mirostat_tau, float mirostat_eta, string prompt, string path_session, string input_prefix, string input_suffix, List<string> antiprompt, string lora_adapter, string lora_base, bool memory_f16, bool random_prompt, bool use_color, bool interactive, bool embedding, bool interactive_first, bool prompt_cache_all, bool instruct, bool penalize_nl, bool perplexity, bool use_mmap, bool use_mlock, bool mem_test, bool verbose_prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters","title":"Parameters","text":"model_path String The model file path.
model_name String The model name.
verbose Boolean Whether to print details when running the model.
seed Int32
n_threads Int32
n_predict Int32
n_ctx Int32
n_batch Int32
n_keep Int32
n_gpu_layers Int32
logit_bias Dictionary<Int32, Single>
top_k Int32
top_p Single
tfs_z Single
typical_p Single
temp Single
repeat_penalty Single
repeat_last_n Int32
frequency_penalty Single
presence_penalty Single
mirostat Int32
mirostat_tau Single
mirostat_eta Single
prompt String
path_session String
input_prefix String
input_suffix String
antiprompt List<String>
lora_adapter String
lora_base String
memory_f16 Boolean
random_prompt Boolean
use_color Boolean
interactive Boolean
embedding Boolean
interactive_first Boolean
prompt_cache_all Boolean
instruct Boolean
penalize_nl Boolean
perplexity Boolean
use_mmap Boolean
use_mlock Boolean
mem_test Boolean
verbose_prompt Boolean
encoding String
Please refer to LLamaParams to find the meaning of each argument. Be sure to set n_gpu_layers; otherwise 20 layers will be loaded to the GPU by default.
public LLamaModel(LLamaParams params, string name, bool verbose, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_1","title":"Parameters","text":"params LLamaParams The LLamaModel params
name String Model name
verbose Boolean Whether to output the detailed info.
encoding String
RuntimeError
"},{"location":"xmldocs/llama.oldversion.llamamodel/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.oldversion.llamamodel/#withpromptstring-string","title":"WithPrompt(String, String)","text":"Apply a prompt to the model.
public LLamaModel WithPrompt(string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_2","title":"Parameters","text":"prompt String
encoding String
LLamaModel
"},{"location":"xmldocs/llama.oldversion.llamamodel/#exceptions_1","title":"Exceptions","text":"ArgumentException
"},{"location":"xmldocs/llama.oldversion.llamamodel/#withpromptfilestring","title":"WithPromptFile(String)","text":"Apply the prompt file to the model.
public LLamaModel WithPromptFile(string promptFileName)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_3","title":"Parameters","text":"promptFileName String
LLamaModel
"},{"location":"xmldocs/llama.oldversion.llamamodel/#initchatpromptstring-string","title":"InitChatPrompt(String, String)","text":"public void InitChatPrompt(string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_4","title":"Parameters","text":"prompt String
encoding String
public void InitChatAntiprompt(String[] antiprompt)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_5","title":"Parameters","text":"antiprompt String[]
Chat with the LLaMa model in interactive mode.
public IEnumerable<string> Chat(string text, string prompt, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_6","title":"Parameters","text":"text String
prompt String
encoding String
IEnumerable<String>
"},{"location":"xmldocs/llama.oldversion.llamamodel/#exceptions_2","title":"Exceptions","text":"ArgumentException
"},{"location":"xmldocs/llama.oldversion.llamamodel/#savestatestring","title":"SaveState(String)","text":"Save the state to specified path.
public void SaveState(string filename)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_7","title":"Parameters","text":"filename String
Load the state from specified path.
public void LoadState(string filename, bool clearPreviousEmbed)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_8","title":"Parameters","text":"filename String
clearPreviousEmbed Boolean Whether to clear the previously computed embeddings of this model.
RuntimeError
"},{"location":"xmldocs/llama.oldversion.llamamodel/#tokenizestring-string","title":"Tokenize(String, String)","text":"Tokenize a string.
public List<int> Tokenize(string text, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_9","title":"Parameters","text":"text String The utf-8 encoded string to tokenize.
encoding String
List<Int32> A list of tokens.
"},{"location":"xmldocs/llama.oldversion.llamamodel/#exceptions_4","title":"Exceptions","text":"RuntimeError If the tokenization failed.
"},{"location":"xmldocs/llama.oldversion.llamamodel/#detokenizeienumerableint32","title":"DeTokenize(IEnumerable<Int32>)","text":"Detokenize a list of tokens.
public string DeTokenize(IEnumerable<int> tokens)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_10","title":"Parameters","text":"tokens IEnumerable<Int32> The list of tokens to detokenize.
String The detokenized string.
"},{"location":"xmldocs/llama.oldversion.llamamodel/#callstring-string","title":"Call(String, String)","text":"Call the model to run inference.
public IEnumerable<string> Call(string text, string encoding)\n"},{"location":"xmldocs/llama.oldversion.llamamodel/#parameters_11","title":"Parameters","text":"text String
encoding String
IEnumerable<String>
"},{"location":"xmldocs/llama.oldversion.llamamodel/#exceptions_5","title":"Exceptions","text":"RuntimeError
"},{"location":"xmldocs/llama.oldversion.llamamodel/#dispose","title":"Dispose()","text":"public void Dispose()\n"},{"location":"xmldocs/llama.oldversion.llamaparams/","title":"LLamaParams","text":"Namespace: LLama.OldVersion
"},{"location":"xmldocs/llama.oldversion.llamaparams/#caution","title":"Caution","text":"The entire LLama.OldVersion namespace will be removed
public struct LLamaParams\n Inheritance Object \u2192 ValueType \u2192 LLamaParams
"},{"location":"xmldocs/llama.oldversion.llamaparams/#fields","title":"Fields","text":""},{"location":"xmldocs/llama.oldversion.llamaparams/#seed","title":"seed","text":"public int seed;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_threads","title":"n_threads","text":"public int n_threads;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_predict","title":"n_predict","text":"public int n_predict;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_ctx","title":"n_ctx","text":"public int n_ctx;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_batch","title":"n_batch","text":"public int n_batch;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_keep","title":"n_keep","text":"public int n_keep;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#n_gpu_layers","title":"n_gpu_layers","text":"public int n_gpu_layers;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#logit_bias","title":"logit_bias","text":"public Dictionary<int, float> logit_bias;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#top_k","title":"top_k","text":"public int top_k;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#top_p","title":"top_p","text":"public float top_p;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#tfs_z","title":"tfs_z","text":"public float tfs_z;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#typical_p","title":"typical_p","text":"public float typical_p;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#temp","title":"temp","text":"public float temp;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#repeat_penalty","title":"repeat_penalty","text":"public float repeat_penalty;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#repeat_last_n","title":"repeat_last_n","text":"public int repeat_last_n;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#frequency_penalty","title":"frequency_penalty","text":"public float frequency_penalty;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#presence_penalty","title":"presence_penalty","text":"public float presence_penalty;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#mirostat","title":"mirostat","text":"public int mirostat;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#mirostat_tau","title":"mirostat_tau","text":"public float mirostat_tau;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#mirostat_eta","title":"mirostat_eta","text":"public float mirostat_eta;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#model","title":"model","text":"public string model;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#prompt","title":"prompt","text":"public string prompt;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#path_session","title":"path_session","text":"public string path_session;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#input_prefix","title":"input_prefix","text":"public string input_prefix;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#input_suffix","title":"input_suffix","text":"public string input_suffix;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#antiprompt","title":"antiprompt","text":"public List<string> antiprompt;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#lora_adapter","title":"lora_adapter","text":"public string lora_adapter;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#lora_base","title":"lora_base","text":"public string lora_base;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#memory_f16","title":"memory_f16","text":"public bool 
memory_f16;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#random_prompt","title":"random_prompt","text":"public bool random_prompt;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#use_color","title":"use_color","text":"public bool use_color;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#interactive","title":"interactive","text":"public bool interactive;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#prompt_cache_all","title":"prompt_cache_all","text":"public bool prompt_cache_all;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#embedding","title":"embedding","text":"public bool embedding;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#interactive_first","title":"interactive_first","text":"public bool interactive_first;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#instruct","title":"instruct","text":"public bool instruct;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#penalize_nl","title":"penalize_nl","text":"public bool penalize_nl;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#perplexity","title":"perplexity","text":"public bool perplexity;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#use_mmap","title":"use_mmap","text":"public bool use_mmap;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#use_mlock","title":"use_mlock","text":"public bool use_mlock;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#mem_test","title":"mem_test","text":"public bool mem_test;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#verbose_prompt","title":"verbose_prompt","text":"public bool verbose_prompt;\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.oldversion.llamaparams/#llamaparamsint32-int32-int32-int32-int32-int32-int32-dictionaryint32-single-int32-single-single-single-single-single-int32-single-single-int32-single-single-string-string-string-string-string-liststring-string-string-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean-boolean","title":"LLamaParams(Int32, Int32, Int32, Int32, Int32, Int32, Int32, Dictionary<Int32, Single>, Int32, Single, Single, Single, Single, Single, Int32, Single, Single, Int32, Single, Single, String, String, String, String, String, List<String>, String, String, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean, Boolean)","text":"LLamaParams(int seed, int n_threads, int n_predict, int n_ctx, int n_batch, int n_keep, int n_gpu_layers, Dictionary<int, float> logit_bias, int top_k, float top_p, float tfs_z, float typical_p, float temp, float repeat_penalty, int repeat_last_n, float frequency_penalty, float presence_penalty, int mirostat, float mirostat_tau, float mirostat_eta, string model, string prompt, string path_session, string input_prefix, string input_suffix, List<string> antiprompt, string lora_adapter, string lora_base, bool memory_f16, bool random_prompt, bool use_color, bool interactive, bool prompt_cache_all, bool embedding, bool interactive_first, bool instruct, bool penalize_nl, bool perplexity, bool use_mmap, bool use_mlock, bool mem_test, bool verbose_prompt)\n"},{"location":"xmldocs/llama.oldversion.llamaparams/#parameters","title":"Parameters","text":"seed Int32
n_threads Int32
n_predict Int32
n_ctx Int32
n_batch Int32
n_keep Int32
n_gpu_layers Int32
logit_bias Dictionary<Int32, Single>
top_k Int32
top_p Single
tfs_z Single
typical_p Single
temp Single
repeat_penalty Single
repeat_last_n Int32
frequency_penalty Single
presence_penalty Single
mirostat Int32
mirostat_tau Single
mirostat_eta Single
model String
prompt String
path_session String
input_prefix String
input_suffix String
antiprompt List<String>
lora_adapter String
lora_base String
memory_f16 Boolean
random_prompt Boolean
use_color Boolean
interactive Boolean
prompt_cache_all Boolean
embedding Boolean
interactive_first Boolean
instruct Boolean
penalize_nl Boolean
perplexity Boolean
use_mmap Boolean
use_mlock Boolean
mem_test Boolean
verbose_prompt Boolean
Namespace: LLama
The base class for stateful LLama executors.
public abstract class StatefulExecutorBase : LLama.Abstractions.ILLamaExecutor\n Inheritance Object \u2192 StatefulExecutorBase Implements ILLamaExecutor
"},{"location":"xmldocs/llama.statefulexecutorbase/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.statefulexecutorbase/#context","title":"Context","text":"The context used by the executor.
public LLamaContext Context { get; }\n"},{"location":"xmldocs/llama.statefulexecutorbase/#property-value","title":"Property Value","text":"LLamaContext
"},{"location":"xmldocs/llama.statefulexecutorbase/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.statefulexecutorbase/#withsessionfilestring","title":"WithSessionFile(String)","text":"This API is currently not verified.
public StatefulExecutorBase WithSessionFile(string filename)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters","title":"Parameters","text":"filename String
StatefulExecutorBase
"},{"location":"xmldocs/llama.statefulexecutorbase/#exceptions","title":"Exceptions","text":"ArgumentNullException
RuntimeError
"},{"location":"xmldocs/llama.statefulexecutorbase/#savesessionfilestring","title":"SaveSessionFile(String)","text":"This API has not been verified currently.
public void SaveSessionFile(string filename)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_1","title":"Parameters","text":"filename String
After the context runs out, take some tokens from the original prompt and recompute the logits in batches.
protected void HandleRunOutOfContext(int tokensToKeep)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_2","title":"Parameters","text":"tokensToKeep Int32
Try to reuse the matching prefix from the session file.
protected void TryReuseMathingPrefix()\n"},{"location":"xmldocs/llama.statefulexecutorbase/#getloopconditioninferstateargs","title":"GetLoopCondition(InferStateArgs)","text":"Decide whether to continue the loop.
protected abstract bool GetLoopCondition(InferStateArgs args)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_3","title":"Parameters","text":"args InferStateArgs
Boolean
"},{"location":"xmldocs/llama.statefulexecutorbase/#preprocessinputsstring-inferstateargs","title":"PreprocessInputs(String, InferStateArgs)","text":"Preprocess the inputs before the inference.
protected abstract void PreprocessInputs(string text, InferStateArgs args)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_4","title":"Parameters","text":"text String
args InferStateArgs
Do some post processing after the inference.
protected abstract bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable`1& extraOutputs)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_5","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable`1&
Boolean
"},{"location":"xmldocs/llama.statefulexecutorbase/#inferinternaliinferenceparams-inferstateargs","title":"InferInternal(IInferenceParams, InferStateArgs)","text":"The core inference logic.
protected abstract void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_6","title":"Parameters","text":"inferenceParams IInferenceParams
args InferStateArgs
Save the current state to a file.
public abstract void SaveState(string filename)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_7","title":"Parameters","text":"filename String
Get the current state data.
public abstract ExecutorBaseState GetStateData()\n"},{"location":"xmldocs/llama.statefulexecutorbase/#returns_3","title":"Returns","text":"ExecutorBaseState
"},{"location":"xmldocs/llama.statefulexecutorbase/#loadstateexecutorbasestate","title":"LoadState(ExecutorBaseState)","text":"Load the state from data.
public abstract void LoadState(ExecutorBaseState data)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_8","title":"Parameters","text":"data ExecutorBaseState
Load the state from a file.
public abstract void LoadState(string filename)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_9","title":"Parameters","text":"filename String
Execute the inference.
public IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_10","title":"Parameters","text":"text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IEnumerable<String>
"},{"location":"xmldocs/llama.statefulexecutorbase/#inferasyncstring-iinferenceparams-cancellationtoken","title":"InferAsync(String, IInferenceParams, CancellationToken)","text":"Execute the inference asynchronously.
public IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.statefulexecutorbase/#parameters_11","title":"Parameters","text":"text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IAsyncEnumerable<String>
"},{"location":"xmldocs/llama.statelessexecutor/","title":"StatelessExecutor","text":"Namespace: LLama
This executor treats each input as a one-time job. Previous inputs won't impact the response to the current input.
public class StatelessExecutor : LLama.Abstractions.ILLamaExecutor\n Inheritance Object \u2192 StatelessExecutor Implements ILLamaExecutor
"},{"location":"xmldocs/llama.statelessexecutor/#properties","title":"Properties","text":""},{"location":"xmldocs/llama.statelessexecutor/#context","title":"Context","text":"The context used by the executor when running the inference.
public LLamaContext Context { get; private set; }\n"},{"location":"xmldocs/llama.statelessexecutor/#property-value","title":"Property Value","text":"LLamaContext
"},{"location":"xmldocs/llama.statelessexecutor/#constructors","title":"Constructors","text":""},{"location":"xmldocs/llama.statelessexecutor/#statelessexecutorllamaweights-imodelparams","title":"StatelessExecutor(LLamaWeights, IModelParams)","text":"Create a new stateless executor which will use the given model
public StatelessExecutor(LLamaWeights weights, IModelParams params)\n"},{"location":"xmldocs/llama.statelessexecutor/#parameters","title":"Parameters","text":"weights LLamaWeights
params IModelParams
Use the constructor which automatically creates contexts using the LLamaWeights
Create a new stateless executor which will use the model used to create the given context
public StatelessExecutor(LLamaContext context)\n"},{"location":"xmldocs/llama.statelessexecutor/#parameters_1","title":"Parameters","text":"context LLamaContext
public IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.statelessexecutor/#parameters_2","title":"Parameters","text":"text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IEnumerable<String>
"},{"location":"xmldocs/llama.statelessexecutor/#inferasyncstring-iinferenceparams-cancellationtoken","title":"InferAsync(String, IInferenceParams, CancellationToken)","text":"public IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)\n"},{"location":"xmldocs/llama.statelessexecutor/#parameters_3","title":"Parameters","text":"text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
IAsyncEnumerable<String>
"},{"location":"xmldocs/llama.utils/","title":"Utils","text":"Namespace: LLama
Assorted llama utilities
public static class Utils\n Inheritance Object \u2192 Utils
"},{"location":"xmldocs/llama.utils/#methods","title":"Methods","text":""},{"location":"xmldocs/llama.utils/#initllamacontextfrommodelparamsimodelparams","title":"InitLLamaContextFromModelParams(IModelParams)","text":""},{"location":"xmldocs/llama.utils/#caution","title":"Caution","text":"Use LLamaWeights.LoadFromFile and LLamaWeights.CreateContext instead
public static SafeLLamaContextHandle InitLLamaContextFromModelParams(IModelParams params)\n"},{"location":"xmldocs/llama.utils/#parameters","title":"Parameters","text":"params IModelParams
SafeLLamaContextHandle
"},{"location":"xmldocs/llama.utils/#tokenizesafellamacontexthandle-string-boolean-encoding","title":"Tokenize(SafeLLamaContextHandle, String, Boolean, Encoding)","text":""},{"location":"xmldocs/llama.utils/#caution_1","title":"Caution","text":"Use SafeLLamaContextHandle Tokenize method instead
public static IEnumerable<int> Tokenize(SafeLLamaContextHandle ctx, string text, bool add_bos, Encoding encoding)\n"},{"location":"xmldocs/llama.utils/#parameters_1","title":"Parameters","text":"ctx SafeLLamaContextHandle
text String
add_bos Boolean
encoding Encoding
IEnumerable<Int32>
"},{"location":"xmldocs/llama.utils/#getlogitssafellamacontexthandle-int32","title":"GetLogits(SafeLLamaContextHandle, Int32)","text":""},{"location":"xmldocs/llama.utils/#caution_2","title":"Caution","text":"Use SafeLLamaContextHandle GetLogits method instead
public static Span<float> GetLogits(SafeLLamaContextHandle ctx, int length)\n"},{"location":"xmldocs/llama.utils/#parameters_2","title":"Parameters","text":"ctx SafeLLamaContextHandle
length Int32
Span<Single>
"},{"location":"xmldocs/llama.utils/#evalsafellamacontexthandle-int32-int32-int32-int32-int32","title":"Eval(SafeLLamaContextHandle, Int32[], Int32, Int32, Int32, Int32)","text":""},{"location":"xmldocs/llama.utils/#caution_3","title":"Caution","text":"Use SafeLLamaContextHandle Eval method instead
public static int Eval(SafeLLamaContextHandle ctx, Int32[] tokens, int startIndex, int n_tokens, int n_past, int n_threads)\n"},{"location":"xmldocs/llama.utils/#parameters_3","title":"Parameters","text":"ctx SafeLLamaContextHandle
tokens Int32[]
startIndex Int32
n_tokens Int32
n_past Int32
n_threads Int32
Int32
"},{"location":"xmldocs/llama.utils/#tokentostringint32-safellamacontexthandle-encoding","title":"TokenToString(Int32, SafeLLamaContextHandle, Encoding)","text":""},{"location":"xmldocs/llama.utils/#caution_4","title":"Caution","text":"Use SafeLLamaContextHandle TokenToString method instead
public static string TokenToString(int token, SafeLLamaContextHandle ctx, Encoding encoding)\n"},{"location":"xmldocs/llama.utils/#parameters_4","title":"Parameters","text":"token Int32
ctx SafeLLamaContextHandle
encoding Encoding
String
"},{"location":"xmldocs/llama.utils/#ptrtostringintptr-encoding","title":"PtrToString(IntPtr, Encoding)","text":""},{"location":"xmldocs/llama.utils/#caution_5","title":"Caution","text":"No longer used internally by LlamaSharp
public static string PtrToString(IntPtr ptr, Encoding encoding)\n"},{"location":"xmldocs/llama.utils/#parameters_5","title":"Parameters","text":"ptr IntPtr
encoding Encoding
String
"}]} \ No newline at end of file diff --git a/0.5/sitemap.xml b/0.5/sitemap.xml new file mode 100755 index 00000000..0f8724ef --- /dev/null +++ b/0.5/sitemap.xml @@ -0,0 +1,3 @@ + +GrammarUnexpectedCharAltElement
+Namespace: LLama.Abstractions
+Transform history to plain text and vice versa.
+public interface IHistoryTransform
+
+Convert a ChatHistory instance to plain text.
+string HistoryToText(ChatHistory history)
+
+history ChatHistory
+The ChatHistory instance
Converts plain text to a ChatHistory instance.
+ChatHistory TextToHistory(AuthorRole role, string text)
+
+role AuthorRole
+The role for the author.
text String
+The chat history as plain text.
ChatHistory
+The updated history.
Namespace: LLama.Abstractions
+The paramters used for inference.
+public interface IInferenceParams
+
+number of tokens to keep from initial prompt
+public abstract int TokensKeep { get; set; }
+
+how many new tokens to predict (n_predict), set to -1 to inifinitely generate response + until it complete.
+public abstract int MaxTokens { get; set; }
+
+logit bias for specific tokens
+public abstract Dictionary<int, float> LogitBias { get; set; }
+
+Sequences where the model will stop generating further tokens.
+public abstract IEnumerable<string> AntiPrompts { get; set; }
+
+path to file for saving/loading model eval state
+public abstract string PathSession { get; set; }
+
+string to suffix user inputs with
+public abstract string InputSuffix { get; set; }
+
+string to prefix user inputs with
+public abstract string InputPrefix { get; set; }
+
+0 or lower to use vocab size
+public abstract int TopK { get; set; }
+
+1.0 = disabled
+public abstract float TopP { get; set; }
+
+1.0 = disabled
+public abstract float TfsZ { get; set; }
+
+1.0 = disabled
+public abstract float TypicalP { get; set; }
+
+1.0 = disabled
+public abstract float Temperature { get; set; }
+
+1.0 = disabled
+public abstract float RepeatPenalty { get; set; }
+
+last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
+public abstract int RepeatLastTokensCount { get; set; }
+
+frequency penalty coefficient + 0.0 = disabled
+public abstract float FrequencyPenalty { get; set; }
+
+presence penalty coefficient + 0.0 = disabled
+public abstract float PresencePenalty { get; set; }
+
+Mirostat uses tokens instead of words. + algorithm described in the paper https://arxiv.org/abs/2007.14966. + 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
+public abstract MirostatType Mirostat { get; set; }
+
+target entropy
+public abstract float MirostatTau { get; set; }
+
+learning rate
+public abstract float MirostatEta { get; set; }
+
+consider newlines as a repeatable token (penalize_nl)
+public abstract bool PenalizeNL { get; set; }
+
+Grammar to constrain possible tokens
+public abstract SafeLLamaGrammarHandle Grammar { get; set; }
+
+Namespace: LLama.Abstractions
+A high level interface for LLama models.
+public interface ILLamaExecutor
+
+The loaded context for this executor.
+public abstract LLamaContext Context { get; }
+
+Infers a response from the model.
+IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken token)
+
+text String
+Your prompt
inferenceParams IInferenceParams
+Any additional parameters
token CancellationToken
+A cancellation token.
Asynchronously infers a response from the model.
+IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken token)
+
+text String
+Your prompt
inferenceParams IInferenceParams
+Any additional parameters
token CancellationToken
+A cancellation token.
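A minimal usage sketch, assuming an existing LLamaContext named context:

ILLamaExecutor executor = new InteractiveExecutor(context);
foreach (var piece in executor.Infer("What is C#?", new InferenceParams(), CancellationToken.None))
{
    Console.Write(piece);
}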
Namespace: LLama.Abstractions
+The parameters for initializing a LLama model.
+public interface IModelParams
+
+Model context size (n_ctx)
+public abstract int ContextSize { get; set; }
+
+the GPU that is used for scratch and small tensors
+public abstract int MainGpu { get; set; }
+
+if true, reduce VRAM usage at the cost of performance
+public abstract bool LowVram { get; set; }
+
+Number of layers to run in VRAM / GPU memory (n_gpu_layers)
+public abstract int GpuLayerCount { get; set; }
+
+Seed for the random number generator (seed)
+public abstract int Seed { get; set; }
+
+Use f16 instead of f32 for memory kv (memory_f16)
+public abstract bool UseFp16Memory { get; set; }
+
+Use mmap for faster loads (use_mmap)
+public abstract bool UseMemorymap { get; set; }
+
+Use mlock to keep model in memory (use_mlock)
+public abstract bool UseMemoryLock { get; set; }
+
+Compute perplexity over the prompt (perplexity)
+public abstract bool Perplexity { get; set; }
+
+Model path (model)
+public abstract string ModelPath { get; set; }
+
+model alias
+public abstract string ModelAlias { get; set; }
+
+lora adapter path (lora_adapter)
+public abstract string LoraAdapter { get; set; }
+
+base model path for the lora adapter (lora_base)
+public abstract string LoraBase { get; set; }
+
+Number of threads (-1 = autodetect) (n_threads)
+public abstract int Threads { get; set; }
+
+batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
+public abstract int BatchSize { get; set; }
+
+Whether to convert eos to newline during the inference.
+public abstract bool ConvertEosToNewLine { get; set; }
+
+Whether to use embedding mode (embedding). Note that if this is set to true, the LLamaModel won't produce a text response anymore.
+public abstract bool EmbeddingMode { get; set; }
+
+how split tensors should be distributed across GPUs
+public abstract Single[] TensorSplits { get; set; }
+
+RoPE base frequency
+public abstract float RopeFrequencyBase { get; set; }
+
+RoPE frequency scaling factor
+public abstract float RopeFrequencyScale { get; set; }
+
+Use experimental mul_mat_q kernels
+public abstract bool MulMatQ { get; set; }
+
+The encoding to use for models
+public abstract Encoding Encoding { get; set; }
+
+Namespace: LLama.Abstractions
+Takes a stream of tokens and transforms them.
+public interface ITextStreamTransform
+
+Takes a stream of tokens and transforms them, returning a new stream of tokens.
+IEnumerable<string> Transform(IEnumerable<string> tokens)
+
+tokens IEnumerable<String>
Takes a stream of tokens and transforms them, returning a new stream of tokens asynchronously.
+IAsyncEnumerable<string> TransformAsync(IAsyncEnumerable<string> tokens)
+
+tokens IAsyncEnumerable<String>
Namespace: LLama.Abstractions
+An interface for text transformations. These can be used to compose a pipeline of text transformations, such as:
- Tokenization
- Lowercasing
- Punctuation removal
- Trimming
- etc.
+public interface ITextTransform
+
+Takes a string and transforms it.
+string Transform(string text)
+
+text String
Namespace: LLama
+The main chat session class.
+public class ChatSession
+
+Inheritance Object → ChatSession
+The output transform used in this session.
+public ITextStreamTransform OutputTransform;
+
+The executor for this session.
+public ILLamaExecutor Executor { get; }
+
+The chat history for this session.
+public ChatHistory History { get; }
+
+The history transform used in this session.
+public IHistoryTransform HistoryTransform { get; set; }
+
+The input transform pipeline used in this session.
+public List<ITextTransform> InputTransformPipeline { get; set; }
+
+public ChatSession(ILLamaExecutor executor)
+
+executor ILLamaExecutor
+The executor for this session
Use a custom history transform.
+public ChatSession WithHistoryTransform(IHistoryTransform transform)
+
+transform IHistoryTransform
Add a text transform to the input transform pipeline.
+public ChatSession AddInputTransform(ITextTransform transform)
+
+transform ITextTransform
Use a custom output transform.
+public ChatSession WithOutputTransform(ITextStreamTransform transform)
+
+transform ITextStreamTransform
public void SaveSession(string path)
+
+path String
+The directory name to save the session. If the directory does not exist, a new directory will be created.
public void LoadSession(string path)
+
+path String
+The directory name to load the session.
Get the response from the LLama model with chat histories.
+public IEnumerable<string> Chat(ChatHistory history, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+history ChatHistory
inferenceParams IInferenceParams
cancellationToken CancellationToken
Get the response from the LLama model. Note that the prompt is not limited to the preset words; it can also be the question you want to ask.
+public IEnumerable<string> Chat(string prompt, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+prompt String
inferenceParams IInferenceParams
cancellationToken CancellationToken
Get the response from the LLama model with chat histories.
+public IAsyncEnumerable<string> ChatAsync(ChatHistory history, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+history ChatHistory
inferenceParams IInferenceParams
cancellationToken CancellationToken
Get the response from the LLama model with chat histories asynchronously.
+public IAsyncEnumerable<string> ChatAsync(string prompt, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+prompt String
inferenceParams IInferenceParams
cancellationToken CancellationToken
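A minimal async sketch, assuming an existing ChatSession named session; the prompt and parameters are illustrative:

await foreach (var piece in session.ChatAsync("What is C#?", new InferenceParams { AntiPrompts = new List<string> { "User:" } }, CancellationToken.None))
{
    Console.Write(piece);
}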
Namespace: LLama.Common
+Role of the message author, e.g. user/assistant/system
+public enum AuthorRole
+
+Inheritance Object → ValueType → Enum → AuthorRole
+Implements IComparable, IFormattable, IConvertible
| Name | Value | Description |
|---|---|---|
| Unknown | -1 | Role is unknown |
| System | 0 | Message comes from a "system" prompt, not written by a user or language model |
| User | 1 | Message comes from the user |
| Assistant | 2 | Message was generated by the language model |
Namespace: LLama.Common
+The chat history class
+public class ChatHistory
+
+Inheritance Object → ChatHistory
+List of messages in the chat
+public List<Message> Messages { get; }
+
+Create a new instance of the chat content class
+public ChatHistory()
+
+Add a message to the chat history
+public void AddMessage(AuthorRole authorRole, string content)
+
+authorRole AuthorRole
+Role of the message author
content String
+Message content
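A minimal sketch of building a history by hand; the message contents are illustrative:

var history = new ChatHistory();
history.AddMessage(AuthorRole.System, "You are a helpful assistant.");
history.AddMessage(AuthorRole.User, "What is C#?");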
Namespace: LLama.Common
+A queue with fixed storage size. Currently this is only a naive implementation and needs further optimization.
+public class FixedSizeQueue<T> : System.Collections.Generic.IEnumerable<T>, System.Collections.IEnumerable
+
+T
Inheritance Object → FixedSizeQueue<T>
+Implements IEnumerable<T>, IEnumerable
Number of items in this queue
+public int Count { get; }
+
+Maximum number of items allowed in this queue
+public int Capacity { get; }
+
+Create a new queue
+public FixedSizeQueue(int size)
+
+size Int32
+the maximum number of items to store in this queue
Fill the queue with the data. Please ensure that data.Count <= size
+public FixedSizeQueue(int size, IEnumerable<T> data)
+
+size Int32
data IEnumerable<T>
Replace every item in the queue with the given value
+public FixedSizeQueue<T> FillWith(T value)
+
+value T
+The value to replace all items with
FixedSizeQueue<T>
+returns this
Enqueue an element.
+public void Enqueue(T item)
+
+item T
public IEnumerator<T> GetEnumerator()
+
+IEnumerator<T>
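A minimal sketch; it assumes that enqueueing beyond Capacity discards the oldest item, which is the usual behavior of a fixed-size queue:

var recent = new FixedSizeQueue<int>(3);
recent.Enqueue(1);
recent.Enqueue(2);
recent.Enqueue(3);
recent.Enqueue(4); // assumed to evict the oldest item, keeping Count at 3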
Namespace: LLama.Common
+receives log messages from LLamaSharp
+public interface ILLamaLogger
+
+Write the log in a customized way
+void Log(string source, string message, LogLevel level)
+
+source String
+The source of the log. It may be a method name or class name.
message String
+The message.
level LogLevel
+The log level.
Namespace: LLama.Common
+The parameters used for inference.
+public class InferenceParams : LLama.Abstractions.IInferenceParams
+
+Inheritance Object → InferenceParams
+Implements IInferenceParams
number of tokens to keep from initial prompt
+public int TokensKeep { get; set; }
+
+how many new tokens to predict (n_predict); set to -1 to generate an unbounded response until it completes.
+public int MaxTokens { get; set; }
+
+logit bias for specific tokens
+public Dictionary<int, float> LogitBias { get; set; }
+
+Sequences where the model will stop generating further tokens.
+public IEnumerable<string> AntiPrompts { get; set; }
+
+path to file for saving/loading model eval state
+public string PathSession { get; set; }
+
+string to suffix user inputs with
+public string InputSuffix { get; set; }
+
+string to prefix user inputs with
+public string InputPrefix { get; set; }
+
+0 or lower to use vocab size
+public int TopK { get; set; }
+
+1.0 = disabled
+public float TopP { get; set; }
+
+1.0 = disabled
+public float TfsZ { get; set; }
+
+1.0 = disabled
+public float TypicalP { get; set; }
+
+1.0 = disabled
+public float Temperature { get; set; }
+
+1.0 = disabled
+public float RepeatPenalty { get; set; }
+
+last n tokens to penalize (0 = disable penalty, -1 = context size) (repeat_last_n)
+public int RepeatLastTokensCount { get; set; }
+
+frequency penalty coefficient + 0.0 = disabled
+public float FrequencyPenalty { get; set; }
+
+presence penalty coefficient + 0.0 = disabled
+public float PresencePenalty { get; set; }
+
+Mirostat uses tokens instead of words. The algorithm is described in the paper https://arxiv.org/abs/2007.14966. 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
+public MirostatType Mirostat { get; set; }
+
+target entropy
+public float MirostatTau { get; set; }
+
+learning rate
+public float MirostatEta { get; set; }
+
+consider newlines as a repeatable token (penalize_nl)
+public bool PenalizeNL { get; set; }
+
+A grammar to constrain the possible tokens
+public SafeLLamaGrammarHandle Grammar { get; set; }
+
+public InferenceParams()
+
+
+
+
+
+
+
+ Namespace: LLama.Common
+The default logger of LLamaSharp. By default it writes to the console. Use the methods of LLamaLogger.Default to change the behavior.
It's recommended to implement ILLamaLogger to customize the behavior.
public sealed class LLamaDefaultLogger : ILLamaLogger
+
+Inheritance Object → LLamaDefaultLogger
+Implements ILLamaLogger
Get the default logger instance
+public static LLamaDefaultLogger Default { get; }
+
+Enable logging output from llama.cpp
+public LLamaDefaultLogger EnableNative()
+
+Enable writing log messages to console
+public LLamaDefaultLogger EnableConsole()
+
+Disable writing messages to console
+public LLamaDefaultLogger DisableConsole()
+
+Enable writing log messages to file
+public LLamaDefaultLogger EnableFile(string filename, FileMode mode)
+
+filename String
mode FileMode
Use the DisableFile method without the 'filename' parameter
+Disable writing log messages to file
+public LLamaDefaultLogger DisableFile(string filename)
+
+filename String
+unused!
Disable writing log messages to file
+public LLamaDefaultLogger DisableFile()
+
+Log a message
+public void Log(string source, string message, LogLevel level)
+
+source String
+The source of this message (e.g. class name)
message String
+The message to log
level LogLevel
+Severity level of this message
Write a log message with "Info" severity
+public void Info(string message)
+
+message String
Write a log message with "Warn" severity
+public void Warn(string message)
+
+message String
Write a log message with "Error" severity
+public void Error(string message)
+
+message String
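A minimal configuration sketch; the log file name is illustrative:

using System.IO;

LLamaDefaultLogger.Default
    .EnableConsole()
    .EnableFile("llama.log", FileMode.Append)
    .EnableNative();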
Namespace: LLama.Common
+Type of "mirostat" sampling to use. + https://github.com/basusourya/mirostat
+public enum MirostatType
+
+Inheritance Object → ValueType → Enum → MirostatType
+Implements IComparable, IFormattable, IConvertible
| Name | Value | Description |
|---|---|---|
| Disable | 0 | Disable Mirostat sampling |
| Mirostat | 1 | Original Mirostat algorithm |
| Mirostat2 | 2 | Mirostat 2.0 algorithm |
Namespace: LLama.Common
+The parameters for initializing a LLama model.
+public class ModelParams : LLama.Abstractions.IModelParams, System.IEquatable<ModelParams>
+
+Inheritance Object → ModelParams
+Implements IModelParams, IEquatable<ModelParams>
Model context size (n_ctx)
+public int ContextSize { get; set; }
+
+the GPU that is used for scratch and small tensors
+public int MainGpu { get; set; }
+
+if true, reduce VRAM usage at the cost of performance
+public bool LowVram { get; set; }
+
+Number of layers to run in VRAM / GPU memory (n_gpu_layers)
+public int GpuLayerCount { get; set; }
+
+Seed for the random number generator (seed)
+public int Seed { get; set; }
+
+Use f16 instead of f32 for memory kv (memory_f16)
+public bool UseFp16Memory { get; set; }
+
+Use mmap for faster loads (use_mmap)
+public bool UseMemorymap { get; set; }
+
+Use mlock to keep model in memory (use_mlock)
+public bool UseMemoryLock { get; set; }
+
+Compute perplexity over the prompt (perplexity)
+public bool Perplexity { get; set; }
+
+Model path (model)
+public string ModelPath { get; set; }
+
+model alias
+public string ModelAlias { get; set; }
+
+lora adapter path (lora_adapter)
+public string LoraAdapter { get; set; }
+
+base model path for the lora adapter (lora_base)
+public string LoraBase { get; set; }
+
+Number of threads (-1 = autodetect) (n_threads)
+public int Threads { get; set; }
+
+batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
+public int BatchSize { get; set; }
+
+Whether to convert eos to newline during the inference.
+public bool ConvertEosToNewLine { get; set; }
+
+Whether to use embedding mode (embedding). Note that if this is set to true, the LLamaModel won't produce a text response anymore.
+public bool EmbeddingMode { get; set; }
+
+how split tensors should be distributed across GPUs
+public Single[] TensorSplits { get; set; }
+
+RoPE base frequency
+public float RopeFrequencyBase { get; set; }
+
+RoPE frequency scaling factor
+public float RopeFrequencyScale { get; set; }
+
+Use experimental mul_mat_q kernels
+public bool MulMatQ { get; set; }
+
+The encoding to use to convert text for the model
+public Encoding Encoding { get; set; }
+
+public ModelParams(string modelPath)
+
+modelPath String
+The model path.
Use object initializer to set all optional parameters
+public ModelParams(string modelPath, int contextSize, int gpuLayerCount, int seed, bool useFp16Memory, bool useMemorymap, bool useMemoryLock, bool perplexity, string loraAdapter, string loraBase, int threads, int batchSize, bool convertEosToNewLine, bool embeddingMode, float ropeFrequencyBase, float ropeFrequencyScale, bool mulMatQ, string encoding)
+
+modelPath String
+The model path.
contextSize Int32
+Model context size (n_ctx)
gpuLayerCount Int32
+Number of layers to run in VRAM / GPU memory (n_gpu_layers)
seed Int32
+Seed for the random number generator (seed)
useFp16Memory Boolean
+Whether to use f16 instead of f32 for memory kv (memory_f16)
useMemorymap Boolean
+Whether to use mmap for faster loads (use_mmap)
useMemoryLock Boolean
+Whether to use mlock to keep model in memory (use_mlock)
perplexity Boolean
+Whether to compute perplexity over the prompt (perplexity)
loraAdapter String
+Lora adapter path (lora_adapter)
loraBase String
+Base model path for the lora adapter (lora_base)
threads Int32
+Number of threads (-1 = autodetect) (n_threads)
batchSize Int32
+Batch size for prompt processing (must be >=32 to use BLAS) (n_batch)
convertEosToNewLine Boolean
+Whether to convert eos to newline during the inference.
embeddingMode Boolean
+Whether to use embedding mode (embedding). Note that if this is set to true, the LLamaModel won't produce a text response anymore.
ropeFrequencyBase Single
+RoPE base frequency.
ropeFrequencyScale Single
+RoPE frequency scaling factor
mulMatQ Boolean
+Use experimental mul_mat_q kernels
encoding String
+The encoding to use to convert text for the model
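As the caution above suggests, optional parameters are best set with an object initializer. A minimal sketch, assuming modelPath points at a model file; the values are illustrative:

var modelParams = new ModelParams(modelPath)
{
    ContextSize = 2048,
    GpuLayerCount = 32,
    Seed = 1337
};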
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ModelParams other)
+
+other ModelParams
public ModelParams <Clone>$()
+
+Namespace: LLama.Exceptions
+Failed to parse a "name" element when one was expected
+public class GrammarExpectedName : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarExpectedName
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+A specified string was expected when parsing
+public class GrammarExpectedNext : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarExpectedNext
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+A specified character was expected to preceded another when parsing
+public class GrammarExpectedPrevious : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarExpectedPrevious
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+Base class for all grammar exceptions
+public abstract class GrammarFormatException : System.Exception, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+A CHAR_ALT was created without a preceding CHAR element
+public class GrammarUnexpectedCharAltElement : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnexpectedCharAltElement
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+A CHAR_RNG was created without a preceding CHAR element
+public class GrammarUnexpectedCharRngElement : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnexpectedCharRngElement
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+An END was encountered before the last element
+public class GrammarUnexpectedEndElement : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnexpectedEndElement
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+End-of-file was encountered while parsing
+public class GrammarUnexpectedEndOfInput : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnexpectedEndOfInput
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+An incorrect number of characters were encountered while parsing a hex literal
+public class GrammarUnexpectedHexCharsCount : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnexpectedHexCharsCount
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+An unexpected character was encountered after an escape sequence
+public class GrammarUnknownEscapeCharacter : GrammarFormatException, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → GrammarFormatException → GrammarUnknownEscapeCharacter
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+Namespace: LLama.Exceptions
+public class RuntimeError : System.Exception, System.Runtime.Serialization.ISerializable
+
+Inheritance Object → Exception → RuntimeError
+Implements ISerializable
public MethodBase TargetSite { get; }
+
+public string Message { get; }
+
+public IDictionary Data { get; }
+
+public Exception InnerException { get; }
+
+public string HelpLink { get; set; }
+
+public string Source { get; set; }
+
+public int HResult { get; set; }
+
+public string StackTrace { get; }
+
+public RuntimeError()
+
+public RuntimeError(string message)
+
+message String
Namespace: LLama.Extensions
+Extension methods for the IModelParams interface
+public static class IModelParamsExtensions
+
+Inheritance Object → IModelParamsExtensions
+Convert the given IModelParams into a LLamaContextParams
public static MemoryHandle ToLlamaContextParams(IModelParams params, LLamaContextParams& result)
+
+params IModelParams
result LLamaContextParams&
Namespace: LLama.Extensions
+Extensions to the KeyValuePair struct
+public static class KeyValuePairExtensions
+
+Inheritance Object → KeyValuePairExtensions
+Deconstruct a KeyValuePair into its constituent parts.
+public static void Deconstruct<TKey, TValue>(KeyValuePair<TKey, TValue> pair, TKey& first, TValue& second)
+
+TKey
+Type of the Key
TValue
+Type of the Value
pair KeyValuePair<TKey, TValue>
+The KeyValuePair to deconstruct
first TKey&
+First element, the Key
second TValue&
+Second element, the Value
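A minimal usage sketch; the pair contents are illustrative:

var pair = new KeyValuePair<string, int>("answer", 42);
var (key, value) = pair; // Deconstruct splits the pair into its key and value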
Namespace: LLama.Grammars
+A grammar is a set of GrammarRules for deciding which characters are valid next. Can be used to constrain + output to certain formats - e.g. force the model to output JSON
+public sealed class Grammar
+
+
+Index of the initial rule to start from
+public ulong StartRuleIndex { get; set; }
+
+The rules which make up this grammar
+public IReadOnlyList<GrammarRule> Rules { get; }
+
+Create a new grammar from a set of rules
+public Grammar(IReadOnlyList<GrammarRule> rules, ulong startRuleIndex)
+
+rules IReadOnlyList<GrammarRule>
+The rules which make up this grammar
startRuleIndex UInt64
+Index of the initial rule to start from
Create a SafeLLamaGrammarHandle instance to use for parsing
public SafeLLamaGrammarHandle CreateInstance()
+
+Parse a string of GGML BNF into a Grammar
+public static Grammar Parse(string gbnf, string startRule)
+
+gbnf String
+The string to parse
startRule String
+Name of the start rule of this grammar
Grammar
+A Grammar which can be converted into a SafeLLamaGrammarHandle for sampling
GrammarFormatException
+Thrown if input is malformed
public string ToString()
+
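A minimal sketch of parsing a grammar and using it to constrain sampling; the GBNF rule is illustrative:

var grammar = Grammar.Parse("root ::= \"yes\" | \"no\"", "root");
using SafeLLamaGrammarHandle handle = grammar.CreateInstance();
var inferenceParams = new InferenceParams { Grammar = handle };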
+Namespace: LLama.Grammars
+A single rule in a Grammar
+public sealed class GrammarRule : System.IEquatable<GrammarRule>
+
+Inheritance Object → GrammarRule
+Implements IEquatable<GrammarRule>
Name of this rule
+public string Name { get; }
+
+The elements of this grammar rule
+public IReadOnlyList<LLamaGrammarElement> Elements { get; }
+
+IReadOnlyList<LLamaGrammarElement>
Create a new GrammarRule containing the given elements
+public GrammarRule(string name, IReadOnlyList<LLamaGrammarElement> elements)
+
+name String
elements IReadOnlyList<LLamaGrammarElement>
public string ToString()
+
+public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(GrammarRule other)
+
+other GrammarRule
public GrammarRule <Clone>$()
+
+Namespace: LLama
+The LLama executor for instruct mode.
+public class InstructExecutor : StatefulExecutorBase, LLama.Abstractions.ILLamaExecutor
+
+Inheritance Object → StatefulExecutorBase → InstructExecutor
+Implements ILLamaExecutor
The context used by the executor.
+public LLamaContext Context { get; }
+
+public InstructExecutor(LLamaContext context, string instructionPrefix, string instructionSuffix)
+
+context LLamaContext
instructionPrefix String
instructionSuffix String
public ExecutorBaseState GetStateData()
+
+public void LoadState(ExecutorBaseState data)
+
+data ExecutorBaseState
public void SaveState(string filename)
+
+filename String
public void LoadState(string filename)
+
+filename String
protected bool GetLoopCondition(InferStateArgs args)
+
+args InferStateArgs
protected void PreprocessInputs(string text, InferStateArgs args)
+
+text String
args InferStateArgs
+protected bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable<string>& extraOutputs)
+
+inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable<String>&
protected void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)
+
+inferenceParams IInferenceParams
args InferStateArgs
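A minimal construction sketch, assuming an existing LLamaContext named context; the Alpaca-style prefix and suffix strings are illustrative assumptions, not library defaults confirmed here:

var executor = new InstructExecutor(context, "\n\n### Instruction:\n\n", "\n\n### Response:\n\n");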
Namespace: LLama
+The LLama executor for interactive mode.
+public class InteractiveExecutor : StatefulExecutorBase, LLama.Abstractions.ILLamaExecutor
+
+Inheritance Object → StatefulExecutorBase → InteractiveExecutor
+Implements ILLamaExecutor
The context used by the executor.
+public LLamaContext Context { get; }
+
+public InteractiveExecutor(LLamaContext context)
+
+context LLamaContext
public ExecutorBaseState GetStateData()
+
+public void LoadState(ExecutorBaseState data)
+
+data ExecutorBaseState
public void SaveState(string filename)
+
+filename String
public void LoadState(string filename)
+
+filename String
Define whether to continue the loop to generate responses.
+protected bool GetLoopCondition(InferStateArgs args)
+
+args InferStateArgs
protected void PreprocessInputs(string text, InferStateArgs args)
+
+text String
args InferStateArgs
Return whether to break the generation.
+protected bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable<string>& extraOutputs)
+
+inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable<String>&
protected void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)
+
+inferenceParams IInferenceParams
args InferStateArgs
Namespace: LLama
+A llama_context, which holds all the context required to interact with a model
+public sealed class LLamaContext : System.IDisposable
+
+Inheritance Object → LLamaContext
+Implements IDisposable
Total number of tokens in vocabulary of this model
+public int VocabCount { get; }
+
+Total number of tokens in the context
+public int ContextSize { get; }
+
+Dimension of embedding vectors
+public int EmbeddingSize { get; }
+
+The model params set for this model.
+public IModelParams Params { get; set; }
+
+The native handle, which is used to be passed to the native APIs
+public SafeLLamaContextHandle NativeHandle { get; }
+
+Remarks:
+Be careful how you use this!
+The encoding set for this model to deal with text input.
+public Encoding Encoding { get; }
+
+The embedding length of the model, also known as n_embed
public int EmbeddingLength { get; }
+
+Use the LLamaWeights.CreateContext instead
+public LLamaContext(IModelParams params, ILLamaLogger logger)
+
+params IModelParams
+Model params.
logger ILLamaLogger
+The logger.
Create a new LLamaContext for the given LLamaWeights
+public LLamaContext(LLamaWeights model, IModelParams params, ILLamaLogger logger)
+
+model LLamaWeights
params IModelParams
logger ILLamaLogger
Create a copy of the current state of this context
+public LLamaContext Clone()
+
+Tokenize a string.
+public Int32[] Tokenize(string text, bool addBos)
+
+text String
addBos Boolean
+Whether to add a bos to the text.
Detokenize the tokens to text.
+public string DeTokenize(IEnumerable<int> tokens)
+
+tokens IEnumerable<Int32>
Save the state to specified path.
+public void SaveState(string filename)
+
+filename String
Use GetState instead, this supports larger states (over 2GB)
Get the state data as a byte array.
+public Byte[] GetStateData()
+
+Get the state data as an opaque handle
+public State GetState()
+
+Load the state from specified path.
+public void LoadState(string filename)
+
+filename String
Load the state from memory.
+public void LoadState(Byte[] stateData)
+
+stateData Byte[]
Load the state from memory.
+public void LoadState(State state)
+
+state State
Perform the sampling. Please don't use it unless you fully know what it does.
+public int Sample(LLamaTokenDataArray candidates, Nullable<Single>& mirostat_mu, float temperature, MirostatType mirostat, float mirostatTau, float mirostatEta, int topK, float topP, float tfsZ, float typicalP, SafeLLamaGrammarHandle grammar)
+
+candidates LLamaTokenDataArray
+mirostat_mu Nullable<Single>&
temperature Single
mirostat MirostatType
mirostatTau Single
mirostatEta Single
topK Int32
topP Single
tfsZ Single
typicalP Single
grammar SafeLLamaGrammarHandle
Apply the penalty for the tokens. Please don't use it unless you fully know what it does.
+public LLamaTokenDataArray ApplyPenalty(IEnumerable<int> lastTokens, Dictionary<int, float> logitBias, int repeatLastTokensCount, float repeatPenalty, float alphaFrequency, float alphaPresence, bool penalizeNL)
+
+lastTokens IEnumerable<Int32>
logitBias Dictionary<Int32, Single>
repeatLastTokensCount Int32
repeatPenalty Single
alphaFrequency Single
alphaPresence Single
penalizeNL Boolean
public int Eval(Int32[] tokens, int pastTokensCount)
+
+tokens Int32[]
pastTokensCount Int32
Int32
+The updated pastTokensCount.
public int Eval(List<int> tokens, int pastTokensCount)
+
+tokens List<Int32>
pastTokensCount Int32
Int32
+The updated pastTokensCount.
public int Eval(ReadOnlyMemory<int> tokens, int pastTokensCount)
+
+tokens ReadOnlyMemory<Int32>
pastTokensCount Int32
Int32
+The updated pastTokensCount.
public int Eval(ReadOnlySpan<int> tokens, int pastTokensCount)
+
+tokens ReadOnlySpan<Int32>
pastTokensCount Int32
Int32
+The updated pastTokensCount.
internal IEnumerable<string> GenerateResult(IEnumerable<int> ids)
+
+Convert a token into a string
+public string TokenToString(int token)
+
+token Int32
public void Dispose()
+
+
+
+
+
+
+
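A minimal sketch of round-tripping text through the tokenizer and saving state, assuming an existing LLamaContext named context; the file name is illustrative:

int[] tokens = context.Tokenize("Hello, world!", addBos: true);
string text = context.DeTokenize(tokens);
context.SaveState("context-state.bin");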
+ Namespace: LLama
+The embedder for LLama, which supports getting embeddings from text.
+public sealed class LLamaEmbedder : System.IDisposable
+
+Inheritance Object → LLamaEmbedder
+Implements IDisposable
Dimension of embedding vectors
+public int EmbeddingSize { get; }
+
+public LLamaEmbedder(IModelParams params)
+
+params IModelParams
public LLamaEmbedder(LLamaWeights weights, IModelParams params)
+
+weights LLamaWeights
params IModelParams
'threads' and 'encoding' parameters are no longer used
+Get the embeddings of the text.
+public Single[] GetEmbeddings(string text, int threads, bool addBos, string encoding)
+
+text String
threads Int32
+unused
addBos Boolean
+Add bos to the text.
encoding String
+unused
Get the embeddings of the text.
+public Single[] GetEmbeddings(string text)
+
+text String
Get the embeddings of the text.
+public Single[] GetEmbeddings(string text, bool addBos)
+
+text String
addBos Boolean
+Add bos to the text.
public void Dispose()
+
+
+
+
+
+
+
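A minimal sketch, assuming modelPath points at a model file:

using var embedder = new LLamaEmbedder(new ModelParams(modelPath));
float[] embeddings = embedder.GetEmbeddings("Hello, world!");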
+ Namespace: LLama
+The quantizer to quantize the model.
+public static class LLamaQuantizer
+
+Inheritance Object → LLamaQuantizer
+Quantize the model.
+public static bool Quantize(string srcFileName, string dstFilename, LLamaFtype ftype, int nthread, bool allowRequantize, bool quantizeOutputTensor)
+
+srcFileName String
+The model file to be quantized.
dstFilename String
+The path to save the quantized model.
ftype LLamaFtype
+The type of quantization.
nthread Int32
+Number of threads to use during quantization. By default it's the number of physical cores.
allowRequantize Boolean
quantizeOutputTensor Boolean
Boolean
+Whether the quantization is successful.
Quantize the model.
+public static bool Quantize(string srcFileName, string dstFilename, string ftype, int nthread, bool allowRequantize, bool quantizeOutputTensor)
+
+srcFileName String
+The model file to be quantized.
dstFilename String
+The path to save the quantized model.
ftype String
+The type of quantization.
nthread Int32
+Number of threads to use during quantization. By default it's the number of physical cores.
allowRequantize Boolean
quantizeOutputTensor Boolean
Boolean
+Whether the quantization is successful.
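A minimal sketch; the file names are illustrative and LLAMA_FTYPE_MOSTLY_Q4_0 is one of the targets listed under LLamaFtype below:

bool ok = LLamaQuantizer.Quantize("model-f16.bin", "model-q4_0.bin", LLamaFtype.LLAMA_FTYPE_MOSTLY_Q4_0, 0, false, false);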
Namespace: LLama
+A class that contains all the transforms provided internally by LLama.
+public class LLamaTransforms
+
+Inheritance Object → LLamaTransforms
+public LLamaTransforms()
+
+
+
+
+
+
+
+ Namespace: LLama
+A set of model weights, loaded into memory.
+public sealed class LLamaWeights : System.IDisposable
+
+Inheritance Object → LLamaWeights
+Implements IDisposable
The native handle, which is used in the native APIs
+public SafeLlamaModelHandle NativeHandle { get; }
+
+Remarks:
+Be careful how you use this!
+Encoding to use to convert text into bytes for the model
+public Encoding Encoding { get; }
+
+Total number of tokens in vocabulary of this model
+public int VocabCount { get; }
+
+Total number of tokens in the context
+public int ContextSize { get; }
+
+Dimension of embedding vectors
+public int EmbeddingSize { get; }
+
+Load weights into memory
+public static LLamaWeights LoadFromFile(IModelParams params)
+
+params IModelParams
public void Dispose()
+
+Create a llama_context using this model
+public LLamaContext CreateContext(IModelParams params)
+
+params IModelParams
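A minimal sketch of loading weights and creating a context from them, assuming modelPath points at a model file:

var parameters = new ModelParams(modelPath);
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);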
Namespace: LLama.Native
+A C# representation of the llama.cpp llama_context_params struct
public struct LLamaContextParams
+
+Inheritance Object → ValueType → LLamaContextParams
+RNG seed, -1 for random
+public int seed;
+
+text context
+public int n_ctx;
+
+prompt processing batch size
+public int n_batch;
+
+number of layers to store in VRAM
+public int n_gpu_layers;
+
+the GPU that is used for scratch and small tensors
+public int main_gpu;
+
+how to split layers across multiple GPUs
+public IntPtr tensor_split;
+
+ref: https://github.com/ggerganov/llama.cpp/pull/2054 + RoPE base frequency
+public float rope_freq_base;
+
+ref: https://github.com/ggerganov/llama.cpp/pull/2054 + RoPE frequency scaling factor
+public float rope_freq_scale;
+
+called with a progress value between 0 and 1, pass NULL to disable
+public IntPtr progress_callback;
+
+context pointer passed to the progress callback
+public IntPtr progress_callback_user_data;
+
+if true, reduce VRAM usage at the cost of performance
+public bool low_vram { get; set; }
+
+if true, use experimental mul_mat_q kernels
+public bool mul_mat_q { get; set; }
+
+use fp16 for KV cache
+public bool f16_kv { get; set; }
+
+the llama_eval() call computes all logits, not just the last one
+public bool logits_all { get; set; }
+
+only load the vocabulary, no weights
+public bool vocab_only { get; set; }
+
+use mmap if possible
+public bool use_mmap { get; set; }
+
+force system to keep model in RAM
+public bool use_mlock { get; set; }
+
+embedding mode only
+public bool embedding { get; set; }
+
+Namespace: LLama.Native
+Supported model file types
+public enum LLamaFtype
+
+Inheritance Object → ValueType → Enum → LLamaFtype
+Implements IComparable, IFormattable, IConvertible
| Name | Value | Description |
|---|---|---|
| LLAMA_FTYPE_ALL_F32 | 0 | All f32 |
| LLAMA_FTYPE_MOSTLY_F16 | 1 | Mostly f16 |
| LLAMA_FTYPE_MOSTLY_Q8_0 | 7 | Mostly 8 bit |
| LLAMA_FTYPE_MOSTLY_Q4_0 | 2 | Mostly 4 bit |
| LLAMA_FTYPE_MOSTLY_Q4_1 | 3 | Mostly 4 bit |
| LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 | 4 | Mostly 4 bit, tok_embeddings.weight and output.weight are f16 |
| LLAMA_FTYPE_MOSTLY_Q5_0 | 8 | Mostly 5 bit |
| LLAMA_FTYPE_MOSTLY_Q5_1 | 9 | Mostly 5 bit |
| LLAMA_FTYPE_MOSTLY_Q2_K | 10 | K-Quant 2 bit |
| LLAMA_FTYPE_MOSTLY_Q3_K_S | 11 | K-Quant 3 bit (Small) |
| LLAMA_FTYPE_MOSTLY_Q3_K_M | 12 | K-Quant 3 bit (Medium) |
| LLAMA_FTYPE_MOSTLY_Q3_K_L | 13 | K-Quant 3 bit (Large) |
| LLAMA_FTYPE_MOSTLY_Q4_K_S | 14 | K-Quant 4 bit (Small) |
| LLAMA_FTYPE_MOSTLY_Q4_K_M | 15 | K-Quant 4 bit (Medium) |
| LLAMA_FTYPE_MOSTLY_Q5_K_S | 16 | K-Quant 5 bit (Small) |
| LLAMA_FTYPE_MOSTLY_Q5_K_M | 17 | K-Quant 5 bit (Medium) |
| LLAMA_FTYPE_MOSTLY_Q6_K | 18 | K-Quant 6 bit |
| LLAMA_FTYPE_GUESSED | 1024 | File type was not specified |
Namespace: LLama.Native
+An element of a grammar
+public struct LLamaGrammarElement
+
+Inheritance Object → ValueType → LLamaGrammarElement
+Implements IEquatable<LLamaGrammarElement>
The type of this element
+public LLamaGrammarElementType Type;
+
+Unicode code point or rule ID
+public uint Value;
+
+Construct a new LLamaGrammarElement
+LLamaGrammarElement(LLamaGrammarElementType type, uint value)
+
+value UInt32
bool Equals(LLamaGrammarElement other)
+
+other LLamaGrammarElement
bool Equals(object obj)
+
+obj Object
int GetHashCode()
+
+bool IsCharElement()
+
+Namespace: LLama.Native
+grammar element type
+public enum LLamaGrammarElementType
+
+Inheritance Object → ValueType → Enum → LLamaGrammarElementType
+Implements IComparable, IFormattable, IConvertible
| Name | Value | Description |
|---|---|---|
| END | 0 | end of rule definition |
| ALT | 1 | start of alternate definition for rule |
| RULE_REF | 2 | non-terminal element: reference to rule |
| CHAR | 3 | terminal element: character (code point) |
| CHAR_NOT | 4 | inverse char(s) ([^a], [^a-b] [^abc]) |
| CHAR_RNG_UPPER | 5 | modifies a preceding CHAR or CHAR_ALT to be an inclusive range ([a-z]) |
| CHAR_ALT | 6 | modifies a preceding CHAR or CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA]) |
Namespace: LLama.Native
+Quantizer parameters used in the native API
+public struct LLamaModelQuantizeParams
+
+Inheritance Object → ValueType → LLamaModelQuantizeParams
+number of threads to use for quantizing, if <=0 will use std::thread::hardware_concurrency()
+public int nthread;
+
+quantize to this llama_ftype
+public LLamaFtype ftype;
+
+allow quantizing non-f32/f16 tensors
+public bool allow_requantize { get; set; }
+
+quantize output.weight
+public bool quantize_output_tensor { get; set; }
+
+Namespace: LLama.Native
+public struct LLamaTokenData
+
+Inheritance Object → ValueType → LLamaTokenData
+token id
+public int id;
+
+log-odds of the token
+public float logit;
+
+probability of the token
+public float p;
+
+LLamaTokenData(int id, float logit, float p)
+
+id Int32
logit Single
p Single
Namespace: LLama.Native
+Contains an array of LLamaTokenData, potentially sorted.
+public struct LLamaTokenDataArray
+
+Inheritance Object → ValueType → LLamaTokenDataArray
+The LLamaTokenData
+public Memory<LLamaTokenData> data;
+
+Indicates if data is sorted by logits in descending order. If this is false the token data is in no particular order.
public bool sorted;
+
+Create a new LLamaTokenDataArray
+LLamaTokenDataArray(Memory<LLamaTokenData> tokens, bool isSorted)
+
+tokens Memory<LLamaTokenData>
isSorted Boolean
Create a new LLamaTokenDataArray, copying the data from the given logits
+LLamaTokenDataArray Create(ReadOnlySpan<float> logits)
+
+logits ReadOnlySpan<Single>
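A minimal sketch; the logits values are illustrative:

float[] logits = { 0.1f, 2.5f, -1.0f };
LLamaTokenDataArray candidates = LLamaTokenDataArray.Create(logits);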
Namespace: LLama.Native
+Contains a pointer to an array of LLamaTokenData which is pinned in memory.
+public struct LLamaTokenDataArrayNative
+
+Inheritance Object → ValueType → LLamaTokenDataArrayNative
+A pointer to an array of LlamaTokenData
+public IntPtr data;
+
+Remarks:
+Memory must be pinned in place for all the time this LLamaTokenDataArrayNative is in use
+Number of LLamaTokenData in the array
+public ulong size;
+
+Indicates if the items in the array are sorted
+public bool sorted { get; set; }
+
+Create a new LLamaTokenDataArrayNative around the data in the LLamaTokenDataArray
+MemoryHandle Create(LLamaTokenDataArray array, LLamaTokenDataArrayNative& native)
+
+array LLamaTokenDataArray
+Data source
native LLamaTokenDataArrayNative&
+Created native array
MemoryHandle
+A memory handle, pinning the data in place until disposed
Namespace: LLama.Native
+Direct translation of the llama.cpp API
+public class NativeApi
+
+Inheritance Object → NativeApi
+public NativeApi()
+
+Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
+public static int llama_sample_token_mirostat(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float tau, float eta, int m, Single& mu)
+
+candidates LLamaTokenDataArrayNative&
+A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single
+The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single
+The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
m Int32
+The number of tokens considered in the estimation of s_hat. This is an arbitrary value that is used to calculate s_hat, which in turn helps to calculate the value of k. In the paper, they use m = 100, but you can experiment with different values to see how it affects the performance of the algorithm.
mu Single&
+Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
+public static int llama_sample_token_mirostat_v2(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float tau, float eta, Single& mu)
+
+candidates LLamaTokenDataArrayNative&
+A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single
+The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single
+The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
mu Single&
+Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Selects the token with the highest probability.
+public static int llama_sample_token_greedy(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
Randomly selects a token from the candidates based on their probabilities.
+public static int llama_sample_token(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
Token Id -> String. Uses the vocabulary in the provided context
+public static IntPtr llama_token_to_str(SafeLLamaContextHandle ctx, int token)
+
+token Int32
IntPtr
+Pointer to a string.
Get the "Beginning of sentence" token
+public static int llama_token_bos(SafeLLamaContextHandle ctx)
+
+Get the "End of sentence" token
+public static int llama_token_eos(SafeLLamaContextHandle ctx)
+
+Get the "new line" token
+public static int llama_token_nl(SafeLLamaContextHandle ctx)
+
+Print out timing information for this context
+public static void llama_print_timings(SafeLLamaContextHandle ctx)
+
+Reset all collected timing information for this context
+public static void llama_reset_timings(SafeLLamaContextHandle ctx)
+
+Print system information
+public static IntPtr llama_print_system_info()
+
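A minimal sketch of reading the returned pointer as a managed string; Marshal.PtrToStringAnsi is standard .NET interop:

using System.Runtime.InteropServices;

string? systemInfo = Marshal.PtrToStringAnsi(NativeApi.llama_print_system_info());
Console.WriteLine(systemInfo);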
+Get the number of tokens in the model vocabulary
+public static int llama_model_n_vocab(SafeLlamaModelHandle model)
+
+model SafeLlamaModelHandle
Get the size of the context window for the model
+public static int llama_model_n_ctx(SafeLlamaModelHandle model)
+
+model SafeLlamaModelHandle
Get the dimension of embedding vectors from this model
+public static int llama_model_n_embd(SafeLlamaModelHandle model)
+
+model SafeLlamaModelHandle
Convert a single token into text
+public static int llama_token_to_piece_with_model(SafeLlamaModelHandle model, int llamaToken, Byte* buffer, int length)
+
+model SafeLlamaModelHandle
llamaToken Int32
buffer Byte*
+buffer to write string into
length Int32
+size of the buffer
Int32
+The length written, or, if the buffer is too small, a negative value that indicates the required length
Convert text into tokens
+public static int llama_tokenize_with_model(SafeLlamaModelHandle model, Byte* text, Int32* tokens, int n_max_tokens, bool add_bos)
+
+model SafeLlamaModelHandle
text Byte*
tokens Int32*
n_max_tokens Int32
add_bos Boolean
Int32
+Returns the number of tokens on success, no more than n_max_tokens.
+ Returns a negative number on failure - the number of tokens that would have been returned
Register a callback to receive llama log messages
+public static void llama_log_set(LLamaLogCallback logCallback)
+
+logCallback LLamaLogCallback
Create a new grammar from the given set of grammar rules
+public static IntPtr llama_grammar_init(LLamaGrammarElement** rules, ulong n_rules, ulong start_rule_index)
+
+rules LLamaGrammarElement**
n_rules UInt64
start_rule_index UInt64
Free all memory from the given SafeLLamaGrammarHandle
+public static void llama_grammar_free(IntPtr grammar)
+
+grammar IntPtr
Apply constraints from grammar
+public static void llama_sample_grammar(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, SafeLLamaGrammarHandle grammar)
+
+candidates LLamaTokenDataArrayNative&
grammar SafeLLamaGrammarHandle
Accepts the sampled token into the grammar
+public static void llama_grammar_accept_token(SafeLLamaContextHandle ctx, SafeLLamaGrammarHandle grammar, int token)
+
+grammar SafeLLamaGrammarHandle
token Int32
Returns 0 on success
+public static int llama_model_quantize(string fname_inp, string fname_out, LLamaModelQuantizeParams* param)
+
+fname_inp String
fname_out String
param LLamaModelQuantizeParams*
Int32
+Returns 0 on success
Remarks:
+not great API - very likely to change
+Apply classifier-free guidance to the logits as described in academic paper "Stay on topic with Classifier-Free Guidance" https://arxiv.org/abs/2306.17806
+public static void llama_sample_classifier_free_guidance(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative candidates, SafeLLamaContextHandle guidanceCtx, float scale)
+
+candidates LLamaTokenDataArrayNative
+A vector of llama_token_data containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
guidanceCtx SafeLLamaContextHandle
+A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
scale Single
+Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
+public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, Int32* last_tokens, ulong last_tokens_size, float penalty)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
last_tokens Int32*
last_tokens_size UInt64
penalty Single
Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
+public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, Int32* last_tokens, ulong last_tokens_size, float alpha_frequency, float alpha_presence)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
last_tokens Int32*
last_tokens_size UInt64
alpha_frequency Single
alpha_presence Single
Apply classifier-free guidance to the logits as described in academic paper "Stay on topic with Classifier-Free Guidance" https://arxiv.org/abs/2306.17806
+public static void llama_sample_classifier_free_guidance(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, SafeLLamaContextHandle guidance_ctx, float scale)
+
+candidates LLamaTokenDataArrayNative&
+A vector of llama_token_data containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
guidance_ctx SafeLLamaContextHandle
+A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
scale Single
+Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
Sorts candidate tokens by their logits in descending order and calculates probabilities based on logits.
+public static void llama_sample_softmax(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
+public static void llama_sample_top_k(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, int k, ulong min_keep)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
k Int32
min_keep UInt64
Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
+public static void llama_sample_top_p(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float p, ulong min_keep)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
+public static void llama_sample_tail_free(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float z, ulong min_keep)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
z Single
min_keep UInt64
Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
+public static void llama_sample_typical(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float p, ulong min_keep)
+
+candidates LLamaTokenDataArrayNative&
+Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Modify logits by temperature
+public static void llama_sample_temperature(SafeLLamaContextHandle ctx, LLamaTokenDataArrayNative& candidates, float temp)
+
+candidates LLamaTokenDataArrayNative&
temp Single
A method that does nothing. This is a native method; calling it will force the llama native dependencies to be loaded.
+public static bool llama_empty_call()
+
+Create a LLamaContextParams with default values
+public static LLamaContextParams llama_context_default_params()
+
+Create a LLamaModelQuantizeParams with default values
+public static LLamaModelQuantizeParams llama_model_quantize_default_params()
+
+Check if memory mapping is supported
+public static bool llama_mmap_supported()
+
+Check if memory locking is supported
+public static bool llama_mlock_supported()
+
+Export a static computation graph for context of 511 and batch size of 1 + NOTE: since this functionality is mostly for debugging and demonstration purposes, we hardcode these + parameters here to keep things simple + IMPORTANT: do not use for anything else other than debugging and testing!
+public static int llama_eval_export(SafeLLamaContextHandle ctx, string fname)
+
+fname String
Various functions for loading a ggml llama model. + Allocate (almost) all memory needed for the model. + Return NULL on failure
+public static IntPtr llama_load_model_from_file(string path_model, LLamaContextParams params)
+
+path_model String
params LLamaContextParams
Create a new llama_context with the given model. + Return value should always be wrapped in SafeLLamaContextHandle!
+public static IntPtr llama_new_context_with_model(SafeLlamaModelHandle model, LLamaContextParams params)
+
+model SafeLlamaModelHandle
params LLamaContextParams
not great API - very likely to change. + Initialize the llama + ggml backend + Call once at the start of the program
+public static void llama_backend_init(bool numa)
+
+numa Boolean
Frees all allocated memory in the given llama_context
+public static void llama_free(IntPtr ctx)
+
+ctx IntPtr
Frees all allocated memory associated with a model
+public static void llama_free_model(IntPtr model)
+
+model IntPtr
Apply a LoRA adapter to a loaded model + path_base_model is the path to a higher quality model to use as a base for + the layers modified by the adapter. Can be NULL to use the current loaded model. + The model needs to be reloaded before applying a new adapter, otherwise the adapter + will be applied on top of the previous one
+public static int llama_model_apply_lora_from_file(SafeLlamaModelHandle model_ptr, string path_lora, string path_base_model, int n_threads)
+
+model_ptr SafeLlamaModelHandle
path_lora String
path_base_model String
n_threads Int32
Int32
+Returns 0 on success
Returns the number of tokens in the KV cache
+public static int llama_get_kv_cache_token_count(SafeLLamaContextHandle ctx)
+
+Sets the current rng seed.
+public static void llama_set_rng_seed(SafeLLamaContextHandle ctx, int seed)
+
+seed Int32
Returns the maximum size in bytes of the state (rng, logits, embedding + and kv_cache) - will often be smaller after compacting tokens
+public static ulong llama_get_state_size(SafeLLamaContextHandle ctx)
+
+Copies the state to the specified destination address. + Destination needs to have allocated enough memory.
+public static ulong llama_copy_state_data(SafeLLamaContextHandle ctx, Byte* dest)
+
+dest Byte*
UInt64
+the number of bytes copied
Copies the state to the specified destination address. + Destination needs to have allocated enough memory (see llama_get_state_size)
+public static ulong llama_copy_state_data(SafeLLamaContextHandle ctx, Byte[] dest)
+
+dest Byte[]
UInt64
+the number of bytes copied
Set the state reading from the specified address
+public static ulong llama_set_state_data(SafeLLamaContextHandle ctx, Byte* src)
+
+src Byte*
UInt64
+the number of bytes read
Set the state reading from the specified address
+public static ulong llama_set_state_data(SafeLLamaContextHandle ctx, Byte[] src)
+
+src Byte[]
UInt64
+the number of bytes read
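+As a sketch, the byte-array overloads round-trip the whole context state (ctx is assumed to be a valid SafeLLamaContextHandle):
+
+// Snapshot the context state into a managed buffer, then restore it later.
+ulong size = NativeApi.llama_get_state_size(ctx);
+byte[] state = new byte[size];
+ulong written = NativeApi.llama_copy_state_data(ctx, state); // bytes copied out
+// ... the context may be mutated here ...
+ulong read = NativeApi.llama_set_state_data(ctx, state);     // bytes read back in
+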
Load session file
+public static bool llama_load_session_file(SafeLLamaContextHandle ctx, string path_session, Int32[] tokens_out, ulong n_token_capacity, UInt64* n_token_count_out)
+
+path_session String
tokens_out Int32[]
n_token_capacity UInt64
n_token_count_out UInt64*
Save session file
+public static bool llama_save_session_file(SafeLLamaContextHandle ctx, string path_session, Int32[] tokens, ulong n_token_count)
+
+path_session String
tokens Int32[]
n_token_count UInt64
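+A hedged sketch of saving and reloading a session (path hypothetical; assumes using System.Text;; the load overload takes a raw count pointer, hence the unsafe block):
+
+// Persist the evaluated tokens, then reload them into a fresh context.
+int[] tokens = new int[32];
+int n = NativeApi.llama_tokenize(ctx, "Hello", Encoding.UTF8, tokens, tokens.Length, true);
+NativeApi.llama_save_session_file(ctx, "chat.session", tokens, (ulong)n);
+int[] buffer = new int[2048];   // capacity for the reloaded tokens
+ulong count = 0;
+unsafe
+{
+    bool ok = NativeApi.llama_load_session_file(ctx, "chat.session", buffer, (ulong)buffer.Length, &count);
+}
+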
Run the llama inference to obtain the logits and probabilities for the next token. tokens + n_tokens is the provided batch of new tokens to process. n_past is the number of tokens to use from previous eval calls.
+public static int llama_eval(SafeLLamaContextHandle ctx, Int32[] tokens, int n_tokens, int n_past, int n_threads)
+
+tokens Int32[]
n_tokens Int32
n_past Int32
n_threads Int32
Int32
+Returns 0 on success
Run the llama inference to obtain the logits and probabilities for the next token. tokens + n_tokens is the provided batch of new tokens to process. n_past is the number of tokens to use from previous eval calls.
+public static int llama_eval_with_pointer(SafeLLamaContextHandle ctx, Int32* tokens, int n_tokens, int n_past, int n_threads)
+
+tokens Int32*
n_tokens Int32
n_past Int32
n_threads Int32
Int32
+Returns 0 on success
Convert the provided text into tokens.
+public static int llama_tokenize(SafeLLamaContextHandle ctx, string text, Encoding encoding, Int32[] tokens, int n_max_tokens, bool add_bos)
+
+text String
encoding Encoding
tokens Int32[]
n_max_tokens Int32
add_bos Boolean
Int32
+Returns the number of tokens on success, no more than n_max_tokens.
+ Returns a negative number on failure - the number of tokens that would have been returned
Convert the provided text into tokens.
+public static int llama_tokenize_native(SafeLLamaContextHandle ctx, Byte* text, Int32* tokens, int n_max_tokens, bool add_bos)
+
+text Byte*
tokens Int32*
n_max_tokens Int32
add_bos Boolean
Int32
+Returns the number of tokens on success, no more than n_max_tokens.
+ Returns a negative number on failure - the number of tokens that would have been returned
Get the number of tokens in the model vocabulary for this context
+public static int llama_n_vocab(SafeLLamaContextHandle ctx)
+
+Get the size of the context window for the model for this context
+public static int llama_n_ctx(SafeLLamaContextHandle ctx)
+
+Get the dimension of embedding vectors from the model for this context
+public static int llama_n_embd(SafeLLamaContextHandle ctx)
+
+Token logits obtained from the last call to llama_eval()
+ The logits for the last token are stored in the last row
+ Can be mutated in order to change the probabilities of the next token.
+ Rows: n_tokens
+ Cols: n_vocab
public static Single* llama_get_logits(SafeLLamaContextHandle ctx)
+
+Get the embeddings for the input. Shape: [n_embd] (1-dimensional)
+public static Single* llama_get_embeddings(SafeLLamaContextHandle ctx)
+
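+Tying tokenization, evaluation and logits together, a minimal sketch (ctx assumed valid; the thread count is arbitrary):
+
+// Tokenize a prompt, evaluate it, then inspect the logits of the last token.
+var tokens = new int[64];
+int n = NativeApi.llama_tokenize(ctx, "What is C#?", Encoding.UTF8, tokens, tokens.Length, true);
+if (n < 0) throw new InvalidOperationException("tokenization failed");
+if (NativeApi.llama_eval(ctx, tokens, n, 0, 4) != 0)
+    throw new InvalidOperationException("eval failed");
+int nVocab = NativeApi.llama_n_vocab(ctx);
+unsafe
+{
+    float* logits = NativeApi.llama_get_logits(ctx); // last row = logits for the last token
+    Console.WriteLine($"vocab: {nVocab}, first logit: {logits[0]}");
+}
+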
+Namespace: LLama.Native
+A safe wrapper around a llama_context
+public sealed class SafeLLamaContextHandle : SafeLLamaHandleBase, System.IDisposable
+
+Inheritance Object → CriticalFinalizerObject → SafeHandle → SafeLLamaHandleBase → SafeLLamaContextHandle
+Implements IDisposable
Total number of tokens in vocabulary of this model
+public int VocabCount { get; }
+
+Total number of tokens in the context
+public int ContextSize { get; }
+
+Dimension of embedding vectors
+public int EmbeddingSize { get; }
+
+Get the model which this context is using
+public SafeLlamaModelHandle ModelHandle { get; }
+
+public bool IsInvalid { get; }
+
+public bool IsClosed { get; }
+
+Create a new SafeLLamaContextHandle
+public SafeLLamaContextHandle(IntPtr handle, SafeLlamaModelHandle model)
+
+handle IntPtr
+pointer to an allocated llama_context
model SafeLlamaModelHandle
+the model which this context was created from
protected bool ReleaseHandle()
+
+Create a new llama_context for the given model
+public static SafeLLamaContextHandle Create(SafeLlamaModelHandle model, LLamaContextParams lparams)
+
+model SafeLlamaModelHandle
lparams LLamaContextParams
Create a new llama context with a clone of the current llama context state
+public SafeLLamaContextHandle Clone(LLamaContextParams lparams)
+
+lparams LLamaContextParams
Convert the given text into tokens
+public Int32[] Tokenize(string text, bool add_bos, Encoding encoding)
+
+text String
+The text to tokenize
add_bos Boolean
+Whether the "BOS" token should be added
encoding Encoding
+Encoding to use for the text
Token logits obtained from the last call to llama_eval()
+ The logits for the last token are stored in the last row
+ Can be mutated in order to change the probabilities of the next token.
+ Rows: n_tokens
+ Cols: n_vocab
public Span<float> GetLogits()
+
+Convert a token into a string
+public string TokenToString(int token, Encoding encoding)
+
+token Int32
+Token to decode into a string
encoding Encoding
Append a single llama token to a string builder
+public void TokenToString(int token, Encoding encoding, StringBuilder dest)
+
+token Int32
+Token to decode
encoding Encoding
dest StringBuilder
+string builder to append the result to
Convert a single llama token into bytes
+public int TokenToSpan(int token, Span<byte> dest)
+
+token Int32
+Token to decode
dest Span<Byte>
+A span to attempt to write into. If this is too small nothing will be written
Int32
+The size of this token. Nothing will be written if this is larger than dest
Run the llama inference to obtain the logits and probabilities for the next token.
+public bool Eval(ReadOnlySpan<int> tokens, int n_past, int n_threads)
+
+tokens ReadOnlySpan<Int32>
+The provided batch of new tokens to process
n_past Int32
+the number of tokens to use from previous eval calls
n_threads Int32
Boolean
+Returns true on success
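+A sketch of the same loop through the safe handle (model and lparams as in the Create call above; assumes using System.Text;):
+
+// Create a context, tokenize, evaluate, and read the logits without raw pointers.
+using var ctx = SafeLLamaContextHandle.Create(model, lparams);
+int[] tokens = ctx.Tokenize("Hello, world", true, Encoding.UTF8);
+if (!ctx.Eval(tokens, 0, 4))
+    throw new InvalidOperationException("eval failed");
+Span<float> logits = ctx.GetLogits();
+Console.WriteLine($"logit count: {logits.Length}");
+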
Get the size of the state, when saved as bytes
+public ulong GetStateSize()
+
+Get the raw state of this context, encoded as bytes. Data is written into the dest pointer.
public ulong GetState(Byte* dest, ulong size)
+
+dest Byte*
+Destination to write to
size UInt64
+Number of bytes available to write to in dest (check required size with GetStateSize())
UInt64
+The number of bytes written to dest
ArgumentOutOfRangeException
+Thrown if dest is too small
Get the raw state of this context, encoded as bytes. Data is written into the dest pointer.
public ulong GetState(IntPtr dest, ulong size)
+
+dest IntPtr
+Destination to write to
size UInt64
+Number of bytes available to write to in dest (check required size with GetStateSize())
UInt64
+The number of bytes written to dest
ArgumentOutOfRangeException
+Thrown if dest is too small
Set the raw state of this context
+public ulong SetState(Byte* src)
+
+src Byte*
+The pointer to read the state from
UInt64
+Number of bytes read from the src pointer
Set the raw state of this context
+public ulong SetState(IntPtr src)
+
+src IntPtr
+The pointer to read the state from
UInt64
+Number of bytes read from the src pointer
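+A sketch of the IntPtr-based round-trip using unmanaged memory (assumes using System.Runtime.InteropServices;):
+
+// Save and restore the raw context state through an unmanaged buffer.
+ulong size = ctx.GetStateSize();
+IntPtr buffer = Marshal.AllocHGlobal((int)size);
+try
+{
+    ulong written = ctx.GetState(buffer, size); // throws if the buffer is too small
+    // ... later ...
+    ulong read = ctx.SetState(buffer);
+}
+finally
+{
+    Marshal.FreeHGlobal(buffer);
+}
+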
Namespace: LLama.Native
+A safe reference to a llama_grammar
public class SafeLLamaGrammarHandle : SafeLLamaHandleBase, System.IDisposable
+
+Inheritance Object → CriticalFinalizerObject → SafeHandle → SafeLLamaHandleBase → SafeLLamaGrammarHandle
+Implements IDisposable
public bool IsInvalid { get; }
+
+public bool IsClosed { get; }
+
+protected bool ReleaseHandle()
+
+Create a new llama_grammar
+public static SafeLLamaGrammarHandle Create(IReadOnlyList<GrammarRule> rules, ulong start_rule_index)
+
+rules IReadOnlyList<GrammarRule>
+A list of lists of elements; each inner list makes up one grammar rule
start_rule_index UInt64
+The index (in the outer list) of the start rule
Create a new llama_grammar
+public static SafeLLamaGrammarHandle Create(LLamaGrammarElement** rules, ulong nrules, ulong start_rule_index)
+
+rules LLamaGrammarElement**
+rules list, each rule is a list of rule elements (terminated by a LLamaGrammarElementType.END element)
nrules UInt64
+total number of rules
start_rule_index UInt64
+index of the start rule of the grammar
Namespace: LLama.Native
+Base class for all llama handles to native resources
+public abstract class SafeLLamaHandleBase : System.Runtime.InteropServices.SafeHandle, System.IDisposable
+
+Inheritance Object → CriticalFinalizerObject → SafeHandle → SafeLLamaHandleBase
+Implements IDisposable
public bool IsInvalid { get; }
+
+public bool IsClosed { get; }
+
+public string ToString()
+
+Namespace: LLama.Native
+A reference to a set of llama model weights
+public sealed class SafeLlamaModelHandle : SafeLLamaHandleBase, System.IDisposable
+
+Inheritance Object → CriticalFinalizerObject → SafeHandle → SafeLLamaHandleBase → SafeLlamaModelHandle
+Implements IDisposable
Total number of tokens in vocabulary of this model
+public int VocabCount { get; }
+
+Total number of tokens in the context
+public int ContextSize { get; }
+
+Dimension of embedding vectors
+public int EmbeddingSize { get; }
+
+public bool IsInvalid { get; }
+
+public bool IsClosed { get; }
+
+protected bool ReleaseHandle()
+
+Load a model from the given file path into memory
+public static SafeLlamaModelHandle LoadFromFile(string modelPath, LLamaContextParams lparams)
+
+modelPath String
lparams LLamaContextParams
Apply a LoRA adapter to a loaded model
+public void ApplyLoraFromFile(string lora, string modelBase, int threads)
+
+lora String
modelBase String
+A path to a higher quality model to use as a base for the layers modified by the adapter. Can be NULL to use the currently loaded model.
threads Int32
Convert a single llama token into bytes
+public int TokenToSpan(int llama_token, Span<byte> dest)
+
+llama_token Int32
+Token to decode
dest Span<Byte>
+A span to attempt to write into. If this is too small nothing will be written
Int32
+The size of this token. Nothing will be written if this is larger than dest
Convert a single llama token into a string
+public string TokenToString(int llama_token, Encoding encoding)
+
+llama_token Int32
encoding Encoding
+Encoding to use to decode the bytes into a string
Append a single llama token to a string builder
+public void TokenToString(int llama_token, Encoding encoding, StringBuilder dest)
+
+llama_token Int32
+Token to decode
encoding Encoding
dest StringBuilder
+string builder to append the result to
Convert a string of text into tokens
+public Int32[] Tokenize(string text, bool add_bos, Encoding encoding)
+
+text String
add_bos Boolean
encoding Encoding
Create a new context for this model
+public SafeLLamaContextHandle CreateContext(LLamaContextParams params)
+
+params LLamaContextParams
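+A short sketch of the model-handle workflow (the model path is hypothetical):
+
+// Load the weights once, then create a context from them.
+var lparams = NativeApi.llama_context_default_params();
+using var model = SafeLlamaModelHandle.LoadFromFile("model.bin", lparams);
+using var ctx = model.CreateContext(lparams);
+Console.WriteLine($"vocab: {model.VocabCount}, ctx size: {model.ContextSize}, embd: {model.EmbeddingSize}");
+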
Namespace: LLama.Native
+Direct translation of the llama.cpp sampling API
+public class SamplingApi
+
+Inheritance Object → SamplingApi
+public SamplingApi()
+
+Apply grammar rules to candidate tokens
+public static void llama_sample_grammar(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, SafeLLamaGrammarHandle grammar)
+
+candidates LLamaTokenDataArray
grammar SafeLLamaGrammarHandle
The last_tokens_size parameter is no longer needed.
+Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
+public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float penalty)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
last_tokens_size UInt64
penalty Single
Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
+public static void llama_sample_repetition_penalty(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float penalty)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
penalty Single
The last_tokens_size parameter is no longer needed.
+Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
+public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, ulong last_tokens_size, float alpha_frequency, float alpha_presence)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
last_tokens_size UInt64
alpha_frequency Single
alpha_presence Single
Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
+public static void llama_sample_frequency_and_presence_penalties(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, Memory<int> last_tokens, float alpha_frequency, float alpha_presence)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
last_tokens Memory<Int32>
alpha_frequency Single
alpha_presence Single
Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
+public static void llama_sample_softmax(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
+public static void llama_sample_top_k(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, int k, ulong min_keep)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
k Int32
min_keep UInt64
Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
+public static void llama_sample_top_p(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
+public static void llama_sample_tail_free(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float z, ulong min_keep)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
z Single
min_keep UInt64
Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
+public static void llama_sample_typical(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float p, ulong min_keep)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
p Single
min_keep UInt64
Sample with temperature. As temperature increases, the prediction becomes more diverse but also more vulnerable to hallucinations -- generating tokens that are sensible but not factual.
+public static void llama_sample_temperature(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float temp)
+
+candidates LLamaTokenDataArray
temp Single
Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
+public static int llama_sample_token_mirostat(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, int m, Single& mu)
+
+candidates LLamaTokenDataArray
+A vector of LLamaTokenData containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single
+The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single
+The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
m Int32
+The number of tokens considered in the estimation of s_hat. This is an arbitrary value that is used to calculate s_hat, which in turn helps to calculate the value of k. In the paper, they use m = 100, but you can experiment with different values to see how it affects the performance of the algorithm.
mu Single&
+Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
+public static int llama_sample_token_mirostat_v2(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates, float tau, float eta, Single& mu)
+
+candidates LLamaTokenDataArray
+A vector of LLamaTokenData containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
tau Single
+The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
eta Single
+The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
mu Single&
+Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
Selects the token with the highest probability.
+public static int llama_sample_token_greedy(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
Randomly selects a token from the candidates based on their probabilities.
+public static int llama_sample_token(SafeLLamaContextHandle ctx, LLamaTokenDataArray candidates)
+
+candidates LLamaTokenDataArray
+Pointer to LLamaTokenDataArray
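+These functions compose into a sampling pipeline. The sketch below is hedged: ctx is an evaluated SafeLLamaContextHandle, tokens are the tokens fed so far, and the LLamaTokenData/LLamaTokenDataArray constructors shown are assumptions, since they are not documented on this page:
+
+// Build candidates from the current logits, filter them, then pick a token.
+Span<float> logits = ctx.GetLogits();
+var data = new LLamaTokenData[logits.Length];
+for (int i = 0; i < logits.Length; i++)
+    data[i] = new LLamaTokenData(i, logits[i], 0f);    // assumed (id, logit, p) layout
+var candidates = new LLamaTokenDataArray(data, false); // assumed (data, sorted) constructor
+Memory<int> lastTokens = tokens;                       // tokens fed in so far
+SamplingApi.llama_sample_repetition_penalty(ctx, candidates, lastTokens, 1.1f);
+SamplingApi.llama_sample_top_k(ctx, candidates, 40, 1);
+SamplingApi.llama_sample_top_p(ctx, candidates, 0.95f, 1);
+SamplingApi.llama_sample_temperature(ctx, candidates, 0.8f);
+int next = SamplingApi.llama_sample_token(ctx, candidates);
+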
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletion : System.IEquatable`1[[LLama.OldVersion.ChatCompletion, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletion
+Implements IEquatable<ChatCompletion>
public string Id { get; set; }
+
+public string Object { get; set; }
+
+public int Created { get; set; }
+
+public string Model { get; set; }
+
+public ChatCompletionChoice[] Choices { get; set; }
+
+public CompletionUsage Usage { get; set; }
+
+public ChatCompletion(string Id, string Object, int Created, string Model, ChatCompletionChoice[] Choices, CompletionUsage Usage)
+
+Id String
Object String
Created Int32
Model String
Choices ChatCompletionChoice[]
Usage CompletionUsage
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletion other)
+
+other ChatCompletion
public ChatCompletion <Clone>$()
+
+public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, ChatCompletionChoice[]& Choices, CompletionUsage& Usage)
+
+Id String&
Object String&
Created Int32&
Model String&
Choices ChatCompletionChoice[]&
Usage CompletionUsage&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletionChoice : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletionChoice
+Implements IEquatable<ChatCompletionChoice>
public int Index { get; set; }
+
+public ChatCompletionMessage Message { get; set; }
+
+public string FinishReason { get; set; }
+
+public ChatCompletionChoice(int Index, ChatCompletionMessage Message, string FinishReason)
+
+Index Int32
Message ChatCompletionMessage
FinishReason String
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletionChoice other)
+
+other ChatCompletionChoice
public ChatCompletionChoice <Clone>$()
+
+public void Deconstruct(Int32& Index, ChatCompletionMessage& Message, String& FinishReason)
+
+Index Int32&
Message ChatCompletionMessage&
FinishReason String&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletionChunk : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunk, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletionChunk
+Implements IEquatable<ChatCompletionChunk>
public string Id { get; set; }
+
+public string Model { get; set; }
+
+public string Object { get; set; }
+
+public int Created { get; set; }
+
+public ChatCompletionChunkChoice[] Choices { get; set; }
+
+public ChatCompletionChunk(string Id, string Model, string Object, int Created, ChatCompletionChunkChoice[] Choices)
+
+Id String
Model String
Object String
Created Int32
Choices ChatCompletionChunkChoice[]
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletionChunk other)
+
+other ChatCompletionChunk
public ChatCompletionChunk <Clone>$()
+
+public void Deconstruct(String& Id, String& Model, String& Object, Int32& Created, ChatCompletionChunkChoice[]& Choices)
+
+Id String&
Model String&
Object String&
Created Int32&
Choices ChatCompletionChunkChoice[]&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletionChunkChoice : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunkChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletionChunkChoice
+Implements IEquatable<ChatCompletionChunkChoice>
public int Index { get; set; }
+
+public ChatCompletionChunkDelta Delta { get; set; }
+
+public string FinishReason { get; set; }
+
+public ChatCompletionChunkChoice(int Index, ChatCompletionChunkDelta Delta, string FinishReason)
+
+Index Int32
Delta ChatCompletionChunkDelta
FinishReason String
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletionChunkChoice other)
+
+other ChatCompletionChunkChoice
public ChatCompletionChunkChoice <Clone>$()
+
+public void Deconstruct(Int32& Index, ChatCompletionChunkDelta& Delta, String& FinishReason)
+
+Index Int32&
Delta ChatCompletionChunkDelta&
FinishReason String&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletionChunkDelta : System.IEquatable`1[[LLama.OldVersion.ChatCompletionChunkDelta, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletionChunkDelta
+Implements IEquatable<ChatCompletionChunkDelta>
public string Role { get; set; }
+
+public string Content { get; set; }
+
+public ChatCompletionChunkDelta(string Role, string Content)
+
+Role String
Content String
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletionChunkDelta other)
+
+other ChatCompletionChunkDelta
public ChatCompletionChunkDelta <Clone>$()
+
+public void Deconstruct(String& Role, String& Content)
+
+Role String&
Content String&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatCompletionMessage : System.IEquatable`1[[LLama.OldVersion.ChatCompletionMessage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatCompletionMessage
+Implements IEquatable<ChatCompletionMessage>
public ChatRole Role { get; set; }
+
+public string Content { get; set; }
+
+public string Name { get; set; }
+
+public ChatCompletionMessage(ChatRole Role, string Content, string Name)
+
+Role ChatRole
Content String
Name String
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatCompletionMessage other)
+
+other ChatCompletionMessage
public ChatCompletionMessage <Clone>$()
+
+public void Deconstruct(ChatRole& Role, String& Content, String& Name)
+
+Role ChatRole&
Content String&
Name String&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatMessageRecord : System.IEquatable`1[[LLama.OldVersion.ChatMessageRecord, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → ChatMessageRecord
+Implements IEquatable<ChatMessageRecord>
public ChatCompletionMessage Message { get; set; }
+
+public DateTime Time { get; set; }
+
+public ChatMessageRecord(ChatCompletionMessage Message, DateTime Time)
+
+Message ChatCompletionMessage
Time DateTime
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(ChatMessageRecord other)
+
+other ChatMessageRecord
public ChatMessageRecord <Clone>$()
+
+public void Deconstruct(ChatCompletionMessage& Message, DateTime& Time)
+
+Message ChatCompletionMessage&
Time DateTime&
Namespace: LLama.OldVersion
+public enum ChatRole
+
+Inheritance Object → ValueType → Enum → ChatRole
+Implements IComparable, IFormattable, IConvertible
| Name | Value | Description |
|---|---|---|
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class ChatSession<T>
+
+T
Inheritance Object → ChatSession<T>
+public ChatSession(T model)
+
+model T
public IEnumerable<string> Chat(string text, string prompt, string encoding)
+
+text String
prompt String
encoding String
public ChatSession<T> WithPrompt(string prompt, string encoding)
+
+prompt String
encoding String
public ChatSession<T> WithPromptFile(string promptFilename, string encoding)
+
+promptFilename String
encoding String
Set the keywords to split the return value of chat AI.
+public ChatSession<T> WithAntiprompt(String[] antiprompt)
+
+antiprompt String[]
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class Completion : System.IEquatable`1[[LLama.OldVersion.Completion, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → Completion
+Implements IEquatable<Completion>
public string Id { get; set; }
+
+public string Object { get; set; }
+
+public int Created { get; set; }
+
+public string Model { get; set; }
+
+public CompletionChoice[] Choices { get; set; }
+
+public CompletionUsage Usage { get; set; }
+
+public Completion(string Id, string Object, int Created, string Model, CompletionChoice[] Choices, CompletionUsage Usage)
+
+Id String
Object String
Created Int32
Model String
Choices CompletionChoice[]
Usage CompletionUsage
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(Completion other)
+
+other Completion
public Completion <Clone>$()
+
+public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, CompletionChoice[]& Choices, CompletionUsage& Usage)
+
+Id String&
Object String&
Created Int32&
Model String&
Choices CompletionChoice[]&
Usage CompletionUsage&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class CompletionChoice : System.IEquatable`1[[LLama.OldVersion.CompletionChoice, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → CompletionChoice
+Implements IEquatable<CompletionChoice>
public string Text { get; set; }
+
+public int Index { get; set; }
+
+public CompletionLogprobs Logprobs { get; set; }
+
+public string FinishReason { get; set; }
+
+public CompletionChoice(string Text, int Index, CompletionLogprobs Logprobs, string FinishReason)
+
+Text String
Index Int32
Logprobs CompletionLogprobs
FinishReason String
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(CompletionChoice other)
+
+other CompletionChoice
public CompletionChoice <Clone>$()
+
+public void Deconstruct(String& Text, Int32& Index, CompletionLogprobs& Logprobs, String& FinishReason)
+
+Text String&
Index Int32&
Logprobs CompletionLogprobs&
FinishReason String&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class CompletionChunk : System.IEquatable`1[[LLama.OldVersion.CompletionChunk, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → CompletionChunk
+Implements IEquatable<CompletionChunk>
public string Id { get; set; }
+
+public string Object { get; set; }
+
+public int Created { get; set; }
+
+public string Model { get; set; }
+
+public CompletionChoice[] Choices { get; set; }
+
+public CompletionChunk(string Id, string Object, int Created, string Model, CompletionChoice[] Choices)
+
+Id String
Object String
Created Int32
Model String
Choices CompletionChoice[]
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(CompletionChunk other)
+
+other CompletionChunk
public CompletionChunk <Clone>$()
+
+public void Deconstruct(String& Id, String& Object, Int32& Created, String& Model, CompletionChoice[]& Choices)
+
+Id String&
Object String&
Created Int32&
Model String&
Choices CompletionChoice[]&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class CompletionLogprobs : System.IEquatable`1[[LLama.OldVersion.CompletionLogprobs, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → CompletionLogprobs
+Implements IEquatable<CompletionLogprobs>
public Int32[] TextOffset { get; set; }
+
+public Single[] TokenLogProbs { get; set; }
+
+public String[] Tokens { get; set; }
+
+public Dictionary`2[] TopLogprobs { get; set; }
+
+public CompletionLogprobs(Int32[] TextOffset, Single[] TokenLogProbs, String[] Tokens, Dictionary`2[] TopLogprobs)
+
+TextOffset Int32[]
TokenLogProbs Single[]
Tokens String[]
TopLogprobs Dictionary`2[]
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(CompletionLogprobs other)
+
+other CompletionLogprobs
public CompletionLogprobs <Clone>$()
+
+public void Deconstruct(Int32[]& TextOffset, Single[]& TokenLogProbs, String[]& Tokens, Dictionary`2[]& TopLogprobs)
+
+TextOffset Int32[]&
TokenLogProbs Single[]&
Tokens String[]&
TopLogprobs Dictionary`2[]&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class CompletionUsage : System.IEquatable`1[[LLama.OldVersion.CompletionUsage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → CompletionUsage
+Implements IEquatable<CompletionUsage>
public int PromptTokens { get; set; }
+
+public int CompletionTokens { get; set; }
+
+public int TotalTokens { get; set; }
+
+public CompletionUsage(int PromptTokens, int CompletionTokens, int TotalTokens)
+
+PromptTokens Int32
CompletionTokens Int32
TotalTokens Int32
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(CompletionUsage other)
+
+other CompletionUsage
public CompletionUsage <Clone>$()
+
+public void Deconstruct(Int32& PromptTokens, Int32& CompletionTokens, Int32& TotalTokens)
+
+PromptTokens Int32&
CompletionTokens Int32&
TotalTokens Int32&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class Embedding : System.IEquatable`1[[LLama.OldVersion.Embedding, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → Embedding
+Implements IEquatable<Embedding>
public string Object { get; set; }
+
+public string Model { get; set; }
+
+public EmbeddingData[] Data { get; set; }
+
+public EmbeddingUsage Usage { get; set; }
+
+public Embedding(string Object, string Model, EmbeddingData[] Data, EmbeddingUsage Usage)
+
+Object String
Model String
Data EmbeddingData[]
Usage EmbeddingUsage
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(Embedding other)
+
+other Embedding
public Embedding <Clone>$()
+
+public void Deconstruct(String& Object, String& Model, EmbeddingData[]& Data, EmbeddingUsage& Usage)
+
+Object String&
Model String&
Data EmbeddingData[]&
Usage EmbeddingUsage&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class EmbeddingData : System.IEquatable`1[[LLama.OldVersion.EmbeddingData, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → EmbeddingData
+Implements IEquatable<EmbeddingData>
public int Index { get; set; }
+
+public string Object { get; set; }
+
+public Single[] Embedding { get; set; }
+
+public EmbeddingData(int Index, string Object, Single[] Embedding)
+
+Index Int32
Object String
Embedding Single[]
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(EmbeddingData other)
+
+other EmbeddingData
public EmbeddingData <Clone>$()
+
+public void Deconstruct(Int32& Index, String& Object, Single[]& Embedding)
+
+Index Int32&
Object String&
Embedding Single[]&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class EmbeddingUsage : System.IEquatable`1[[LLama.OldVersion.EmbeddingUsage, LLamaSharp, Version=0.5.0.0, Culture=neutral, PublicKeyToken=null]]
+
+Inheritance Object → EmbeddingUsage
+Implements IEquatable<EmbeddingUsage>
public int PromptTokens { get; set; }
+
+public int TotalTokens { get; set; }
+
+public EmbeddingUsage(int PromptTokens, int TotalTokens)
+
+PromptTokens Int32
TotalTokens Int32
public string ToString()
+
+protected bool PrintMembers(StringBuilder builder)
+
+builder StringBuilder
public int GetHashCode()
+
+public bool Equals(object obj)
+
+obj Object
public bool Equals(EmbeddingUsage other)
+
+other EmbeddingUsage
public EmbeddingUsage <Clone>$()
+
+public void Deconstruct(Int32& PromptTokens, Int32& TotalTokens)
+
+PromptTokens Int32&
TotalTokens Int32&
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public interface IChatModel
+
+public abstract string Name { get; }
+
+IEnumerable<string> Chat(string text, string prompt, string encoding)
+
+text String
prompt String
encoding String
Init a prompt for chat and automatically produce the next prompt during the chat.
+void InitChatPrompt(string prompt, string encoding)
+
+prompt String
encoding String
void InitChatAntiprompt(String[] antiprompt)
+
+antiprompt String[]
Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class LLamaEmbedder : System.IDisposable
+
+Inheritance Object → LLamaEmbedder
+Implements IDisposable
public LLamaEmbedder(LLamaParams params)
+
+params LLamaParams
public Single[] GetEmbeddings(string text, int n_thread, bool add_bos, string encoding)
+
+text String
n_thread Int32
add_bos Boolean
encoding String
public void Dispose()
+
+
+
+
+
+
+
+ Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public class LLamaModel : IChatModel, System.IDisposable
+
+Inheritance Object → LLamaModel
+Implements IChatModel, IDisposable
public string Name { get; set; }
+
+public bool Verbose { get; set; }
+
+public SafeLLamaContextHandle NativeHandle { get; }
+
+Please refer to LLamaParams to find the meaning of each argument. Be sure to set n_gpu_layers; otherwise it will load 20 layers to the GPU by default.
public LLamaModel(string model_path, string model_name, bool verbose, int seed, int n_threads, int n_predict, int n_ctx, int n_batch, int n_keep, int n_gpu_layers, Dictionary<int, float> logit_bias, int top_k, float top_p, float tfs_z, float typical_p, float temp, float repeat_penalty, int repeat_last_n, float frequency_penalty, float presence_penalty, int mirostat, float mirostat_tau, float mirostat_eta, string prompt, string path_session, string input_prefix, string input_suffix, List<string> antiprompt, string lora_adapter, string lora_base, bool memory_f16, bool random_prompt, bool use_color, bool interactive, bool embedding, bool interactive_first, bool prompt_cache_all, bool instruct, bool penalize_nl, bool perplexity, bool use_mmap, bool use_mlock, bool mem_test, bool verbose_prompt, string encoding)
+
+model_path String
+The model file path.
model_name String
+The model name.
verbose Boolean
+Whether to print details when running the model.
seed Int32
n_threads Int32
n_predict Int32
n_ctx Int32
n_batch Int32
n_keep Int32
n_gpu_layers Int32
logit_bias Dictionary<Int32, Single>
top_k Int32
top_p Single
tfs_z Single
typical_p Single
temp Single
repeat_penalty Single
repeat_last_n Int32
frequency_penalty Single
presence_penalty Single
mirostat Int32
mirostat_tau Single
mirostat_eta Single
prompt String
path_session String
input_prefix String
input_suffix String
antiprompt List<String>
lora_adapter String
lora_base String
memory_f16 Boolean
random_prompt Boolean
use_color Boolean
interactive Boolean
embedding Boolean
interactive_first Boolean
prompt_cache_all Boolean
instruct Boolean
penalize_nl Boolean
perplexity Boolean
use_mmap Boolean
use_mlock Boolean
mem_test Boolean
verbose_prompt Boolean
encoding String
Please refer to LLamaParams to find the meaning of each argument. Be sure to set n_gpu_layers; otherwise it will load 20 layers to the GPU by default.
public LLamaModel(LLamaParams params, string name, bool verbose, string encoding)
+
+params LLamaParams
+The LLamaModel params
name String
+Model name
verbose Boolean
+Whether to output the detailed info.
encoding String
Apply a prompt to the model.
+public LLamaModel WithPrompt(string prompt, string encoding)
+
+prompt String
encoding String
Apply the prompt file to the model.
+public LLamaModel WithPromptFile(string promptFileName)
+
+promptFileName String
public void InitChatPrompt(string prompt, string encoding)
+
+prompt String
encoding String
public void InitChatAntiprompt(String[] antiprompt)
+
+antiprompt String[]
Chat with the LLaMa model under interactive mode.
+public IEnumerable<string> Chat(string text, string prompt, string encoding)
+
+text String
prompt String
encoding String
Save the state to specified path.
+public void SaveState(string filename)
+
+filename String
Load the state from specified path.
+public void LoadState(string filename, bool clearPreviousEmbed)
+
+filename String
clearPreviousEmbed Boolean
+Whether to clear previous footprints of this model.
Tokenize a string.
+public List<int> Tokenize(string text, string encoding)
+
+text String
+The utf-8 encoded string to tokenize.
encoding String
List<Int32>
+A list of tokens.
RuntimeError
+If the tokenization failed.
Detokenize a list of tokens.
+public string DeTokenize(IEnumerable<int> tokens)
+
+tokens IEnumerable<Int32>
+The list of tokens to detokenize.
String
+The detokenized string.
Call the model to run inference.
+public IEnumerable<string> Call(string text, string encoding)
+
+text String
encoding String
public void Dispose()
+
+
+
+
+
+
+
+ Namespace: LLama.OldVersion
+The entire LLama.OldVersion namespace will be removed
+public struct LLamaParams
+
+Inheritance Object → ValueType → LLamaParams
+public int seed;
+
+public int n_threads;
+
+public int n_predict;
+
+public int n_ctx;
+
+public int n_batch;
+
+public int n_keep;
+
+public int n_gpu_layers;
+
+public Dictionary<int, float> logit_bias;
+
+public int top_k;
+
+public float top_p;
+
+public float tfs_z;
+
+public float typical_p;
+
+public float temp;
+
+public float repeat_penalty;
+
+public int repeat_last_n;
+
+public float frequency_penalty;
+
+public float presence_penalty;
+
+public int mirostat;
+
+public float mirostat_tau;
+
+public float mirostat_eta;
+
+public string model;
+
+public string prompt;
+
+public string path_session;
+
+public string input_prefix;
+
+public string input_suffix;
+
+public List<string> antiprompt;
+
+public string lora_adapter;
+
+public string lora_base;
+
+public bool memory_f16;
+
+public bool random_prompt;
+
+public bool use_color;
+
+public bool interactive;
+
+public bool prompt_cache_all;
+
+public bool embedding;
+
+public bool interactive_first;
+
+public bool instruct;
+
+public bool penalize_nl;
+
+public bool perplexity;
+
+public bool use_mmap;
+
+public bool use_mlock;
+
+public bool mem_test;
+
+public bool verbose_prompt;
+
+LLamaParams(int seed, int n_threads, int n_predict, int n_ctx, int n_batch, int n_keep, int n_gpu_layers, Dictionary<int, float> logit_bias, int top_k, float top_p, float tfs_z, float typical_p, float temp, float repeat_penalty, int repeat_last_n, float frequency_penalty, float presence_penalty, int mirostat, float mirostat_tau, float mirostat_eta, string model, string prompt, string path_session, string input_prefix, string input_suffix, List<string> antiprompt, string lora_adapter, string lora_base, bool memory_f16, bool random_prompt, bool use_color, bool interactive, bool prompt_cache_all, bool embedding, bool interactive_first, bool instruct, bool penalize_nl, bool perplexity, bool use_mmap, bool use_mlock, bool mem_test, bool verbose_prompt)
+
+seed Int32
n_threads Int32
n_predict Int32
n_ctx Int32
n_batch Int32
n_keep Int32
n_gpu_layers Int32
logit_bias Dictionary<Int32, Single>
top_k Int32
top_p Single
tfs_z Single
typical_p Single
temp Single
repeat_penalty Single
repeat_last_n Int32
frequency_penalty Single
presence_penalty Single
mirostat Int32
mirostat_tau Single
mirostat_eta Single
model String
prompt String
path_session String
input_prefix String
input_suffix String
antiprompt List<String>
lora_adapter String
lora_base String
memory_f16 Boolean
random_prompt Boolean
use_color Boolean
interactive Boolean
prompt_cache_all Boolean
embedding Boolean
interactive_first Boolean
instruct Boolean
penalize_nl Boolean
perplexity Boolean
use_mmap Boolean
use_mlock Boolean
mem_test Boolean
verbose_prompt Boolean
Namespace: LLama
+The base class for stateful LLama executors.
+public abstract class StatefulExecutorBase : LLama.Abstractions.ILLamaExecutor
+
+Inheritance Object → StatefulExecutorBase
+Implements ILLamaExecutor
The context used by the executor.
+public LLamaContext Context { get; }
+
+This API is currently not verified.
+public StatefulExecutorBase WithSessionFile(string filename)
+
+filename String
This API has not been verified yet.
+public void SaveSessionFile(string filename)
+
+filename String
After running out of context, take some tokens from the original prompt and recompute the logits in batches.
+protected void HandleRunOutOfContext(int tokensToKeep)
+
+tokensToKeep Int32
Try to reuse the matching prefix from the session file.
+protected void TryReuseMathingPrefix()
+
+Decide whether to continue the loop.
+protected abstract bool GetLoopCondition(InferStateArgs args)
+
+args InferStateArgs
Preprocess the inputs before the inference.
+protected abstract void PreprocessInputs(string text, InferStateArgs args)
+
+text String
args InferStateArgs
Do some post processing after the inference.
+protected abstract bool PostProcess(IInferenceParams inferenceParams, InferStateArgs args, IEnumerable`1& extraOutputs)
+
+inferenceParams IInferenceParams
args InferStateArgs
extraOutputs IEnumerable`1&
The core inference logic.
+protected abstract void InferInternal(IInferenceParams inferenceParams, InferStateArgs args)
+
+inferenceParams IInferenceParams
args InferStateArgs
Save the current state to a file.
+public abstract void SaveState(string filename)
+
+filename String
Get the current state data.
+public abstract ExecutorBaseState GetStateData()
+
+Load the state from data.
+public abstract void LoadState(ExecutorBaseState data)
+
+data ExecutorBaseState
Load the state from a file.
+public abstract void LoadState(string filename)
+
+filename String
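+Concrete executors inherit these members, so a session can be checkpointed roughly like this (executor is any StatefulExecutorBase instance; the file name is hypothetical):
+
+// Persist the executor's state between runs.
+executor.SaveState("executor.state");
+// ... later, with the same model loaded ...
+executor.LoadState("executor.state");
+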
Execute the inference.
+public IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
Execute the inference asynchronously.
+public IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
Namespace: LLama
+This executor runs the inference as a one-time job. Previous inputs won't impact the response to the current input.
+public class StatelessExecutor : LLama.Abstractions.ILLamaExecutor
+
+Inheritance Object → StatelessExecutor
+Implements ILLamaExecutor
The context used by the executor when running the inference.
+public LLamaContext Context { get; private set; }
+
+Create a new stateless executor which will use the given model
+public StatelessExecutor(LLamaWeights weights, IModelParams params)
+
+weights LLamaWeights
params IModelParams
Use the constructor which automatically creates contexts using the LLamaWeights
+Create a new stateless executor which will use the model used to create the given context
+public StatelessExecutor(LLamaContext context)
+
+context LLamaContext
public IEnumerable<string> Infer(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
public IAsyncEnumerable<string> InferAsync(string text, IInferenceParams inferenceParams, CancellationToken cancellationToken)
+
+text String
inferenceParams IInferenceParams
cancellationToken CancellationToken
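+A minimal usage sketch (weights, modelParams and inferParams are assumed instances of the types in the signatures above):
+
+// Every call is independent; no state carries over between prompts.
+var executor = new StatelessExecutor(weights, modelParams);
+foreach (var text in executor.Infer("What is C#?", inferParams, CancellationToken.None))
+    Console.Write(text);
+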
Namespace: LLama
+Assorted llama utilities
+public static class Utils
+
+
+Use LLamaWeights.LoadFromFile and LLamaWeights.CreateContext instead
+public static SafeLLamaContextHandle InitLLamaContextFromModelParams(IModelParams params)
+
+params IModelParams
Use SafeLLamaContextHandle Tokenize method instead
+public static IEnumerable<int> Tokenize(SafeLLamaContextHandle ctx, string text, bool add_bos, Encoding encoding)
+
+text String
add_bos Boolean
encoding Encoding
Use SafeLLamaContextHandle GetLogits method instead
+public static Span<float> GetLogits(SafeLLamaContextHandle ctx, int length)
+
+length Int32
Use SafeLLamaContextHandle Eval method instead
+public static int Eval(SafeLLamaContextHandle ctx, Int32[] tokens, int startIndex, int n_tokens, int n_past, int n_threads)
+
+tokens Int32[]
startIndex Int32
n_tokens Int32
n_past Int32
n_threads Int32
Use SafeLLamaContextHandle TokenToString method instead
+public static string TokenToString(int token, SafeLLamaContextHandle ctx, Encoding encoding)
+
+token Int32
encoding Encoding
No longer used internally by LlamaSharp
+public static string PtrToString(IntPtr ptr, Encoding encoding)
+
+ptr IntPtr
encoding Encoding