# Quick start

## Installation

To achieve high performance, LLamaSharp interacts with a native library compiled from C++, called the `backend`. We provide backend packages for Windows, Linux and macOS with CPU, CUDA, Metal and OpenCL support. You **don't** need to deal with any C++ yourself; just install the backend packages.

If no published backend matches your device, please open an issue to let us know. If compiling C++ code is not difficult for you, you can also follow [this guide](./ContributingGuide.md) to compile a backend yourself and run LLamaSharp with it.
1. Install the [LLamaSharp](https://www.nuget.org/packages/LLamaSharp) package from NuGet:
   ```
   PM> Install-Package LLamaSharp
   ```
2. Install one or more of these backends, or use a self-compiled backend (the equivalent .NET CLI commands are shown after this list):
   - [`LLamaSharp.Backend.Cpu`](https://www.nuget.org/packages/LLamaSharp.Backend.Cpu): Pure CPU for Windows, Linux & macOS. Metal (GPU) support for macOS.
   - [`LLamaSharp.Backend.Cuda11`](https://www.nuget.org/packages/LLamaSharp.Backend.Cuda11): CUDA 11 for Windows & Linux.
   - [`LLamaSharp.Backend.Cuda12`](https://www.nuget.org/packages/LLamaSharp.Backend.Cuda12): CUDA 12 for Windows & Linux.
   - [`LLamaSharp.Backend.OpenCL`](https://www.nuget.org/packages/LLamaSharp.Backend.OpenCL): OpenCL for Windows & Linux.
3. (optional) For [Microsoft semantic-kernel](https://github.com/microsoft/semantic-kernel) integration, install the [LLamaSharp.semantic-kernel](https://www.nuget.org/packages/LLamaSharp.semantic-kernel) package.
4. (optional) To enable RAG support, install the [LLamaSharp.kernel-memory](https://www.nuget.org/packages/LLamaSharp.kernel-memory) package (this package currently only supports `net6.0` or higher), which is based on the [Microsoft kernel-memory](https://github.com/microsoft/kernel-memory) integration.
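
If you manage packages with the .NET CLI instead of the Package Manager console, the equivalent commands look like this (the CPU backend is used here as an example; substitute whichever backend you chose):

```
dotnet add package LLamaSharp
dotnet add package LLamaSharp.Backend.Cpu
```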

## Model preparation

There are two popular formats of LLM model files at the moment: the PyTorch format (.pth) and the Huggingface format (.bin). LLamaSharp uses files in `GGUF` format, which can be converted from these two formats. To get a `GGUF` file, there are two options:

1. Search for the model name + 'gguf' on [Huggingface](https://huggingface.co); you will find lots of model files that have already been converted to GGUF format. Please pay attention to their publishing time, because some older files may only work with older versions of LLamaSharp.
2. Convert a PyTorch or Huggingface format model to GGUF format yourself. Please follow the instructions in [this part of the llama.cpp readme](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#prepare-and-quantize) to convert them with the python scripts (a rough sketch of the commands is shown below).

Generally, we recommend downloading quantized models rather than fp16 ones, because quantization significantly reduces the required memory while only slightly impacting generation quality.
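
As a rough sketch only — the exact script and binary names below are assumptions that vary between llama.cpp versions, so check the linked readme for the current ones — the conversion and quantization steps look roughly like this:

```
# Convert a Huggingface model directory to an fp16 GGUF file.
# (Script name varies by llama.cpp version; see the readme linked above.)
python convert-hf-to-gguf.py path/to/model

# Quantize the fp16 file, e.g. to the Q4_K_M format.
./quantize path/to/model/ggml-model-f16.gguf path/to/model/ggml-model-q4_k_m.gguf q4_k_m
```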

## Example of LLaMA chat session

Here is a simple example of chatting with the bot based on an LLM in LLamaSharp. Please replace the model path with your own.

![llama_demo](./media/console_demo.gif)

```cs
using LLama.Common;
using LLama;

string modelPath = @"<Your Model Path>"; // Change it to your own model path.

var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024, // The longest length of chat kept as memory.
    GpuLayerCount = 5   // How many layers to offload to GPU. Please adjust it according to your GPU memory.
};
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Add chat histories as prompt to tell the AI how to act.
var chatHistory = new ChatHistory();
chatHistory.AddMessage(AuthorRole.System, "Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.");
chatHistory.AddMessage(AuthorRole.User, "Hello, Bob.");
chatHistory.AddMessage(AuthorRole.Assistant, "Hello. How may I help you today?");

ChatSession session = new(executor, chatHistory);

InferenceParams inferenceParams = new InferenceParams()
{
    MaxTokens = 256, // No more than 256 tokens should appear in the answer. Remove it if the antiprompt is enough for control.
    AntiPrompts = new List<string> { "User:" } // Stop generation once the antiprompt appears.
};

Console.ForegroundColor = ConsoleColor.Yellow;
Console.Write("The chat session has started.\nUser: ");
Console.ForegroundColor = ConsoleColor.Green;
string userInput = Console.ReadLine() ?? "";

while (userInput != "exit")
{
    // Generate the response as a stream of text fragments.
    await foreach (var text in session.ChatAsync(
        new ChatHistory.Message(AuthorRole.User, userInput),
        inferenceParams))
    {
        Console.ForegroundColor = ConsoleColor.White;
        Console.Write(text);
    }
    Console.ForegroundColor = ConsoleColor.Green;
    userInput = Console.ReadLine() ?? "";
}
```

## Example of chatting with LLaVA

This example shows how to chat with LLaVA and ask it to describe a picture.

![llava_demo](./media/llava_demo.gif)

```cs
using System.Text.RegularExpressions;
using LLama;
using LLama.Common;

string multiModalProj = @"<Your multi-modal proj file path>";
string modelPath = @"<Your LLaVA model file path>";
string modelImage = @"<Your image path>";
const int maxTokens = 1024; // The maximum number of tokens that may be generated.

var prompt = $"{{{modelImage}}}\nUSER:\nProvide a full description of the image.\nASSISTANT:\n";

var parameters = new ModelParams(modelPath)
{
    ContextSize = 4096,
    Seed = 1337,
};
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);

// LLaVA init: load the multi-modal projection weights alongside the language model.
using var clipModel = LLavaWeights.LoadFromFile(multiModalProj);
var ex = new InteractiveExecutor(context, clipModel);

Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine("The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to {0} and the context size is {1}.", maxTokens, parameters.ContextSize);
Console.WriteLine("To send an image, enter its filename in curly braces, like this {c:/image.jpg}.");

var inferenceParams = new InferenceParams() { Temperature = 0.1f, AntiPrompts = new List<string> { "\nUSER:" }, MaxTokens = maxTokens };

do
{
    // Check whether the prompt references images, i.e. contains paths wrapped in curly braces.
    var imageMatches = Regex.Matches(prompt, "{([^}]*)}");
    var imageCount = imageMatches.Count;
    var hasImages = imageCount > 0;

    if (hasImages)
    {
        var imagePathsWithCurlyBraces = imageMatches.Select(m => m.Value).ToList();
        var imagePaths = imageMatches.Select(m => m.Groups[1].Value).ToList();

        // Reading the bytes up front validates that every referenced image file is loadable.
        try
        {
            foreach (var path in imagePaths)
                File.ReadAllBytes(path);
        }
        catch (IOException exception)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.Write($"Could not load your {(imageCount == 1 ? "image" : "images")}:");
            Console.Write($"{exception.Message}");
            Console.ForegroundColor = ConsoleColor.Yellow;
            Console.WriteLine("Please try again.");
            break;
        }

        // Replace the first image path with the <image> tag and remove the rest;
        // the executor inserts the image data at the tag's position.
        int index = 0;
        foreach (var path in imagePathsWithCurlyBraces)
        {
            prompt = prompt.Replace(path, index++ == 0 ? "<image>" : "");
        }

        Console.WriteLine();

        // Hand the image paths over to the executor.
        ex.ImagePaths = imagePaths;
    }

    Console.ForegroundColor = ConsoleColor.White;
    await foreach (var text in ex.InferAsync(prompt, inferenceParams))
    {
        Console.Write(text);
    }
    Console.Write(" ");

    Console.ForegroundColor = ConsoleColor.Green;
    prompt = Console.ReadLine() ?? "";
    Console.WriteLine();

    // Let the user end the session with "/exit".
    if (prompt.Equals("/exit", StringComparison.OrdinalIgnoreCase))
        break;
}
while (true);
```

*For more examples, please refer to [LLamaSharp.Examples](./LLama.Examples).*