# ModelParams

Namespace: LLama.Common

The parameters for initializing a LLama model.

```csharp
public class ModelParams : LLama.Abstractions.IModelParams, System.IEquatable<LLama.Common.ModelParams>
```

Inheritance [Object](https://docs.microsoft.com/en-us/dotnet/api/system.object) → [ModelParams](./llama.common.modelparams.md)
Implements [IModelParams](./llama.abstractions.imodelparams.md), [IEquatable&lt;ModelParams&gt;](https://docs.microsoft.com/en-us/dotnet/api/system.iequatable-1)

## Properties

### **ContextSize**

Model context size (n_ctx)

```csharp
public int ContextSize { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
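For orientation, here is a minimal sketch (not taken from the library's own samples) of constructing `ModelParams`, following the advice of the obsolete constructor below to set optional values through an object initializer; the model path is a placeholder.

```csharp
using LLama.Common;

// Minimal sketch: only the model path is required; optional settings
// are supplied through the object initializer. The path is a placeholder.
var parameters = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    ContextSize = 2048, // n_ctx: size of the context window in tokens
    Seed = 1337         // fixed seed for reproducible generation
};
```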
### **MainGpu**

The GPU that is used for scratch and small tensors

```csharp
public int MainGpu { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)

### **LowVram**

If true, reduce VRAM usage at the cost of performance

```csharp
public bool LowVram { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **GpuLayerCount**

Number of layers to run in VRAM / GPU memory (n_gpu_layers)

```csharp
public int GpuLayerCount { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
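As a hedged example of how these GPU-related settings fit together (the values are illustrative and the path is a placeholder):

```csharp
using LLama.Common;

// Sketch: offload part of the model to a GPU. Values are illustrative.
var gpuParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf") // placeholder path
{
    GpuLayerCount = 20, // n_gpu_layers: how many layers to keep in VRAM (0 = CPU only)
    MainGpu = 0,        // GPU used for scratch buffers and small tensors
    LowVram = false     // set to true to trade speed for lower VRAM usage
};
```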
### **Seed**

Seed for the random number generator (seed)

```csharp
public int Seed { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)

### **UseFp16Memory**

Use f16 instead of f32 for memory kv (memory_f16)

```csharp
public bool UseFp16Memory { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **UseMemorymap**

Use mmap for faster loads (use_mmap)

```csharp
public bool UseMemorymap { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **UseMemoryLock**

Use mlock to keep model in memory (use_mlock)

```csharp
public bool UseMemoryLock { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
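A small sketch of how the memory-related flags above are typically combined (illustrative values, placeholder path):

```csharp
using LLama.Common;

// Sketch: memory-related flags (illustrative values, placeholder path).
var memParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    UseFp16Memory = true,  // memory_f16: store the KV cache in f16 to roughly halve its size
    UseMemorymap = true,   // use_mmap: map the model file rather than copying it into RAM
    UseMemoryLock = false  // use_mlock: pin the model in RAM (requires enough physical memory)
};
```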
### **Perplexity**

Compute perplexity over the prompt (perplexity)

```csharp
public bool Perplexity { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **ModelPath**

Model path (model)

```csharp
public string ModelPath { get; set; }
```

#### Property Value

[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)

### **ModelAlias**

Model alias

```csharp
public string ModelAlias { get; set; }
```

#### Property Value

[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)

### **LoraAdapter**

Lora adapter path (lora_adapter)

```csharp
public string LoraAdapter { get; set; }
```

#### Property Value

[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)

### **LoraBase**

Base model path for the lora adapter (lora_base)

```csharp
public string LoraBase { get; set; }
```

#### Property Value

[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
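A hedged sketch of configuring a LoRA adapter through these two properties; both file paths are placeholders:

```csharp
using LLama.Common;

// Sketch: load a base model and apply a LoRA adapter (both paths are placeholders).
var loraParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    LoraAdapter = "adapters/my-adapter.bin", // lora_adapter: path to the adapter weights
    LoraBase = "models/llama-2-7b.f16.bin"   // lora_base: base model used when applying the adapter
};
```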
### **Threads**

Number of threads (-1 = autodetect) (n_threads)

```csharp
public int Threads { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)

### **BatchSize**

Batch size for prompt processing (must be >= 32 to use BLAS) (n_batch)

```csharp
public int BatchSize { get; set; }
```

#### Property Value

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)

### **ConvertEosToNewLine**

Whether to convert eos to newline during the inference.

```csharp
public bool ConvertEosToNewLine { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **EmbeddingMode**

Whether to use embedding mode. (embedding) Note that if this is set to true, the LLamaModel will not produce text responses anymore.

```csharp
public bool EmbeddingMode { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
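As noted above, enabling embedding mode disables text generation; a minimal sketch of the flag (values illustrative, placeholder path):

```csharp
using LLama.Common;

// Sketch: parameters for an embedding workload (placeholder path).
var embedParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    EmbeddingMode = true, // embedding: the model produces embeddings instead of text responses
    Threads = 8           // n_threads: illustrative; -1 lets the library autodetect
};
```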
### **TensorSplits**

How split tensors should be distributed across GPUs

```csharp
public Single[] TensorSplits { get; set; }
```

#### Property Value

[Single[]](https://docs.microsoft.com/en-us/dotnet/api/system.single)

### **RopeFrequencyBase**

RoPE base frequency

```csharp
public float RopeFrequencyBase { get; set; }
```

#### Property Value

[Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)

### **RopeFrequencyScale**

RoPE frequency scaling factor

```csharp
public float RopeFrequencyScale { get; set; }
```

#### Property Value

[Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)
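A sketch of overriding the RoPE settings to stretch the context window; the numbers are illustrative only, and appropriate values depend on the model:

```csharp
using LLama.Common;

// Sketch: extend the usable context with RoPE scaling (illustrative values, placeholder path).
var ropeParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    ContextSize = 4096,        // ask for a larger context window...
    RopeFrequencyScale = 0.5f  // ...and scale RoPE frequencies to compensate (2x stretch)
};
```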
### **MulMatQ**

Use experimental mul_mat_q kernels

```csharp
public bool MulMatQ { get; set; }
```

#### Property Value

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)

### **Encoding**

The encoding to use to convert text for the model

```csharp
public Encoding Encoding { get; set; }
```

#### Property Value

[Encoding](https://docs.microsoft.com/en-us/dotnet/api/system.text.encoding)
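A minimal sketch of setting the text encoding; UTF-8 is a common choice for llama-family models (placeholder path):

```csharp
using System.Text;
using LLama.Common;

// Sketch: choose the encoding used to convert prompt text for the model (placeholder path).
var encodingParams = new ModelParams("models/llama-2-7b.Q4_K_M.gguf")
{
    Encoding = Encoding.UTF8 // UTF-8 is the usual choice for llama-family models
};
```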
## Constructors

### **ModelParams(String)**

```csharp
public ModelParams(string modelPath)
```

#### Parameters

`modelPath` [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
The model path.

### **ModelParams(String, Int32, Int32, Int32, Boolean, Boolean, Boolean, Boolean, String, String, Int32, Int32, Boolean, Boolean, Single, Single, Boolean, String)**

#### Caution

Use object initializer to set all optional parameters

---

```csharp
public ModelParams(string modelPath, int contextSize, int gpuLayerCount, int seed, bool useFp16Memory, bool useMemorymap, bool useMemoryLock, bool perplexity, string loraAdapter, string loraBase, int threads, int batchSize, bool convertEosToNewLine, bool embeddingMode, float ropeFrequencyBase, float ropeFrequencyScale, bool mulMatQ, string encoding)
```

#### Parameters

`modelPath` [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
The model path.

`contextSize` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
Model context size (n_ctx)

`gpuLayerCount` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
Number of layers to run in VRAM / GPU memory (n_gpu_layers)

`seed` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
Seed for the random number generator (seed)

`useFp16Memory` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to use f16 instead of f32 for memory kv (memory_f16)

`useMemorymap` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to use mmap for faster loads (use_mmap)

`useMemoryLock` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to use mlock to keep model in memory (use_mlock)

`perplexity` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to compute perplexity over the prompt (perplexity)

`loraAdapter` [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
Lora adapter path (lora_adapter)

`loraBase` [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
Base model path for the lora adapter (lora_base)

`threads` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
Number of threads (-1 = autodetect) (n_threads)

`batchSize` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
Batch size for prompt processing (must be >= 32 to use BLAS) (n_batch)

`convertEosToNewLine` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to convert eos to newline during the inference.

`embeddingMode` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to use embedding mode. (embedding) Note that if this is set to true, the LLamaModel will not produce text responses anymore.

`ropeFrequencyBase` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)
RoPE base frequency.

`ropeFrequencyScale` [Single](https://docs.microsoft.com/en-us/dotnet/api/system.single)
RoPE frequency scaling factor

`mulMatQ` [Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
Whether to use experimental mul_mat_q kernels

`encoding` [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
The encoding to use to convert text for the model

## Methods

### **ToString()**

```csharp
public string ToString()
```

#### Returns

[String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
### **PrintMembers(StringBuilder)**

```csharp
protected bool PrintMembers(StringBuilder builder)
```

#### Parameters

`builder` [StringBuilder](https://docs.microsoft.com/en-us/dotnet/api/system.text.stringbuilder)

#### Returns

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
### **GetHashCode()**

```csharp
public int GetHashCode()
```

#### Returns

[Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)
### **Equals(Object)**

```csharp
public bool Equals(object obj)
```

#### Parameters

`obj` [Object](https://docs.microsoft.com/en-us/dotnet/api/system.object)

#### Returns

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
### **Equals(ModelParams)**

```csharp
public bool Equals(ModelParams other)
```

#### Parameters

`other` [ModelParams](./llama.common.modelparams.md)

#### Returns

[Boolean](https://docs.microsoft.com/en-us/dotnet/api/system.boolean)
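The `PrintMembers` and `<Clone>$` members documented here suggest `ModelParams` is a C# record, so `Equals` compares member values rather than references; a small sketch under that assumption:

```csharp
using LLama.Common;

var a = new ModelParams("models/llama-2-7b.Q4_K_M.gguf") { ContextSize = 2048 }; // placeholder path
var b = new ModelParams("models/llama-2-7b.Q4_K_M.gguf") { ContextSize = 2048 };

// Value-based equality: true as long as all members match
// (reference-typed members such as TensorSplits must be the same reference or both null).
bool same = a.Equals(b);
```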
### **&lt;Clone&gt;$()**

```csharp
public ModelParams <Clone>$()
```

#### Returns

[ModelParams](./llama.common.modelparams.md)