18 Commits (15a98b36d85810cc98be2d621d83c84b69499448)

Author SHA1 Message Date
  Martin Evans 15a98b36d8 Updated everything to work with llama.cpp ce32060198 1 year ago
  Martin Evans 9b995510d6 Removed all setters in `IModelParams` and `IContextParams`, allowing implementations to be immutable. 1 year ago
  Steven Kennedy cf2e9e35f8 Updating the GpuLayerCount to mirror the Python port of llama.cpp 1 year ago
  Martin Evans 48ef3bb080 Added runtime checks that UseMemoryLock and UseMemorymap are actually supported. 1 year ago
  Martin Evans 3fc0f34cbe Fixed some issues which were causing metadata overrides not to work (most importantly, converting the key was failing, so all keys were null bytes and thus ignored). 1 year ago
  Martin Evans 2f0deeadcd Implemented serialization for `MetadataOverride`. Deserialization is broken (the converter is never called). 1 year ago
  Martin Evans b868b056f7 Added metadata overrides to `IModelParams` 1 year ago
  Martin Evans b22d8b7495 - Added `GroupDisposable` to dispose a collection of items all together 1 year ago
  Martin Evans 6a4cd506bd Added a safe `TensorSplitsCollection` to the params which prevents incorrectly setting the `tensor_splits` collection 2 years ago
  Martin Evans 15db194c17 Added multi GPU support 2 years ago
  Martin Evans 4e9b1f8cdc - Split extension methods into separate files 2 years ago
  Martin Evans 669ae47ef7 - Split parameters into two interfaces 2 years ago
  Martin Evans bca55eace0 Initial changes to match the llama.cpp changes 2 years ago
  Martin Evans 2056078aef Initial changes required for GGUF support 2 years ago
  Martin Evans 91bcefc852 comment on IModelParamsExtensions 2 years ago
  Martin Evans 9cdc72aa67 Fixed `ToLlamaContextParams` using the wrong parameter for `use_mmap` 2 years ago
  sa_ddam213 2d1269cae9 Access to IModelParamsExtensions 2 years ago
  Martin Evans f2499371ea Pulled conversion of an `IModelParams` into a `LLamaContextParams` out into an extension method which can be used in other places. 2 years ago