21 Commits (master)

Author SHA1 Message Date
  SignalRT 53ae904875 Set GPULayerCount to execute the Test 1 year ago
  SignalRT 74bde89a61 Test to disable metal on test 1 year ago
  Martin Evans c325ac9127 April 2024 Binary Update (#662) 1 year ago
  Martin Evans 15a98b36d8 Updated everything to work with llama.cpp ce32060198 1 year ago
  Martin Evans 1472704e12 Added a test with examples of troublesome strings from 0.9.1 1 year ago
  Martin Evans 2ea2048b78 - Added a test for tokenizing just a new line (reproduce issue https://github.com/SciSharp/LLamaSharp/issues/430) 1 year ago
  Martin Evans 82727c4414 Removed collection expressions from test 1 year ago
  Martin Evans 42be9b136d Switched from using raw integers, to a `LLamaToken` struct 1 year ago
  Martin Evans 1f8c94e386 Added in the `special` parameter to the tokenizer (introduced in https://github.com/ggerganov/llama.cpp/pull/3538) 2 years ago
  Martin Evans efb0664df0 - Added new binaries 2 years ago
  Martin Evans 669ae47ef7 - Split parameters into two interfaces 2 years ago
  Martin Evans ce1fc51163 Added some more native methods 2 years ago
  Martin Evans bca55eace0 Initial changes to match the llama.cpp changes 2 years ago
  Martin Evans daf09eae64 Skipping tokenization of empty strings (saves allocating an empty array every time) 2 years ago
  Martin Evans bba801f4b7 Added a property to get the KV cache size from a context 2 years ago
  SignalRT fb007e5921 Changes to compile in VS Mac + change model to llama2 2 years ago
  Martin Evans 95dc12dd76 Switched to codellama-7b.gguf in tests (probably temporarily) 2 years ago
  Martin Evans 0c98ae1955 Passing ctx to `llama_token_nl(_ctx)` 2 years ago
  Martin Evans 2830e5755c - Applied a lot of minor R# code quality suggestions. Lots of unnecessary imports removed. 2 years ago
  Martin Evans a9e6f21ab8 - Creating and destroying contexts in the stateless executor, saving memory. It now uses zero memory when not inferring! 2 years ago
  Martin Evans 1b35be2e0c Added some additional basic tests 2 years ago
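
Several of the commits above touch the same test setup: forcing the GPU layer count to keep the tests CPU-only (disabling Metal), and exercising the tokenizer's `special` parameter, newline handling, and empty-string skip. The C# sketch below illustrates that setup against LLamaSharp's public API; it is a rough sketch only, the model path is a placeholder, and property and method names may differ slightly across the versions these commits span.

```csharp
using System;
using LLama;
using LLama.Common;

public static class TokenizerSmokeTest
{
    public static void Main()
    {
        // GpuLayerCount = 0 keeps every layer on the CPU, one way to disable
        // Metal/GPU offload when running tests (see the GPULayerCount commits above).
        // The model path is a placeholder for whatever GGUF file the tests load.
        var parameters = new ModelParams("models/llama-2-7b-chat.Q4_0.gguf")
        {
            ContextSize = 2048,
            GpuLayerCount = 0,
        };

        using var weights = LLamaWeights.LoadFromFile(parameters);
        using var context = weights.CreateContext(parameters);

        // Tokenize a lone newline and an empty string: the commits above add a test
        // for the newline case and skip array allocation entirely for empty strings.
        // Arguments (assumed order): text, add BOS token, treat special tokens as special.
        var newlineTokens = context.Tokenize("\n", false, false);
        var emptyTokens = context.Tokenize(string.Empty, false, false);

        Console.WriteLine($"newline -> {newlineTokens.Length} token(s), empty -> {emptyTokens.Length} token(s)");
    }
}
```

With `GpuLayerCount = 0` the run should behave the same on macOS whether or not Metal binaries are present, which is presumably what the "disable metal on test" commits were after.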