* Added a `Guidance` method to `LLamaTokenDataArray` which applies classifier free guidance
* Factored out a safer `llama_sample_apply_guidance` method based on spans
* Created a guided sampling demo using the batched executor
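  Classifier free guidance evaluates the same tokens twice, once against the real prompt and once against a negative prompt, then pushes the logits away from the negative distribution. A minimal sketch of the span-based mix, assuming llama.cpp's convention (`ApplyGuidance` here is illustrative, not the exact LLamaSharp signature; the native implementation also log-softmaxes both sets of logits first):

  ```csharp
  // result = guidance + scale * (original - guidance).
  // scale == 1 leaves the logits unchanged; larger values push the output
  // further away from whatever the negative prompt makes likely.
  static void ApplyGuidance(Span<float> logits, ReadOnlySpan<float> guidanceLogits, float scale)
  {
      if (logits.Length != guidanceLogits.Length)
          throw new ArgumentException("logit spans must be the same length");

      for (var i = 0; i < logits.Length; i++)
          logits[i] = guidanceLogits[i] + scale * (logits[i] - guidanceLogits[i]);
  }
  ```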
* Fixed a comment: "classifier free", not "context free"
* Rebased onto master and fixed breakage due to changes in `BaseSamplingPipeline`
* Ask the user for the guidance weight
* Progress bar in batched fork demo
* Improved fork example (using tree display)
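  The tree display can be rendered with Spectre.Console, which the examples already use elsewhere (e.g. the figlet title). A sketch, with placeholder text:

  ```csharp
  using Spectre.Console;

  // Render each fork of the conversation as a branch of a tree.
  var root = new Tree("[green]prompt[/]");
  var forkA = root.AddNode("fork A: first continuation...");
  forkA.AddNode("further tokens...");
  root.AddNode("fork B: second continuation...");
  AnsiConsole.Write(root);
  ```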
* Added proper disposal of resources in batched examples
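  In C# this generally means `using` declarations, so native handles are released deterministically when the example returns. A sketch, with parameter setup omitted (the `BatchedExecutor` constructor shape may vary between versions):

  ```csharp
  // Weights and executor both wrap native resources; dispose them as soon
  // as the example's scope ends rather than waiting for the GC.
  using var model = LLamaWeights.LoadFromFile(parameters);
  using var executor = new BatchedExecutor(model, parameters);
  ```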
* Added some more comments in BatchedExecutorGuidance
* Tidied up the examples:
  - UserSettings: simplified the validation/re-ask loop down to one call (a sketch follows this list)
  - Program: added colour to the figlet title
  - Batched examples: show the default prompt
  - ExampleRunner: reset state after running an example
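  A minimal sketch of the one-call loop; `AskUntilValid` is a hypothetical helper name, not the actual UserSettings API:

  ```csharp
  // Re-ask until the input passes validation, then return it.
  static string AskUntilValid(string prompt, Func<string, bool> isValid)
  {
      while (true)
      {
          Console.Write(prompt);
          var input = Console.ReadLine() ?? string.Empty;
          if (isValid(input))
              return input;
          Console.WriteLine("Invalid input, please try again.");
      }
  }

  // e.g. asking for the guidance weight:
  // var weight = float.Parse(AskUntilValid("Guidance weight: ", s => float.TryParse(s, out _)));
  ```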
* LLama.Examples: disable console logging
* LLama.Examples: rename titles to signal grouped topics
* LLama.Examples: add additional PDF for Q&A
* LLama.Examples: improve kernel memory demo with multi-document ingestion
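  A hedged sketch of multi-document ingestion with Kernel Memory; the builder and config calls are from memory and may differ between KernelMemory/LLamaSharp versions, and the file names are placeholders:

  ```csharp
  using Microsoft.KernelMemory;
  using LLamaSharp.KernelMemory;

  var memory = new KernelMemoryBuilder()
      .WithLLamaSharpDefaults(new LLamaSharpConfig(modelPath))
      .Build<MemoryServerless>();

  // Ingest several PDFs under distinct document ids, then query across them.
  await memory.ImportDocumentAsync("sample-SK-Readme.pdf", documentId: "doc-1");
  await memory.ImportDocumentAsync("sample-KM-Readme.pdf", documentId: "doc-2");

  var answer = await memory.AskAsync("What is Kernel Memory?");
  Console.WriteLine(answer.Result);
  ```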
* LLama.Examples: improve message before resetting to main menu
* LLama.Examples: document Q&A with local memory
* LLama.Examples: RepoUtils.cs → ConsoleLogger.cs
* LLama.Examples: Examples/Runner.cs → ExampleRunner.cs
* LLama.Examples: delete unused console logger
* LLama.Examples: improve splash screen appearance
  `llama_empty_call()` no longer prints configuration information at startup; it is shown automatically the first time a model is used
* LLama.Examples: Runner → ExampleRunner
* LLama.Examples: improve model path prompt
  The last-used model path is stored in a config file and re-used when a blank path is provided
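  A sketch of that behaviour; the config file name and location are assumptions, not the example project's actual layout:

  ```csharp
  using System;
  using System.IO;

  // Assumed storage location for the last-used model path.
  const string configFile = "LastModelPath.txt";

  var last = File.Exists(configFile) ? File.ReadAllText(configFile).Trim() : null;

  Console.Write(last is null ? "Model path: " : $"Model path [{last}]: ");
  var input = Console.ReadLine()?.Trim();

  // Blank input falls back to the previously used model.
  var path = string.IsNullOrEmpty(input) ? last : input;
  if (string.IsNullOrEmpty(path))
      throw new InvalidOperationException("No model path provided.");

  File.WriteAllText(configFile, path);
  ```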
* LLama.Examples: call NativeApi.llama_empty_call() at startup
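  `NativeApi.llama_empty_call()` does no inference work, so calling it at startup forces the native library to load (and, in the version described here, print its configuration information) before the first example runs:

  ```csharp
  // Touch the native library once at startup so it is loaded up front
  // rather than on first use inside an example.
  NativeApi.llama_empty_call();
  ```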
* LLama.Examples: reduce console noise when saving model path