LLama.Web has no heavy dependencies and no extra frameworks beyond Bootstrap and jQuery, keeping the examples clean and easy to copy into your own project.
Using SignalR websockets simplifies streaming responses and managing models per connection.
You can set up Models, Prompts and Inference parameters in appsettings.json.
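As a rough sketch, the three option groups might be laid out as below; the "LLamaOptions" section name and exact layout are assumptions, so check the appsettings.json shipped with the project for the real keys:

{
  // illustrative layout, not the definitive schema
  "LLamaOptions": {
    "Models": [ /* model entries, see Models below */ ],
    "Parameters": [ /* inference parameter sets, see Parameters below */ ],
    "Prompts": [ /* prompt entries, see Prompts below */ ]
  }
}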
Models
You can add multiple models to the options for quick selection in the UI; options are based on ModelParams, so they are fully configurable. A sketch of a possible model entry follows.
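In this sketch, "Name" labels the entry in the UI and the remaining properties map onto ModelParams; the property set and values shown are illustrative and depend on your LLamaSharp version:

{
  // illustrative entry; any ModelParams property should be settable here
  "Name": "WizardLM-7B",
  "ModelPath": "D:\\Repositories\\AI\\Models\\wizardLM-7B.ggmlv3.q4_0.bin",
  "ContextSize": 2048,
  "GpuLayerCount": 0
}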
Parameters
You can add multiple sets of inference parameters to the options for quick selection in the UI; options are based on InferenceParams, so they are fully configurable. A sketch of a possible parameter set follows.
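Here "Name" is again the UI label and the rest maps onto InferenceParams; the property names and values are illustrative, and the exact set available depends on your LLamaSharp version:

{
  // illustrative entry; any InferenceParams property should be settable here
  "Name": "Default",
  "Temperature": 0.8,
  "TopK": 40,
  "TopP": 0.95,
  "MaxTokens": 512
}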
Prompts
You can add multiple sets of prompts to the options for quick selection in the UI.
Example:
{
  "Name": "Alpaca",
  "Path": "D:\\Repositories\\AI\\Prompts\\alpaca.txt",
  "Prompt": "Alternatively you can set a prompt text directly and omit the Path",
  "AntiPrompt": [
    "User:"
  ],
  "OutputFilter": [
    "Response:",
    "User:"
  ]
}