

Contact: For questions and comments about the model, visit the CarperAI and StableFoundation Discord servers.

Note: the license for the base LLaMA model's weights is Meta's non-commercial bespoke license. The license for the delta weights is CC-BY-NC-SA-4.0.

Model type: StableVicuna-13B is an auto-regressive language model based on the LLaMA transformer architecture. It is a Vicuna-13B v0 model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.

In order to use these files you will need recent llama.cpp code: pull the latest llama.cpp and rebuild. The new quantisation methods were released to llama.cpp on 26th April, so don't expect any third-party UIs/tools to support them yet. It's also possible that future updates to llama.cpp could require that these files are re-generated. If and when the q4_2 file no longer works with recent versions of llama.cpp I will endeavour to update it. If you want to ensure guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

I use the following command line; adjust for your tastes and needs:

main -t 18 -m 4_2.bin -color -c 2048 -temp 0.7 -repeat_penalty 1.1 -n -1 -r "# Human:" -p "# Human: write a story about llamas # Assistant:"

Change -t 18 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8.

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual. Further instructions here: text-generation-webui/docs/. Note: at this time text-generation-webui will not support the new q5 quantisation methods. Thireus has written a great guide on how to update it to the latest llama.cpp code so that these files can be used in the UI.
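The -t value in the llama.cpp command above should match your physical core count, not the thread count. One quick way to check it (assuming util-linux's lscpu on Linux; the macOS equivalent is shown in a comment):

```shell
# Count physical cores as unique (core, socket) pairs, ignoring hyperthreads.
# Assumes util-linux's lscpu is installed (standard on most Linux distros).
# On macOS, use instead: sysctl -n hw.physicalcpu
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l
```

Pass the resulting number to -t.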
Q4_2 is a relatively new 4bit quantisation method offering improved quality. The q5_0 and q5_1 files use brand new 5bit methods released on 26th April. However, these methods are still under development and their formats are subject to change, so there may be compatibility issues; see below.


- q4_0: Provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- q4_2: Offers the best combination of performance and quality.
- q5_0: Brand new 5bit method. Best compromise between resources, speed and quality.
- q5_1: Brand new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resources; slightly higher resource usage than q5_0.
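For intuition about what these quantisation methods do, here is a minimal sketch of block-wise low-bit quantization in the spirit of the GGML 4bit formats. The block size, scale choice, and storage layout are illustrative assumptions, not the actual llama.cpp file format:

```python
import numpy as np

def quantize_q4_0_like(weights, block_size=32):
    """Illustrative block-wise 4-bit quantization.

    Each block of weights stores one float scale plus 4-bit integers
    in [-8, 7]. This mirrors the idea behind GGML's 4bit formats but
    is NOT the exact llama.cpp layout.
    """
    blocks = weights.reshape(-1, block_size)
    # Per-block scale so the max-magnitude value maps into the int range.
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    scale = max_abs / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized blocks."""
    return q.astype(np.float32) * scale

w = np.random.randn(64).astype(np.float32)
q, s = quantize_q4_0_like(w)
w_hat = dequantize(q, s).reshape(-1)
# Per-element reconstruction error is at most half a quantization
# step, i.e. 0.5 * the scale of that element's block.
```

The trade-off the list above describes falls out of this scheme: more bits per integer (5bit vs 4bit) means finer steps and lower reconstruction error, at the cost of a larger file and slightly higher resource usage.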
