Guanaco (Generative Universal Assistant for Natural-language Adaptive Context-aware Omnilingual outputs) is an instruction-following language model built on Meta's LLaMA 7B model. Building on the 52K dataset of the Alpaca model, Guanaco incorporates 534,530 entries covering English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, German, and various linguistic and grammatical tasks. This coverage allows Guanaco to operate in multilingual environments. The Guanaco Dataset is publicly accessible along with the model weights.
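Since the dataset is public, it can be loaded with standard tooling. The sketch below is a minimal example using the Hugging Face `datasets` library; the dataset id `JosephusCheung/GuanacoDataset` is an assumption and may differ from the actual hosting location.

```python
# Minimal sketch: loading the Guanaco Dataset with the Hugging Face `datasets` library.
# The dataset id "JosephusCheung/GuanacoDataset" is an assumption; substitute the
# actual repository name if it differs.
from datasets import load_dataset

guanaco = load_dataset("JosephusCheung/GuanacoDataset")  # assumed dataset id
print(guanaco)              # show splits and entry counts
print(guanaco["train"][0])  # inspect one instruction/response record
```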
Guanaco was developed by Joseph Cheung, and the project has been releasing multilingual datasets since March 2023. The team behind Guanaco recommends running the model with fp16 inference, as 8-bit precision can degrade performance. Users are also reminded that Guanaco has not been filtered for harmful, biased, or explicit content, so it can generate outputs that do not adhere to ethical norms. While others in the field have linked Guanaco to Quantized Low-Rank Adapters (QLoRA), a method of fine-tuning LLMs, the team behind Guanaco has rejected this association, stating that QLoRA lacks mathematical robustness and trails other methods such as GPTQ and PEFT.
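As a rough illustration of the fp16 recommendation, the sketch below loads the model in half precision with the `transformers` library. This is a minimal sketch, not the team's official usage example; the repository id is an assumption and may need to be adjusted.

```python
# Minimal sketch of fp16 inference with transformers, following the recommendation
# to avoid 8-bit precision. The repo id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JosephusCheung/Guanaco"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16, rather than load_in_8bit=True
    device_map="auto",
)

prompt = "### Instruction:\nSummarize what the Guanaco model is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```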
The Guanaco model has received a number of updates, including the following:
- Improved context and prompt role support, allowing better integration with the Alpaca prompt format and an enhanced user experience (see the prompt-format sketch after this list)
- Role-playing support (similar to Character.AI) in English, Simplified Chinese, Traditional Chinese, Japanese, and German; users can instruct the model to assume specific roles, including historical figures and fictional characters, or to adopt a personality based on their input
- Reserved keywords that clearly signal when the model lacks sufficient knowledge or cannot provide a valid response
- Continuation of responses upon user request, making the model more adaptable to extended conversations
- Multimodal visual question answering (VQA) support, allowing the model to interpret and respond to queries involving both text and visual inputs
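The exact prompt wording Guanaco expects is not reproduced here. The sketch below uses the widely used Alpaca-style template as an assumption, with a role-playing instruction as input; Guanaco's actual template and role keywords may differ.

```python
# Sketch of an Alpaca-style prompt template (an assumption based on the standard
# Alpaca format; Guanaco's exact wording and role keywords may differ).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, context: str = "") -> str:
    """Render an Alpaca-style prompt; the Input section may be left empty."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=context)

# Example: asking the model to assume a role, as described in the update list above.
print(build_prompt(
    "Assume the role of a 19th-century naturalist and describe a guanaco.",
    "Answer in English.",
))
```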