🦍 Gorilla: Large Language Model Connected with Massive APIs

Shishir G. Patil*, Tianjun Zhang*, Xin Wang, Joseph E. Gonzalez

UC Berkeley, Microsoft Research
sgp@berkeley.edu, tianjunz@berkeley.edu


An API Appstore for LLMs

Gorilla LLM logo
Gorilla is an LLM that can provide appropriate API calls. It is trained on three massive machine-learning hub datasets: Torch Hub, TensorFlow Hub, and HuggingFace. We are rapidly adding new domains, including Kubernetes, GCP, AWS, OpenAPI, and more. Zero-shot Gorilla outperforms GPT-4, ChatGPT, and Claude. Gorilla is extremely reliable and significantly reduces hallucination errors.

🚀 Try Gorilla in 60s! No sign-ups, no installs, just Colab!
🤩 With Apache 2.0 licensed LLM models, you can use Gorilla commercially without any obligations!
📣 We are excited to hear your feedback and we welcome API contributions as we build this open-source project. Join us on Discord or feel free to email us!
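
For programmatic use, here is a minimal sketch of querying a Gorilla checkpoint with Hugging Face transformers. The model ID and the free-form prompt below are assumptions for illustration; check the repository for the exact released checkpoint names and prompt templates.

```python
# Minimal sketch: ask a Gorilla checkpoint for an API call.
# The model ID and prompt format are assumptions; see the repo for released checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gorilla-llm/gorilla-mpt-7b-hf-v0"  # assumed Apache 2.0 checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

prompt = "I want to generate a caption for this image of my dog."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```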

Gorilla for your CLI and Spotlight Search

Gorilla-powered CLI
Get started with pip install gorilla-cli
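Once installed, you invoke it with a plain-English request, for example: gorilla "generate 100 random numbers and save them to a file" (an illustrative query); it then suggests candidate shell commands for you to review before running.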

Gorilla-Powered Spotlight Search
Gorilla-Spotlight Signup


Abstract

Large Language Models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today's state-of-the-art LLMs such as GPT-4, largely due to their inability to generate accurate input arguments and their tendency to hallucinate the wrong usage of an API call. We release Gorilla, a finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls. When combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, enabling flexible API updates and version changes. Gorilla also substantially mitigates the issue of hallucination, commonly encountered when prompting LLMs directly. To evaluate the model's ability, we introduce APIBench, a comprehensive dataset consisting of HuggingFace, TorchHub, and TensorHub APIs. The successful integration of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently updated documentation, and consequently increase the reliability and applicability of their outputs. Gorilla models and code are available at https://github.com/ShishirPatil/gorilla.
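
As a rough illustration of the retrieval-aware usage described above, the sketch below prepends the most relevant API documentation to the user's query at test time, so the model can follow documentation that has changed since training. The toy lexical retriever and prompt wording here are placeholders, not the project's actual components (the paper evaluates retrievers such as BM25 and GPT-Index over real API documentation).

```python
# Illustrative retrieval-augmented prompt construction with a toy retriever.
def retrieve_top_doc(query: str, api_docs: list[str]) -> str:
    """Toy lexical retriever: return the doc sharing the most words with the query."""
    q = set(query.lower().split())
    return max(api_docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, api_docs: list[str]) -> str:
    """Prepend the retrieved API documentation to the user query."""
    doc = retrieve_top_doc(query, api_docs)
    return (
        f"Use this API documentation for reference: {doc}\n"
        f"Task: {query}\n"
        "Respond with the API call to use."
    )

docs = [
    "torch.hub.load('pytorch/vision', 'resnet50', pretrained=True) -- image classification",
    "transformers.pipeline('image-to-text', model=<model-id>) -- image captioning",
]
print(build_prompt("Generate a caption for a photo of my dog", docs))
```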


Example

Example API calls generated by GPT-4, Claude, and Gorilla for the given prompt. In this example, GPT-4 presents a model that doesn’t exist, and Claude picks an incorrect library. In contrast, our model, Gorilla, can identify the task correctly and suggest a fully-qualified API call.
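
To make "fully-qualified" concrete: a correct answer names the task, the library, and an existing model end to end. The snippet below is a hypothetical example of that shape, not the exact output shown in the figure.

```python
# Hypothetical example of the fully-qualified call shape Gorilla aims to produce:
# the task, the library, and an existing model ID are all spelled out explicitly.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
print(captioner("my_dog.jpg"))
```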

Citation

@article{patil2023gorilla,
  title={Gorilla: Large Language Model Connected with Massive APIs},
  author={Shishir G. Patil and Tianjun Zhang and Xin Wang and Joseph E. Gonzalez},
  journal={arXiv preprint arXiv:2305.15334},
  year={2023},
}