Teach LLMs tool use
In OpenFunctions-v2, we natively train the model to support parallel functions (generating multiple function calls in a single response) and multiple functions (selecting one or more functions from a set of candidates). Java, REST, and Python APIs are also supported for the first time, with extended data types.
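The distinction between the two call patterns can be sketched as follows. This is an illustrative dispatcher, not the OpenFunctions-v2 output format: the function names (`get_weather`, `get_time`) and the JSON call structure are assumptions for demonstration.

```python
import json

# Illustrative local functions standing in for real APIs.
def get_weather(city):
    return f"weather({city})"

def get_time(city):
    return f"time({city})"

REGISTRY = {"get_weather": get_weather, "get_time": get_time}

# "Parallel functions": the model emits several calls in one response.
parallel = ('[{"name": "get_weather", "args": {"city": "Berkeley"}},'
            ' {"name": "get_weather", "args": {"city": "Paris"}}]')

# "Multiple functions": the model selects one function from several candidates.
multiple = '[{"name": "get_time", "args": {"city": "Berkeley"}}]'

def execute(model_output):
    """Parse the model's (assumed) JSON call list and dispatch each call."""
    calls = json.loads(model_output)
    return [REGISTRY[c["name"]](**c["args"]) for c in calls]

print(execute(parallel))  # two weather lookups from one response
print(execute(multiple))  # one selected function call
```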
Benchmarking LLMs on function calling capabilities
The Berkeley Function-Calling Leaderboard (BFCL) aims to provide a thorough study of the function-calling capability of different LLMs. It consists of 2k question-function-answer pairs spanning multiple languages (Python, Java, JavaScript, REST API), diverse application domains, and complex use cases (multiple and parallel function calls). We also investigate function relevance detection, to determine how the model reacts when none of the provided functions is suitable for answering the user's question.
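Relevance detection can be scored as a simple binary agreement: the model should emit a call exactly when a relevant function exists, and abstain otherwise. This is a hedged sketch of such a metric; the record format is an assumption, not BFCL's actual evaluation harness.

```python
def relevance_accuracy(records):
    """records: list of (model_called: bool, relevant_function_exists: bool).

    A record is correct when the model calls a function iff a relevant
    function was actually provided (i.e., it abstains on irrelevant sets).
    """
    correct = sum(called == exists for called, exists in records)
    return correct / len(records)

records = [
    (True, True),    # correct: relevant function existed, model called it
    (False, False),  # correct: no relevant function, model abstained
    (True, False),   # wrong: model hallucinated a call
    (False, True),   # wrong: model abstained despite a relevant function
]
print(relevance_accuracy(records))  # 0.5
```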
Better way to do RAG
RAFT: Retrieval-Augmented Fine-Tuning for domain-specific RAG. Drawing parallels between LLMs taking open-book exams (RAG) and closed-book exams (SFT), we present a better recipe for fine-tuning a base LLM for RAG-focused tasks. Discover how RAFT prepares LLMs to excel on a specific document set, much like students preparing for finals!
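One way to picture the recipe: each training example pairs a question with a retrieved context that mixes the answer-bearing ("oracle") document with distractors, and sometimes omits the oracle entirely so the model learns to ignore irrelevant context. This is a minimal sketch under assumed parameters; the function name, the distractor count, and the oracle-inclusion rate are illustrative, not the paper's exact settings.

```python
import random

def make_raft_example(question, oracle_doc, corpus,
                      k_distractors=3, p_oracle=0.8):
    """Build one RAFT-style training example.

    The context holds k_distractors irrelevant documents, plus the oracle
    document with probability p_oracle, shuffled together so the model
    cannot rely on position.
    """
    pool = [d for d in corpus if d != oracle_doc]
    docs = random.sample(pool, k_distractors)
    if random.random() < p_oracle:
        docs.append(oracle_doc)
    random.shuffle(docs)
    return {"question": question, "context": docs}

random.seed(0)
corpus = [f"doc-{i}" for i in range(10)]
ex = make_raft_example("What is GoEX?", "doc-3", corpus)
print(len(ex["context"]))  # 3 or 4 documents, depending on the oracle draw
```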
Runtime for executing LLM-generated actions
Gorilla Execution Engine (GoEX) is a runtime for LLM-generated actions such as code and API calls. It features "post-facto validation" for assessing LLM actions after they execute. Key to our approach are the "undo" and "damage confinement" abstractions for managing unintended actions and risks. This paves the way for fully autonomous LLM agents, enhancing interaction between apps and services with humans out of the loop.
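The "undo" abstraction can be sketched as pairing every action with an inverse, so an action can run first and be rolled back if post-facto validation rejects it. The `ReversibleAction` class and the in-memory state below are illustrative assumptions, not the GoEX API.

```python
class ReversibleAction:
    """Pairs an action with its inverse so it can be rolled back (a sketch)."""
    def __init__(self, do, undo):
        self.do, self.undo = do, undo

state = {"files": []}  # stand-in for real side effects (filesystem, API, ...)

create = ReversibleAction(
    do=lambda: state["files"].append("report.txt"),
    undo=lambda: state["files"].remove("report.txt"),
)

create.do()        # 1. execute the LLM-proposed action first
approved = False   # 2. validate post-facto (e.g., show the result to a human)
if not approved:
    create.undo()  # 3. roll back the unintended action

print(state["files"])  # []
```

Actions with no clean inverse would instead rely on the "damage confinement" abstraction, limiting what the action is allowed to touch in the first place.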
Gorilla powered CLI
Get started with pip install gorilla-cli
Gorilla Powered Spotlight Search
Sign up for Gorilla-Spotlight
@article{patil2023gorilla,
title={Gorilla: Large Language Model Connected with Massive APIs},
author={Shishir G. Patil and Tianjun Zhang and Xin Wang and Joseph E. Gonzalez},
journal={arXiv preprint arXiv:2305.15334},
year={2023},
}