Large Language Models (LLMs) are very impressive, but they can be made even more powerful if we give them skills to accomplish specialized tasks.
The gradio_tools library can turn any Gradio application into a tool that an agent can use to complete its task. For example, an LLM could use a Gradio tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different Gradio tool to apply OCR to a document on your Google Drive and then answer questions about it.
This guide will show how you can use `gradio_tools` to grant your LLM agent access to the cutting-edge Gradio applications hosted across the world. Although `gradio_tools` is compatible with more than one agent framework, we will focus on LangChain agents in this guide.
A LangChain agent is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.
Gradio is the de facto standard framework for building Machine Learning web applications and sharing them with the world - all with just Python! 🐍
To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the LangChain agent!
In the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the `StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and the `TextToVideoTool` to create a video from a prompt.
We then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask it to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.
```python
import os

if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY must be set")

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from gradio_tools import (StableDiffusionTool, ImageCaptioningTool,
                          StableDiffusionPromptGeneratorTool, TextToVideoTool)

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]

agent = initialize_agent(tools, llm, memory=memory,
                         agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator. "
                          "Please caption the generated image and create a video "
                          "for it using the improved prompt."))
```
You'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this doc for a complete list of the tools that come with `gradio_tools`. If you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.
The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:
```python
class GradioTool(BaseTool):

    def __init__(self, name: str, description: str, src: str) -> None:
        ...

    @abstractmethod
    def create_job(self, query: str) -> Job:
        pass

    @abstractmethod
    def postprocess(self, output: Tuple[Any] | Any) -> str:
        pass
```
The requirements are:

1. The `name` and `description` of the tool, passed to the constructor.
2. The `src`, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tools` will create a gradio client instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.
3. `create_job` - a method that takes a query string and returns a job, typically by passing the string to the `submit` function of the client. More info on creating jobs here.
4. `postprocess` - a method that converts the output of the job into a string.
5. Optionally, the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.

And that's it!
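To make the shape of this interface concrete, here is a dependency-free sketch of the pattern: a base class that owns the query flow and subclasses that fill in `create_job` and `postprocess`. Everything here (`ToolBase`, `FakeJob`, `EchoTool`, the `run` method) is a made-up stand-in for illustration, not the actual `gradio_tools` implementation.

```python
from abc import ABC, abstractmethod


class FakeJob:
    """Stand-in for a gradio client Job: holds a finished result."""

    def __init__(self, value):
        self._value = value

    def result(self):
        return self._value


class ToolBase(ABC):
    """Sketch of the GradioTool pattern: the base class owns the flow,
    subclasses supply create_job and postprocess."""

    @abstractmethod
    def create_job(self, query: str) -> FakeJob: ...

    @abstractmethod
    def postprocess(self, output) -> str: ...

    def run(self, query: str) -> str:
        # Create a job for the query, wait for its result, then
        # convert that result to a string the agent can use.
        job = self.create_job(query)
        return self.postprocess(job.result())


class EchoTool(ToolBase):
    def create_job(self, query: str) -> FakeJob:
        return FakeJob(query.upper())

    def postprocess(self, output) -> str:
        return f"result: {output}"


print(EchoTool().run("hello"))  # result: HELLO
```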
Once you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.

Here is the code for the StableDiffusion tool as an example:
```python
from gradio_tools import GradioTool
import os


class StableDiffusionTool(GradioTool):
    """Tool for calling stable diffusion from llm"""

    def __init__(
        self,
        name="StableDiffusion",
        description=(
            "An image generator. Use this to generate images based on "
            "text input. Input should be a description of what the image should "
            "look like. The output will be a path to an image file."
        ),
        src="gradio-client-demos/stable-diffusion",
        hf_token=None,
    ) -> None:
        super().__init__(name, description, src, hf_token)

    def create_job(self, query: str) -> Job:
        return self.client.submit(query, "", 9, fn_index=1)

    def postprocess(self, output: str) -> str:
        return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith("json")][0]

    def _block_input(self, gr) -> "gr.components.Component":
        return gr.Textbox()

    def _block_output(self, gr) -> "gr.components.Component":
        return gr.Image()
```
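The file-selection step in `postprocess` can be exercised on its own: given a job output directory containing generated images plus a JSON metadata file, it returns the path of the first non-JSON file. A small self-contained sketch (the file names below are invented for the demo):

```python
import os
import tempfile


# Same selection logic as StableDiffusionTool.postprocess: keep every file
# in the output directory that is not a .json file, and return the first one.
def first_image_path(output_dir: str) -> str:
    return [
        os.path.join(output_dir, f)
        for f in os.listdir(output_dir)
        if not f.endswith("json")
    ][0]


with tempfile.TemporaryDirectory() as tmp:
    # Simulate a job output directory: one metadata file, one generated image.
    open(os.path.join(tmp, "captions.json"), "w").close()
    open(os.path.join(tmp, "image_0.png"), "w").close()
    assert first_image_path(tmp).endswith("image_0.png")
```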
Some notes on this implementation:

1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying gradio client. That is what you should use in the `create_job` method.
2. `create_job` just passes the query string to the `submit` function of the client, with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.
3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.

You now know how to extend the abilities of your LLM with the thousands of gradio spaces running in the wild! Again, we welcome any contributions to the `gradio_tools` library. We're excited to see the tools you all build!