
Databricks Unity Catalog (UC)

This notebook shows how to use UC functions as LangChain tools, with both LangChain and LangGraph agent APIs.

See Databricks documentation (AWS|Azure|GCP) to learn how to create SQL or Python functions in UC. Do not skip function and parameter comments, which are critical for LLMs to call functions properly.

In this example notebook, we create a simple Python function that executes arbitrary code and use it as a LangChain tool:

CREATE FUNCTION main.tools.python_exec (
  code STRING COMMENT 'Python code to execute. Remember to print the final result to stdout.'
)
RETURNS STRING
LANGUAGE PYTHON
COMMENT 'Executes Python code and returns its stdout.'
AS $$
  import sys
  from io import StringIO
  stdout = StringIO()
  sys.stdout = stdout
  exec(code)
  return stdout.getvalue()
$$

It runs in a secure and isolated environment within a Databricks SQL warehouse.

%pip install --upgrade --quiet databricks-sdk langchain-community langchain-databricks langgraph mlflow
Note: you may need to restart the kernel to use updated packages.
from langchain_databricks import ChatDatabricks

llm = ChatDatabricks(endpoint="databricks-meta-llama-3-70b-instruct")
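Before wiring up tools, you can optionally confirm the serving endpoint is reachable. This is a minimal sketch, assuming the databricks-meta-llama-3-70b-instruct endpoint above exists in your workspace; the prompt is just an illustrative placeholder.

# Sanity check: send a single prompt to the configured endpoint.
response = llm.invoke("Hello! Can you help me run some Python code?")
print(response.content)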
from langchain_community.tools.databricks import UCFunctionToolkit
from databricks.sdk import WorkspaceClient

tools = (
    UCFunctionToolkit(
        # You can find the SQL warehouse ID in its UI after creation.
        warehouse_id="xxxx123456789"
    )
    .include(
        # Include functions as tools using their qualified names.
        # You can use "{catalog_name}.{schema_name}.*" to get all functions in a schema.
        "main.tools.python_exec",
    )
    .get_tools()
)
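Before handing the tools to an agent, you can invoke one directly to verify the warehouse ID and function name are correct. The sketch below assumes main.tools.python_exec is the only function included above, so it is the first (and only) tool in the list.

# Exercise the UC function through its LangChain tool wrapper.
# The tool returns a JSON string like the SCALAR payload shown in the agent runs below.
python_exec_tool = tools[0]
print(python_exec_tool.name)  # qualified name with dots replaced by double underscores, e.g. main__tools__python_exec
print(python_exec_tool.invoke({"code": "print(1 + 1)"}))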
API Reference: UCFunctionToolkit

LangGraph agent example

from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools, state_modifier="You are a helpful assistant. Make sure to use tool for information.")
agent.invoke({"messages": [{"role": "user", "content": "36939 * 8922.4"}]})
API Reference: create_react_agent
{'messages': [HumanMessage(content='36939 * 8922.4', additional_kwargs={}, response_metadata={}, id='1a10b10b-8e37-48c7-97a1-cac5006228d5'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'function', 'function': {'name': 'main__tools__python_exec', 'arguments': '{"code": "print(36939 * 8922.4)"}'}}]}, response_metadata={'prompt_tokens': 771, 'completion_tokens': 29, 'total_tokens': 800}, id='run-865c3613-20ba-4e80-afc8-fde1cfb26e5a-0', tool_calls=[{'name': 'main__tools__python_exec', 'args': {'code': 'print(36939 * 8922.4)'}, 'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'tool_call'}]),
ToolMessage(content='{"format": "SCALAR", "value": "329584533.59999996\\n", "truncated": false}', name='main__tools__python_exec', id='8b63d4c8-1a3d-46a5-a719-393b2ef36770', tool_call_id='call_a8f3986f-4b91-40a3-8d6d-39f431dab69b'),
AIMessage(content='The result of the multiplication is:\n\n329584533.59999996', additional_kwargs={}, response_metadata={'prompt_tokens': 846, 'completion_tokens': 22, 'total_tokens': 868}, id='run-22772404-611b-46e4-9956-b85e4a385f0f-0')]}
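The agent returns the full message history. If you only need the final answer, read it from the last message:

# Re-run the agent and keep only the content of the final AIMessage.
result = agent.invoke({"messages": [{"role": "user", "content": "36939 * 8922.4"}]})
print(result["messages"][-1].content)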

LangChain agent example

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant. Make sure to use tool for information.",
        ),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "36939 * 8922.4"})


> Entering new AgentExecutor chain...

Invoking: `main__tools__python_exec` with `{'code': 'print(36939 * 8922.4)'}`


{"format": "SCALAR", "value": "329584533.59999996\n", "truncated": false}The result of the multiplication is:

329584533.59999996

> Finished chain.
{'input': '36939 * 8922.4',
'output': 'The result of the multiplication is:\n\n329584533.59999996'}
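As shown above, AgentExecutor returns a dict keyed by "input" and "output", so the final answer can be pulled out directly:

# Capture the result instead of printing the whole dict.
result = agent_executor.invoke({"input": "36939 * 8922.4"})
print(result["output"])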
