- Defines a custom tool (TakoToolSpec) to query Tako’s Knowledge Search API
- Provides a structured system prompt guiding the agent’s behavior
- Creates a complete agent workflow that uses Tako as an external knowledge source
Define a LlamaIndex tool that will query Tako
import os

from llama_index.core.tools.tool_spec.base import BaseToolSpec
from tako.client import TakoClient

# Securely load your API key from the environment
TAKO_API_KEY = os.getenv("TAKO_API_KEY")

# Initialize the Tako client
tako_client = TakoClient(TAKO_API_KEY)
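# Optional guard (not in the original tutorial): fail fast if the key is
# missing, assuming TAKO_API_KEY is the only credential TakoClient needs.
if TAKO_API_KEY is None:
    raise RuntimeError("Set the TAKO_API_KEY environment variable before running")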
class TakoToolSpec(BaseToolSpec):
    spec_functions = ["search_tako"]

    def search_tako(self, query: str) -> str:
        """Search Tako for knowledge cards containing data and visualizations."""
        try:
            tako_card = tako_client.knowledge_search(query)
        except Exception:
            return "No card found"
        # Serialize the card (a Pydantic model) to a JSON string so the
        # return value matches the declared return type.
        return tako_card.model_dump_json()
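Before wiring the tool into an agent, you can sanity-check it with a direct call (the query here is just an illustrative example):

tool_spec = TakoToolSpec()
print(tool_spec.search_tako("US GDP growth"))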
Write a system prompt
In the system prompt, direct the agent to query Tako and use the response whenever appropriate.
SYSTEM_PROMPT = """
You are Tako-Agent.
1. Query `search_tako` exactly once when the user asks **any** question.
   • If the user's question is a **comparison** (e.g. "US vs China GDP"), call:
     {
       "tool": "search_tako",
       "args": { "query": "US vs China GDP" }
     }
   • If the user's question is an **analytical/ranking** request (e.g. "Top countries by GDP"), call:
     {
       "tool": "search_tako",
       "args": { "query": "Top countries by GDP" }
     }
2. Use the response from `search_tako` to answer the user's question. Always embed the card in your response.
"""
Define an Agent Workflow
In the agent workflow, pass in the tools and the system prompt.
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.llms.openai import OpenAI

tako_tools = TakoToolSpec().to_tool_list()
llm = OpenAI(model="gpt-4o", max_tokens=300)

workflow = AgentWorkflow.from_tools_or_functions(
    tako_tools,
    llm=llm,
    system_prompt=SYSTEM_PROMPT,
)
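Optionally, print each tool's metadata to confirm the agent sees search_tako under the name the prompt uses:

for tool in tako_tools:
    print(tool.metadata.name, "-", tool.metadata.description)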
Create an async function and run it
import asyncio

async def main():
    response = await workflow.run(
        user_msg="Which countries received the most foreign aid from the US?"
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
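To watch tool calls as they happen, AgentWorkflow.run returns a handler whose events can be streamed before awaiting the final response. A minimal sketch, assuming the stock ToolCallResult event type:

from llama_index.core.agent.workflow import ToolCallResult

async def main_with_streaming():
    handler = workflow.run(
        user_msg="Which countries received the most foreign aid from the US?"
    )
    async for event in handler.stream_events():
        if isinstance(event, ToolCallResult):
            # Inspect the raw Tako card returned by search_tako
            print(event.tool_name, event.tool_output)
    response = await handler
    print(response)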