This guide shows how to fetch Tako knowledge cards inside a Next.js API route and feed them to an LLM through the Vercel AI SDK.
Copy & paste the snippets below and you’ll have a working endpoint in minutes.
0 · Bootstrap a Next.js project (skip if you already have one)
# Creates a Next.js 14 project (TypeScript, App Router, Tailwind pre‑configured)
npx create-next-app@latest my-tako-chat \
--typescript --app --eslint --tailwind
cd my-tako-chat
Scaffolded files of note:
- package.json — scripts (dev, build, start) and dependencies.
- tsconfig.json — already set for the Edge Runtime; no changes needed.
- app/ — where your pages and API routes live.
Create a .env.local file for local runs (kept out of Git automatically):
TAKO_API_KEY=sk_tako_...
OPENAI_API_KEY=sk_openai_...
1 · Prerequisites
- Node 18+
- Next.js 14+ with the App Router (from step 0)
- Two environment variables:
| Name | Purpose |
| --- | --- |
| TAKO_API_KEY | Your key from https://trytako.com. |
| OPENAI_API_KEY | Model provider key (swap to ANTHROPIC_API_KEY, GROQ_API_KEY, etc. as needed). |
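If you want the route to fail fast when a key is missing, a tiny guard like this works (a sketch; requireEnv is our own helper name, not part of either SDK):
// Throws at request time if a required key was never configured
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}
// Usage: const takoKey = requireEnv('TAKO_API_KEY');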
2 · Install packages
npm install tako-sdk ai @ai-sdk/openai
# or
pnpm add tako-sdk ai @ai-sdk/openai
# or
yarn add tako-sdk ai @ai-sdk/openai
If you’ll deploy on Vercel, push the secrets once:
npx vercel link # one‑time project setup
vercel env add TAKO_API_KEY # Tako
vercel env add OPENAI_API_KEY # or ANTHROPIC_API_KEY, GROQ_API_KEY …
Why OPENAI_API_KEY? The AI SDK auto-detects provider-specific variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.), so you don't need to reference the key in code.
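Swapping providers is a one-line change in the route from step 3. For example, with Anthropic (a sketch, assuming you've installed @ai-sdk/anthropic, set ANTHROPIC_API_KEY, and picked a model id of your choice):
npm install @ai-sdk/anthropic
// in app/api/chat/route.ts
import { anthropic } from '@ai-sdk/anthropic';
// ...then swap the model passed to generateText:
model: anthropic('claude-3-5-sonnet-latest'),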
3 · Hello‑World Edge Route
Create app/api/chat/route.ts:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createTakoClient } from 'tako-sdk';
export const runtime = 'edge';
export async function POST(req: Request) {
try {
    const { question } = await req.json();
    // Reject requests without a usable question early
    if (!question || typeof question !== 'string') {
      return Response.json({ error: 'Missing "question" in request body' }, { status: 400 });
    }
    console.log('Question:', question);
// Fetch knowledge cards
console.log('Fetching knowledge cards...');
const tako = createTakoClient(process.env.TAKO_API_KEY!);
const search = await tako.knowledgeSearch(question);
console.log('Got ' + search.outputs.knowledge_cards.length + ' knowledge cards');
// Extract only title and description from knowledge cards
const knowledge_for_model = search.outputs.knowledge_cards.map(card => ({
title: card.title,
description: card.description
}));
// Ask the model, providing cards as context
console.log('Generating response with model...');
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt: `Answer the following question using the provided knowledge cards as citations.\n\nQuestion: ${question}\n\nKnowledge cards:\n${JSON.stringify(knowledge_for_model)}`,
});
console.log('Model generated response:', text);
return Response.json({
answer: text,
knowledge_cards: search.outputs.knowledge_cards,
});
} catch (error: any) {
return Response.json(
{ error: 'Internal server error', details: error?.message || 'Unknown error' },
{ status: 500 }
);
}
}
Run locally:
npm run dev
Then test:
curl -X POST http://localhost:3000/api/chat \
-H 'Content-Type: application/json' \
-d '{"question":"How do NVIDIA and AMD revenue compare?"}' | jq
The answer field is ready to display, and knowledge_cards carries the full card objects so you can render citations alongside it. Deploy with vercel --prod when you're ready.
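From the browser, calling the route needs only a plain fetch. A minimal sketch (askTako is a hypothetical helper name):
// Client-side helper for the /api/chat route
async function askTako(question: string) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  // answer is the model's text; knowledge_cards are the full Tako card objects
  return (await res.json()) as { answer: string; knowledge_cards: unknown[] };
}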