Build AI Agent Connected to Unlimited APIs with Vercel's AI SDK & Pica's OneTool
The ability to seamlessly connect and interact with multiple APIs can unlock incredible potential in software development. Today, we’re walking through how to build an AI agent that interfaces with APIs using tools like Express, Vercel’s AI SDK, and Pica’s AI infrastructure. Let’s dive in.
Prerequisites
Ensure you have Node.js and npm installed. You’ll also need an OpenAI API key and a Pica Secret Key to get started. Once ready, create a new project and install the necessary dependencies:
npm install express @ai-sdk/openai ai @picahq/ai dotenv
You also need to create a .env file in the root of your project and add the following:
PICA_SECRET_KEY=your-pica-secret-key
OPENAI_API_KEY=your-openai-api-key
PORT=3000
Replace your-pica-secret-key and your-openai-api-key with your actual keys.
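A missing key won’t surface until the first request fails, so it can help to check the environment at startup. Here’s a minimal, optional sketch; assertEnv is a hypothetical helper name, not part of dotenv or Pica:

```javascript
// Hypothetical fail-fast check for required environment variables.
// Throws at startup instead of failing on the first request.
function assertEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Example: call once after dotenv.config(), before creating the app.
// assertEnv(["PICA_SECRET_KEY", "OPENAI_API_KEY"]);
```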
Step 1: Set Up the Server
Create a new file called server.js and set up a basic Express server with a route to handle AI interactions:
import express from "express";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { Pica } from "@picahq/ai";
import * as dotenv from "dotenv";

dotenv.config();

const app = express();
const port = process.env.PORT || 3000;

app.use(express.json());

app.post("/api/ai", async (req, res) => {
  try {
    const { message } = req.body;

    // Initialize Pica
    const pica = new Pica(process.env.PICA_SECRET_KEY);

    // Generate the system prompt
    const systemPrompt = await pica.generateSystemPrompt();

    const { text } = await generateText({
      model: openai("gpt-4o"),
      system: systemPrompt,
      tools: { ...pica.oneTool },
      prompt: message,
      maxSteps: 5,
    });

    res.setHeader("Content-Type", "application/json");
    res.status(200).json({ text });
  } catch (error) {
    console.error("Error processing AI request:", error);
    res.status(500).json({ error: "Internal server error" });
  }
});

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

export default app;
Step 2: Test Your API
Once the server is running, you can test your AI endpoint using curl or any HTTP client like Postman. Here’s an example test:
curl --location 'http://localhost:3000/api/ai' \
--header 'Content-Type: application/json' \
--data '{
"message": "What connections do I have access to?"
}'
By default, the AI will respond that no connections are available. This is expected because you need to add connections through Pica's dashboard.
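If you’d rather stay in JavaScript, the same request can be sent with Node’s built-in fetch (Node 18+). The buildAiRequest helper below is just an illustrative name, not part of any package:

```javascript
// Hypothetical helper that builds the POST options for the /api/ai endpoint.
function buildAiRequest(message) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  };
}

// Usage (Node 18+ ships fetch globally):
// const res = await fetch("http://localhost:3000/api/ai",
//   buildAiRequest("What connections do I have access to?"));
// const { text } = await res.json();
```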
What’s Happening Here?
- Dependencies: express handles the server. @ai-sdk/openai and ai manage the OpenAI API interactions. @picahq/ai provides access to Pica’s AI tooling infrastructure.
- Environment Variables: We use dotenv to load sensitive keys, keeping them out of your code.
- Endpoint Logic: When a request hits /api/ai, it initializes Pica, generates a system prompt, and sends the AI’s response back to the client.
Step 3: Next Steps
- Enhance: Add authentication or rate limiting to your server for production.
- Expand: Use Pica’s additional tools to interact with more APIs or data sources.
- Deploy: Host your server on a cloud platform like Vercel or AWS for broader access.
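To sketch the rate-limiting idea from the list above, here is a minimal in-memory Express middleware. It assumes a single process and direct client IPs (no proxy in front); for production, a maintained package such as express-rate-limit is the safer choice:

```javascript
// Minimal in-memory rate limiter sketch (assumption: single process, no proxy).
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // ip -> timestamps of recent requests

  return function rateLimiter(req, res, next) {
    const now = Date.now();
    // Keep only requests from this IP that fall inside the current window.
    const recent = (hits.get(req.ip) || []).filter((t) => now - t < windowMs);
    recent.push(now);
    hits.set(req.ip, recent);

    if (recent.length > max) {
      res.status(429).json({ error: "Too many requests" });
      return;
    }
    next();
  };
}

// Example wiring: at most 20 requests per IP per minute on the AI route.
// app.use("/api/ai", createRateLimiter({ windowMs: 60_000, max: 20 }));
```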
Conclusion
In just a few steps, you’ve built a lightweight AI agent capable of interfacing with an extensive range of APIs. This structure can be expanded to automate workflows, handle complex queries, or integrate with other tools seamlessly.
Got questions? Share them in the comments below or connect with me on Twitter.
Happy building!