The Problem: StackOne manages over 10,000 actions across its connectors, and some individual connectors expose 2,000+ actions. Loading all of them into an LLM’s context window is impractical:
- Token bloat: every tool definition consumes tokens, leaving less room for reasoning
- Accuracy drops: LLMs make worse tool selections as the candidate set grows
- Provider caps: OpenAI caps function definitions at ~128 per request
Key Features
Scales to Thousands of Tools
StackOne has 10,000+ actions. Search returns only the relevant ones for each query.
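To make the idea concrete, here is a minimal sketch of query-time tool search: rank a catalogue against the query and keep only the top results. The bag-of-words embedding below is a toy stand-in for the real Semantic Search API, and the tool names are illustrative.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Toy embedding: word counts over a tiny fixed vocabulary.
// The real Semantic Search API uses learned embeddings; this is a stand-in.
const VOCAB = ["employees", "list", "create", "ticket", "invoice", "time", "off"];

function embed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return VOCAB.map((v) => words.filter((w) => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return na && nb ? dot / (na * nb) : 0;
}

// Rank the catalogue against the query and keep only the top-k results,
// so the agent sees a handful of tools instead of thousands.
function searchTools(catalogue: ToolDef[], query: string, topK = 3): ToolDef[] {
  const q = embed(query);
  return catalogue
    .map((t) => ({ t, score: cosine(q, embed(`${t.name} ${t.description}`)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ t }) => t);
}

const catalogue: ToolDef[] = [
  { name: "hris_list_employees", description: "List employees in the HRIS" },
  { name: "ats_create_ticket", description: "Create a support ticket" },
  { name: "hris_create_time_off", description: "Create a time off request" },
];

const results = searchTools(catalogue, "list all employees", 1);
// Only the best-matching tool is returned to the agent.
```

The same shape scales to 10,000+ actions: the catalogue grows, but the set handed to the model stays small.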
Improves Accuracy
Only the relevant tools are exposed per request, reducing tool-selection misfires and hallucinations.
Account-Aware
Filters results to tools available for configured account IDs, respecting auth boundaries.
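Account-aware filtering amounts to intersecting search results with the caller's configured accounts. A minimal sketch, assuming a hypothetical `accountId` field on each tool (the real result shape may differ):

```typescript
interface AccountTool {
  name: string;
  accountId: string; // hypothetical field: the linked account this tool belongs to
}

// Keep only tools whose account is in the caller's configured set,
// so results never cross an auth boundary.
function filterByAccounts(tools: AccountTool[], accountIds: string[]): AccountTool[] {
  const allowed = new Set(accountIds);
  return tools.filter((t) => allowed.has(t.accountId));
}

const allTools: AccountTool[] = [
  { name: "hris_list_employees", accountId: "acct_1" },
  { name: "crm_list_contacts", accountId: "acct_2" },
];

const visible = filterByAccounts(allTools, ["acct_1"]);
// Only tools for acct_1 remain.
```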
Framework-Ready
Returns a Tools collection with converters for OpenAI, LangChain, Vercel AI SDK, and more.
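As an illustration of what such a converter does, here is a sketch that maps a generic tool definition onto OpenAI's chat-completions tool format. The input shape is illustrative, not the SDK's exact type:

```typescript
// Generic tool definition as a search result might return it (illustrative shape).
interface SearchedTool {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the inputs
}

// OpenAI's function/tool wire format for chat completions.
interface OpenAITool {
  type: "function";
  function: {
    name: string;
    description: string;
    parameters: Record<string, unknown>;
  };
}

function toOpenAI(tool: SearchedTool): OpenAITool {
  return {
    type: "function",
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters,
    },
  };
}

const converted = toOpenAI({
  name: "hris_list_employees",
  description: "List employees in the HRIS",
  parameters: { type: "object", properties: {} },
});
```

Converters for LangChain or the Vercel AI SDK follow the same pattern with each framework's own tool schema as the target.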
Architecture Overview
Flow:

- User sends a natural language query to your AI agent
- Agent calls Search Tools to find relevant actions
- Search Tools fetches tool definitions from MCP and ranks them via the Semantic Search API
- Agent receives a ranked `Tools` collection
- Agent calls `execute()` on the selected tool
Quick Example
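The flow above can be sketched end to end. Note this is a self-contained illustration: `searchTools` is a stub standing in for the real Search Tools call, and the names and shapes are stand-ins for the SDK's actual API.

```typescript
interface Tool {
  name: string;
  execute: (args: Record<string, unknown>) => unknown;
}

// Stub: pretend the Semantic Search API ranked this tool highest for the query.
function searchTools(query: string): Tool[] {
  return [
    {
      name: "hris_list_employees",
      execute: () => [{ id: "emp_1", name: "Ada" }],
    },
  ];
}

// 1. User's natural language query drives the search
const tools = searchTools("show me all employees");
// 2. Agent receives the ranked collection and picks the top tool
const top = tools[0];
// 3. Agent executes the selected tool
const employees = top.execute({}) as { id: string; name: string }[];
```

In a real agent loop, the search and execute calls are asynchronous requests to StackOne rather than local stubs.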
Next Steps
- Tool Search for full API reference and all three methods
- Basic Usage for fetching and executing tools
- Tool Filtering for glob pattern filtering