
Overview

Access Required: If you want access to the AI Actions & Playground section, reach out to StackOne support to have it enabled for your account.
The StackOne AI Playground (app.stackone.com/playground) is an interactive testing interface that allows you to test any action from your linked accounts using leading Large Language Models (LLMs). This powerful tool enables you to:

  • Code-Free Testing: Test actions interactively without writing a single line of code
  • Multi-Account Support: Select multiple linked accounts to test across different integrations simultaneously
  • Multiple LLM Models: Choose from a range of Anthropic and OpenAI models
  • Action Explorer: Explore available actions for each selected account with real-time discovery
Playground Main Interface
Prerequisites Required: The AI Playground requires linked accounts and configured integrations to function. Without linked accounts, you won’t be able to select any accounts or test actions in the playground. If you don’t see any accounts available, ensure you’ve completed the account linking process first.

Getting Started

Accessing the AI Playground

  1. Navigate to app.stackone.com/playground
  2. Ensure you’re logged into your StackOne account
  3. The playground interface will load with your available linked accounts

Interface Overview

The playground is divided into three main sections:
  1. Left Panel - Configuration: Select linked accounts and view available actions
  2. Right Panel - Chat Interface: Interact with the LLM to test actions
  3. Top Bar: Access setup options

Selecting Linked Accounts

The playground allows you to select one or more linked accounts to test actions from. Each account represents a connection to a third-party integration.
Account Selector

How to Select Accounts

  1. Click on the “Select Account” dropdown in the left panel
  2. Browse through your available linked accounts
  3. Each account shows:
    • Organization name: The organization the account belongs to
    • Integration name: The provider/integration type (e.g., “Workday Learning”, “Hibob”, “Pinpoint”)
    • Category: The API category (e.g., lms, hris, ats, crm)
  4. Click on an account to select it
  5. You can select multiple accounts to test actions across different integrations simultaneously
Account Availability: Only accounts that have been successfully linked and configured will appear in the account selector. If you don’t see an account, ensure it’s been properly connected in the Accounts section.

Viewing and Selecting Actions

Once you select an account, the playground will:
  1. Load available actions for the selected account(s)
  2. Display the count of available actions in the Actions section
  3. Show actions based on the integration configuration and enabled actions for that account
You can expand the Actions section to see all available actions for each selected account. Each action has a toggle switch that allows you to enable or disable it before starting a conversation.
Action Selection
Action Selection Timing: Action selection must be done before starting a conversation. Once you begin chatting with the LLM, the selected actions are locked into the conversation context. Changing action selections after a conversation has started will not affect the current conversation; the LLM will only be aware of the actions that were enabled when the conversation began. Disabled actions will not be visible to the LLM, so it won’t be able to use them even if you ask about them.
The actions available depend on the integration type (HRIS, ATS, CRM, LMS, etc.). Each category has its own set of specialized tools.
Actions must be enabled in your integration configuration. Check your integration settings to ensure desired actions are activated.
The permissions and capabilities of the linked account determine which actions can be executed. Some accounts may have limited access.

Choosing an LLM Model

The playground supports multiple LLMs from leading providers. You can select the model that best suits your testing needs.
Model Selector

Available Models

The playground currently supports:
  • Anthropic Sonnet 4.5: More powerful model for complex tasks requiring sophisticated reasoning
  • Anthropic Haiku 4.5 (default): Faster, more cost-effective model, perfect for quick testing
  • GPT-5.1: Latest GPT model with advanced capabilities
  • GPT-5 mini: Smaller, faster variant for lightweight operations
  • GPT-5 nano: Lightweight variant optimized for speed

How to Change Models

  1. Click on the model selector at the bottom right of the chat interface
  2. Select your preferred model from the dropdown
  3. The selected model will be used for all subsequent interactions
Model Selection: For quick testing and exploration, Anthropic Haiku 4.5 (the default) is recommended as it provides fast responses. For more complex queries or when you need more sophisticated reasoning, consider using Anthropic Sonnet 4.5 or GPT-5.1.

Testing Actions

Starting a Conversation

  1. Select one or more linked accounts from the account selector
  2. Wait for actions to load (you’ll see “Loading actions…” while they’re being fetched)
  3. Optionally configure which actions are enabled by expanding the Actions section and toggling individual actions on or off
  4. Type your question or request in the input field at the bottom of the chat interface
  5. Click the send button (arrow icon) or press Enter

Understanding Responses

The LLM will:
  1. Analyze your request to determine which action(s) to use
  2. Call the appropriate StackOne MCP server for the selected account(s)
  3. Execute the action and retrieve data
  4. Format and present the results in a conversational manner
How It Works: Under the hood, the playground uses StackOne’s MCP (Model Context Protocol) server for each linked account. When you ask a question, the LLM determines which tools/actions to use and makes calls to the MCP server, which then executes the appropriate StackOne API requests.
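For illustration, here is a minimal sketch of that same flow outside the playground, assuming the MCP TypeScript SDK (@modelcontextprotocol/sdk) and the endpoint and headers described under Use Setup below. The environment variable names are placeholders, not part of StackOne’s API.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Endpoint and header names are taken from the "Use Setup" section below.
  // STACKONE_API_KEY_BASE64 and STACKONE_ACCOUNT_ID are placeholder env vars.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://api.stackone.com/mcp"),
    {
      requestInit: {
        headers: {
          Authorization: `Basic ${process.env.STACKONE_API_KEY_BASE64 ?? ""}`,
          "x-account-id": process.env.STACKONE_ACCOUNT_ID ?? "",
        },
      },
    },
  );

  const client = new Client(
    { name: "playground-sketch", version: "0.0.1" },
    { capabilities: {} },
  );
  await client.connect(transport);

  // These are the same tools the playground exposes to the LLM for this account.
  const { tools } = await client.listTools();
  console.log(tools.map((tool) => tool.name));

  await client.close();
}

main().catch(console.error);
```

In the playground, the tools returned by this listing step are handed to the selected LLM, which then decides which of them to call based on your prompt.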

Example Use Cases

The playground supports a wide variety of use cases across different integration types. Here are some common scenarios you can explore:
Manage tickets, users, and organizations in your customer support system. Example Query: “List the first 3 tickets from Zendesk”
Zendesk Ticket Listing
The LLM will execute the appropriate action (e.g., zendesk_list_tickets) and return the results with helpful context. You can see exactly which tool was called and what parameters were used.
Access employee information, manage job postings, and track time off requests. Example Query: “Show me all active employees in Workday”
Workday Employee Listing
The playground will use Workday-specific actions (e.g., workday_list_workers) to retrieve employee data, format it in a readable way, and provide insights about the results including total count and sample employee information.
Test cross-integration scenarios by selecting multiple accounts and asking questions that span different systems. Example Query: “What tools can I use with these accounts?”
Multi-Account Tools
The LLM will provide a comprehensive overview of all available tools across your selected accounts, organized by system and category.
Explore what data is available in your connected systems without writing code. Example Query: “What information can I access about users in Zendesk?”
Zendesk User Discovery
The LLM will help you discover available user-related actions (like zendesk_list_users, zendesk_get_users, zendesk_get_current_users) and explain what information you can access, including user IDs, emails, roles, and other profile data.
Test individual API operations to verify they work correctly with your account configuration. Example Query: “List the first 5 tickets from Zendesk”
Zendesk API Operation Test
You can test individual API operations like listing tickets. The playground shows you the exact tool called (zendesk_list_tickets), the parameters used (page_size: 5), and the execution result. Even if an operation encounters an error (like permission issues), the LLM provides helpful troubleshooting guidance.
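If you want to reproduce that exact operation programmatically, a rough sketch might look like the following. It reuses a connected MCP client like the one shown under “Understanding Responses” and assumes the tool name and page_size parameter reported in the playground’s execution details; the authoritative input schema is whatever listTools() returns for your account.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// `client` is an already-connected MCP client (see the earlier sketch).
async function listFirstFiveTickets(client: Client) {
  const result = await client.callTool({
    name: "zendesk_list_tickets", // tool name shown in the playground's execution details
    arguments: { page_size: 5 },  // same parameter the playground reports using
  });

  // Raw tool output that the LLM would normally summarize for you.
  console.log(result);
}
```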
Conversation Tips
  • Start with broad questions like “What tools can I use?” to discover capabilities
  • Be specific when you want to execute actions (e.g., “List 5 employees” vs “Show employees”)
  • The LLM will guide you if your request needs clarification
  • Check the tool execution details to understand what API calls were made

Advanced Features

Use Setup

The “Use Setup ▼” button in the top right provides configuration details for using StackOne’s MCP (Model Context Protocol) server outside of the playground interface. This allows you to integrate StackOne actions directly into your own AI applications and agents.
Use Setup Menu
The setup menu displays:
  • MCP Server URL: https://api.stackone.com/mcp - The endpoint for connecting to StackOne’s MCP server
  • Required Headers:
    • Authorization: Basic <BASE_64_STACKONE_API_KEY> - Your Base64-encoded StackOne API key
    • x-account-id: [Select an account] - The account ID for the linked account you want to use
  • Documentation Links: Quick access to MCP setup guides and AI Toolset documentation
This configuration information enables you to connect StackOne’s MCP server to various MCP-compatible clients and frameworks, including Claude Desktop, Cursor, and custom AI applications. For detailed setup instructions, see the StackOne MCP Introduction guide.
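As a rough illustration of assembling those header values in your own code, the sketch below assumes a Node.js environment and uses placeholder environment variable names. The exact Basic-auth encoding of the key (commonly the key used as the username with an empty password) should be confirmed against the StackOne API reference.

```typescript
// Sketch only: builds the two header values listed in the Use Setup menu.
// STACKONE_API_KEY and STACKONE_ACCOUNT_ID are placeholder environment variables.
const apiKey = process.env.STACKONE_API_KEY ?? "";
const accountId = process.env.STACKONE_ACCOUNT_ID ?? "";

// Basic auth is base64("<username>:<password>"); here the API key is assumed to be
// the username with an empty password (verify against the StackOne docs).
const authorization = `Basic ${Buffer.from(`${apiKey}:`).toString("base64")}`;

const headers = {
  Authorization: authorization,
  "x-account-id": accountId,
};

console.log(headers); // Paste these values into any MCP-compatible client.
```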

Best Practices

Testing Workflow

  1. Start Simple: Begin with a single account to understand the available actions before adding complexity.
  2. Explore Capabilities: Use simple queries first like “What tools can I use?” to see available capabilities and get familiar with the system.
  3. Test Specific Actions: Test specific actions with clear, direct requests once you understand what’s available.
  4. Scale Up: Try multiple accounts to test cross-integration scenarios and see how different systems work together.
  5. Optimize: Experiment with different models to see which works best for your specific use case and performance needs.

Troubleshooting

If accounts or actions don’t appear or can’t be executed:
  • Ensure the account is properly linked and configured in the Accounts section
  • Check that actions are enabled in your integration configuration
  • Verify the account has the necessary permissions for the actions you’re trying to use

If responses are slow:
  • Try using a faster model like Anthropic Haiku 4.5 instead of larger models
  • Reduce the number of selected accounts to improve response time
  • Simplify your queries by breaking down complex requests into simpler, more focused ones

If results aren’t what you expect:
  • Be more specific in your queries to get the exact data you need
  • Verify that the account has access to the requested data
  • Review the Request Logs to see the actual API calls made and identify issues