StackOne captures detailed logs for every API request. You can integrate these logs with your observability stack for centralized monitoring, alerting, and debugging.
This guide is for platform builders who want to integrate StackOne request logs with their existing monitoring infrastructure.

Integration Approaches

Choose the approach that fits your needs:
| Approach | Best For | Complexity |
|---|---|---|
| Direct polling (Grafana Infinity) | Simple dashboards, ad-hoc queries | Low |
| Push model (sync worker) | High-volume ingestion, real-time alerts, data transformation | Medium |
| Webhooks | Real-time alerts on specific events | Low |
StackOne retains request logs for 90 days; sync them to your own platform if you need longer retention.

Query Request Logs

Use the Request Logs API to retrieve logs:
import { StackOne } from "@stackone/stackone-client-ts";
import {
  OrderBy,
  OrderDirection,
} from "@stackone/stackone-client-ts/sdk/models/operations";

const stackOne = new StackOne({
  security: {
    username: process.env.STACKONE_API_KEY!,
    password: "",
  },
});

const result = await stackOne.requestLogs.listLogs({
  pageSize: 100,
  orderBy: OrderBy.EventDatetime,
  orderDirection: OrderDirection.Desc,
});

const logs = result.unifiedLogsPaginated?.data ?? [];
const nextCursor = result.unifiedLogsPaginated?.next;

Log Entry Structure

{
  "requestId": "req_abc123",
  "accountId": "acct_xyz789",
  "provider": "bamboohr",
  "service": "hris",
  "action": "bamboohr_list_employees",
  "resource": "employees",
  "httpMethod": "POST",
  "path": "/actions/rpc",
  "url": "https://api.stackone.com/actions/rpc",
  "status": 200,
  "success": true,
  "duration": 245,
  "eventDatetime": "2024-01-15T10:30:00Z",
  "startTime": "2024-01-15T10:29:59Z",
  "endTime": "2024-01-15T10:30:00Z",
  "request": {
    "method": "POST",
    "url": {
      "hostname": "api.stackone.com",
      "path": "/actions/rpc"
    },
    "headers": {}
  },
  "response": {
    "statusCode": 200,
    "headers": {}
  }
}

Key Fields for Monitoring

| Field | Description | Use Case |
|---|---|---|
| status | HTTP response status | Error rate alerts |
| success | Boolean success flag | Quick filtering |
| duration | Request latency (ms) | Performance monitoring |
| provider | Integration provider | Provider-specific dashboards |
| accountId | Linked account | Customer-level debugging |
| action | Action executed | Usage analytics |
| service | API category (hris, ats, etc.) | Service-level metrics |
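
As a quick sanity check, the error-rate and latency metrics above can be computed directly from a page of fetched logs. A minimal sketch (computeStats is a hypothetical helper, not part of the SDK):

// Derive error rate and p95 latency from the entries returned by listLogs
function computeStats(logs: { status?: number; duration?: number }[]) {
  const errors = logs.filter((l) => (l.status ?? 0) >= 400).length;
  const durations = logs
    .map((l) => l.duration ?? 0)
    .sort((a, b) => a - b);
  const p95 = durations[Math.floor(durations.length * 0.95)] ?? 0;
  return { errorRate: logs.length ? errors / logs.length : 0, p95 };
}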

Filter Logs

// Filter by time range and status
const result = await stackOne.requestLogs.listLogs({
  filter: {
    startDate: new Date("2024-01-15T09:00:00Z"),
    endDate: new Date("2024-01-15T17:00:00Z"),
    statusCodes: "400,401,500,502,503",  // Comma-separated
    providers: "bamboohr,workday",
    accountIds: "acct_xyz789",
  },
  pageSize: 100,
  orderBy: OrderBy.EventDatetime,
  orderDirection: OrderDirection.Desc,
});
See the List Logs API Reference for all available filter parameters including date ranges, account IDs, providers, status codes, and pagination options.

Grafana Direct Polling

The simplest approach is to let Grafana poll the StackOne API directly using the Infinity data source.

Setup

  1. Install the Infinity plugin in Grafana
  2. Add a new Infinity data source with these settings:
| Setting | Value |
|---|---|
| URL | https://api.stackone.com |
| Auth | Basic Auth |
| User | Your StackOne API key (e.g., v1.eu1.xxxxx) |
| Password | Leave empty |

Example Query

Configure a panel with these Infinity settings:
Type: JSON
Source: URL
Method: GET
URL: /requests/logs?page_size=100&order_by=eventDatetime&order_direction=desc
Parser: Backend

# For filtering (optional)
URL: /requests/logs?filter[start_date]=${__from:date:iso}&filter[end_date]=${__to:date:iso}&filter[status_codes]=400,500,502,503

Dashboard Variables

Create variables for dynamic filtering:
| Variable | Query |
|---|---|
| provider | Static values: bamboohr, workday, greenhouse, etc. |
| account | Use the /accounts endpoint to fetch linked accounts |
Then use in queries: /requests/logs?filter[providers]=${provider}&filter[account_ids]=${account}
This approach is best for dashboards and ad-hoc analysis. For real-time alerting or high-volume ingestion, use the sync worker approach below.

Build a Log Sync Worker

For high-volume ingestion, or when you need to transform logs before storing them, build a sync worker that polls the StackOne API and forwards logs to your observability platform.
Popular approaches include Temporal workflows, AWS Lambda with EventBridge schedules, and plain cron jobs. The core pattern is the same regardless: poll for new logs since your last sync, then forward them to your platform.

Core Pattern

The core of the worker is fetching logs with cursor-based pagination:
import { StackOne } from "@stackone/stackone-client-ts";
import { OrderBy, OrderDirection } from "@stackone/stackone-client-ts/sdk/models/operations";

const stackOne = new StackOne({
  security: { username: process.env.STACKONE_API_KEY!, password: "" },
});

async function fetchLogsSince(startDate: Date) {
  const logs = [];
  let cursor: string | undefined;

  do {
    const result = await stackOne.requestLogs.listLogs({
      filter: { startDate },
      pageSize: 100,
      orderBy: OrderBy.EventDatetime,
      orderDirection: OrderDirection.Asc,
      next: cursor,
    });

    logs.push(...(result.unifiedLogsPaginated?.data ?? []));
    cursor = result.unifiedLogsPaginated?.next ?? undefined;
  } while (cursor);

  return logs;
}

// Usage: fetch logs from last sync, forward to your platform
const logs = await fetchLogsSince(lastSyncTime);
for (const log of logs) {
  await forwardToObservabilityPlatform(log); // Your implementation
}
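
A production worker also needs to persist its checkpoint between runs so restarts neither re-ingest nor miss logs. A minimal sketch using a local file as the checkpoint store (the file path and five-minute interval are illustrative; use durable storage in production):

import { promises as fs } from "node:fs";

const CHECKPOINT_FILE = "./last-sync.txt"; // illustrative; use durable storage in production

async function loadCheckpoint(): Promise<Date> {
  try {
    return new Date(await fs.readFile(CHECKPOINT_FILE, "utf8"));
  } catch {
    return new Date(Date.now() - 60 * 60 * 1000); // first run: default to the last hour
  }
}

async function syncOnce() {
  const since = await loadCheckpoint();
  const runStarted = new Date();
  const logs = await fetchLogsSince(since);
  for (const log of logs) {
    await forwardToObservabilityPlatform(log); // your implementation
  }
  // Checkpoint the run start so logs arriving mid-run aren't skipped next time
  await fs.writeFile(CHECKPOINT_FILE, runStarted.toISOString());
}

// Run on a fixed interval (Temporal, EventBridge, or cron work equally well)
setInterval(() => syncOnce().catch(console.error), 5 * 60 * 1000);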

Platform-Specific Examples

Grafana Loki
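
Loki expects a set of low-cardinality stream labels and a nanosecond timestamp per entry. TransformedLog below is a local helper type, not an SDK type; a minimal transform from a StackOne log entry (field names as in the log structure above):

// Local helper type consumed by sendToLoki below; not part of the StackOne SDK
interface TransformedLog {
  timestamp: number;              // epoch milliseconds
  labels: Record<string, string>; // Loki stream labels (keep cardinality low)
  [key: string]: unknown;
}

function transformForLoki(log: {
  provider?: string;
  service?: string;
  eventDatetime?: string;
}): TransformedLog {
  return {
    ...log,
    timestamp: log.eventDatetime ? new Date(log.eventDatetime).getTime() : Date.now(),
    labels: {
      job: 'stackone',
      provider: log.provider ?? 'unknown',
      service: log.service ?? 'unknown',
    },
  };
}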

async function sendToLoki(log: TransformedLog) {
  await fetch('http://loki:3100/loki/api/v1/push', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      streams: [{
        stream: log.labels,
        values: [[`${log.timestamp}000000`, JSON.stringify(log)]]
      }]
    })
  });
}

Datadog
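
DatadogLog below is likewise a local helper type shaped for Datadog's v2 logs intake (ddsource, ddtags, service, and message are standard intake fields; the intake hostname shown is for the US1 site, so adjust it for your Datadog region):

// Local helper type for the v2 logs intake payload; not part of the StackOne SDK
interface DatadogLog {
  ddsource: string;       // e.g. 'stackone'
  ddtags?: string;        // e.g. 'provider:bamboohr,service:hris'
  service?: string;
  message: string;
  [key: string]: unknown; // extra attributes (status, duration, accountId, ...)
}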

async function sendToDatadog(log: DatadogLog) {
  await fetch('https://http-intake.logs.datadoghq.com/api/v2/logs', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DATADOG_API_KEY!
    },
    body: JSON.stringify([log])
  });
}

OpenTelemetry

import { logs, SeverityNumber } from '@opentelemetry/api-logs';
// UnifiedLogs is the SDK's log entry type; adjust the import path to your SDK version
import type { UnifiedLogs } from '@stackone/stackone-client-ts/sdk/models/shared';

function sendToOtel(log: UnifiedLogs) {
  // The API logger is a no-op unless a LoggerProvider from
  // @opentelemetry/sdk-logs is registered at startup
  const logger = logs.getLogger('stackone');

  logger.emit({
    severityNumber:
      (log.status ?? 0) >= 400 ? SeverityNumber.ERROR : SeverityNumber.INFO,
    body: `${log.httpMethod} ${log.path}`,
    attributes: {
      'http.status_code': log.status,
      'http.method': log.httpMethod,
      'stackone.provider': log.provider,
      'stackone.account_id': log.accountId,
      'stackone.action': log.action,
      'duration_ms': log.duration
    }
  });
}

Suggested Dashboards

Key Metrics to Track

| Metric | Query Pattern | Alert Threshold |
|---|---|---|
| Error Rate | status >= 400 | > 5% over 5 min |
| P95 Latency | percentile(duration, 95) | > 2000 ms |
| Provider Health | Group by provider, status | Any provider > 10% errors |
| Request Volume | Count by time bucket | Anomaly detection |
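
The PromQL queries in the dashboard JSON below assume your sync worker also exports Prometheus metrics. A minimal sketch using prom-client, with metric names chosen to match the dashboard (wiring up the /metrics endpoint is left to you):

import client from "prom-client";

// Counter and histogram backing the dashboard queries below
const requestsTotal = new client.Counter({
  name: "stackone_requests_total",
  help: "StackOne API requests ingested from request logs",
  labelNames: ["provider", "status_code"],
});

const requestDuration = new client.Histogram({
  name: "stackone_request_duration", // exposes stackone_request_duration_bucket
  help: "StackOne request latency in ms, from the duration field",
  buckets: [50, 100, 250, 500, 1000, 2000, 5000],
});

function recordMetrics(log: { provider?: string; status?: number; duration?: number }) {
  requestsTotal.inc({
    provider: log.provider ?? "unknown",
    status_code: String(log.status ?? 0),
  });
  requestDuration.observe(log.duration ?? 0);
}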

Grafana Dashboard JSON

{
  "panels": [
    {
      "title": "Error Rate by Provider",
      "type": "timeseries",
      "targets": [{
        "expr": "sum(rate(stackone_requests_total{status_code=~\"4..|5..\"}[5m])) by (provider) / sum(rate(stackone_requests_total[5m])) by (provider)"
      }]
    },
    {
      "title": "P95 Latency",
      "type": "stat",
      "targets": [{
        "expr": "histogram_quantile(0.95, sum(rate(stackone_request_duration_bucket[5m])) by (le))"
      }]
    }
  ]
}

Webhooks for Real-Time Alerts

For critical events, use webhooks instead of polling:
| Event | Use Case |
|---|---|
| account.error | Alert on integration failures |
| account.expired | Notify about expired credentials |
| request.failed | Real-time error alerts |
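
A minimal receiver sketch using Express (the payload fields shown are illustrative; verify the actual schema and signature verification steps against the Webhooks Guide):

import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/stackone', (req, res) => {
  // Illustrative payload shape; confirm the real schema in the Webhooks Guide
  const { event, accountId } = req.body as { event?: string; accountId?: string };

  if (event === 'account.error' || event === 'request.failed') {
    // Route straight to your alerting channel (PagerDuty, Slack, etc.)
    console.error(`StackOne ${event} for account ${accountId}`);
  }

  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);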

See the Webhooks Guide to configure webhooks for real-time notifications.
Configure webhooks for real-time notifications