StackOne captures detailed logs for every API request. You can integrate these logs with your observability stack for centralized monitoring, alerting, and debugging.
This guide is for platform builders who want to integrate StackOne request logs with their existing monitoring infrastructure.
## Integration Approaches
Choose the approach that fits your needs:
| Approach | Best For | Complexity |
| --- | --- | --- |
| Direct polling (Grafana Infinity) | Simple dashboards, ad-hoc queries | Low |
| Push model (Sync worker) | High-volume, real-time alerts, data transformation | Medium |
| Webhooks | Real-time alerts on specific events | Low |
StackOne retains logs for 90 days.
## Query Request Logs
Use the Request Logs API to retrieve logs:
**TypeScript SDK**

```typescript
import { StackOne } from "@stackone/stackone-client-ts";
import {
  OrderBy,
  OrderDirection,
} from "@stackone/stackone-client-ts/sdk/models/operations";

const stackOne = new StackOne({
  security: {
    username: process.env.STACKONE_API_KEY!,
    password: "",
  },
});

const result = await stackOne.requestLogs.listLogs({
  pageSize: 100,
  orderBy: OrderBy.EventDatetime,
  orderDirection: OrderDirection.Desc,
});

const logs = result.unifiedLogsPaginated?.data ?? [];
const nextCursor = result.unifiedLogsPaginated?.next;
```

**cURL**

```bash
curl "https://api.stackone.com/requests/logs?page_size=100&order_by=eventDatetime&order_direction=desc" \
  -u "$STACKONE_API_KEY:"
```

**Python**

```python
import requests
import base64

api_key = "v1.eu1.xxxxx"
headers = {
    "Authorization": f"Basic {base64.b64encode(f'{api_key}:'.encode()).decode()}"
}

response = requests.get(
    "https://api.stackone.com/requests/logs",
    params={
        "page_size": 100,
        "order_by": "eventDatetime",
        "order_direction": "desc"
    },
    headers=headers
)

data = response.json()
logs = data["data"]
next_cursor = data.get("next")
```
## Log Entry Structure
```json
{
  "requestId": "req_abc123",
  "accountId": "acct_xyz789",
  "provider": "bamboohr",
  "service": "hris",
  "action": "bamboohr_list_employees",
  "resource": "employees",
  "httpMethod": "POST",
  "path": "/actions/rpc",
  "url": "https://api.stackone.com/actions/rpc",
  "status": 200,
  "success": true,
  "duration": 245,
  "eventDatetime": "2024-01-15T10:30:00Z",
  "startTime": "2024-01-15T10:29:59Z",
  "endTime": "2024-01-15T10:30:00Z",
  "request": {
    "method": "POST",
    "url": {
      "hostname": "api.stackone.com",
      "path": "/actions/rpc"
    },
    "headers": {}
  },
  "response": {
    "statusCode": 200,
    "headers": {}
  }
}
```
### Key Fields for Monitoring
| Field | Description | Use Case |
| --- | --- | --- |
| `status` | HTTP response status | Error rate alerts |
| `success` | Boolean success flag | Quick filtering |
| `duration` | Request latency (ms) | Performance monitoring |
| `provider` | Integration provider | Provider-specific dashboards |
| `accountId` | Linked account | Customer-level debugging |
| `action` | Action executed | Usage analytics |
| `service` | API category (hris, ats, etc.) | Service-level metrics |
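As a sketch of how these fields feed monitoring, the helpers below (hypothetical names, not part of the StackOne SDK) compute an error rate from `status` and a p95 latency from `duration` over a batch of fetched log entries:

```typescript
// Minimal metric helpers over fetched log entries; only the `status` and
// `duration` fields from the log structure above are used.
interface LogEntry {
  status: number;
  duration: number; // milliseconds
}

// Fraction of requests with a 4xx/5xx status.
function errorRate(logs: LogEntry[]): number {
  if (logs.length === 0) return 0;
  const errors = logs.filter((l) => l.status >= 400).length;
  return errors / logs.length;
}

// Nearest-rank p95 of request latency.
function p95Latency(logs: LogEntry[]): number {
  if (logs.length === 0) return 0;
  const sorted = logs.map((l) => l.duration).sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1);
  return sorted[idx];
}

const sample: LogEntry[] = [
  { status: 200, duration: 120 },
  { status: 200, duration: 245 },
  { status: 500, duration: 900 },
  { status: 502, duration: 1500 },
];
console.log(errorRate(sample));  // 0.5
console.log(p95Latency(sample)); // 1500
```

In practice you would run these over each page returned by the Request Logs API and export the results to your metrics backend.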
## Filter Logs
```typescript
// Filter by time range and status
const result = await stackOne.requestLogs.listLogs({
  filter: {
    startDate: new Date("2024-01-15T09:00:00Z"),
    endDate: new Date("2024-01-15T17:00:00Z"),
    statusCodes: "400,401,500,502,503", // Comma-separated
    providers: "bamboohr,workday",
    accountIds: "acct_xyz789",
  },
  pageSize: 100,
  orderBy: OrderBy.EventDatetime,
  orderDirection: OrderDirection.Desc,
});
```
```bash
# Filter by time range
curl "https://api.stackone.com/requests/logs?filter[start_date]=2024-01-15T09:00:00Z&filter[end_date]=2024-01-15T17:00:00Z" \
  -u "$STACKONE_API_KEY:"

# Filter by account
curl "https://api.stackone.com/requests/logs?filter[account_ids]=acct_xyz789" \
  -u "$STACKONE_API_KEY:"

# Filter by status codes (errors only)
curl "https://api.stackone.com/requests/logs?filter[status_codes]=400,500,502,503" \
  -u "$STACKONE_API_KEY:"

# Filter by provider
curl "https://api.stackone.com/requests/logs?filter[providers]=bamboohr,workday" \
  -u "$STACKONE_API_KEY:"
```
See the List Logs API Reference for all available filter parameters including date ranges, account IDs, providers, status codes, and pagination options.
## Grafana Direct Polling
The simplest approach is to let Grafana poll the StackOne API directly using the Infinity data source.
### Setup
1. Install the Infinity plugin in Grafana.
2. Add a new Infinity data source with these settings:
| Setting | Value |
| --- | --- |
| URL | `https://api.stackone.com` |
| Auth | Basic Auth |
| User | Your StackOne API key (e.g., `v1.eu1.xxxxx`) |
| Password | Leave empty |
### Example Query
Configure a panel with these Infinity settings:
- **Type**: JSON
- **Source**: URL
- **Method**: GET
- **URL**: `/requests/logs?page_size=100&order_by=eventDatetime&order_direction=desc`
- **Parser**: Backend
```
# For filtering (optional)
URL: /requests/logs?filter[start_date]=${__from:date:iso}&filter[end_date]=${__to:date:iso}&filter[status_codes]=400,500,502,503
```
### Dashboard Variables
Create variables for dynamic filtering:
| Variable | Query |
| --- | --- |
| `provider` | Static values: `bamboohr`, `workday`, `greenhouse`, etc. |
| `account` | Use the `/accounts` endpoint to fetch linked accounts |
Then use them in queries: `/requests/logs?filter[providers]=${provider}&filter[account_ids]=${account}`
This approach is best for dashboards and ad-hoc analysis. For real-time alerting or high-volume ingestion, use the sync worker approach below.
## Build a Log Sync Worker
For high-volume ingestion or when you need to transform logs before storing, build a sync worker that polls the StackOne API and forwards logs to your observability platform.
Popular approaches include using Temporal workflows, AWS Lambda with EventBridge schedules, or simple cron jobs. The core pattern is the same: poll for new logs since your last sync, then forward to your platform.
### Core Pattern
The essential StackOne integration is fetching logs with cursor-based pagination:
```typescript
import { StackOne } from "@stackone/stackone-client-ts";
import { OrderBy, OrderDirection } from "@stackone/stackone-client-ts/sdk/models/operations";

const stackOne = new StackOne({
  security: { username: process.env.STACKONE_API_KEY!, password: "" },
});

async function fetchLogsSince(startDate: Date) {
  const logs = [];
  let cursor: string | undefined;
  do {
    const result = await stackOne.requestLogs.listLogs({
      filter: { startDate },
      pageSize: 100,
      orderBy: OrderBy.EventDatetime,
      orderDirection: OrderDirection.Asc,
      next: cursor,
    });
    logs.push(...(result.unifiedLogsPaginated?.data ?? []));
    cursor = result.unifiedLogsPaginated?.next ?? undefined;
  } while (cursor);
  return logs;
}

// Usage: fetch logs from last sync, forward to your platform
const logs = await fetchLogsSince(lastSyncTime);
for (const log of logs) {
  await forwardToObservabilityPlatform(log); // Your implementation
}
```
```python
import requests
import base64
import os

def fetch_logs_since(start_date: str) -> list[dict]:
    api_key = os.environ["STACKONE_API_KEY"]
    headers = {"Authorization": f"Basic {base64.b64encode(f'{api_key}:'.encode()).decode()}"}
    logs, cursor = [], None
    while True:
        params = {"filter[start_date]": start_date, "page_size": 100, "order_by": "eventDatetime", "order_direction": "asc"}
        if cursor:
            params["next"] = cursor
        data = requests.get("https://api.stackone.com/requests/logs", params=params, headers=headers).json()
        logs.extend(data["data"])
        cursor = data.get("next")
        if not cursor:
            break
    return logs

# Usage: fetch logs from last sync, forward to your platform
logs = fetch_logs_since(last_sync_time)
for log in logs:
    forward_to_observability_platform(log)  # Your implementation
```
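However the worker is scheduled, it needs to remember where the previous run stopped so each poll is incremental. A minimal checkpoint sketch, assuming a local JSON file as the store (a database or your scheduler's state would be the production choice; the file path and function names here are illustrative):

```typescript
import * as fs from "node:fs";

// Illustrative checkpoint location; swap for durable storage in production.
const CHECKPOINT_FILE = "/tmp/stackone-log-sync.json";

// Read the last sync time, falling back (e.g., to 24h ago) on the first run.
function loadLastSyncTime(fallback: Date): Date {
  try {
    const raw = JSON.parse(fs.readFileSync(CHECKPOINT_FILE, "utf8"));
    return new Date(raw.lastSyncTime);
  } catch {
    return fallback; // no checkpoint yet
  }
}

function saveLastSyncTime(t: Date): void {
  fs.writeFileSync(CHECKPOINT_FILE, JSON.stringify({ lastSyncTime: t.toISOString() }));
}

// Each run: capture the start time first, fetch logs since the checkpoint,
// forward them, then advance the checkpoint only after a successful sync.
const runStart = new Date();
const since = loadLastSyncTime(new Date(Date.now() - 24 * 60 * 60 * 1000));
console.log(`syncing logs since ${since.toISOString()}`);
// const logs = await fetchLogsSince(since); // from the pattern above
// ...forward logs to your platform...
saveLastSyncTime(runStart);
```

Capturing `runStart` before fetching means logs that arrive mid-run are picked up by the next run rather than skipped.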
### Grafana Loki

```typescript
async function sendToLoki(log: TransformedLog) {
  await fetch('http://loki:3100/loki/api/v1/push', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      streams: [{
        stream: log.labels,
        values: [[`${log.timestamp}000000`, JSON.stringify(log)]]
      }]
    })
  });
}
```
### Datadog

```typescript
async function sendToDatadog(log: DatadogLog) {
  await fetch('https://http-intake.logs.datadoghq.com/api/v2/logs', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DATADOG_API_KEY!
    },
    body: JSON.stringify([log])
  });
}
```
### OpenTelemetry

```typescript
import { logs } from '@opentelemetry/api-logs';

function sendToOtel(log: UnifiedLogs) {
  const logger = logs.getLogger('stackone');
  logger.emit({
    severityNumber: (log.status ?? 0) >= 400 ? 17 : 9, // ERROR : INFO
    body: `${log.httpMethod} ${log.path}`,
    attributes: {
      'http.status_code': log.status,
      'http.method': log.httpMethod,
      'stackone.provider': log.provider,
      'stackone.account_id': log.accountId,
      'stackone.action': log.action,
      'duration_ms': log.duration
    }
  });
}
```
## Suggested Dashboards

### Key Metrics to Track
| Metric | Query Pattern | Alert Threshold |
| --- | --- | --- |
| Error Rate | `status >= 400` | > 5% over 5 min |
| P95 Latency | `percentile(duration, 95)` | > 2000ms |
| Provider Health | Group by `provider`, `status` | Any provider > 10% errors |
| Request Volume | Count by time bucket | Anomaly detection |
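The Provider Health check can also be evaluated directly in a sync worker before logs ever reach a dashboard. An illustrative sketch (the function name is ours, not from the SDK) that flags any provider exceeding the 10% error threshold in a batch of fetched logs:

```typescript
// Group a batch of log entries by provider and return the providers whose
// error rate (status >= 400) exceeds the threshold.
interface ProviderLog {
  provider: string;
  status: number;
}

function unhealthyProviders(logs: ProviderLog[], threshold = 0.1): string[] {
  const totals = new Map<string, { errors: number; total: number }>();
  for (const log of logs) {
    const stats = totals.get(log.provider) ?? { errors: 0, total: 0 };
    stats.total += 1;
    if (log.status >= 400) stats.errors += 1;
    totals.set(log.provider, stats);
  }
  return [...totals.entries()]
    .filter(([, s]) => s.errors / s.total > threshold)
    .map(([provider]) => provider);
}

const batch: ProviderLog[] = [
  { provider: "bamboohr", status: 200 },
  { provider: "bamboohr", status: 500 }, // 50% errors -> unhealthy
  { provider: "workday", status: 200 },
  { provider: "workday", status: 200 },
];
console.log(unhealthyProviders(batch)); // ["bamboohr"]
```

The returned provider names can feed an alert directly, or be emitted as a gauge metric per provider.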
### Grafana Dashboard JSON
```json
{
  "panels": [
    {
      "title": "Error Rate by Provider",
      "type": "timeseries",
      "targets": [{
        "expr": "sum(rate(stackone_requests_total{status_code=~\"4..|5..\"}[5m])) by (provider) / sum(rate(stackone_requests_total[5m])) by (provider)"
      }]
    },
    {
      "title": "P95 Latency",
      "type": "stat",
      "targets": [{
        "expr": "histogram_quantile(0.95, sum(rate(stackone_request_duration_bucket[5m])) by (le))"
      }]
    }
  ]
}
```
## Webhooks for Real-Time Alerts
For critical events, use webhooks instead of polling:
| Event | Use Case |
| --- | --- |
| `account.error` | Alert on integration failures |
| `account.expired` | Notify about expired credentials |
| `request.failed` | Real-time error alerts |
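A webhook consumer typically routes each event type to a different alert channel. A hedged sketch of that dispatch logic; the payload shape (`{ event, accountId }`) is an assumption for illustration, so check the Webhooks Guide for the actual schema:

```typescript
// Route an incoming webhook event to an alerting action.
// NOTE: this payload shape is assumed, not the documented schema.
interface WebhookPayload {
  event: string;
  accountId: string;
}

function routeWebhook(payload: WebhookPayload): string {
  switch (payload.event) {
    case "account.error":
      return `page on-call: integration failure for ${payload.accountId}`;
    case "account.expired":
      return `notify customer: credentials expired for ${payload.accountId}`;
    case "request.failed":
      return `emit error alert for ${payload.accountId}`;
    default:
      return "ignore"; // unhandled event types are safe to drop
  }
}

console.log(routeWebhook({ event: "account.expired", accountId: "acct_xyz789" }));
// notify customer: credentials expired for acct_xyz789
```

In a real handler this function would sit behind your HTTP endpoint, after signature verification.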
See the Webhooks Guide to configure webhooks for real-time notifications.