Alerts Module
Set up intelligent alerts with configurable rules, cooldown periods, and multi-channel notifications. Get notified when your cross-chain operations need attention.
Overview
The Alerts Module provides a comprehensive system for creating and managing alerts based on interoperability metrics. It uses data from both the Tracking and Metrics modules to trigger intelligent notifications through various channels.
Smart Notifications
- Multi-channel support (Slack, Discord, webhooks)
- Configurable alert rules and conditions
- Cooldown periods to prevent spam
- Alert severity levels and categorization
Rule-Based System
- Pre-defined rules for common scenarios
- Custom rule creation and templates
- Duration-based conditions
- Context-aware alert generation
processAlerts
The main function for processing alert rules against current metrics and triggering notifications when conditions are met.
Function Signature
processAlerts(rules, context, notificationCallback): Promise<AlertRuleEvaluationResult[]>
import {
  processAlerts,
  createAlertContext,
  createSimpleNotificationCallback,
  DEFAULT_ALERT_RULES,
  AlertNotification,
  NotificationChannel,
  // Types for the data consumed below; both come from the SDK's Metrics and
  // Tracking modules (adjust the import if your version exports them elsewhere)
  InteropMetrics,
  TrackingResult
} from '@wakeuplabs/op-interop-alerts-sdk';
// Create notification callback for Slack
const alertNotificationCallback = createSimpleNotificationCallback({
  [NotificationChannel.SLACK]: async (notification: AlertNotification) => {
    const { alert, rule, context } = notification;

    console.log(`🚨 ALERT: [${alert.severity}] ${alert.title}`);
    console.log(`Message: ${alert.message}`);

    // Send to Slack webhook
    const slackMessage = {
      text: `🚨 *${alert.title}*`,
      attachments: [{
        color: alert.severity === 'CRITICAL' ? 'danger' : 'warning',
        fields: [
          { title: 'Severity', value: alert.severity, short: true },
          { title: 'Category', value: alert.category, short: true },
          { title: 'Message', value: alert.message, short: false }
        ],
        ts: Math.floor(alert.timestamp.getTime() / 1000)
      }]
    };

    try {
      const response = await fetch(process.env.SLACK_WEBHOOK_URL!, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(slackMessage)
      });

      if (response.ok) {
        console.log('✅ Alert sent to Slack');
      } else {
        console.error('❌ Failed to send Slack alert');
      }
    } catch (error) {
      console.error('Error sending Slack alert:', error);
    }
  }
});

// Process alerts with metrics and tracking data
async function checkAlerts(metrics: InteropMetrics, trackingData: TrackingResult[]) {
  try {
    // Create alert context from current data
    const alertContext = createAlertContext(
      metrics,
      trackingData,
      undefined, // No previous metrics for comparison
      60 * 60 * 1000 // 1 hour time window
    );

    // Process all default alert rules
    const results = await processAlerts(
      DEFAULT_ALERT_RULES,
      alertContext,
      alertNotificationCallback
    );

    // Log results
    const triggeredAlerts = results.filter(r => r.triggered);
    console.log(`📋 Processed ${results.length} rules, ${triggeredAlerts.length} alerts triggered`);

    if (triggeredAlerts.length > 0) {
      triggeredAlerts.forEach((result, index) => {
        console.log(`  ${index + 1}. [${result.alert?.severity}] ${result.rule.name}`);
      });
    }

    return results;
  } catch (error) {
    console.error('❌ Error processing alerts:', error);
    return [];
  }
}
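In practice, alert processing is usually run on a schedule. The sketch below polls once per minute; collectMetrics and collectTrackingData are placeholders for your own Metrics and Tracking module integration, not SDK functions, and the interval is only an example.
// Placeholder data sources -- wire these up to your Metrics and Tracking modules
async function collectMetrics(): Promise<InteropMetrics> {
  throw new Error('Replace with your Metrics module integration');
}

async function collectTrackingData(): Promise<TrackingResult[]> {
  throw new Error('Replace with your Tracking module integration');
}

// Example: evaluate all alert rules once per minute
setInterval(async () => {
  try {
    const metrics = await collectMetrics();
    const trackingData = await collectTrackingData();
    await checkAlerts(metrics, trackingData);
  } catch (error) {
    console.error('Failed to run alert check:', error);
  }
}, 60 * 1000);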
Creating Custom Alert Rules
You can create custom alert rules tailored to your specific monitoring needs:
import {
  createAlertRule,
  createRuleFromTemplate,
  ALERT_RULE_TEMPLATES,
  AlertSeverity,
  AlertCategory,
  NotificationChannel // used in the channels arrays below
} from '@wakeuplabs/op-interop-alerts-sdk';
// Create a custom alert rule from scratch
const customLatencyRule = createAlertRule({
  name: 'Custom High Latency Alert',
  description: 'Triggers when average latency exceeds 2 minutes',
  category: AlertCategory.LATENCY,
  severity: AlertSeverity.HIGH,
  conditions: [
    {
      field: 'coreMetrics.latency.averageLatencyMs',
      operator: 'gt',
      value: 120000 // 2 minutes in milliseconds
    }
  ],
  cooldownMs: 10 * 60 * 1000, // 10 minutes cooldown
  channels: [NotificationChannel.SLACK, NotificationChannel.WEBHOOK]
});

// Create a rule from a template
const criticalSuccessRateRule = createRuleFromTemplate(
  ALERT_RULE_TEMPLATES.CRITICAL_SUCCESS_RATE,
  {
    // Override template values
    conditions: [
      {
        field: 'coreMetrics.throughput.successRate',
        operator: 'lt',
        value: 85 // Alert when success rate < 85%
      }
    ],
    cooldownMs: 5 * 60 * 1000 // 5 minutes cooldown
  }
);

// Create a rule with duration-based conditions
const persistentErrorRule = createAlertRule({
  name: 'Persistent Error Rate',
  description: 'Triggers when error rate stays high for 15 minutes',
  category: AlertCategory.ERROR_RATE,
  severity: AlertSeverity.CRITICAL,
  conditions: [
    {
      field: 'health.errorSummary.errorRate',
      operator: 'gt',
      value: 10, // Error rate > 10%
      duration: 15 * 60 * 1000 // Must persist for 15 minutes
    }
  ],
  cooldownMs: 30 * 60 * 1000, // 30 minutes cooldown
  channels: [NotificationChannel.SLACK]
});

// Use custom rules
const customRules = [
  customLatencyRule,
  criticalSuccessRateRule,
  persistentErrorRule
];

// Process with custom rules
const results = await processAlerts(
  customRules,
  alertContext,
  alertNotificationCallback
);
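Custom rules don't have to replace the built-in ones. Since DEFAULT_ALERT_RULES is an ordinary rule array (it is passed straight to processAlerts above), you can spread it together with your custom rules and evaluate everything in a single pass:
// Evaluate the built-in rules and the custom rules together
const allRules = [...DEFAULT_ALERT_RULES, ...customRules];

const combinedResults = await processAlerts(
  allRules,
  alertContext,
  alertNotificationCallback
);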
Default Alert Rules
The SDK comes with pre-configured alert rules for common monitoring scenarios:
| Rule Name | Category | Severity | Condition |
|---|---|---|---|
| High Latency | LATENCY | HIGH | Average latency > 60s |
| Critical Latency | LATENCY | CRITICAL | Average latency > 180s |
| Low Success Rate | THROUGHPUT | HIGH | Success rate < 95% |
| Critical Success Rate | THROUGHPUT | CRITICAL | Success rate < 90% |
| System Down | SYSTEM_STATUS | CRITICAL | Interop status = DOWN |
| Consecutive Failures | CONSECUTIVE_FAILURES | CRITICAL | 5+ consecutive failures |
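If only some of the defaults are relevant to you, filter the array before handing it to processAlerts. The sketch below assumes each default rule exposes the same severity field used when creating rules with createAlertRule; adjust the property name if your SDK version differs.
// Keep only the CRITICAL default rules, e.g. for a dedicated on-call channel
const criticalDefaults = DEFAULT_ALERT_RULES.filter(
  rule => rule.severity === AlertSeverity.CRITICAL
);

const criticalResults = await processAlerts(
  criticalDefaults,
  alertContext,
  alertNotificationCallback
);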
Notification Channels
Configure multiple notification channels to ensure alerts reach the right people:
// Set up Slack notifications
const slackNotificationCallback = createSimpleNotificationCallback({
  [NotificationChannel.SLACK]: async (notification: AlertNotification) => {
    const { alert } = notification;

    const slackPayload = {
      username: 'OP Interop Alerts',
      icon_emoji: ':warning:',
      channel: '#alerts',
      attachments: [{
        color: alert.severity === 'CRITICAL' ? 'danger' : 'warning',
        title: alert.title,
        text: alert.message,
        fields: [
          { title: 'Severity', value: alert.severity, short: true },
          { title: 'Category', value: alert.category, short: true },
          { title: 'Time', value: alert.timestamp.toISOString(), short: false }
        ],
        footer: 'OP Interop Alerts',
        ts: Math.floor(alert.timestamp.getTime() / 1000)
      }]
    };

    const response = await fetch(process.env.SLACK_WEBHOOK_URL!, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(slackPayload)
    });

    return response.ok;
  }
});
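createSimpleNotificationCallback takes one handler per channel, so a single callback can fan out to several destinations. The sketch below registers a generic WEBHOOK handler alongside Slack; the ALERTS_WEBHOOK_URL environment variable and the JSON payload shape are examples, not something the SDK prescribes.
// Register handlers for more than one channel in a single callback
const multiChannelCallback = createSimpleNotificationCallback({
  [NotificationChannel.SLACK]: async (notification: AlertNotification) => {
    // ... same Slack handler as above
  },
  [NotificationChannel.WEBHOOK]: async (notification: AlertNotification) => {
    const { alert, rule } = notification;

    // Forward the alert to your own HTTP endpoint (example env var and payload)
    await fetch(process.env.ALERTS_WEBHOOK_URL!, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        title: alert.title,
        message: alert.message,
        severity: alert.severity,
        category: alert.category,
        rule: rule.name,
        timestamp: alert.timestamp.toISOString()
      })
    });
  }
});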
Best Practices
Use Appropriate Cooldown Periods
Set cooldown periods to prevent alert spam. Critical alerts might need shorter cooldowns (5-10 minutes), while warning alerts can have longer cooldowns (15-30 minutes).
Layer Alert Severity
Use different severity levels strategically. Start with warnings for early detection, then escalate to critical alerts for urgent issues that require immediate attention.
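As an illustrative sketch that also applies the cooldown guidance above, you can pair an early-warning rule with a stricter critical rule on the same metric and give the critical rule the shorter cooldown. Thresholds and timings below are examples only, reusing the field path from the custom rule example.
// Early warning: longer cooldown to limit noise
const latencyWarningRule = createAlertRule({
  name: 'Latency Warning',
  description: 'Early warning when average latency exceeds 1 minute',
  category: AlertCategory.LATENCY,
  severity: AlertSeverity.HIGH,
  conditions: [
    { field: 'coreMetrics.latency.averageLatencyMs', operator: 'gt', value: 60000 }
  ],
  cooldownMs: 30 * 60 * 1000, // 30 minutes
  channels: [NotificationChannel.SLACK]
});

// Escalation: same metric, stricter threshold, shorter cooldown
const latencyCriticalRule = createAlertRule({
  name: 'Latency Critical',
  description: 'Escalates when average latency exceeds 3 minutes',
  category: AlertCategory.LATENCY,
  severity: AlertSeverity.CRITICAL,
  conditions: [
    { field: 'coreMetrics.latency.averageLatencyMs', operator: 'gt', value: 180000 }
  ],
  cooldownMs: 5 * 60 * 1000, // 5 minutes
  channels: [NotificationChannel.SLACK, NotificationChannel.WEBHOOK]
});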
Test Your Alert Rules
Regularly test your alert rules with known conditions to ensure they trigger correctly. Consider creating test scenarios that simulate various failure modes.
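One rough way to do this is to feed a synthetic metrics object that should trip a rule. The sketch below casts a partial object, which only works if the rule (and createAlertContext) read just the fields you provide; flesh out the object if anything else is required.
// Synthetic metrics designed to trigger customLatencyRule (2-minute threshold)
const syntheticMetrics = {
  coreMetrics: {
    latency: { averageLatencyMs: 300000 } // 5 minutes
  }
} as unknown as InteropMetrics;

const testContext = createAlertContext(syntheticMetrics, [], undefined, 60 * 60 * 1000);

const testResults = await processAlerts(
  [customLatencyRule],
  testContext,
  alertNotificationCallback
);

console.log('Rule triggered:', testResults[0]?.triggered);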
Avoid Alert Fatigue
Don't create too many alerts or set thresholds too low. Focus on alerts that require action, and use duration-based conditions to filter out transient issues.
Next Steps
Now that you understand the Alerts Module, explore complete examples that combine all three modules: