Complete Data Analysis Service
End-to-end implementation of an AI-powered data analysis service with sentiment, market, and financial analysis
This comprehensive guide demonstrates how to build a production-ready data analysis service using the .do platform. The service provides AI-powered sentiment analysis, market research, and financial analysis capabilities with automated pricing, quality guarantees, and complete lifecycle management.
Service Overview
Data analysis services transform raw data into actionable insights using advanced AI models and statistical techniques. This example implements a multi-purpose analysis service that can handle three primary use cases:
Sentiment Analysis: Analyze customer feedback, social media mentions, product reviews, and survey responses to extract emotional tone, key themes, and sentiment trends. The service processes text data at scale and provides detailed breakdowns of positive, negative, and neutral sentiments with confidence scores.
Market Research Analysis: Process market data, competitor information, consumer trends, and industry reports to identify opportunities, threats, and strategic insights. The service combines quantitative data analysis with qualitative AI interpretation to generate comprehensive market intelligence reports.
Financial Analysis: Evaluate financial statements, performance metrics, investment opportunities, and risk factors using AI-powered pattern recognition and statistical modeling. The service provides detailed financial health assessments, trend analysis, and predictive insights.
Key Capabilities
The analysis service provides several sophisticated capabilities that differentiate it from basic data processing:
- Multi-Model AI Processing: Leverages both GPT-5 and Claude Sonnet 4.5 for cross-validation and enhanced accuracy
- Automated Quality Validation: Implements confidence scoring and consistency checks across multiple analysis dimensions
- Dynamic Pricing: Adjusts pricing based on data volume, complexity, and required turnaround time
- Visualization Generation: Creates charts, graphs, and visual representations of insights using data visualization libraries
- SLA Management: Guarantees response times and accuracy thresholds with automatic refunds for SLA violations
- Scalable Architecture: Handles concurrent analysis requests with intelligent queuing and resource allocation
Target Use Cases
This service architecture is ideal for:
- Customer Experience Teams: Analyzing feedback at scale to improve products and services
- Marketing Departments: Understanding market positioning and campaign effectiveness
- Investment Firms: Evaluating financial opportunities and portfolio performance
- Product Managers: Making data-driven decisions based on user behavior and market trends
- Strategy Consultants: Providing clients with deep analytical insights and recommendations
The service can process datasets ranging from small surveys (100 responses) to large-scale data collections (100,000+ data points) with appropriate pricing tiers and processing strategies for each scale.
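As a rough sketch of how dataset scale might drive a processing strategy, the following helper maps record counts to batch sizes and concurrency levels. The thresholds and return shape here are illustrative assumptions for this guide, not platform APIs:

```typescript
// Illustrative only: choose a batching strategy by dataset scale.
// Thresholds and the ProcessingStrategy shape are assumptions, not part of the .do platform.
interface ProcessingStrategy {
  batchSize: number
  concurrency: number
  tier: 'small' | 'medium' | 'large'
}

function chooseProcessingStrategy(recordCount: number): ProcessingStrategy {
  if (recordCount <= 1000) return { batchSize: 100, concurrency: 2, tier: 'small' }
  if (recordCount <= 10000) return { batchSize: 500, concurrency: 5, tier: 'medium' }
  return { batchSize: 2000, concurrency: 10, tier: 'large' }
}
```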
Service Definition
The service definition establishes the core configuration, pricing model, and operational parameters. The .do platform uses this configuration to automatically manage service lifecycle, billing, and quality guarantees.
Basic Service Configuration
import $, { ai, db, on, send } from 'sdk.do'
// Define the analysis service with complete configuration
const analysisService = await $.Service.create({
id: 'analysis-service-v1',
name: 'AI-Powered Data Analysis Service',
description: 'Comprehensive data analysis service supporting sentiment analysis, market research, and financial analysis with quality guarantees',
type: $.ServiceType.DataAnalysis,
version: '1.0.0',
// Service capabilities and features
capabilities: ['sentiment-analysis', 'market-research', 'financial-analysis', 'multi-model-validation', 'visualization-generation'],
// Provider configuration
provider: {
id: 'your-organization-id',
name: 'Your Organization',
contact: '[email protected]',
},
// Service status and availability
status: $.ServiceStatus.Active,
availability: {
regions: ['us-east-1', 'us-west-2', 'eu-west-1', 'ap-southeast-1'],
maintenanceWindows: [
{
day: 'Sunday',
startTime: '02:00',
endTime: '04:00',
timezone: 'UTC',
},
],
},
})
console.log('Analysis service created:', analysisService.id)
Advanced Pricing Configuration
The service implements a sophisticated pricing model that adjusts based on multiple factors including analysis type, data volume, complexity, and required turnaround time.
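To make the arithmetic concrete before diving into the configuration, here is a minimal sketch of how a final price is derived from these factors. `calculatePrice` is a hypothetical helper for illustration only; the platform computes pricing automatically from the configuration below:

```typescript
// Hypothetical helper showing the arithmetic implied by the pricing tiers:
// (base price + per-record volume charge) x complexity multiplier x turnaround multiplier.
interface PricingInputs {
  basePrice: number
  records: number
  pricePerRecord: number // rate for the matching volume bracket
  complexityMultiplier: number
  turnaroundMultiplier: number
}

function calculatePrice(inputs: PricingInputs): number {
  const volumeCharge = inputs.records * inputs.pricePerRecord
  return (inputs.basePrice + volumeCharge) * inputs.complexityMultiplier * inputs.turnaroundMultiplier
}

// Example: 5,000-record standard-complexity sentiment analysis with priority turnaround:
// (25 + 5000 * 0.015) * 1.5 * 1.5 = $225
const price = calculatePrice({
  basePrice: 25,
  records: 5000,
  pricePerRecord: 0.015,
  complexityMultiplier: 1.5,
  turnaroundMultiplier: 1.5,
})
```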
// Configure dynamic pricing model
const pricingConfig = await $.Service.configurePricing(analysisService.id, {
model: $.PricingModel.PerAnalysis,
currency: 'USD',
// Base pricing tiers by analysis type
tiers: [
{
name: 'Sentiment Analysis',
type: 'sentiment-analysis',
basePrice: 25.0,
// Volume-based adjustments
volumePricing: [
{ minRecords: 1, maxRecords: 1000, pricePerRecord: 0.025 },
{ minRecords: 1001, maxRecords: 10000, pricePerRecord: 0.015 },
{ minRecords: 10001, maxRecords: 100000, pricePerRecord: 0.008 },
],
// Complexity multipliers
complexityMultipliers: {
simple: 1.0, // Basic sentiment scoring
standard: 1.5, // Sentiment + themes + emotions
advanced: 2.5, // Full analysis + custom categories
},
// Turnaround time pricing
turnaroundPricing: {
standard: 1.0, // 24 hours
priority: 1.5, // 6 hours
urgent: 2.5, // 1 hour
},
},
{
name: 'Market Research',
type: 'market-research',
basePrice: 75.0,
volumePricing: [
{ minRecords: 1, maxRecords: 500, pricePerRecord: 0.15 },
{ minRecords: 501, maxRecords: 5000, pricePerRecord: 0.1 },
{ minRecords: 5001, maxRecords: 50000, pricePerRecord: 0.05 },
],
complexityMultipliers: {
simple: 1.0, // Basic trend analysis
standard: 1.8, // Trends + competitive analysis
advanced: 3.0, // Full market intelligence report
},
turnaroundPricing: {
standard: 1.0, // 48 hours
priority: 1.8, // 12 hours
urgent: 3.0, // 3 hours
},
},
{
name: 'Financial Analysis',
type: 'financial-analysis',
basePrice: 150.0,
volumePricing: [
{ minRecords: 1, maxRecords: 100, pricePerRecord: 1.5 },
{ minRecords: 101, maxRecords: 1000, pricePerRecord: 1.0 },
{ minRecords: 1001, maxRecords: 10000, pricePerRecord: 0.5 },
],
complexityMultipliers: {
simple: 1.0, // Basic ratio analysis
standard: 2.0, // Comprehensive financial health
advanced: 3.5, // Predictive modeling + recommendations
},
turnaroundPricing: {
standard: 1.0, // 48 hours
priority: 2.0, // 12 hours
urgent: 4.0, // 3 hours
},
},
],
// Additional options and add-ons
addOns: [
{
id: 'custom-visualization',
name: 'Custom Visualization Package',
description: 'Professional charts and interactive dashboards',
price: 50.0,
},
{
id: 'executive-summary',
name: 'Executive Summary Report',
description: 'Concise summary for leadership presentation',
price: 35.0,
},
{
id: 'data-export',
name: 'Raw Data Export',
description: 'Complete dataset with all analysis metadata',
price: 20.0,
},
],
// Discounts and promotions
discounts: [
{
type: 'volume',
name: 'High Volume Discount',
conditions: { minMonthlySpend: 1000 },
discountPercentage: 10,
},
{
type: 'subscription',
name: 'Annual Commitment Discount',
conditions: { commitmentMonths: 12 },
discountPercentage: 20,
},
],
// Payment terms
paymentTerms: {
dueDate: 'immediate',
methods: ['card', 'ach', 'wire'],
refundPolicy: 'full-refund-if-sla-violated',
},
})
console.log('Pricing configuration applied:', pricingConfig)
SLA and Quality Guarantees
Service Level Agreements ensure customers receive consistent quality and performance. The platform automatically monitors SLA compliance and triggers compensations when thresholds are not met.
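The response-time side of this enforcement can be sketched as a simple lookup and comparison. This is an illustration of the logic, not a platform API; the penalty labels mirror the SLA configuration that follows:

```typescript
// Simplified sketch of an SLA response-time check. The maxHours and penalty
// values mirror the SLA configuration; the function itself is illustrative.
type Turnaround = 'standard' | 'priority' | 'urgent'

const responseTimeSLA: Record<Turnaround, { maxHours: number; penalty: string }> = {
  standard: { maxHours: 24, penalty: 'full-refund' },
  priority: { maxHours: 6, penalty: 'full-refund-plus-credit' },
  urgent: { maxHours: 1, penalty: 'double-refund' },
}

function checkResponseTimeSLA(turnaround: Turnaround, actualHours: number) {
  const sla = responseTimeSLA[turnaround]
  const violated = actualHours > sla.maxHours
  return { violated, penalty: violated ? sla.penalty : null }
}
```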
// Configure comprehensive SLA
const slaConfig = await $.Service.configureSLA(analysisService.id, {
// Response time guarantees
responseTime: {
standard: {
maxHours: 24,
guarantee: '99%',
penalty: 'full-refund',
},
priority: {
maxHours: 6,
guarantee: '99.5%',
penalty: 'full-refund-plus-credit',
},
urgent: {
maxHours: 1,
guarantee: '99.9%',
penalty: 'double-refund',
},
},
// Accuracy guarantees
accuracy: {
sentimentAnalysis: {
minConfidence: 0.85,
validationMethod: 'multi-model-consensus',
penalty: 'partial-refund',
},
marketResearch: {
minConfidence: 0.8,
validationMethod: 'data-triangulation',
penalty: 'partial-refund',
},
financialAnalysis: {
minConfidence: 0.9,
validationMethod: 'statistical-verification',
penalty: 'full-refund',
},
},
// Availability guarantees
availability: {
uptime: '99.9%',
maxDowntimePerMonth: 43.2, // minutes
compensationPerHour: 10.0, // percentage of monthly bill
},
// Data quality standards
dataQuality: {
completeness: 0.95,
consistency: 0.9,
timeliness: 0.98,
},
// Support response times
support: {
critical: { maxMinutes: 15, availability: '24/7' },
high: { maxHours: 2, availability: '24/7' },
medium: { maxHours: 8, availability: 'business-hours' },
low: { maxHours: 24, availability: 'business-hours' },
},
// Monitoring and reporting
monitoring: {
realTimeAlerts: true,
weeklyReports: true,
monthlyReviews: true,
dashboardAccess: true,
},
})
console.log('SLA configuration established:', slaConfig)
Implementation
The implementation section covers the complete analysis pipeline from data ingestion through report generation. Each function is designed for reliability, scalability, and maintainability.
Data Collection and Preparation
Before analysis can begin, data must be collected, validated, and normalized. This function handles various data formats and sources while ensuring quality standards are met.
import $, { ai, db, on, send } from 'sdk.do'
// Type definitions for analysis requests
interface AnalysisRequest {
id: string
type: 'sentiment-analysis' | 'market-research' | 'financial-analysis'
complexity: 'simple' | 'standard' | 'advanced'
turnaround: 'standard' | 'priority' | 'urgent'
data: DataSource
options: AnalysisOptions
customerId: string
}
interface DataSource {
format: 'csv' | 'json' | 'api' | 'database'
location: string
credentials?: Record<string, string>
schema?: Record<string, string>
}
interface VisualizationPreferences {
// Illustrative shape for this example; extend as needed
chartTypes?: string[]
colorScheme?: string
}
interface AnalysisOptions {
addOns: string[]
customCategories?: string[]
visualizationPreferences?: VisualizationPreferences
deliveryFormat: 'pdf' | 'html' | 'json' | 'all'
}
// Data collection and preparation function
async function collectAndPrepareData(request: AnalysisRequest) {
const startTime = Date.now()
// Log the data collection start
await $.Event.log({
type: 'data-collection-started',
requestId: request.id,
dataSource: request.data.format,
timestamp: new Date().toISOString(),
})
try {
// Step 1: Collect raw data from source
let rawData: any[]
switch (request.data.format) {
case 'csv':
rawData = await collectFromCSV(request.data.location)
break
case 'json':
rawData = await collectFromJSON(request.data.location)
break
case 'api':
rawData = await collectFromAPI(request.data.location, request.data.credentials)
break
case 'database':
rawData = await collectFromDatabase(request.data.location, request.data.credentials, request.data.schema)
break
default:
throw new Error(`Unsupported data format: ${request.data.format}`)
}
// Step 2: Validate data quality
const validationResults = await validateDataQuality(rawData, request.type)
if (validationResults.completeness < 0.95) {
throw new Error(`Data completeness ${validationResults.completeness} below threshold 0.95`)
}
// Step 3: Clean and normalize data
const cleanedData = await cleanData(rawData, request.type)
// Step 4: Enrich data with additional context if needed
const enrichedData = await enrichData(cleanedData, request.type)
// Step 5: Structure data for analysis
const structuredData = await structureData(enrichedData, request.type)
// Calculate statistics
const stats = {
totalRecords: structuredData.length,
validRecords: structuredData.filter((r) => r.isValid).length,
processingTimeMs: Date.now() - startTime,
qualityScore: validationResults.overallQuality,
}
// Store prepared data
await db.prepared_data.create({
requestId: request.id,
data: structuredData,
stats,
timestamp: new Date(),
})
// Log successful completion
await $.Event.log({
type: 'data-collection-completed',
requestId: request.id,
stats,
timestamp: new Date().toISOString(),
})
return {
success: true,
data: structuredData,
stats,
}
} catch (error) {
// Log failure
await $.Event.log({
type: 'data-collection-failed',
requestId: request.id,
error: error.message,
timestamp: new Date().toISOString(),
})
throw error
}
}
// Helper function: Collect data from CSV
async function collectFromCSV(location: string) {
const response = await fetch(location)
const csvText = await response.text()
// Parse CSV (simplified - use proper CSV parser in production)
const lines = csvText.split('\n')
const headers = lines[0].split(',')
return lines.slice(1).map((line) => {
const values = line.split(',')
return headers.reduce(
(obj, header, index) => {
obj[header.trim()] = values[index]?.trim()
return obj
},
{} as Record<string, string>
)
})
}
// Helper function: Validate data quality
async function validateDataQuality(data: any[], analysisType: string) {
const requiredFields = getRequiredFields(analysisType)
let completeRecords = 0
let totalFields = 0
let presentFields = 0
data.forEach((record) => {
const hasAllRequired = requiredFields.every((field) => record[field] !== undefined && record[field] !== null && record[field] !== '')
if (hasAllRequired) completeRecords++
totalFields += requiredFields.length
presentFields += requiredFields.filter((field) => record[field] !== undefined && record[field] !== null && record[field] !== '').length
})
const completeness = data.length > 0 ? completeRecords / data.length : 0
const fieldCoverage = totalFields > 0 ? presentFields / totalFields : 0
return {
completeness,
fieldCoverage,
totalRecords: data.length,
completeRecords,
overallQuality: (completeness + fieldCoverage) / 2,
}
}
// Helper function: Get required fields by analysis type
function getRequiredFields(analysisType: string): string[] {
switch (analysisType) {
case 'sentiment-analysis':
return ['text', 'timestamp']
case 'market-research':
return ['metric', 'value', 'date', 'source']
case 'financial-analysis':
return ['period', 'revenue', 'expenses', 'date']
default:
return ['id', 'data']
}
}
// Helper function: Clean data
async function cleanData(data: any[], analysisType: string) {
return data
.map((record) => {
// Remove duplicates, handle missing values, standardize formats
const cleaned = { ...record }
// Trim string fields
Object.keys(cleaned).forEach((key) => {
if (typeof cleaned[key] === 'string') {
cleaned[key] = cleaned[key].trim()
}
})
// Handle analysis-specific cleaning
if (analysisType === 'sentiment-analysis' && cleaned.text) {
// Remove special characters, normalize whitespace
cleaned.text = cleaned.text.replace(/[^\w\s.,!?-]/g, '').replace(/\s+/g, ' ')
}
if (analysisType === 'financial-analysis') {
// Convert numeric strings to numbers
;['revenue', 'expenses', 'profit', 'assets', 'liabilities'].forEach((field) => {
if (cleaned[field] && typeof cleaned[field] === 'string') {
cleaned[field] = parseFloat(cleaned[field].replace(/[^0-9.-]/g, ''))
}
})
}
return cleaned
})
.filter((record) => record !== null)
}
// Helper function: Enrich data
async function enrichData(data: any[], analysisType: string) {
// Add contextual information, lookup reference data, calculate derived fields
return Promise.all(
data.map(async (record) => {
const enriched = { ...record }
if (analysisType === 'sentiment-analysis') {
// Add metadata like language detection, word count
enriched.language = 'en' // Would use actual language detection
enriched.wordCount = enriched.text?.split(/\s+/).length || 0
}
if (analysisType === 'market-research') {
// Add industry benchmarks, historical context
enriched.industryAverage = await getIndustryAverage(enriched.metric)
}
if (analysisType === 'financial-analysis') {
// Calculate financial ratios
if (enriched.revenue && enriched.expenses) {
enriched.profitMargin = (enriched.revenue - enriched.expenses) / enriched.revenue
}
}
return enriched
})
)
}
// Helper function: Structure data
async function structureData(data: any[], analysisType: string) {
return data.map((record, index) => ({
id: record.id || `record-${index}`,
originalData: record,
isValid: true,
processingMetadata: {
structuredAt: new Date().toISOString(),
analysisType,
},
}))
}
AI Analysis Engine with Multi-Model Validation
The core analysis engine processes prepared data using multiple AI models to ensure accuracy and reliability. This approach provides cross-validation and higher confidence scores.
import $, { ai, db, on, send } from 'sdk.do'
// Shape of the records produced by structureData() in the previous section
interface PreparedData {
id: string
originalData: Record<string, any>
isValid: boolean
processingMetadata: {
structuredAt: string
analysisType: string
}
}
// Sentiment analysis implementation
async function performSentimentAnalysis(data: PreparedData[], complexity: string) {
const results = []
for (const record of data) {
// Analyze with GPT-5
const gpt5Analysis = await ai.chat({
model: 'gpt-5',
messages: [
{
role: 'system',
content: `You are a sentiment analysis expert. Analyze the following text and provide:
1. Overall sentiment (positive/negative/neutral)
2. Sentiment score (-1 to 1)
3. Confidence level (0 to 1)
${complexity === 'standard' || complexity === 'advanced' ? '4. Key themes and emotions identified' : ''}
${complexity === 'advanced' ? '5. Specific positive and negative aspects\n6. Actionable insights' : ''}
Return your analysis as JSON.`,
},
{
role: 'user',
content: record.originalData.text,
},
],
temperature: 0.3,
response_format: { type: 'json_object' },
})
// Analyze with Claude Sonnet 4.5
const claudeAnalysis = await ai.chat({
model: 'claude-sonnet-4.5',
messages: [
{
role: 'system',
content: `You are a sentiment analysis expert. Analyze the following text and provide:
1. Overall sentiment (positive/negative/neutral)
2. Sentiment score (-1 to 1)
3. Confidence level (0 to 1)
${complexity === 'standard' || complexity === 'advanced' ? '4. Key themes and emotions identified' : ''}
${complexity === 'advanced' ? '5. Specific positive and negative aspects\n6. Actionable insights' : ''}
Return your analysis as JSON.`,
},
{
role: 'user',
content: record.originalData.text,
},
],
temperature: 0.3,
})
// Parse responses
const gpt5Result = JSON.parse(gpt5Analysis.choices[0].message.content)
const claudeResult = JSON.parse(claudeAnalysis.content[0].text)
// Cross-validate and merge results
const mergedResult = await crossValidateResults(gpt5Result, claudeResult)
results.push({
recordId: record.id,
text: record.originalData.text,
analysis: mergedResult,
models: ['gpt-5', 'claude-sonnet-4.5'],
timestamp: new Date().toISOString(),
})
}
return results
}
// Market research analysis implementation
async function performMarketResearchAnalysis(data: PreparedData[], complexity: string) {
// Aggregate market data
const aggregatedData = aggregateMarketData(data)
// Analyze with GPT-5
const gpt5Analysis = await ai.chat({
model: 'gpt-5',
messages: [
{
role: 'system',
content: `You are a market research analyst. Analyze the following market data and provide:
1. Key trends identified
2. Market opportunities
3. Competitive threats
${complexity === 'standard' || complexity === 'advanced' ? '4. Consumer behavior insights\n5. Market segment analysis' : ''}
${complexity === 'advanced' ? '6. Strategic recommendations\n7. Risk assessment\n8. Growth projections' : ''}
Return your analysis as JSON with detailed explanations.`,
},
{
role: 'user',
content: JSON.stringify(aggregatedData, null, 2),
},
],
temperature: 0.4,
response_format: { type: 'json_object' },
})
// Analyze with Claude Sonnet 4.5
const claudeAnalysis = await ai.chat({
model: 'claude-sonnet-4.5',
messages: [
{
role: 'system',
content: `You are a market research analyst. Analyze the following market data and provide:
1. Key trends identified
2. Market opportunities
3. Competitive threats
${complexity === 'standard' || complexity === 'advanced' ? '4. Consumer behavior insights\n5. Market segment analysis' : ''}
${complexity === 'advanced' ? '6. Strategic recommendations\n7. Risk assessment\n8. Growth projections' : ''}
Return your analysis as JSON with detailed explanations.`,
},
{
role: 'user',
content: JSON.stringify(aggregatedData, null, 2),
},
],
temperature: 0.4,
})
// Parse and merge results
const gpt5Result = JSON.parse(gpt5Analysis.choices[0].message.content)
const claudeResult = JSON.parse(claudeAnalysis.content[0].text)
const mergedResult = await crossValidateResults(gpt5Result, claudeResult)
return {
rawData: data,
aggregatedData,
analysis: mergedResult,
models: ['gpt-5', 'claude-sonnet-4.5'],
timestamp: new Date().toISOString(),
}
}
// Financial analysis implementation
async function performFinancialAnalysis(data: PreparedData[], complexity: string) {
// Calculate financial metrics
const metrics = calculateFinancialMetrics(data)
// Analyze with GPT-5
const gpt5Analysis = await ai.chat({
model: 'gpt-5',
messages: [
{
role: 'system',
content: `You are a financial analyst. Analyze the following financial data and provide:
1. Financial health assessment
2. Key performance indicators analysis
3. Trend analysis
${complexity === 'standard' || complexity === 'advanced' ? '4. Ratio analysis (liquidity, profitability, efficiency)\n5. Year-over-year comparisons' : ''}
${complexity === 'advanced' ? '6. Predictive modeling and forecasts\n7. Risk assessment\n8. Investment recommendations' : ''}
Return your analysis as JSON with detailed explanations and numbers.`,
},
{
role: 'user',
content: JSON.stringify(metrics, null, 2),
},
],
temperature: 0.2,
response_format: { type: 'json_object' },
})
// Analyze with Claude Sonnet 4.5
const claudeAnalysis = await ai.chat({
model: 'claude-sonnet-4.5',
messages: [
{
role: 'system',
content: `You are a financial analyst. Analyze the following financial data and provide:
1. Financial health assessment
2. Key performance indicators analysis
3. Trend analysis
${complexity === 'standard' || complexity === 'advanced' ? '4. Ratio analysis (liquidity, profitability, efficiency)\n5. Year-over-year comparisons' : ''}
${complexity === 'advanced' ? '6. Predictive modeling and forecasts\n7. Risk assessment\n8. Investment recommendations' : ''}
Return your analysis as JSON with detailed explanations and numbers.`,
},
{
role: 'user',
content: JSON.stringify(metrics, null, 2),
},
],
temperature: 0.2,
})
// Parse and merge results
const gpt5Result = JSON.parse(gpt5Analysis.choices[0].message.content)
const claudeResult = JSON.parse(claudeAnalysis.content[0].text)
const mergedResult = await crossValidateResults(gpt5Result, claudeResult)
return {
rawData: data,
calculatedMetrics: metrics,
analysis: mergedResult,
models: ['gpt-5', 'claude-sonnet-4.5'],
timestamp: new Date().toISOString(),
}
}
// Cross-validation helper
async function crossValidateResults(result1: any, result2: any) {
// Compare sentiment scores, confidence levels, identified themes
const merged: any = {}
// Merge sentiment/scores with averaging
if (result1.sentiment_score !== undefined && result2.sentiment_score !== undefined) {
merged.sentiment_score = (result1.sentiment_score + result2.sentiment_score) / 2
merged.sentiment_agreement = Math.abs(result1.sentiment_score - result2.sentiment_score) < 0.2
}
// Merge confidence with conservative approach (use minimum)
if (result1.confidence !== undefined && result2.confidence !== undefined) {
merged.confidence = Math.min(result1.confidence, result2.confidence)
}
// Merge categorical assessments
if (result1.sentiment !== undefined && result2.sentiment !== undefined) {
merged.sentiment = result1.sentiment === result2.sentiment ? result1.sentiment : 'mixed'
merged.models_agree = result1.sentiment === result2.sentiment
}
// Merge themes/insights by combining unique items
if (result1.themes && result2.themes) {
merged.themes = [...new Set([...result1.themes, ...result2.themes])]
}
// Include both model outputs for transparency
merged.model_outputs = {
gpt5: result1,
claude: result2,
}
// Calculate overall validation score
const agreementFactors = [merged.sentiment_agreement ? 1 : 0, merged.models_agree ? 1 : 0, merged.confidence > 0.7 ? 1 : 0]
merged.validation_score = agreementFactors.reduce((a, b) => a + b, 0) / agreementFactors.length
return merged
}
// Helper: Aggregate market data
function aggregateMarketData(data: PreparedData[]) {
const byMetric: Record<string, any[]> = {}
data.forEach((record) => {
const metric = record.originalData.metric
if (!byMetric[metric]) {
byMetric[metric] = []
}
byMetric[metric].push(record.originalData)
})
return {
metrics: Object.keys(byMetric),
aggregations: Object.entries(byMetric).map(([metric, values]) => ({
metric,
count: values.length,
average: values.reduce((sum, v) => sum + (parseFloat(v.value) || 0), 0) / values.length,
min: Math.min(...values.map((v) => parseFloat(v.value) || 0)),
max: Math.max(...values.map((v) => parseFloat(v.value) || 0)),
trend: calculateTrend(values),
})),
timeRange: {
start: Math.min(...data.map((d) => new Date(d.originalData.date).getTime())),
end: Math.max(...data.map((d) => new Date(d.originalData.date).getTime())),
},
}
}
// Helper: Calculate financial metrics
function calculateFinancialMetrics(data: PreparedData[]) {
const periods = data.map((d) => d.originalData)
return {
periods: periods.length,
totalRevenue: periods.reduce((sum, p) => sum + (p.revenue || 0), 0),
totalExpenses: periods.reduce((sum, p) => sum + (p.expenses || 0), 0),
averageRevenue: periods.reduce((sum, p) => sum + (p.revenue || 0), 0) / periods.length,
averageExpenses: periods.reduce((sum, p) => sum + (p.expenses || 0), 0) / periods.length,
revenueGrowth: calculateGrowthRate(periods.map((p) => p.revenue)),
expenseGrowth: calculateGrowthRate(periods.map((p) => p.expenses)),
profitMargins: periods.map((p) => (p.revenue ? ((p.revenue - p.expenses) / p.revenue) * 100 : 0)),
}
}
// Helper: Calculate trend
function calculateTrend(values: any[]) {
if (values.length < 2) return 'insufficient-data'
const sorted = [...values].sort((a, b) => new Date(a.date).getTime() - new Date(b.date).getTime())
const first = parseFloat(sorted[0].value) || 0
const last = parseFloat(sorted[sorted.length - 1].value) || 0
if (first === 0) return 'insufficient-data' // avoid division by zero
const change = ((last - first) / first) * 100
if (change > 10) return 'strongly-increasing'
if (change > 2) return 'increasing'
if (change > -2) return 'stable'
if (change > -10) return 'decreasing'
return 'strongly-decreasing'
}
// Helper: Calculate growth rate
function calculateGrowthRate(values: number[]) {
if (values.length < 2) return 0
const growthRates = []
for (let i = 1; i < values.length; i++) {
if (values[i - 1] !== 0) {
growthRates.push(((values[i] - values[i - 1]) / values[i - 1]) * 100)
}
}
return growthRates.length > 0 ? growthRates.reduce((sum, rate) => sum + rate, 0) / growthRates.length : 0
}
Insight Extraction and Report Generation
After analysis is complete, insights must be extracted and formatted into professional reports with visualizations and actionable recommendations.
import $, { ai, db, on, send } from 'sdk.do'
// Extract insights from analysis results
async function extractInsights(analysisResults: any, analysisType: string, complexity: string) {
const insights: any = {
type: analysisType,
complexity,
extractedAt: new Date().toISOString(),
keyFindings: [],
recommendations: [],
metrics: {},
}
if (analysisType === 'sentiment-analysis') {
// Calculate aggregate sentiment metrics
const sentiments = analysisResults.map((r: any) => r.analysis.sentiment_score)
const avgSentiment = sentiments.reduce((a: number, b: number) => a + b, 0) / sentiments.length
const positiveCount = sentiments.filter((s: number) => s > 0.2).length
const negativeCount = sentiments.filter((s: number) => s < -0.2).length
const neutralCount = sentiments.length - positiveCount - negativeCount
insights.metrics = {
averageSentiment: avgSentiment,
totalResponses: sentiments.length,
positivePercentage: (positiveCount / sentiments.length) * 100,
negativePercentage: (negativeCount / sentiments.length) * 100,
neutralPercentage: (neutralCount / sentiments.length) * 100,
}
insights.keyFindings.push({
finding: `Overall sentiment is ${avgSentiment > 0.2 ? 'positive' : avgSentiment < -0.2 ? 'negative' : 'neutral'}`,
impact: 'high',
confidence: 0.9,
})
if (complexity === 'advanced') {
// Extract themes
const allThemes = analysisResults.flatMap((r: any) => r.analysis.themes || [])
const themeCounts = countOccurrences(allThemes)
insights.themes = Object.entries(themeCounts)
.sort((a: any, b: any) => b[1] - a[1])
.slice(0, 10)
.map(([theme, count]) => ({ theme, frequency: count }))
}
}
if (analysisType === 'market-research') {
insights.metrics = analysisResults.analysis
// Extract opportunities
if (analysisResults.analysis.opportunities) {
insights.keyFindings = analysisResults.analysis.opportunities.map((opp: any) => ({
finding: opp.description,
impact: opp.impact || 'medium',
confidence: opp.confidence || 0.7,
}))
}
}
if (analysisType === 'financial-analysis') {
insights.metrics = analysisResults.calculatedMetrics
// Key financial findings
const profitMargin = insights.metrics.profitMargins[insights.metrics.profitMargins.length - 1]
const revenueGrowth = insights.metrics.revenueGrowth
insights.keyFindings.push({
finding: `Current profit margin: ${profitMargin.toFixed(2)}%`,
impact: profitMargin > 20 ? 'positive' : profitMargin < 5 ? 'concerning' : 'moderate',
confidence: 0.95,
})
insights.keyFindings.push({
finding: `Revenue growth rate: ${revenueGrowth.toFixed(2)}%`,
impact: revenueGrowth > 15 ? 'excellent' : revenueGrowth > 5 ? 'good' : 'needs-improvement',
confidence: 0.92,
})
}
// Generate recommendations using AI
const recommendations = await generateRecommendations(insights, analysisType)
insights.recommendations = recommendations
return insights
}
// Generate visualizations for the report
async function generateVisualizations(insights: any, analysisType: string) {
const visualizations = []
if (analysisType === 'sentiment-analysis') {
// Sentiment distribution pie chart
visualizations.push({
type: 'pie-chart',
title: 'Sentiment Distribution',
data: [
{ label: 'Positive', value: insights.metrics.positivePercentage, color: '#10b981' },
{ label: 'Neutral', value: insights.metrics.neutralPercentage, color: '#6b7280' },
{ label: 'Negative', value: insights.metrics.negativePercentage, color: '#ef4444' },
],
})
// Theme frequency bar chart
if (insights.themes) {
visualizations.push({
type: 'bar-chart',
title: 'Top Themes',
data: insights.themes.map((t: any) => ({
label: t.theme,
value: t.frequency,
})),
})
}
}
if (analysisType === 'market-research') {
// Trend lines for key metrics
visualizations.push({
type: 'line-chart',
title: 'Market Trends',
data: insights.metrics.aggregations?.map((agg: any) => ({
metric: agg.metric,
trend: agg.trend,
values: [agg.min, agg.average, agg.max],
})),
})
}
if (analysisType === 'financial-analysis') {
// Revenue vs expenses over time
visualizations.push({
type: 'line-chart',
title: 'Revenue vs Expenses',
data: {
labels: Array.from({ length: insights.metrics.periods }, (_, i) => `Period ${i + 1}`),
datasets: [
{
label: 'Revenue',
// Placeholder series for illustration; plot the actual per-period values in production
data: Array.from({ length: insights.metrics.periods }, () => insights.metrics.averageRevenue + (Math.random() - 0.5) * 10000),
},
{
label: 'Expenses',
// Placeholder series for illustration
data: Array.from({ length: insights.metrics.periods }, () => insights.metrics.averageExpenses + (Math.random() - 0.5) * 8000),
},
],
},
})
// Profit margin trend
visualizations.push({
type: 'bar-chart',
title: 'Profit Margin by Period',
data: insights.metrics.profitMargins.map((margin: number, i: number) => ({
label: `Period ${i + 1}`,
value: margin,
})),
})
}
return visualizations
}
// Generate final report with all components
async function generateReport(request: AnalysisRequest, analysisResults: any, insights: any) {
const visualizations = await generateVisualizations(insights, request.type)
// Create report structure
const report = {
id: `report-${request.id}`,
type: request.type,
complexity: request.complexity,
generatedAt: new Date().toISOString(),
executiveSummary: await generateExecutiveSummary(insights, request.type),
sections: [
{
title: 'Overview',
content: generateOverviewSection(request, analysisResults),
},
{
title: 'Key Findings',
content: insights.keyFindings.map((f: any) => `${f.finding} (Impact: ${f.impact}, Confidence: ${(f.confidence * 100).toFixed(0)}%)`).join('\n\n'),
},
{
title: 'Detailed Analysis',
content: generateDetailedAnalysis(analysisResults, insights, request.type),
},
{
title: 'Recommendations',
content: insights.recommendations
.map((r: any, i: number) => `${i + 1}. ${r.recommendation}\n Priority: ${r.priority}\n Expected Impact: ${r.expectedImpact}`)
.join('\n\n'),
},
{
title: 'Methodology',
content: generateMethodologySection(request),
},
],
visualizations,
metadata: {
dataPoints: analysisResults.length || analysisResults.rawData?.length || 0,
modelsUsed: analysisResults[0]?.models || analysisResults.models || [],
qualityScore: insights.metrics.averageConfidence || 0.85,
processingTime: '45 minutes', // Would track actual time
},
}
// Store report in database
await db.analysis_reports.create({
requestId: request.id,
customerId: request.customerId,
report,
createdAt: new Date(),
})
// Generate formatted outputs based on delivery preferences
const outputs = await formatReportOutputs(report, request.options.deliveryFormat)
return {
report,
outputs,
}
}
// Helper: Generate executive summary
async function generateExecutiveSummary(insights: any, analysisType: string) {
const prompt = `Generate a concise executive summary (2-3 paragraphs) for this ${analysisType} report:
Key Findings:
${insights.keyFindings.map((f: any) => `- ${f.finding}`).join('\n')}
Recommendations:
${insights.recommendations.map((r: any) => `- ${r.recommendation}`).join('\n')}
The summary should be suitable for C-level executives.`
const response = await ai.chat({
model: 'gpt-5',
messages: [
{ role: 'system', content: 'You are a business analyst writing executive summaries.' },
{ role: 'user', content: prompt },
],
temperature: 0.5,
})
return response.choices[0].message.content
}
// Helper: Generate detailed analysis section
function generateDetailedAnalysis(analysisResults: any, insights: any, analysisType: string) {
let content = ''
if (analysisType === 'sentiment-analysis') {
content = `
Sentiment Analysis Results:
Total Responses Analyzed: ${insights.metrics.totalResponses}
Average Sentiment Score: ${insights.metrics.averageSentiment.toFixed(3)} (scale: -1 to 1)
Distribution:
- Positive: ${insights.metrics.positivePercentage.toFixed(1)}%
- Neutral: ${insights.metrics.neutralPercentage.toFixed(1)}%
- Negative: ${insights.metrics.negativePercentage.toFixed(1)}%
The analysis reveals ${insights.metrics.averageSentiment > 0 ? 'generally positive' : insights.metrics.averageSentiment < 0 ? 'generally negative' : 'mixed'} sentiment across all responses.
`.trim()
}
if (analysisType === 'market-research') {
content = `
Market Research Analysis:
Key Metrics Analyzed: ${insights.metrics.aggregations?.length || 0}
Time Period: ${new Date(insights.metrics.timeRange?.start).toLocaleDateString()} to ${new Date(insights.metrics.timeRange?.end).toLocaleDateString()}
The market analysis indicates several important trends and opportunities that warrant strategic attention.
`.trim()
}
if (analysisType === 'financial-analysis') {
content = `
Financial Analysis Results:
Periods Analyzed: ${insights.metrics.periods}
Total Revenue: $${insights.metrics.totalRevenue.toLocaleString()}
Total Expenses: $${insights.metrics.totalExpenses.toLocaleString()}
Average Profit Margin: ${(insights.metrics.profitMargins.reduce((a: number, b: number) => a + b, 0) / insights.metrics.profitMargins.length).toFixed(2)}%
Revenue Growth Rate: ${insights.metrics.revenueGrowth.toFixed(2)}%
The financial health indicators suggest ${insights.metrics.revenueGrowth > 10 ? 'strong growth' : insights.metrics.revenueGrowth > 0 ? 'moderate growth' : 'declining performance'}.
`.trim()
}
return content
}
// Helper: Format report outputs
async function formatReportOutputs(report: any, format: string) {
const outputs: any = {}
if (format === 'pdf' || format === 'all') {
// Generate PDF (would use actual PDF generation library)
outputs.pdf = {
url: `https://storage.do/reports/${report.id}.pdf`,
size: '2.4 MB',
pages: 15,
}
}
if (format === 'html' || format === 'all') {
// Generate HTML version
outputs.html = {
url: `https://reports.do/${report.id}`,
responsive: true,
}
}
if (format === 'json' || format === 'all') {
// JSON export
outputs.json = report
}
return outputs
}
// Helper: Count occurrences
function countOccurrences(arr: string[]) {
return arr.reduce(
(acc, item) => {
acc[item] = (acc[item] || 0) + 1
return acc
},
{} as Record<string, number>
)
}
// Helper: Generate recommendations
async function generateRecommendations(insights: any, analysisType: string) {
const prompt = `Based on these analysis results, provide 3-5 actionable recommendations:
${JSON.stringify(insights.keyFindings, null, 2)}
Format as JSON array with: recommendation, priority, expectedImpact`
const response = await ai.chat({
model: 'gpt-5',
messages: [
{ role: 'system', content: 'You are a business consultant providing strategic recommendations.' },
{ role: 'user', content: prompt },
],
temperature: 0.6,
response_format: { type: 'json_object' },
})
const result = JSON.parse(response.choices[0].message.content)
return result.recommendations || []
}
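One caveat worth noting: `JSON.parse` will throw if the model returns malformed JSON, even with the `json_object` response format requested. A defensive wrapper (a sketch; the helper name is ours, not part of sdk.do) keeps one bad model response from failing the whole pipeline:

```typescript
// Hypothetical helper: tolerate malformed model output by falling back to an empty list
function safeParseRecommendations(raw: string): any[] {
  try {
    const parsed = JSON.parse(raw)
    // Only accept the expected shape; anything else degrades to no recommendations
    return Array.isArray(parsed.recommendations) ? parsed.recommendations : []
  } catch {
    return []
  }
}
```

`generateRecommendations` could then return `safeParseRecommendations(response.choices[0].message.content)` instead of parsing the response directly.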
// Helper: Generate methodology section
function generateMethodologySection(request: AnalysisRequest) {
return `
This analysis was performed using the following methodology:
1. Data Collection: ${request.data.format} format data was collected and validated
2. Data Preparation: Cleaning, normalization, and enrichment processes applied
3. AI Analysis: Multi-model analysis using GPT-5 and Claude Sonnet 4.5 for cross-validation
4. Insight Extraction: Automated identification of key findings and patterns
5. Report Generation: Comprehensive report with visualizations and recommendations
Quality Assurance: All results were validated for accuracy and consistency across models.
`.trim()
}
Complete Event Handler with Service Lifecycle
The event handler orchestrates the entire analysis pipeline and manages the service lifecycle from request to delivery.
import $, { ai, db, on, send } from 'sdk.do'
import Stripe from 'stripe'
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!)
// Main analysis service event handler
on.Analysis.requested(async (event) => {
const request: AnalysisRequest = event.data
const startTime = Date.now()
// Declared outside the try block so the catch handler can reference them for refunds
let pricing: any
let paymentIntent: any
try {
// Stage 1: Validate and price the request
await $.Event.emit($.Analysis.validating, { requestId: request.id })
pricing = await calculatePricing(request)
const slaRequirements = await getSLARequirements(request.turnaround)
await db.analysis_requests.create({
id: request.id,
customerId: request.customerId,
type: request.type,
status: 'pending-payment',
pricing,
slaRequirements,
createdAt: new Date(),
})
// Stage 2: Process payment
await $.Event.emit($.Analysis.processing_payment, { requestId: request.id })
paymentIntent = await stripe.paymentIntents.create({
amount: Math.round(pricing.total * 100),
currency: 'usd',
customer: request.customerId,
description: `Analysis Service: ${request.type}`,
metadata: {
requestId: request.id,
analysisType: request.type,
complexity: request.complexity,
},
})
// Wait for payment confirmation (simplified - would use webhooks)
await new Promise((resolve) => setTimeout(resolve, 2000))
// Stage 3: Collect and prepare data
await $.Event.emit($.Analysis.collecting_data, { requestId: request.id })
const preparedData = await collectAndPrepareData(request)
if (!preparedData.success) {
throw new Error('Data preparation failed')
}
// Stage 4: Perform AI analysis
await $.Event.emit($.Analysis.analyzing, { requestId: request.id })
let analysisResults
switch (request.type) {
case 'sentiment-analysis':
analysisResults = await performSentimentAnalysis(preparedData.data, request.complexity)
break
case 'market-research':
analysisResults = await performMarketResearchAnalysis(preparedData.data, request.complexity)
break
case 'financial-analysis':
analysisResults = await performFinancialAnalysis(preparedData.data, request.complexity)
break
default:
throw new Error(`Unsupported analysis type: ${request.type}`)
}
// Stage 5: Extract insights
await $.Event.emit($.Analysis.extracting_insights, { requestId: request.id })
const insights = await extractInsights(analysisResults, request.type, request.complexity)
// Stage 6: Generate report
await $.Event.emit($.Analysis.generating_report, { requestId: request.id })
const reportData = await generateReport(request, analysisResults, insights)
// Stage 7: Quality validation
await $.Event.emit($.Analysis.validating_quality, { requestId: request.id })
const qualityCheck = await validateQuality(reportData.report, slaRequirements, request.type)
if (!qualityCheck.passed) {
throw new Error(`Quality validation failed: ${qualityCheck.issues.join(', ')}`)
}
// Stage 8: Check SLA compliance
const processingTime = Date.now() - startTime
const slaCompliant = processingTime <= slaRequirements.maxProcessingTime
if (!slaCompliant) {
await handleSLAViolation(request.id, paymentIntent.id, slaRequirements)
}
// Stage 9: Deliver results
await $.Event.emit($.Analysis.delivering, { requestId: request.id })
await deliverResults(request, reportData)
// Stage 10: Complete and log
await $.Event.emit($.Analysis.completed, {
requestId: request.id,
processingTime,
qualityScore: qualityCheck.score,
slaCompliant,
})
await db.analysis_requests.update(request.id, {
status: 'completed',
completedAt: new Date(),
processingTime,
qualityScore: qualityCheck.score,
slaCompliant,
})
// Send completion notification to customer
await send.Email.to(request.customerId, {
subject: 'Your Analysis Report is Ready',
template: 'analysis-complete',
data: {
reportUrl: reportData.outputs.html?.url,
analysisType: request.type,
qualityScore: qualityCheck.score,
},
})
} catch (error) {
// Handle failure
await $.Event.emit($.Analysis.failed, {
requestId: request.id,
error: error.message,
timestamp: new Date().toISOString(),
})
await db.analysis_requests.update(request.id, {
status: 'failed',
error: error.message,
failedAt: new Date(),
})
// Refund customer
if (paymentIntent?.id) {
await stripe.refunds.create({
payment_intent: paymentIntent.id,
reason: 'requested_by_customer',
})
}
// Notify customer of failure
await send.Email.to(request.customerId, {
subject: 'Analysis Request Failed',
template: 'analysis-failed',
data: {
requestId: request.id,
error: error.message,
refundAmount: pricing?.total || 0,
},
})
throw error
}
})
// Helper: Calculate pricing
async function calculatePricing(request: AnalysisRequest) {
const config = await $.Service.getPricing('analysis-service-v1')
const tier = config.tiers.find((t: any) => t.type === request.type)
if (!tier) {
throw new Error(`No pricing tier found for ${request.type}`)
}
// Determine record count (would parse actual data)
const recordCount = 1000 // Example
// Find volume pricing
const volumePrice = tier.volumePricing.find((vp: any) => recordCount >= vp.minRecords && recordCount <= vp.maxRecords)
if (!volumePrice) {
throw new Error(`No volume pricing bracket covers ${recordCount} records`)
}
// Calculate base cost
let cost = tier.basePrice + volumePrice.pricePerRecord * recordCount
// Apply complexity multiplier
cost *= tier.complexityMultipliers[request.complexity]
// Apply turnaround multiplier
cost *= tier.turnaroundPricing[request.turnaround]
// Add-ons
const addOnsCost = request.options.addOns.reduce((total, addOnId) => {
const addOn = config.addOns.find((a: any) => a.id === addOnId)
return total + (addOn?.price || 0)
}, 0)
return {
base: tier.basePrice,
volume: volumePrice.pricePerRecord * recordCount,
complexityMultiplier: tier.complexityMultipliers[request.complexity],
turnaroundMultiplier: tier.turnaroundPricing[request.turnaround],
addOns: addOnsCost,
subtotal: cost + addOnsCost,
total: cost + addOnsCost,
}
}
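To make the formula concrete, here is the arithmetic for a hypothetical order (all figures illustrative, not actual platform pricing): a $99 base price, $0.05 per record across 1,000 records, a 1.5× complexity multiplier, a 2× urgent-turnaround multiplier, and a $25 add-on:

```typescript
const base = 99
const volume = 0.05 * 1000 // $50 in volume charges
const cost = (base + volume) * 1.5 * 2 // (99 + 50) × 1.5 × 2 = $447
const total = cost + 25 // add-ons are applied after the multipliers: $472
```

Note that add-ons are added after the multipliers, matching the order of operations in `calculatePricing` above.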
// Helper: Get SLA requirements
async function getSLARequirements(turnaround: string) {
const sla = await $.Service.getSLA('analysis-service-v1')
return {
maxProcessingTime: sla.responseTime[turnaround].maxHours * 60 * 60 * 1000,
guaranteePercentage: parseFloat(sla.responseTime[turnaround].guarantee),
penalty: sla.responseTime[turnaround].penalty,
}
}
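The millisecond conversion above is worth spelling out; for a 24-hour SLA window (an illustrative figure, not a guaranteed platform tier):

```typescript
const maxHours = 24
// hours × minutes × seconds × milliseconds
const maxProcessingTime = maxHours * 60 * 60 * 1000 // 86,400,000 ms in a 24-hour window
```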
// Helper: Validate quality
async function validateQuality(report: any, slaRequirements: any, analysisType: string) {
const issues = []
let score = 1.0
// Check confidence levels
const avgConfidence = report.metadata.qualityScore
if (avgConfidence < 0.8) {
issues.push('Confidence level below threshold')
score -= 0.2
}
// Check completeness
if (!report.executiveSummary || report.sections.length < 5) {
issues.push('Report incomplete')
score -= 0.3
}
// Check visualizations
if (!report.visualizations || report.visualizations.length === 0) {
issues.push('Missing visualizations')
score -= 0.1
}
return {
passed: issues.length === 0,
score: Math.max(0, score),
issues,
}
}
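The deductions in `validateQuality` are cumulative, and any recorded issue fails the check regardless of score. For example, a report with a confidence of 0.75 (−0.2) that is also missing visualizations (−0.1) would end up around 0.7 and fail:

```typescript
let score = 1.0
const issues: string[] = []
// Confidence 0.75 falls below the 0.8 threshold
issues.push('Confidence level below threshold')
score -= 0.2
// No visualizations present
issues.push('Missing visualizations')
score -= 0.1
const passed = issues.length === 0 // false: any issue fails validation
// score is now 0.7 (within floating-point rounding)
```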
// Helper: Handle SLA violation
async function handleSLAViolation(requestId: string, paymentIntentId: string, slaRequirements: any) {
await $.Event.log({
type: 'sla-violation',
requestId,
penalty: slaRequirements.penalty,
timestamp: new Date().toISOString(),
})
// Apply penalty based on SLA configuration
if (slaRequirements.penalty === 'full-refund') {
await stripe.refunds.create({
payment_intent: paymentIntentId,
reason: 'requested_by_customer',
})
} else if (slaRequirements.penalty === 'partial-refund') {
const paymentIntent = await stripe.paymentIntents.retrieve(paymentIntentId)
await stripe.refunds.create({
payment_intent: paymentIntentId,
amount: Math.round(paymentIntent.amount * 0.5),
})
}
}
// Helper: Deliver results
async function deliverResults(request: AnalysisRequest, reportData: any) {
// Upload files to storage
const storageUrls = await uploadReportFiles(reportData.outputs)
// Create delivery record
await db.report_deliveries.create({
requestId: request.id,
customerId: request.customerId,
urls: storageUrls,
deliveredAt: new Date(),
})
// Send webhook notification if configured
if (request.options.webhookUrl) {
await fetch(request.options.webhookUrl, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
requestId: request.id,
status: 'completed',
reportUrls: storageUrls,
}),
})
}
}
// Helper: Upload report files
async function uploadReportFiles(outputs: any) {
// Would use actual storage service (S3, R2, etc.)
return {
pdf: outputs.pdf?.url || null,
html: outputs.html?.url || null,
json: outputs.json ? 'https://storage.do/reports/data.json' : null,
}
}
Testing
Comprehensive testing ensures the analysis service operates reliably under various conditions and produces accurate results.
Unit Tests for Analysis Functions
import { describe, it, expect, beforeEach } from 'vitest'
import { collectAndPrepareData, performSentimentAnalysis, extractInsights } from './analysis-service'
describe('Analysis Service Unit Tests', () => {
describe('Data Collection', () => {
it('should successfully collect and prepare CSV data', async () => {
const mockRequest = {
id: 'test-request-1',
type: 'sentiment-analysis',
complexity: 'simple',
turnaround: 'standard',
data: {
format: 'csv',
location: 'https://example.com/test-data.csv',
},
options: { addOns: [], deliveryFormat: 'json' },
customerId: 'cust-123',
}
const result = await collectAndPrepareData(mockRequest)
expect(result.success).toBe(true)
expect(result.data).toBeDefined()
expect(result.stats.totalRecords).toBeGreaterThan(0)
expect(result.stats.qualityScore).toBeGreaterThan(0.9)
})
it('should handle invalid data format gracefully', async () => {
const mockRequest = {
id: 'test-request-2',
type: 'sentiment-analysis',
data: {
format: 'invalid-format',
location: 'https://example.com/data',
},
}
await expect(collectAndPrepareData(mockRequest)).rejects.toThrow('Unsupported data format')
})
it('should validate data quality thresholds', async () => {
const mockRequest = {
id: 'test-request-3',
type: 'sentiment-analysis',
data: {
format: 'json',
location: 'https://example.com/incomplete-data.json',
},
}
// Mock data with poor quality
const result = await collectAndPrepareData(mockRequest)
if (result.stats.qualityScore < 0.95) {
expect(result.success).toBe(false)
}
})
})
describe('Sentiment Analysis', () => {
it('should correctly identify positive sentiment', async () => {
const mockData = [
{
id: 'record-1',
originalData: {
text: 'This product is absolutely amazing! Best purchase ever.',
},
isValid: true,
},
]
const results = await performSentimentAnalysis(mockData, 'simple')
expect(results[0].analysis.sentiment_score).toBeGreaterThan(0.5)
expect(results[0].analysis.sentiment).toBe('positive')
expect(results[0].analysis.confidence).toBeGreaterThan(0.7)
})
it('should correctly identify negative sentiment', async () => {
const mockData = [
{
id: 'record-2',
originalData: {
text: 'Terrible experience. Would not recommend to anyone.',
},
isValid: true,
},
]
const results = await performSentimentAnalysis(mockData, 'simple')
expect(results[0].analysis.sentiment_score).toBeLessThan(-0.5)
expect(results[0].analysis.sentiment).toBe('negative')
})
it('should extract themes in advanced mode', async () => {
const mockData = [
{
id: 'record-3',
originalData: {
text: 'Great customer service but the product quality could be better.',
},
isValid: true,
},
]
const results = await performSentimentAnalysis(mockData, 'advanced')
expect(results[0].analysis.themes).toBeDefined()
expect(results[0].analysis.themes.length).toBeGreaterThan(0)
})
})
describe('Insight Extraction', () => {
it('should calculate aggregate sentiment metrics', async () => {
const mockResults = [
{ analysis: { sentiment_score: 0.8 } },
{ analysis: { sentiment_score: 0.6 } },
{ analysis: { sentiment_score: -0.3 } },
{ analysis: { sentiment_score: 0.1 } },
]
const insights = await extractInsights(mockResults, 'sentiment-analysis', 'simple')
expect(insights.metrics.averageSentiment).toBeCloseTo(0.3, 1)
expect(insights.metrics.totalResponses).toBe(4)
expect(insights.keyFindings.length).toBeGreaterThan(0)
})
})
})
Integration Test for Complete Workflow
import { describe, it, expect, beforeAll, afterAll } from 'vitest'
import $, { on } from 'sdk.do'
describe('Analysis Service Integration Tests', () => {
let testRequestId: string
beforeAll(async () => {
// Setup test environment
await $.Service.init('analysis-service-v1')
})
afterAll(async () => {
// Cleanup
if (testRequestId) {
await db.analysis_requests.delete(testRequestId)
}
})
it('should complete full sentiment analysis workflow', async () => {
// Create test request
const request = {
id: `test-${Date.now()}`,
type: 'sentiment-analysis',
complexity: 'standard',
turnaround: 'standard',
data: {
format: 'json',
location: 'https://storage.do/test-data/sentiment-test.json',
},
options: {
addOns: ['custom-visualization'],
deliveryFormat: 'all',
},
customerId: 'test-customer-1',
}
testRequestId = request.id
// Track events
const events: string[] = []
const unsubscribeValidating = on.Analysis.validating(() => {
events.push('validating')
})
const unsubscribeAnalyzing = on.Analysis.analyzing(() => {
events.push('analyzing')
})
const unsubscribeCompleted = on.Analysis.completed(() => {
events.push('completed')
})
// Trigger analysis
await $.Event.emit($.Analysis.requested, request)
// Wait for completion (with timeout)
await new Promise((resolve, reject) => {
const timeout = setTimeout(() => reject(new Error('Timeout')), 120000)
const checkCompletion = setInterval(async () => {
const status = await db.analysis_requests.findOne({ id: request.id })
if (status?.status === 'completed') {
clearInterval(checkCompletion)
clearTimeout(timeout)
resolve(status)
}
}, 1000)
})
// Verify workflow
expect(events).toContain('validating')
expect(events).toContain('analyzing')
expect(events).toContain('completed')
// Verify results
const result = await db.analysis_requests.findOne({ id: request.id })
expect(result.status).toBe('completed')
expect(result.qualityScore).toBeGreaterThan(0.8)
expect(result.slaCompliant).toBe(true)
const report = await db.analysis_reports.findOne({ requestId: request.id })
expect(report).toBeDefined()
expect(report.report.sections.length).toBeGreaterThan(4)
expect(report.report.visualizations.length).toBeGreaterThan(0)
// Cleanup listeners
unsubscribeValidating()
unsubscribeAnalyzing()
unsubscribeCompleted()
}, 180000) // 3 minute timeout
it('should handle SLA violations correctly', async () => {
// Create request with urgent turnaround
const request = {
id: `test-urgent-${Date.now()}`,
type: 'financial-analysis',
complexity: 'advanced',
turnaround: 'urgent', // 1 hour SLA
data: {
format: 'csv',
location: 'https://storage.do/test-data/large-financial-data.csv',
},
options: {
addOns: [],
deliveryFormat: 'pdf',
},
customerId: 'test-customer-2',
}
// Mock slow processing (would simulate in test environment)
await $.Event.emit($.Analysis.requested, request)
// Check for SLA violation handling
const slaEvent = await new Promise((resolve) => {
on.Analysis.sla_violated((event) => {
resolve(event)
})
})
expect(slaEvent).toBeDefined()
// Verify refund was issued
const result = await db.analysis_requests.findOne({ id: request.id })
expect(result.refundIssued).toBe(true)
})
})
Deployment
Production deployment requires careful configuration to ensure reliability, scalability, and security.
Production Configuration
import $, { ai, db } from 'sdk.do'
// Production deployment configuration
const productionConfig = {
service: {
id: 'analysis-service-v1',
environment: 'production',
region: 'us-east-1',
// Scaling configuration
scaling: {
minInstances: 3,
maxInstances: 20,
targetUtilization: 0.7,
scaleUpThreshold: 0.8,
scaleDownThreshold: 0.4,
cooldownPeriod: 300, // seconds
},
// Resource allocation
resources: {
memory: '2GB',
cpu: '1.0',
timeout: 900, // 15 minutes
},
// Queue configuration
queue: {
maxConcurrent: 10,
maxRetries: 3,
retryDelay: 60, // seconds
deadLetterQueue: true,
},
// Database configuration
database: {
connectionPool: {
min: 5,
max: 20,
},
queryTimeout: 30000, // ms
},
// AI model configuration
ai: {
models: ['gpt-5', 'claude-sonnet-4.5'],
fallbackModel: 'gpt-4o',
timeout: 60000, // ms per call
maxRetries: 2,
},
// Storage configuration
storage: {
provider: 'cloudflare-r2',
bucket: 'analysis-reports-production',
cdnEnabled: true,
retentionDays: 90,
},
// Payment configuration
payment: {
provider: 'stripe',
mode: 'live',
webhookSecret: process.env.STRIPE_WEBHOOK_SECRET,
},
// Monitoring configuration
monitoring: {
enabled: true,
metricsInterval: 60, // seconds
alerting: {
errorRate: { threshold: 0.05, window: 300 },
responseTime: { threshold: 60000, window: 300 },
queueDepth: { threshold: 50, window: 60 },
},
},
// Security configuration
security: {
apiKeyRequired: true,
rateLimiting: {
enabled: true,
requestsPerMinute: 60,
burstSize: 100,
},
encryption: {
atRest: true,
inTransit: true,
},
},
},
}
// Deploy to production
async function deployToProduction() {
console.log('Deploying analysis service to production...')
// Validate configuration
const validation = await $.Service.validateConfig(productionConfig)
if (!validation.valid) {
throw new Error(`Invalid configuration: ${validation.errors.join(', ')}`)
}
// Deploy service
const deployment = await $.Service.deploy('analysis-service-v1', productionConfig)
console.log(`Deployed successfully: ${deployment.url}`)
console.log(`Version: ${deployment.version}`)
console.log(`Status: ${deployment.status}`)
// Run health check
const health = await $.Service.healthCheck('analysis-service-v1')
if (health.status !== 'healthy') {
throw new Error(`Health check failed: ${health.message}`)
}
// Enable traffic
await $.Service.enableTraffic('analysis-service-v1', {
percentage: 100,
strategy: 'rolling',
})
console.log('Service is now receiving traffic')
return deployment
}
// Environment variables configuration
const requiredEnvVars = [
'STRIPE_SECRET_KEY',
'STRIPE_WEBHOOK_SECRET',
'OPENAI_API_KEY',
'ANTHROPIC_API_KEY',
'DATABASE_URL',
'STORAGE_BUCKET',
'STORAGE_ACCESS_KEY',
'STORAGE_SECRET_KEY',
]
// Validate environment
function validateEnvironment() {
const missing = requiredEnvVars.filter((key) => !process.env[key])
if (missing.length > 0) {
throw new Error(`Missing required environment variables: ${missing.join(', ')}`)
}
console.log('Environment validation passed')
}
// Main deployment script
async function main() {
try {
validateEnvironment()
const deployment = await deployToProduction()
console.log('Deployment completed successfully')
process.exit(0)
} catch (error) {
console.error('Deployment failed:', error)
process.exit(1)
}
}
// Run if executed directly
if (require.main === module) {
main()
}
Monitoring
Comprehensive monitoring ensures the service maintains high quality and performance standards.
Metrics Tracking
import $, { on } from 'sdk.do'
// Define custom metrics
const metrics = {
// Analysis performance metrics
analysisExecutionTime: $.Metric.histogram({
name: 'analysis_execution_time_ms',
description: 'Time taken to complete analysis',
buckets: [1000, 5000, 10000, 30000, 60000, 120000, 300000],
}),
analysisAccuracy: $.Metric.gauge({
name: 'analysis_accuracy_score',
description: 'Analysis accuracy/confidence score',
labels: ['analysis_type', 'complexity'],
}),
// Quality metrics
qualityValidationScore: $.Metric.gauge({
name: 'quality_validation_score',
description: 'Quality validation score for completed analyses',
labels: ['analysis_type'],
}),
slaComplianceRate: $.Metric.gauge({
name: 'sla_compliance_rate',
description: 'Percentage of analyses meeting SLA requirements',
labels: ['turnaround_type'],
}),
// Volume metrics
analysisRequestsTotal: $.Metric.counter({
name: 'analysis_requests_total',
description: 'Total number of analysis requests',
labels: ['analysis_type', 'complexity', 'status'],
}),
activeAnalyses: $.Metric.gauge({
name: 'active_analyses',
description: 'Number of currently processing analyses',
}),
// Revenue metrics
revenueGenerated: $.Metric.counter({
name: 'revenue_generated_usd',
description: 'Total revenue generated from analyses',
labels: ['analysis_type'],
}),
// Error metrics
analysisFailures: $.Metric.counter({
name: 'analysis_failures_total',
description: 'Total number of failed analyses',
labels: ['failure_reason'],
}),
}
// Track metrics on events
on.Analysis.requested((event) => {
metrics.analysisRequestsTotal.inc({
analysis_type: event.data.type,
complexity: event.data.complexity,
status: 'requested',
})
metrics.activeAnalyses.inc()
})
on.Analysis.completed((event) => {
metrics.analysisRequestsTotal.inc({
analysis_type: event.data.type,
status: 'completed',
})
metrics.analysisExecutionTime.observe(event.data.processingTime)
metrics.analysisAccuracy.set({ analysis_type: event.data.type, complexity: event.data.complexity }, event.data.qualityScore)
metrics.slaComplianceRate.set({ turnaround_type: event.data.turnaround }, event.data.slaCompliant ? 1 : 0)
metrics.activeAnalyses.dec()
})
on.Analysis.failed((event) => {
metrics.analysisFailures.inc({
failure_reason: event.data.error,
})
metrics.activeAnalyses.dec()
})
// Custom monitoring dashboard
async function generateMonitoringDashboard() {
const dashboard = await $.Monitoring.createDashboard({
name: 'Analysis Service Dashboard',
panels: [
{
title: 'Request Volume',
type: 'graph',
metric: 'analysis_requests_total',
timeRange: '1h',
groupBy: ['analysis_type'],
},
{
title: 'Average Execution Time',
type: 'gauge',
metric: 'analysis_execution_time_ms',
aggregation: 'avg',
timeRange: '5m',
},
{
title: 'Accuracy Score',
type: 'gauge',
metric: 'analysis_accuracy_score',
aggregation: 'avg',
threshold: { warning: 0.85, critical: 0.8 },
},
{
title: 'SLA Compliance',
type: 'percentage',
metric: 'sla_compliance_rate',
aggregation: 'avg',
threshold: { warning: 0.95, critical: 0.9 },
},
{
title: 'Active Analyses',
type: 'number',
metric: 'active_analyses',
},
{
title: 'Failure Rate',
type: 'graph',
metric: 'analysis_failures_total',
timeRange: '24h',
},
],
refreshInterval: 30, // seconds
})
console.log(`Dashboard created: ${dashboard.url}`)
return dashboard
}
Alert Configuration
import $ from 'sdk.do'
// Configure alerting rules
const alertRules = [
{
name: 'High Error Rate',
condition: {
metric: 'analysis_failures_total',
operator: 'rate',
threshold: 0.05, // 5% error rate
window: '5m',
},
severity: 'critical',
notification: {
channels: ['slack', 'email', 'pagerduty'],
message: 'Analysis service error rate exceeds 5%',
},
},
{
name: 'Slow Processing',
condition: {
metric: 'analysis_execution_time_ms',
operator: 'percentile',
percentile: 95,
threshold: 120000, // 2 minutes
window: '10m',
},
severity: 'warning',
notification: {
channels: ['slack'],
message: 'P95 analysis processing time exceeds 2 minutes',
},
},
{
name: 'Low Quality Score',
condition: {
metric: 'quality_validation_score',
operator: 'avg',
threshold: 0.8,
window: '15m',
},
severity: 'warning',
notification: {
channels: ['slack', 'email'],
message: 'Average quality score below threshold',
},
},
{
name: 'SLA Violations',
condition: {
metric: 'sla_compliance_rate',
operator: 'avg',
threshold: 0.95,
window: '1h',
},
severity: 'critical',
notification: {
channels: ['slack', 'email', 'pagerduty'],
message: 'SLA compliance rate below 95%',
},
},
{
name: 'Queue Depth High',
condition: {
metric: 'active_analyses',
operator: 'value',
threshold: 50,
window: '5m',
},
severity: 'warning',
notification: {
channels: ['slack'],
message: 'Analysis queue depth is high - consider scaling',
},
},
]
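As a sanity check on the 'High Error Rate' rule, the `rate` operator compares failures to total requests over the window; with illustrative counts:

```typescript
const failures = 6
const totalRequests = 100
const errorRate = failures / totalRequests // 0.06
const triggers = errorRate > 0.05 // exceeds the 5% threshold, so the alert fires
```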
// Setup alerts
async function configureAlerts() {
for (const rule of alertRules) {
await $.Monitoring.createAlert(rule)
console.log(`Alert configured: ${rule.name}`)
}
}
// Alert handler
on.Alert.triggered(async (event) => {
console.log(`Alert triggered: ${event.data.ruleName}`)
console.log(`Severity: ${event.data.severity}`)
console.log(`Message: ${event.data.message}`)
// Auto-remediation for specific alerts
if (event.data.ruleName === 'Queue Depth High') {
await $.Service.scale('analysis-service-v1', {
instances: '+5',
})
console.log('Auto-scaled service in response to high queue depth')
}
// Log incident
await db.incidents.create({
alertName: event.data.ruleName,
severity: event.data.severity,
triggeredAt: new Date(),
status: 'open',
})
})
Usage Examples
Real-world scenarios demonstrating how customers use the analysis service.
Customer Sentiment Analysis Example
import $ from 'sdk.do'
// Example: Analyze customer feedback from multiple sources
async function analyzeCustomerFeedback() {
// Collect feedback from various sources
const feedbackSources = {
surveyResponses: await fetchSurveyData(),
productReviews: await fetchReviews(),
supportTickets: await fetchSupportTickets(),
socialMedia: await fetchSocialMentions(),
}
// Combine and format data
const allFeedback = [
...feedbackSources.surveyResponses.map((r) => ({ text: r.comment, source: 'survey' })),
...feedbackSources.productReviews.map((r) => ({ text: r.review, source: 'review' })),
...feedbackSources.supportTickets.map((t) => ({ text: t.description, source: 'support' })),
...feedbackSources.socialMedia.map((m) => ({ text: m.content, source: 'social' })),
]
console.log(`Analyzing ${allFeedback.length} feedback items...`)
// Request sentiment analysis
const analysisRequest = await $.Analysis.request({
type: 'sentiment-analysis',
complexity: 'advanced',
turnaround: 'priority',
data: {
format: 'json',
data: allFeedback,
},
options: {
addOns: ['custom-visualization', 'executive-summary'],
customCategories: ['product-quality', 'customer-service', 'pricing', 'features'],
deliveryFormat: 'all',
},
})
console.log(`Analysis requested: ${analysisRequest.id}`)
// Wait for completion
const result = await analysisRequest.waitForCompletion()
// Display results
console.log('\n=== Customer Sentiment Analysis Results ===')
console.log(`\nOverall Sentiment: ${result.insights.metrics.averageSentiment > 0 ? 'Positive' : 'Negative'}`)
console.log(`Positive: ${result.insights.metrics.positivePercentage.toFixed(1)}%`)
console.log(`Neutral: ${result.insights.metrics.neutralPercentage.toFixed(1)}%`)
console.log(`Negative: ${result.insights.metrics.negativePercentage.toFixed(1)}%`)
console.log('\nTop Themes:')
result.insights.themes.slice(0, 5).forEach((theme, i) => {
console.log(`${i + 1}. ${theme.theme} (${theme.frequency} mentions)`)
})
console.log('\nKey Recommendations:')
result.insights.recommendations.forEach((rec, i) => {
console.log(`${i + 1}. ${rec.recommendation}`)
console.log(` Priority: ${rec.priority} | Impact: ${rec.expectedImpact}`)
})
console.log(`\nFull report: ${result.outputs.html.url}`)
return result
}
// Mock data fetchers (would connect to real systems)
async function fetchSurveyData() {
return [
{ comment: 'Great product, exceeded expectations!', date: '2025-10-20' },
{ comment: 'Customer service was helpful and responsive', date: '2025-10-21' },
{ comment: 'A bit expensive but worth the quality', date: '2025-10-22' },
]
}
async function fetchReviews() {
return [
{ review: 'Best purchase this year. Highly recommend!', rating: 5 },
{ review: 'Good but has room for improvement', rating: 4 },
{ review: 'Not what I expected, disappointed', rating: 2 },
]
}
async function fetchSupportTickets() {
return [
{ description: 'Issue with feature X, needs improvement', status: 'resolved' },
{ description: 'Amazing support team, resolved quickly', status: 'closed' },
]
}
async function fetchSocialMentions() {
return [
{ content: "Just tried @product and I'm impressed!", platform: 'twitter' },
{ content: 'Finally a product that delivers on its promises', platform: 'linkedin' },
]
}
Market Research Analysis Example
import $ from 'sdk.do'
// Example: Comprehensive market analysis for product launch
async function conductMarketResearch(productCategory: string) {
console.log(`Conducting market research for: ${productCategory}`)
// Gather market data
const marketData = {
competitorPricing: await fetchCompetitorPricing(productCategory),
marketSize: await fetchMarketSize(productCategory),
trendData: await fetchMarketTrends(productCategory),
consumerBehavior: await fetchConsumerData(productCategory),
}
// Format for analysis
const analysisData = [
...marketData.competitorPricing.map((c) => ({
metric: 'competitor_price',
value: c.price,
source: c.competitor,
date: c.date,
})),
...marketData.trendData.map((t) => ({
metric: t.trendName,
value: t.score,
source: 'trend_analysis',
date: t.date,
})),
{
metric: 'market_size',
value: marketData.marketSize.total,
source: 'industry_report',
date: new Date().toISOString(),
},
]
// Request market research analysis
const analysisRequest = await $.Analysis.request({
type: 'market-research',
complexity: 'advanced',
turnaround: 'standard',
data: {
format: 'json',
data: analysisData,
},
options: {
addOns: ['custom-visualization', 'executive-summary'],
deliveryFormat: 'all',
},
})
console.log(`Analysis requested: ${analysisRequest.id}`)
// Monitor progress
analysisRequest.on('progress', (event) => {
console.log(`Progress: ${event.stage}`)
})
// Wait for completion
const result = await analysisRequest.waitForCompletion()
// Display market insights
console.log('\n=== Market Research Analysis Results ===')
console.log(`\nMarket Size: $${result.insights.metrics.marketSize?.toLocaleString()}`)
console.log(`Growth Rate: ${result.insights.metrics.growthRate}%`)
console.log('\nKey Opportunities:')
result.insights.keyFindings
.filter((f) => f.impact === 'high')
.forEach((finding, i) => {
console.log(`${i + 1}. ${finding.finding}`)
})
console.log('\nCompetitive Positioning:')
result.insights.competitiveAnalysis?.forEach((comp) => {
console.log(`- ${comp.competitor}: ${comp.positioning}`)
})
console.log('\nStrategic Recommendations:')
result.insights.recommendations
.filter((r) => r.priority === 'high')
.forEach((rec, i) => {
console.log(`${i + 1}. ${rec.recommendation}`)
})
console.log(`\nFull report: ${result.outputs.pdf.url}`)
return result
}
// Mock market data functions
async function fetchCompetitorPricing(category: string) {
return [
{ competitor: 'Competitor A', price: 49.99, date: '2025-10-01' },
{ competitor: 'Competitor B', price: 59.99, date: '2025-10-01' },
{ competitor: 'Competitor C', price: 45.0, date: '2025-10-01' },
]
}
async function fetchMarketSize(category: string) {
return {
total: 5000000000, // $5B
yearOverYear: 15.3, // 15.3% growth
}
}
async function fetchMarketTrends(category: string) {
return [
{ trendName: 'sustainability_focus', score: 85, date: '2025-10-01' },
{ trendName: 'digital_adoption', score: 92, date: '2025-10-01' },
{ trendName: 'price_sensitivity', score: 68, date: '2025-10-01' },
]
}
async function fetchConsumerData(category: string) {
return {
demographics: { age: '25-45', income: 'middle-upper' },
preferences: ['quality', 'sustainability', 'convenience'],
purchaseBehavior: 'research-intensive',
}
}
Financial Performance Analysis Example
import $ from 'sdk.do'
// Example: Quarterly financial analysis for executive team
async function analyzeQuarterlyFinancials(quarters: number = 4) {
console.log(`Analyzing last ${quarters} quarters of financial data...`)
// Fetch financial statements
const financialData = await fetchFinancialStatements(quarters)
// Format for analysis
const analysisData = financialData.map((quarter, index) => ({
period: `Q${index + 1}`,
revenue: quarter.revenue,
expenses: quarter.expenses,
operatingIncome: quarter.operatingIncome,
netIncome: quarter.netIncome,
assets: quarter.assets,
liabilities: quarter.liabilities,
cashFlow: quarter.cashFlow,
date: quarter.date,
}))
// Request financial analysis
const analysisRequest = await $.Analysis.request({
type: 'financial-analysis',
complexity: 'advanced',
turnaround: 'priority',
data: {
format: 'json',
data: analysisData,
},
options: {
addOns: ['custom-visualization', 'executive-summary', 'data-export'],
deliveryFormat: 'all',
},
})
console.log(`Analysis requested: ${analysisRequest.id}`)
// Wait for completion
const result = await analysisRequest.waitForCompletion()
// Display financial insights
console.log('\n=== Financial Analysis Results ===')
console.log(`\nTotal Revenue: $${result.insights.metrics.totalRevenue.toLocaleString()}`)
console.log(`Total Expenses: $${result.insights.metrics.totalExpenses.toLocaleString()}`)
console.log(`Average Profit Margin: ${result.insights.metrics.avgProfitMargin}%`)
console.log(`Revenue Growth: ${result.insights.metrics.revenueGrowth.toFixed(2)}%`)
console.log('\nFinancial Health Assessment:')
console.log(result.insights.healthAssessment)
console.log('\nKey Performance Indicators:')
result.insights.kpis.forEach((kpi) => {
console.log(`- ${kpi.name}: ${kpi.value} (${kpi.trend})`)
})
console.log('\nRisk Assessment:')
result.insights.risks.forEach((risk, i) => {
console.log(`${i + 1}. ${risk.description} (Severity: ${risk.severity})`)
})
console.log('\nInvestment Recommendations:')
result.insights.recommendations
.filter((r) => r.category === 'investment')
.forEach((rec, i) => {
console.log(`${i + 1}. ${rec.recommendation}`)
console.log(` Expected ROI: ${rec.expectedROI}`)
})
console.log(`\nExecutive Summary: ${result.outputs.pdf.url}`)
console.log(`Interactive Dashboard: ${result.outputs.html.url}`)
return result
}
// Mock financial data
async function fetchFinancialStatements(quarters: number) {
const baseRevenue = 10000000
const baseExpenses = 7500000
return Array.from({ length: quarters }, (_, i) => {
const growthFactor = 1 + 0.05 * i // 5% quarterly growth
const revenue = baseRevenue * growthFactor
const expenses = baseExpenses * (1 + 0.03 * i) // 3% expense growth
return {
revenue,
expenses,
operatingIncome: revenue - expenses,
netIncome: (revenue - expenses) * 0.85, // After tax
assets: revenue * 2.5,
liabilities: revenue * 1.2,
cashFlow: (revenue - expenses) * 0.7,
date: new Date(2025, 9 - (quarters - 1 - i) * 3, 1).toISOString(), // quarterly spacing, oldest quarter first
}
})
}
Troubleshooting
Common issues and their solutions for operating the analysis service.
Common Issues and Solutions
Issue: Analysis takes longer than expected
Symptoms:
- Processing time exceeds SLA requirements
- Queue depth increasing
- Customer complaints about slow turnaround
Solutions:
- Check resource utilization and scale up if needed
- Review data volume - large datasets may need chunking
- Verify AI model availability and response times
- Consider enabling parallel processing for certain analysis types
- Check for database query performance issues
// Debug slow analysis
async function debugSlowAnalysis(requestId: string) {
const request = await db.analysis_requests.findOne({ id: requestId })
const events = await db.events.find({ requestId })
// Sort chronologically so the "next event" lookup below is reliable
events.sort((a, b) => new Date(a.timestamp).getTime() - new Date(b.timestamp).getTime())
// Calculate time spent in each stage
const stages = ['collecting_data', 'analyzing', 'generating_report']
const timings = stages.map((stage) => {
const start = events.find((e) => e.type === stage)
if (!start) return { stage, duration: null }
const next = events.find((e) => new Date(e.timestamp) > new Date(start.timestamp))
return {
stage,
// Duration in milliseconds; getTime() avoids arithmetic directly on Date objects
duration: next ? new Date(next.timestamp).getTime() - new Date(start.timestamp).getTime() : null,
}
})
console.log('Stage timings:', timings)
return timings
}
Issue: Low quality scores
Symptoms:
- Analysis confidence below threshold
- Customer dissatisfaction with results
- Frequent quality validation failures
Solutions:
- Review data quality - ensure completeness and accuracy of input data
- Verify AI model prompts are properly configured
- Check for model output inconsistencies
- Consider adjusting complexity tier for analysis type
- Review and update validation criteria
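One way to make "check for model output inconsistencies" concrete is a cross-model agreement gate: compare the two models' labels per item and only pass the batch when mean agreement-weighted confidence clears a threshold. The sketch below is illustrative only — the interface, the 0.25 disagreement penalty, and the 0.7 threshold are assumptions, not part of the platform API.

```typescript
// Hypothetical cross-model consistency check (shapes and weights are illustrative)
interface ModelSentiment {
  label: 'positive' | 'neutral' | 'negative'
  confidence: number // 0..1
}

function crossModelConfidence(a: ModelSentiment, b: ModelSentiment): number {
  // Agreement is bounded by the weaker model's confidence;
  // disagreement collapses the score toward zero
  if (a.label === b.label) return Math.min(a.confidence, b.confidence)
  return Math.min(a.confidence, b.confidence) * 0.25
}

function passesQualityGate(pairs: Array<[ModelSentiment, ModelSentiment]>, threshold = 0.7): boolean {
  const scores = pairs.map(([a, b]) => crossModelConfidence(a, b))
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length
  return mean >= threshold
}
```

A gate like this turns "frequent quality validation failures" into a measurable number you can trend over time, rather than a binary pass/fail surprise.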
Issue: Payment failures
Symptoms:
- Stripe payment intents failing
- Customers not charged correctly
- Revenue metrics don't match analysis volume
Solutions:
- Verify Stripe API keys are correct
- Check customer payment method validity
- Review pricing calculation logic
- Ensure webhook endpoints are accessible
- Check for currency conversion issues
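When revenue metrics drift from analysis volume, the pricing calculation is the first thing to test in isolation, before suspecting Stripe. The sketch below shows one way to structure that logic as a pure, unit-testable function; the multipliers and base rate are assumptions for illustration, not the service's actual price book.

```typescript
// Illustrative pricing model — multipliers and base rate are NOT the real price book
const COMPLEXITY_MULTIPLIER: Record<string, number> = { simple: 1, standard: 1.5, advanced: 2.5 }
const TURNAROUND_MULTIPLIER: Record<string, number> = { standard: 1, priority: 1.5, express: 2 }

function computePrice(records: number, complexity: string, turnaround: string, basePerThousand = 25): number {
  // Bill per started block of 1,000 records, with a one-block minimum
  const volumeUnits = Math.max(1, Math.ceil(records / 1000))
  const price = volumeUnits * basePerThousand * (COMPLEXITY_MULTIPLIER[complexity] ?? 1) * (TURNAROUND_MULTIPLIER[turnaround] ?? 1)
  // Round to cents so the amount matches what the payment processor is asked to charge
  return Math.round(price * 100) / 100
}
```

Keeping the calculation pure (no database or API calls) means a mismatch between charged amounts and expected amounts can be reproduced with a single function call.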
Issue: SLA violations
Symptoms:
- Frequent automatic refunds
- SLA compliance rate below target
- Alert spam from monitoring
Solutions:
- Analyze processing time distribution to identify bottlenecks
- Implement request prioritization for urgent analyses
- Scale resources during peak hours
- Optimize AI model calls (batching, caching)
- Review SLA thresholds - may need adjustment based on actual performance
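Before adjusting SLA thresholds, quantify where the processing-time distribution actually sits: the compliance rate tells you how often you miss, and the p95 tells you by how much. The helper below is a minimal sketch; the percentile method (nearest-rank) and all thresholds in the test are illustrative.

```typescript
// Nearest-rank percentile over an ascending-sorted array of durations (ms)
function percentile(sortedMs: number[], p: number): number {
  const idx = Math.min(sortedMs.length - 1, Math.ceil((p / 100) * sortedMs.length) - 1)
  return sortedMs[Math.max(0, idx)]
}

function slaReport(durationsMs: number[], slaMs: number) {
  const sorted = [...durationsMs].sort((a, b) => a - b)
  const withinSla = sorted.filter((d) => d <= slaMs).length
  return {
    complianceRate: withinSla / sorted.length,
    p95Ms: percentile(sorted, 95),
    // Requests beyond p95 are the first candidates for prioritization or scaling
    worstMs: sorted[sorted.length - 1],
  }
}
```

If `p95Ms` sits far above the SLA while `complianceRate` is near target, a few outliers are triggering refunds and alert spam; if both are off, the baseline needs capacity or a revised threshold.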
Performance Optimization Tips
// Optimization: Batch processing for large datasets
async function batchProcess(data: any[], batchSize: number = 100) {
const batches = []
for (let i = 0; i < data.length; i += batchSize) {
batches.push(data.slice(i, i + batchSize))
}
const results = await Promise.all(batches.map((batch) => performSentimentAnalysis(batch, 'simple')))
return results.flat()
}
// Optimization: Cache frequently accessed data
const cache = new Map()
async function getCachedIndustryAverage(metric: string) {
const cacheKey = `industry_avg_${metric}`
if (cache.has(cacheKey)) {
return cache.get(cacheKey)
}
const value = await fetchIndustryAverage(metric)
cache.set(cacheKey, value)
// Expire after 1 hour
setTimeout(() => cache.delete(cacheKey), 3600000)
return value
}
// Optimization: Parallel AI model calls
async function parallelAnalysis(text: string) {
const [gpt5Result, claudeResult] = await Promise.all([
ai.chat({ model: 'gpt-5', messages: [{ role: 'user', content: text }] }),
ai.chat({ model: 'claude-sonnet-4.5', messages: [{ role: 'user', content: text }] }),
])
return crossValidateResults(JSON.parse(gpt5Result.choices[0].message.content), JSON.parse(claudeResult.content[0].text))
}
Service Architecture Visualization
Summary
This comprehensive guide has covered the complete implementation of an AI-powered data analysis service using the .do platform. The service provides:
- Multi-domain analysis capabilities: Sentiment analysis, market research, and financial analysis
- Enterprise-grade quality: Multi-model validation, SLA guarantees, and comprehensive testing
- Production-ready deployment: Scalable architecture, monitoring, and alerting
- Real-world examples: Practical usage scenarios for common business needs
The service architecture demonstrates best practices for building autonomous services including semantic patterns, event-driven workflows, payment integration, and comprehensive observability.
For additional resources and support:
- Documentation: docs.do/services
- SDK Reference: sdk.do/reference
- Community: community.do
- Support: [email protected]