# Supernal TTS Quick Reference
Click to copy this entire guide to your clipboard, then paste it into your LLM conversation to give it complete instructions for integrating Supernal TTS.
## Quickstart Commands
```bash
# Test without API keys
curl -X POST https://tts.supernal.ai/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello!", "options": {"provider": "mock"}}'

# Compare all providers
./demo-scripts/compare-all-providers.sh "Test text"

# Open interactive demo
open demo-scripts/demo.html
```
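If you prefer scripting the same smoke test from Node, here is a minimal sketch, assuming Node 18+ for the built-in `fetch`; the response fields it logs (`audioUrl`, `hash`) are inferred from the client examples later in this guide, so the exact shape may differ:

```typescript
// smoke-test.ts — minimal sketch; assumes Node 18+ (built-in fetch).
// Response field names are assumptions based on the client examples below.
async function smokeTest(): Promise<void> {
  const res = await fetch('https://tts.supernal.ai/api/v1/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: 'Hello!', options: { provider: 'mock' } }),
  });
  if (!res.ok) throw new Error(`Generate failed: ${res.status}`);
  console.log(await res.json()); // expect something like { audioUrl, hash, ... }
}

smokeTest().catch(console.error);
```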
## Provider Cheat Sheet
| Provider | Latency | Cost per 1M chars | Best For | Key Feature |
|---|---|---|---|---|
| mock | 500ms | $0 | Testing | No API key |
| openai | 200ms | $15-30 | Quality | 7 voices + instructions |
| cartesia | Low | ~$24 | Real-time | Emotions |
| azure | 300ms | $0-16 | Budget | 500K chars/month free |
## Voice Options
```javascript
// OpenAI
['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer', 'coral']

// Cartesia
['barbershop-man', 'broadway-diva', 'confident-british-man',
 'doctor-mischief', 'friendly-sidekick']

// Azure
['en-US-JennyNeural', 'en-US-GuyNeural', 'en-US-AriaNeural']

// Mock
['mock-voice-1', 'mock-voice-2', 'mock-voice-3']
```
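Voices are per-request options, so switching provider and voice is a one-line change. A minimal sketch using the client shown later in this guide, with `voice` values taken from the lists above:

```typescript
import { TTSClient } from '@supernal-tts/client';

const client = new TTSClient({ apiUrl: 'https://tts.supernal.ai' });

// Same text, two different provider/voice pairs (values from the lists above).
const openaiTake = await client.generate({
  text: 'Welcome back!',
  options: { provider: 'openai', voice: 'nova' },
});
const azureTake = await client.generate({
  text: 'Welcome back!',
  options: { provider: 'azure', voice: 'en-US-JennyNeural' },
});
```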
## Environment Variables
```bash
# Required for providers
OPENAI_API_KEY=sk-...
CARTESIA_API_KEY=...
AZURE_API_KEY=...
AZURE_REGION=eastus

# Optional
ENABLE_MOCK_PROVIDER=true
DEFAULT_PROVIDER=openai
CACHE_DIR=.tts-cache
PORT=3030
```
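If you are wiring these into your own server code, a minimal config-loading sketch (assuming the variables are exported in the shell or loaded with a tool such as dotenv; the defaults mirror the optional values listed above):

```typescript
// config.ts — minimal sketch; assumes the env vars above are available at runtime.
export const config = {
  openaiApiKey: process.env.OPENAI_API_KEY,       // required for the openai provider
  cartesiaApiKey: process.env.CARTESIA_API_KEY,   // required for the cartesia provider
  azureApiKey: process.env.AZURE_API_KEY,         // required for the azure provider
  azureRegion: process.env.AZURE_REGION ?? 'eastus',
  enableMockProvider: process.env.ENABLE_MOCK_PROVIDER === 'true',
  defaultProvider: process.env.DEFAULT_PROVIDER ?? 'openai',
  cacheDir: process.env.CACHE_DIR ?? '.tts-cache',
  port: Number(process.env.PORT ?? 3030),
};
```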
## Key Endpoints
```text
# Generate audio
POST /api/v1/generate
{
  "text": "Your text",
  "options": {
    "provider": "openai",
    "voice": "coral",
    "speed": 1.0,
    "instructions": "Speak in a cheerful and positive tone."
  }
}

# Get audio file
GET /api/v1/audio/{hash}

# Get metadata
GET /api/v1/audio/{hash}/metadata

# List providers
GET /api/v1/providers

# Provider stats
GET /api/v1/stats/providers

# Cache stats
GET /api/v1/cache/stats

# Estimate cost
POST /api/v1/estimate
```
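Tying the endpoints together, a typical generate-then-fetch flow looks roughly like this. This is a sketch assuming Node 18+ `fetch`; the `hash` field in the generate response is an assumption based on the client examples below:

```typescript
const base = 'https://tts.supernal.ai';

// 1. Generate audio (identical text/options should hit the cache).
const gen = await fetch(`${base}/api/v1/generate`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'Your text',
    options: { provider: 'openai', voice: 'coral', speed: 1.0 },
  }),
});
const { hash } = await gen.json(); // `hash` assumed from the client examples below

// 2. Download the audio and read its metadata by hash.
const audioBytes = await (await fetch(`${base}/api/v1/audio/${hash}`)).arrayBuffer();
const metadata = await (await fetch(`${base}/api/v1/audio/${hash}/metadata`)).json();
```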
## Client Usage
```typescript
import { TTSClient } from '@supernal-tts/client';

const client = new TTSClient({
  apiUrl: 'https://tts.supernal.ai'
});

// Basic generation
const { audioUrl } = await client.generate({
  text: "Hello world!"
});

// With options
const response = await client.generate({
  text: "Hello world!",
  options: {
    provider: 'cartesia',
    voice: 'confident-british-man',
    speed: 1.2
  }
});

// Get metadata
const metadata = await client.getMetadata(response.hash);
```
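In a browser, the returned `audioUrl` can be handed straight to an `Audio` element (a small sketch; if playback fails here, see the CORS note under Common Issues):

```typescript
// Browser-side playback sketch using the audioUrl returned above.
const player = new Audio(audioUrl);
await player.play(); // most browsers require a prior user gesture
```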
## Provider Selection Logic
```javascript
// Selection heuristic based on the cheat sheet above.
function selectProvider({ hasApiKeys, maxLatencyMs, monthlyChars, priority }) {
  if (!hasApiKeys) return 'mock';              // no keys yet: test with mock
  if (maxLatencyMs < 100) return 'cartesia';   // tightest latency budget
  if (monthlyChars < 500000) return 'azure';   // stays within the free tier
  if (priority === 'quality') return 'openai';
  return 'openai'; // default
}
```
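For example, with some hypothetical requirement values:

```typescript
// 3M chars/month, no tight latency ceiling, quality-first → 'openai'
selectProvider({ hasApiKeys: true, maxLatencyMs: 500, monthlyChars: 3_000_000, priority: 'quality' });

// No API keys configured yet → 'mock'
selectProvider({ hasApiKeys: false });
```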
## Cost Examples
**30K character blog post:**
- Mock: $0
- OpenAI: $0.45-0.90
- Cartesia: $0.72
- Azure: $0 (if under 500K chars/month)

**100 posts (3M chars):**
- Mock: $0
- OpenAI: $45-90
- Cartesia: $72
- Azure: $40 (after the 500K free tier)
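The arithmetic is simply character count times the per-1M rate from the cheat sheet, with Azure's first 500K characters per month free. A rough estimator sketch using those approximate rates (not live pricing; the hosted `POST /api/v1/estimate` endpoint is the authoritative source):

```typescript
// Rough client-side estimate; rates mirror the cheat sheet above.
const RATE_PER_MILLION: Record<string, number> = {
  mock: 0,
  openai: 30,    // upper end of the $15-30 range
  cartesia: 24,
  azure: 16,     // after the 500K chars/month free tier
};

function estimateCostUSD(provider: string, chars: number): number {
  const billable = provider === 'azure' ? Math.max(0, chars - 500_000) : chars;
  return (billable / 1_000_000) * (RATE_PER_MILLION[provider] ?? 0);
}

estimateCostUSD('openai', 3_000_000);  // 90
estimateCostUSD('azure', 3_000_000);   // 40
estimateCostUSD('cartesia', 30_000);   // 0.72
```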
## Common Issues
- **Provider not configured** → add the API key to `.env`
- **Audio not playing** → check CORS settings
- **High latency** → use the Cartesia provider
- **Over budget** → use the Azure free tier or caching
## Documentation Links
- API Reference - Complete API documentation
- Security Guide - Production deployment
- Widget Guide - Web integration
- Examples - Integration examples
## Deploy to Production
```yaml
# docker-compose.yml
services:
  tts-api:
    build: ./families/supernal-tts/apps/api
    environment:
      - NODE_ENV=production
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - DEFAULT_PROVIDER=openai
    ports:
      - "3030:3030"
```
**Pro Tip:** Start with the mock provider, test your integration, then add real API keys!