Cache Automation and Performance Optimization¶
This tutorial teaches you how to implement automated cache management workflows using Peakhour's comprehensive caching API and automation capabilities to optimize content delivery performance and reduce origin server load.
Duration: 60 minutes
Prerequisites: Understanding of caching concepts, API key management, and basic CI/CD workflows
Learning Goals: Master automated cache invalidation, implement performance-driven caching strategies, integrate with deployment pipelines, and monitor cache effectiveness
What You'll Build: A complete cache automation system that automatically invalidates content, optimizes cache performance, integrates with your deployment workflow, and provides comprehensive cache analytics.
Understanding Peakhour's Cache Architecture¶
Peakhour's caching system provides multi-layer caching with sophisticated automation capabilities designed for modern web applications and APIs.
Cache Hierarchy¶
Edge Cache Layer:
- Global CDN caching
- Configurable TTL for different content types
- Smart invalidation with soft and hard purge options
- Serve-stale capabilities for high availability
Browser Cache Layer:
- Client-side caching with independent TTL settings
- Cache-Control header optimization
- Mobile-aware caching variations
- Cookie-based cache bypass options
Origin Cache Integration:
- Automatic cache tag detection from origin responses
- Query string handling and normalization
- Vary header processing for content variations
- Conditional request optimization
Cache Automation Features¶
- API-Driven Management: RESTful API for all cache operations
- Tag-Based Invalidation: Group and invalidate related content
- Webhook Integration: Event-driven cache automation
- CI/CD Pipeline Integration: Deployment-triggered invalidation
- Performance Monitoring: Real-time cache analytics and optimization
Configure Optimal Cache Settings¶
Set Up Base Cache Configuration¶
Configure foundational caching behavior for your domain:
- Navigate to Performance > CDN Settings
- Configure base cache settings:
{
"cache_enabled": true,
"soft_purge": true,
"cdn_query_mode": "none",
"cdn_serve_stale": true,
"cache_vary_ua_mode": "device-type",
"cache_implicit_ttl": 0,
"cache_store_require_cache_control": true
}
Configuration Explanation:
- `cache_enabled`: Enable CDN caching globally
- `soft_purge`: Use soft purge (keeps stale content available as a fallback)
- `cdn_query_mode`: How to handle query strings (`full`, `none`, `strip`)
- `cdn_serve_stale`: Serve cached content during origin issues
- `cache_vary_ua_mode`: Vary caching by device type
- `cache_implicit_ttl`: Default TTL when no Cache-Control headers are present
- `cache_store_require_cache_control`: Only cache responses that include a Cache-Control header
Configure TTL Settings¶
Set appropriate Time-To-Live values for different content types:
Static Assets (long-term caching):
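An illustrative sketch using the `browser_ttl_sec` and `edge_ttl_sec` fields shown later in this tutorial (the values are placeholders to adapt to your assets):
{
  "browser_ttl_sec": 604800,   // 7 days in the browser
  "edge_ttl_sec": 2592000      // 30 days at the edge
}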
Dynamic Content (short-term caching):
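A sketch for HTML and other frequently changing pages (values are illustrative):
{
  "browser_ttl_sec": 0,        // Always revalidate in the browser
  "edge_ttl_sec": 300          // 5 minutes at the edge
}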
API Responses (minimal caching):
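A sketch for API endpoints, leaning on the tag-based purging covered below rather than long TTLs (values are illustrative):
{
  "browser_ttl_sec": 0,        // No browser caching
  "edge_ttl_sec": 60           // 1 minute at the edge
}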
Advanced Cache Control Settings¶
Configure sophisticated cache behavior:
Cookie Handling:
{
"cache_strip_cookies": true, // Remove cookies for caching
"cache_strip_set_cookies": false, // Keep Set-Cookie headers
"cdn_skip_cookie": "nocache" // Bypass cache if cookie present
}
Query String Management:
{
"cdn_remove_query_args": [
"utm_source", "utm_medium", "utm_campaign", // Marketing parameters
"fbclid", "gclid", // Tracking parameters
"_t", "timestamp" // Cache-busting parameters
]
}
Cache Tag Configuration:
{
"cache_tag_header": "Cache-Tag", // Header name for cache tags
"cache_tag_header_separator": "," // Tag separator in header
}
Implement API-Based Cache Management¶
Set Up API Authentication¶
Create API credentials for automated cache management:
- Generate API Token:
    - Navigate to Account > API Keys
    - Create a new API key with `cache:manage` permissions
    - Securely store the token for automation scripts
- Test API Access:
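A minimal connectivity check, assuming the CDN settings endpoint used in the scripts below also accepts GET requests:
#!/bin/bash
# test_api_access.sh - confirm the token authenticates against the Peakhour API
curl -s "https://api.peakhour.io/domains/yourdomain.com/cdn" \
  -H "Authorization: Bearer $PEAKHOUR_API_TOKEN"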
Automated Cache Configuration¶
Create scripts to manage cache settings programmatically:
Update Cache Configuration:
#!/bin/bash
# update_cache_config.sh
API_TOKEN="your_api_token"
DOMAIN="yourdomain.com"
# Configure optimized cache settings
curl -X PATCH "https://api.peakhour.io/domains/$DOMAIN/cdn" \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"cache_enabled": true,
"soft_purge": true,
"browser_ttl_sec": 3600,
"edge_ttl_sec": 86400,
"cache_tag_header": "Cache-Tag",
"cdn_serve_stale": true
}'
Environment-Specific Configuration:
# cache_config.py
import requests
import os
class CacheManager:
def __init__(self, api_token, domain):
self.api_token = api_token
self.domain = domain
self.base_url = f"https://api.peakhour.io/domains/{domain}/cdn"
def update_config(self, config):
headers = {
"Authorization": f"Bearer {self.api_token}",
"Content-Type": "application/json"
}
response = requests.patch(self.base_url, json=config, headers=headers)
return response.json()
def production_config(self):
return {
"cache_enabled": True,
"soft_purge": True,
"browser_ttl_sec": 86400, # 24 hours
"edge_ttl_sec": 604800, # 7 days
"cdn_serve_stale": True
}
def development_config(self):
return {
"cache_enabled": True,
"soft_purge": False, # Hard purge for development
"browser_ttl_sec": 0, # No browser caching
"edge_ttl_sec": 60, # 1 minute edge caching
"cdn_serve_stale": False
}
# Usage
cache_mgr = CacheManager(os.getenv("PEAKHOUR_API_TOKEN"), "yourdomain.com")
if os.getenv("ENVIRONMENT") == "production":
cache_mgr.update_config(cache_mgr.production_config())
else:
cache_mgr.update_config(cache_mgr.development_config())
Implement Automated Cache Invalidation¶
Path-Based Cache Purging¶
Automate cache invalidation for specific content:
Single Path Purging:
#!/bin/bash
# purge_path.sh
purge_path() {
local path="$1"
local soft="${2:-true}"
curl -X DELETE "https://api.peakhour.io/domains/$DOMAIN/cdn/resources" \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"paths\": [\"$path\"], \"soft\": $soft}"
}
# Usage examples
purge_path "/products/featured.json" true
purge_path "/api/v1/catalog" false
Bulk Path Purging:
# bulk_purge.py
import requests
import json
def bulk_purge(api_token, domain, paths, soft=True):
"""Purge multiple paths efficiently (max 1000 per request)"""
url = f"https://api.peakhour.io/domains/{domain}/cdn/resources"
headers = {
"Authorization": f"Bearer {api_token}",
"Content-Type": "application/json"
}
# Split into chunks of 1000 paths
chunk_size = 1000
for i in range(0, len(paths), chunk_size):
chunk = paths[i:i + chunk_size]
payload = {
"paths": chunk,
"soft": soft
}
response = requests.delete(url, json=payload, headers=headers)
print(f"Purged {len(chunk)} paths: {response.status_code}")
# Example usage
updated_paths = [
"/products/page-1.html",
"/products/page-2.html",
"/api/v1/products",
"/assets/catalog.json"
]
bulk_purge("your_api_token", "yourdomain.com", updated_paths)
Tag-Based Cache Invalidation¶
Implement sophisticated cache tagging for grouped invalidation:
Add Cache Tags to Origin Responses:
# Django example - add cache tags to responses
from django.shortcuts import get_object_or_404, render

from .models import Product

def product_detail(request, product_id):
    product = get_object_or_404(Product, id=product_id)
    response = render(request, "product.html", {"product": product})
    # Add cache tags for automatic invalidation
    cache_tags = [
        f"product:{product_id}",
        f"category:{product.category_id}",
        "products",
        "homepage" if product.featured else None,
    ]
    response["Cache-Tag"] = ",".join(filter(None, cache_tags))
    response["Cache-Control"] = "public, max-age=3600"
    return response
Tag-Based Purging:
# tag_purge.py
import requests
def purge_by_tags(api_token, domain, tags, soft=True):
"""Purge cache by tags - efficient for related content"""
url = f"https://api.peakhour.io/domains/{domain}/cdn/tags"
headers = {
"Authorization": f"Bearer {api_token}",
"Content-Type": "application/json"
}
payload = {
"tags": tags,
"soft": soft
}
response = requests.delete(url, json=payload, headers=headers)
return response.json()
# Example usage - invalidate all product-related content
purge_by_tags("your_api_token", "yourdomain.com", ["products", "homepage"])
# Invalidate specific product and related content
purge_by_tags("your_api_token", "yourdomain.com", ["product:123", "category:electronics"])
Site-Wide Cache Invalidation¶
Implement full site cache clearing for major updates:
#!/bin/bash
# purge_all.sh - Full site cache invalidation
API_TOKEN="your_api_token"
DOMAIN="yourdomain.com"
echo "Performing full site cache purge..."
# Note: Full site purge uses different endpoint
curl -X DELETE "https://api.peakhour.io/domains/$DOMAIN/cdn/all" \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"soft": true}'
echo "Full site purge completed"
Integrate with CI/CD Pipelines¶
GitHub Actions Integration¶
Create automated cache invalidation in your deployment workflow:
# .github/workflows/deploy.yml
name: Deploy and Cache Management
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2  # needed so `git diff HEAD~1 HEAD` can see the previous commit
- name: Deploy Application
run: |
# Your deployment commands here
./deploy.sh
- name: Invalidate Cache
env:
PEAKHOUR_API_TOKEN: ${{ secrets.PEAKHOUR_API_TOKEN }}
DOMAIN: ${{ vars.DOMAIN }}
run: |
# Get list of changed files
CHANGED_FILES=$(git diff --name-only HEAD~1 HEAD)
          # Build a comma-separated JSON array of paths to purge
          PURGE_PATHS=""
          for file in $CHANGED_FILES; do
            if [[ $file == "templates/"* ]]; then
              # Template changes affect multiple pages
              ENTRY="\"/\""
            elif [[ $file == "static/"* ]]; then
              # Static file changes
              ENTRY="\"/$file\""
            elif [[ $file == "api/"* ]]; then
              # API changes
              ENTRY="\"/api/\""
            else
              continue
            fi
            PURGE_PATHS="${PURGE_PATHS:+$PURGE_PATHS,}$ENTRY"
          done

          # Perform selective purge
          if [ -n "$PURGE_PATHS" ]; then
            curl -X DELETE "https://api.peakhour.io/domains/$DOMAIN/cdn/resources" \
              -H "Authorization: Bearer $PEAKHOUR_API_TOKEN" \
              -H "Content-Type: application/json" \
              -d "{\"paths\": [$PURGE_PATHS], \"soft\": true}"
          fi
- name: Warm Cache
run: |
# Warm critical pages after purge
curl -s "https://$DOMAIN/" > /dev/null
curl -s "https://$DOMAIN/products" > /dev/null
curl -s "https://$DOMAIN/api/v1/featured" > /dev/null
Jenkins Pipeline Integration¶
// Jenkinsfile
pipeline {
agent any
environment {
PEAKHOUR_API_TOKEN = credentials('peakhour-api-token')
DOMAIN = 'yourdomain.com'
}
stages {
stage('Build') {
steps {
script {
// Your build steps
sh 'npm run build'
}
}
}
stage('Deploy') {
steps {
script {
// Your deployment steps
sh './deploy.sh'
}
}
}
stage('Cache Management') {
steps {
script {
// Smart cache invalidation based on changes
def changedFiles = sh(
script: 'git diff --name-only HEAD~1 HEAD',
returnStdout: true
).trim().split('\n')
def purgePaths = []
def purgeTags = []
changedFiles.each { file ->
if (file.startsWith('src/components/')) {
purgeTags.add('components')
} else if (file.startsWith('src/pages/')) {
purgePaths.add("/${file.replaceAll('src/pages/', '').replaceAll('.jsx?', '')}")
} else if (file.startsWith('public/assets/')) {
purgePaths.add("/${file}")
}
}
// Execute purge operations
if (purgePaths.size() > 0) {
sh """
curl -X DELETE "https://api.peakhour.io/domains/${DOMAIN}/cdn/resources" \\
-H "Authorization: Bearer ${PEAKHOUR_API_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"paths": ${groovy.json.JsonBuilder(purgePaths)}, "soft": true}'
"""
}
if (purgeTags.size() > 0) {
sh """
curl -X DELETE "https://api.peakhour.io/domains/${DOMAIN}/cdn/tags" \\
-H "Authorization: Bearer ${PEAKHOUR_API_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"tags": ${groovy.json.JsonBuilder(purgeTags)}, "soft": true}'
"""
}
}
}
}
stage('Cache Warming') {
steps {
script {
// Warm important pages
def criticalPages = [
"https://${DOMAIN}/",
"https://${DOMAIN}/products",
"https://${DOMAIN}/about"
]
criticalPages.each { page ->
sh "curl -s '${page}' > /dev/null"
}
}
}
}
}
}
Webhook-Based Automation¶
Set up webhooks for real-time cache management:
# webhook_handler.py - Flask example
from flask import Flask, request, jsonify
import requests
import json
app = Flask(__name__)
PEAKHOUR_CONFIG = {
'api_token': 'your_api_token',
'domain': 'yourdomain.com',
'base_url': 'https://api.peakhour.io'
}
@app.route('/cms-webhook', methods=['POST'])
def handle_cms_update():
"""Handle content management system webhooks"""
data = request.get_json()
# Parse CMS event
event_type = data.get('event_type')
content_type = data.get('content_type')
content_id = data.get('content_id')
if event_type in ['create', 'update', 'delete']:
# Determine purge strategy based on content type
if content_type == 'product':
tags_to_purge = [f'product:{content_id}', 'products', 'homepage']
elif content_type == 'category':
tags_to_purge = [f'category:{content_id}', 'navigation']
elif content_type == 'page':
paths_to_purge = [f'/{data.get("slug", "")}']
return purge_paths(paths_to_purge)
else:
# Default: purge homepage and related content
tags_to_purge = ['homepage']
return purge_tags(tags_to_purge)
return jsonify({'status': 'ignored'})
@app.route('/ecommerce-webhook', methods=['POST'])
def handle_ecommerce_update():
"""Handle e-commerce platform webhooks"""
data = request.get_json()
# Handle inventory updates
if data.get('event') == 'inventory_updated':
product_ids = data.get('product_ids', [])
tags = [f'product:{pid}' for pid in product_ids] + ['inventory']
return purge_tags(tags)
# Handle price changes
elif data.get('event') == 'price_updated':
product_ids = data.get('product_ids', [])
tags = [f'product:{pid}' for pid in product_ids] + ['pricing']
return purge_tags(tags)
return jsonify({'status': 'success'})
def purge_tags(tags):
"""Purge cache by tags"""
url = f"{PEAKHOUR_CONFIG['base_url']}/domains/{PEAKHOUR_CONFIG['domain']}/cdn/tags"
headers = {
'Authorization': f"Bearer {PEAKHOUR_CONFIG['api_token']}",
'Content-Type': 'application/json'
}
payload = {'tags': tags, 'soft': True}
response = requests.delete(url, json=payload, headers=headers)
return jsonify({
'status': 'success' if response.status_code == 200 else 'error',
'tags_purged': tags,
'response': response.status_code
})
def purge_paths(paths):
"""Purge cache by paths"""
url = f"{PEAKHOUR_CONFIG['base_url']}/domains/{PEAKHOUR_CONFIG['domain']}/cdn/resources"
headers = {
'Authorization': f"Bearer {PEAKHOUR_CONFIG['api_token']}",
'Content-Type': 'application/json'
}
payload = {'paths': paths, 'soft': True}
response = requests.delete(url, json=payload, headers=headers)
return jsonify({
'status': 'success' if response.status_code == 200 else 'error',
'paths_purged': paths,
'response': response.status_code
})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
Implement Smart Cache Warming¶
Priority-Based Cache Warming¶
Warm critical content immediately after cache invalidation:
# cache_warmer.py
import asyncio
import aiohttp
import json
from datetime import datetime
class CacheWarmer:
def __init__(self, domain, priority_urls=None):
self.domain = domain
self.priority_urls = priority_urls or []
async def warm_url(self, session, url, priority=1):
"""Warm a single URL with priority handling"""
try:
headers = {
'User-Agent': 'PeakHour-CacheWarmer/1.0',
'Cache-Control': 'no-cache', # Force fresh fetch
'Pragma': 'no-cache'
}
start_time = datetime.now()
async with session.get(url, headers=headers) as response:
await response.read() # Ensure complete response
duration = (datetime.now() - start_time).total_seconds()
return {
'url': url,
'status': response.status,
'duration': duration,
'priority': priority
}
except Exception as e:
return {
'url': url,
'status': 'error',
'error': str(e),
'priority': priority
}
async def warm_urls_batch(self, urls, batch_size=10):
"""Warm URLs in batches to avoid overwhelming origin"""
connector = aiohttp.TCPConnector(limit=batch_size)
timeout = aiohttp.ClientTimeout(total=30)
async with aiohttp.ClientSession(connector=connector, timeout=timeout) as session:
# Process in batches
results = []
for i in range(0, len(urls), batch_size):
batch = urls[i:i + batch_size]
tasks = [self.warm_url(session, url['url'], url.get('priority', 1)) for url in batch]
batch_results = await asyncio.gather(*tasks)
results.extend(batch_results)
# Small delay between batches
if i + batch_size < len(urls):
await asyncio.sleep(1)
return results
def get_critical_urls(self):
"""Define critical URLs for immediate warming"""
return [
{'url': f'https://{self.domain}/', 'priority': 1},
{'url': f'https://{self.domain}/products', 'priority': 1},
{'url': f'https://{self.domain}/api/v1/featured', 'priority': 1},
{'url': f'https://{self.domain}/about', 'priority': 2},
{'url': f'https://{self.domain}/contact', 'priority': 2},
{'url': f'https://{self.domain}/api/v1/categories', 'priority': 2},
]
def get_sitemap_urls(self, limit=100):
"""Extract URLs from sitemap for comprehensive warming"""
import xml.etree.ElementTree as ET
import requests
try:
sitemap_url = f'https://{self.domain}/sitemap.xml'
response = requests.get(sitemap_url, timeout=10)
if response.status_code == 200:
root = ET.fromstring(response.content)
urls = []
for url_element in root.findall('.//{http://www.sitemaps.org/schemas/sitemap/0.9}url'):
loc = url_element.find('{http://www.sitemaps.org/schemas/sitemap/0.9}loc')
if loc is not None:
urls.append({'url': loc.text, 'priority': 3})
return urls[:limit]
except Exception as e:
print(f"Error fetching sitemap: {e}")
return []
async def warm_cache(self, include_sitemap=True):
"""Perform comprehensive cache warming"""
print("Starting cache warming...")
# Start with critical URLs
urls = self.get_critical_urls()
# Add sitemap URLs if requested
if include_sitemap:
sitemap_urls = self.get_sitemap_urls(limit=50)
urls.extend(sitemap_urls)
# Sort by priority (lower number = higher priority)
urls.sort(key=lambda x: x['priority'])
# Warm in priority order
results = await self.warm_urls_batch(urls, batch_size=5)
# Report results
successful = [r for r in results if isinstance(r.get('status'), int) and r['status'] < 400]
failed = [r for r in results if r.get('status') == 'error' or (isinstance(r.get('status'), int) and r['status'] >= 400)]
print(f"Cache warming completed:")
print(f" Successful: {len(successful)}")
print(f" Failed: {len(failed)}")
print(f" Average response time: {sum(r['duration'] for r in successful)/len(successful):.2f}s")
return results
# Usage
async def main():
warmer = CacheWarmer('yourdomain.com')
results = await warmer.warm_cache()
# Print detailed results
for result in results:
if result.get('status') == 'error':
print(f"ERROR: {result['url']} - {result.get('error')}")
else:
print(f"OK: {result['url']} - {result['status']} ({result['duration']:.2f}s)")
if __name__ == '__main__':
asyncio.run(main())
Scheduled Cache Warming¶
Set up automated cache warming for peak traffic periods:
# scheduled_warming.py
import schedule
import time
import asyncio
from cache_warmer import CacheWarmer
def schedule_cache_warming():
"""Schedule cache warming based on traffic patterns"""
warmer = CacheWarmer('yourdomain.com')
# Warm cache before business hours
schedule.every().day.at("08:00").do(
lambda: asyncio.run(warmer.warm_cache(include_sitemap=True))
)
# Quick warm during lunch peak
schedule.every().day.at("11:30").do(
lambda: asyncio.run(warmer.warm_cache(include_sitemap=False))
)
# Warm cache before evening traffic
schedule.every().day.at("17:00").do(
lambda: asyncio.run(warmer.warm_cache(include_sitemap=False))
)
print("Cache warming scheduler started...")
while True:
schedule.run_pending()
time.sleep(60) # Check every minute
if __name__ == '__main__':
schedule_cache_warming()
Monitor Cache Performance¶
Set Up Cache Analytics¶
Monitor cache effectiveness using Peakhour's analytics:
# cache_analytics.py
import requests
import json
from datetime import datetime, timedelta
class CacheAnalytics:
def __init__(self, api_token, domain):
self.api_token = api_token
self.domain = domain
self.base_url = f"https://api.peakhour.io/domains/{domain}"
def get_cache_metrics(self, time_range="24h"):
"""Get cache performance metrics"""
headers = {"Authorization": f"Bearer {self.api_token}"}
# Get cache hit/miss rates
cache_url = f"{self.base_url}/analytics/cache"
params = {"time_range": time_range}
response = requests.get(cache_url, headers=headers, params=params)
return response.json()
def analyze_cache_performance(self):
"""Analyze and report cache performance"""
metrics = self.get_cache_metrics("7d") # Last 7 days
total_requests = metrics.get("total_requests", 0)
cache_hits = metrics.get("cache_hits", 0)
cache_misses = metrics.get("cache_misses", 0)
hit_rate = (cache_hits / total_requests * 100) if total_requests > 0 else 0
print(f"Cache Performance Report:")
print(f" Total Requests: {total_requests:,}")
print(f" Cache Hits: {cache_hits:,}")
print(f" Cache Misses: {cache_misses:,}")
print(f" Hit Rate: {hit_rate:.1f}%")
# Performance recommendations
if hit_rate < 80:
print("⚠️ Low cache hit rate - consider:")
print(" - Increasing TTL for static content")
print(" - Reducing query string variations")
print(" - Implementing better cache tags")
elif hit_rate > 95:
print("✅ Excellent cache performance!")
else:
print("✅ Good cache performance")
return metrics
def get_top_uncached_urls(self, limit=20):
"""Identify frequently requested uncached content"""
headers = {"Authorization": f"Bearer {self.api_token}"}
url = f"{self.base_url}/analytics/cache/uncached"
params = {"limit": limit, "time_range": "24h"}
response = requests.get(url, headers=headers, params=params)
uncached_urls = response.json().get("urls", [])
print(f"Top {limit} Uncached URLs:")
for i, url_data in enumerate(uncached_urls, 1):
print(f" {i}. {url_data['path']} ({url_data['requests']} requests)")
return uncached_urls
# Usage
analytics = CacheAnalytics("your_api_token", "yourdomain.com")
analytics.analyze_cache_performance()
analytics.get_top_uncached_urls()
Automated Performance Alerting¶
Set up alerts for cache performance issues:
# cache_monitoring.py
import requests
import smtplib
from email.mime.text import MIMEText
from datetime import datetime
class CacheMonitor:
def __init__(self, api_token, domain, alert_config):
self.api_token = api_token
self.domain = domain
        self.alert_config = alert_config

    def get_current_metrics(self, time_range="1h"):
        """Fetch recent cache metrics (assumes the same analytics endpoint used in cache_analytics.py)"""
        headers = {"Authorization": f"Bearer {self.api_token}"}
        url = f"https://api.peakhour.io/domains/{self.domain}/analytics/cache"
        params = {"time_range": time_range}
        return requests.get(url, headers=headers, params=params).json()
def check_cache_health(self):
"""Monitor cache health and send alerts if needed"""
metrics = self.get_current_metrics()
issues = []
# Check hit rate
hit_rate = metrics.get("hit_rate", 0)
if hit_rate < self.alert_config["min_hit_rate"]:
issues.append(f"Low cache hit rate: {hit_rate:.1f}% (threshold: {self.alert_config['min_hit_rate']}%)")
# Check response times
avg_response_time = metrics.get("avg_response_time", 0)
if avg_response_time > self.alert_config["max_response_time"]:
issues.append(f"High response time: {avg_response_time:.0f}ms (threshold: {self.alert_config['max_response_time']}ms)")
# Check error rate
error_rate = metrics.get("error_rate", 0)
if error_rate > self.alert_config["max_error_rate"]:
issues.append(f"High error rate: {error_rate:.1f}% (threshold: {self.alert_config['max_error_rate']}%)")
if issues:
self.send_alert(issues, metrics)
return len(issues) == 0
def send_alert(self, issues, metrics):
"""Send alert email for cache issues"""
subject = f"Cache Performance Alert - {self.domain}"
body = f"""
Cache Performance Alert for {self.domain}
Issues detected:
{''.join(f'- {issue}' for issue in issues)}
Current Metrics:
- Hit Rate: {metrics.get('hit_rate', 0):.1f}%
- Response Time: {metrics.get('avg_response_time', 0):.0f}ms
- Error Rate: {metrics.get('error_rate', 0):.1f}%
- Total Requests (1h): {metrics.get('total_requests', 0):,}
Time: {datetime.now().isoformat()}
Please investigate cache configuration and performance.
"""
# Send email alert
msg = MIMEText(body)
msg['Subject'] = subject
msg['From'] = self.alert_config['from_email']
msg['To'] = ', '.join(self.alert_config['alert_emails'])
try:
with smtplib.SMTP(self.alert_config['smtp_host'], self.alert_config['smtp_port']) as server:
if self.alert_config.get('smtp_tls'):
server.starttls()
if self.alert_config.get('smtp_username'):
server.login(self.alert_config['smtp_username'], self.alert_config['smtp_password'])
server.send_message(msg)
print(f"Alert sent for {len(issues)} cache issues")
except Exception as e:
print(f"Failed to send alert: {e}")
# Monitor configuration
alert_config = {
"min_hit_rate": 85.0, # Alert if hit rate below 85%
"max_response_time": 2000, # Alert if avg response time > 2s
"max_error_rate": 5.0, # Alert if error rate above 5%
"alert_emails": ["ops@company.com"],
"from_email": "cache-monitor@company.com",
"smtp_host": "smtp.gmail.com",
"smtp_port": 587,
"smtp_tls": True,
"smtp_username": "your_email@company.com",
"smtp_password": "your_app_password"
}
# Usage - run as scheduled job
monitor = CacheMonitor("your_api_token", "yourdomain.com", alert_config)
is_healthy = monitor.check_cache_health()
if is_healthy:
print("✅ Cache performance is healthy")
else:
print("⚠️ Cache performance issues detected and alerts sent")
Troubleshooting Common Cache Issues¶
Purge Not Working¶
Problem¶
Cache invalidation commands succeed but content isn't updated
Solutions¶
- Check if using soft purge vs hard purge appropriately
- Verify paths are exact matches (case-sensitive)
- Confirm cache tags are properly set in origin responses (see the check below)
- Wait for global propagation (can take 2-3 minutes)
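To confirm tags are being emitted, inspect the response headers of a representative page for the configured `Cache-Tag` header; if Peakhour does not forward the header to clients, run the same check against your origin host directly:
# Check whether the Cache-Tag header is present in the response
curl -sI "https://yourdomain.com/products/123" | grep -i "cache-tag"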
Cache Hit Rate Too Low¶
Problem¶
Poor cache performance with high miss rates
Solutions¶
- Review TTL settings for different content types
- Check for cache-busting query parameters
- Analyze Vary headers that might fragment cache
- Review cookie handling settings
Automation Not Triggering¶
Problem¶
CI/CD integration not invalidating cache
Solutions¶
- Verify API token permissions and validity
- Check webhook endpoints are accessible
- Review deployment script logic and error handling
- Test API calls manually to isolate issues; see the example below
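Issuing a purge manually against the same endpoint used throughout this tutorial shows whether the problem lies with credentials, the request body, or the pipeline itself:
# Manually purge a single path and inspect the HTTP status code returned
curl -i -X DELETE "https://api.peakhour.io/domains/yourdomain.com/cdn/resources" \
  -H "Authorization: Bearer $PEAKHOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"paths": ["/"], "soft": true}'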
Performance Degradation¶
Problem¶
Cache warming causing origin server stress
Solutions¶
- Reduce batch sizes in warming scripts
- Add delays between warming requests
- Implement rate limiting in warming logic
- Prioritize critical content only during peak hours
Best Practices for Cache Automation¶
Configuration Management¶
- Use environment-specific cache settings
- Version control your cache configuration
- Test cache changes in staging environments
- Document cache invalidation strategies for your team
API Usage Optimization¶
- Implement proper error handling and retries (see the sketch after this list)
- Use bulk operations when possible (max 1000 paths per request)
- Monitor API rate limits and usage
- Cache API responses where appropriate to reduce calls
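A minimal retry sketch using the requests library's standard urllib3 retry support (retry counts and backoff values are illustrative; urllib3 1.26 or newer is assumed for the allowed_methods parameter):
# retry_session.py - shared session with retries for Peakhour API calls
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(api_token, retries=3, backoff=1.0):
    """Build a requests.Session that retries transient failures."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {api_token}"})
    retry = Retry(
        total=retries,
        backoff_factor=backoff,
        status_forcelist=[429, 500, 502, 503, 504],   # rate limits and server errors
        allowed_methods=["GET", "DELETE", "PATCH"],   # the purge and config calls used in this tutorial
    )
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

# Usage: call session.delete(...) / session.patch(...) instead of bare requests calls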
Performance Monitoring¶
- Set up automated cache performance monitoring
- Track cache hit rates, response times, and error rates
- Alert on performance degradation
- Regular review of cache effectiveness and optimization opportunities
Integration Strategy¶
- Design cache invalidation to match your deployment workflow
- Implement cache warming for critical content
- Use cache tags for efficient content grouping
- Plan for emergency cache clearing procedures
Your cache automation system now combines automated invalidation, cache warming, deployment integration, and continuous monitoring to keep content delivery fast and origin load low.