Commit 79ec515b98 (parent c7ec1fbae2), 2026-02-04 09:58:52 +01:00
Commit message: "everything else"
57 changed files with 11768 additions and 0 deletions

docs/deployment-guide.md (new file, 522 lines)

# Deployment Guide
Complete guide for deploying the Self-Replicating Business System to production.
## Production Deployment Options
### Option 1: Single VPS (Recommended Starting Point)
**Specifications**:
- 4 vCPU
- 8GB RAM
- 160GB SSD
- Ubuntu 22.04 LTS
**Providers**:
- DigitalOcean ($48/month)
- Hetzner ($35/month)
- Linode ($48/month)
### Option 2: Kubernetes (For Scale)
For managing 10+ businesses simultaneously.
## Step-by-Step Production Deployment
### 1. Server Setup
```bash
# SSH into your VPS
ssh root@your-server-ip
# Update system
apt update && apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Install Docker Compose
apt install docker-compose-plugin -y
# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
# Install pnpm
npm install -g pnpm
```
### 2. Clone Repository
```bash
# Create application directory
mkdir -p /opt/srb
cd /opt/srb
# Clone repository (or upload files)
git clone <your-repo-url> .
# Or upload via SCP
# scp -r self-replicating-business/* root@your-server:/opt/srb/
```
### 3. Configure Environment
```bash
# Copy environment template
cp .env.example .env
# Edit with production values
nano .env
```
**Critical Production Settings**:
```env
# Set to production
NODE_ENV=production
# Use strong passwords
POSTGRES_PASSWORD=<strong-random-password>
# Production database URL
DATABASE_URL=postgresql://srb:<strong-password>@postgres:5432/srb
# All your API keys
ANTHROPIC_API_KEY=sk-ant-...
FACEBOOK_ACCESS_TOKEN=...
GOOGLE_ADS_DEVELOPER_TOKEN=...
# ... etc
# Production alerts
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
ALERT_EMAIL=alerts@yourdomain.com
# n8n auth
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=<strong-random-password>
```
### 4. Start Services
With the Compose plugin installed earlier, the command is `docker compose` (with a space), not the legacy standalone `docker-compose`:
```bash
# Build and start all services
docker compose -f infra/docker/docker-compose.yml up -d
# Check status
docker ps
# View logs
docker compose -f infra/docker/docker-compose.yml logs -f
```
### 5. Initialize Database
```bash
# Run migrations
docker exec srb-orchestrator pnpm db:migrate
# Verify database
docker exec -it srb-postgres psql -U srb -d srb -c "\dt"
```
### 6. SSL/TLS Setup
Use an Nginx reverse proxy with a Let's Encrypt certificate:
```bash
# Install Nginx
apt install nginx certbot python3-certbot-nginx -y
# Create Nginx config
nano /etc/nginx/sites-available/srb
```
**Nginx Configuration**:
```nginx
server {
    listen 80;
    server_name yourdomain.com;

    # Orchestrator API
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # Dashboard
    location /dashboard {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # n8n
    location /n8n {
        proxy_pass http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
```bash
# Enable site
ln -s /etc/nginx/sites-available/srb /etc/nginx/sites-enabled/
# Test config
nginx -t
# Restart Nginx
systemctl restart nginx
# Get SSL certificate
certbot --nginx -d yourdomain.com
# Auto-renewal
systemctl enable certbot.timer
```
### 7. Systemd Service (Auto-restart)
Create `/etc/systemd/system/srb.service`:
```ini
[Unit]
Description=Self-Replicating Business System
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/srb
ExecStart=/usr/bin/docker compose -f infra/docker/docker-compose.yml up -d
ExecStop=/usr/bin/docker compose -f infra/docker/docker-compose.yml down
[Install]
WantedBy=multi-user.target
```
```bash
# Enable service
systemctl enable srb.service
systemctl start srb.service
# Check status
systemctl status srb.service
```
### 8. Monitoring Setup
```bash
# Install Prometheus
apt install prometheus -y
# Grafana is not in Ubuntu's default repositories;
# install it from Grafana's official APT repository (see the Grafana install docs)
# Configure Prometheus
nano /etc/prometheus/prometheus.yml
```
**Prometheus Config**:
```yaml
scrape_configs:
  - job_name: 'srb-orchestrator'
    static_configs:
      - targets: ['localhost:3000']
  - job_name: 'postgres'
    static_configs:
      - targets: ['localhost:5432']
  - job_name: 'redis'
    static_configs:
      - targets: ['localhost:6379']
```

Note: Postgres and Redis do not expose Prometheus metrics directly. Scrape them through exporters such as `postgres_exporter` and `redis_exporter` (and point the targets at the exporter ports), and make sure the orchestrator actually serves a `/metrics` endpoint.
```bash
# Start monitoring
systemctl start prometheus grafana-server
systemctl enable prometheus grafana-server
# Grafana defaults to port 3000, which collides with the orchestrator here;
# change http_port in /etc/grafana/grafana.ini (e.g. 3030) before accessing it
```
### 9. Backup Setup
```bash
# Create backup script
nano /opt/srb/scripts/backup.sh
```
**Backup Script**:
```bash
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/srb/backups"
DATE=$(date +%Y%m%d_%H%M%S)
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Backup database
docker exec srb-postgres pg_dump -U srb srb > "$BACKUP_DIR/db_$DATE.sql"
# Backup business data
tar -czf "$BACKUP_DIR/data_$DATE.tar.gz" /opt/srb/data
# Upload to S3 (optional)
# aws s3 cp "$BACKUP_DIR/db_$DATE.sql" s3://your-bucket/backups/
# Delete old backups (keep last 30 days)
find "$BACKUP_DIR" -name "*.sql" -mtime +30 -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +30 -delete
echo "Backup completed: $DATE"
```
```bash
# Make executable
chmod +x /opt/srb/scripts/backup.sh
# Add to crontab (daily at 2 AM)
crontab -e
# Add: 0 2 * * * /opt/srb/scripts/backup.sh
```
### 10. Firewall Configuration
```bash
# Install UFW
apt install ufw -y
# Allow SSH
ufw allow 22/tcp
# Allow HTTP/HTTPS
ufw allow 80/tcp
ufw allow 443/tcp
# Enable firewall
ufw enable
# Check status
ufw status
```
## Post-Deployment Checklist
- [ ] All Docker containers running (`docker ps`)
- [ ] Database accessible and migrated
- [ ] SSL certificate installed (https://yourdomain.com)
- [ ] Environment variables configured
- [ ] Backups running daily
- [ ] Monitoring dashboards accessible
- [ ] Alerts configured (Slack/Email)
- [ ] Firewall enabled
- [ ] systemd service enabled
- [ ] Test creating a business
## Creating First Production Business
```bash
# SSH into server
ssh root@your-server
# Enter orchestrator container
docker exec -it srb-orchestrator sh
# Run CLI
node dist/cli/create-business.js \
--name "My First Business" \
--idea "AI-powered meal planning SaaS"
```
## Monitoring Production
### Health Checks
```bash
# Check all services
docker ps
# Check logs
docker compose -f infra/docker/docker-compose.yml logs -f orchestrator
# Check database
docker exec -it srb-postgres psql -U srb -d srb -c "SELECT COUNT(*) FROM \"Business\";"
# Check n8n
curl http://localhost:5678
# Check dashboard
curl http://localhost:3001
```
### Key Metrics to Monitor
1. **System Health**
- CPU usage < 70%
- Memory usage < 80%
- Disk space > 20% free
2. **Application Health**
- Workflow success rate > 95%
- API response time < 500ms
- Database connections < 100
3. **Business Health**
- Active businesses count
- Total monthly revenue
- Workflow execution rate
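The system-health thresholds above can be expressed as a simple check. A minimal sketch; the metric field names are illustrative, not the orchestrator's real API:

```typescript
// Illustrative check of the system-health thresholds listed above.
// Field names are assumptions, not part of the actual codebase.
interface SystemMetrics {
  cpuPct: number;      // CPU usage, percent
  memPct: number;      // memory usage, percent
  diskFreePct: number; // free disk space, percent
}

function systemHealthy(m: SystemMetrics): boolean {
  // CPU < 70%, memory < 80%, free disk > 20%
  return m.cpuPct < 70 && m.memPct < 80 && m.diskFreePct > 20;
}
```

A check like this can feed an alerting rule so that Slack/email alerts fire before a host becomes unresponsive.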
## Scaling Production
### Vertical Scaling (Upgrade VPS)
```bash
# Stop services (run from /opt/srb)
docker compose -f infra/docker/docker-compose.yml down
# Resize VPS in provider panel
# Start services
docker compose -f infra/docker/docker-compose.yml up -d
```
### Horizontal Scaling (Multiple Workers)
Edit `docker-compose.yml`:
```yaml
orchestrator:
  ...
  deploy:
    replicas: 3  # Run 3 instances
```
### Database Scaling
For high load. Note that the official `postgres` image does not read tuning values from environment variables; pass them as server flags instead:
```yaml
postgres:
  ...
  command: postgres -c max_connections=200 -c shared_buffers=2GB
```
## Troubleshooting
### Container Won't Start
```bash
# Check logs
docker logs srb-orchestrator
# Restart container
docker restart srb-orchestrator
# Rebuild if needed
docker compose -f infra/docker/docker-compose.yml build orchestrator
docker compose -f infra/docker/docker-compose.yml up -d
```
### Database Connection Issues
```bash
# Check PostgreSQL logs
docker logs srb-postgres
# Verify connection
docker exec -it srb-postgres psql -U srb -d srb
# Reset database (DANGER: loses data)
docker compose -f infra/docker/docker-compose.yml down -v
docker compose -f infra/docker/docker-compose.yml up -d
```
### High CPU/Memory Usage
```bash
# Check resource usage
docker stats
```

Limit resources in `docker-compose.yml`:

```yaml
services:
  orchestrator:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
```
## Security Best Practices
1. **API Keys**
- Rotate every 90 days
- Use different keys for dev/prod
- Never commit to git
2. **Database**
- Strong passwords (20+ chars)
- Disable remote access if not needed
- Regular backups
3. **Server**
- Keep system updated
- Disable root SSH (use sudo user)
- Enable fail2ban
4. **Application**
- Set budget limits
- Monitor spending daily
- Review decisions weekly
## Maintenance
### Weekly Tasks
- Review business performance
- Check error logs
- Verify backups
### Monthly Tasks
- Update dependencies
- Review and optimize budgets
- Audit API usage and costs
- Security updates
### Quarterly Tasks
- Rotate API keys
- Review and update strategies
- Performance optimization
- Capacity planning
## Cost Optimization
1. **Use Reserved Instances** (save 30-50%)
2. **Optimize Docker Images** (smaller = faster)
3. **Cache Aggressively** (reduce API calls)
4. **Schedule Non-Critical Tasks** (off-peak hours)
5. **Monitor API Usage** (avoid overages)
---
**Deployment Status**: ✅ Ready for Production
For support, check the logs or contact the administrator.

(second new file, 565 lines)

# Workflow Specifications
Detailed specifications for all 8 autonomous workflows in the Self-Replicating Business System.
## Workflow Execution Model
All workflows extend `WorkflowBase` which provides:
- ✅ Automatic retry logic (3 attempts with exponential backoff)
- ✅ Error handling and logging
- ✅ Database state tracking
- ✅ Alert integration
- ✅ Execution metrics
---
## 1. Market Validation Workflow
**Type**: `MARKET_VALIDATION`
**Phase**: Validation (Sequential)
**Critical**: Yes (pauses business on failure)
### Purpose
Determine if a business idea is viable before investing resources.
### Inputs
- Business ID
- Business idea (text)
- Business name
### Process
1. **Competitor Search**
- Google Custom Search API (if configured)
- Fallback: Web scraping
- Returns: Top 5 competitors with URLs, descriptions
2. **Demand Analysis**
- Google Trends API
- Search volume estimation
- Trend direction (rising/stable/declining)
- Seasonality detection
3. **Claude AI Analysis**
- Combines competitor data + demand data
- Generates viability score (0-100)
- Identifies top 3 risks
- Identifies top 3 opportunities
- Makes go/no-go recommendation
4. **Decision**
- Score ≥ 60 and viable → proceed to MVP
- Score < 60 or not viable → shut down the business
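The go/no-go rule above can be sketched as a small pure function. Types and names here are illustrative, not the real workflow code:

```typescript
// Sketch of the go/no-go decision described above; types are illustrative.
interface ValidationResult {
  viable: boolean;
  score: number; // 0-100 viability score from the Claude analysis
}

function decide(r: ValidationResult): 'PROCEED_TO_MVP' | 'SHUTDOWN' {
  // Score >= 60 AND viable -> proceed; otherwise shut the business down
  return r.viable && r.score >= 60 ? 'PROCEED_TO_MVP' : 'SHUTDOWN';
}
```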
### Outputs
```json
{
  "viable": true,
  "score": 75,
  "analysis": "Strong market opportunity...",
  "risks": ["High competition", "Seasonal demand", "..."],
  "opportunities": ["Underserved niche", "Growing market", "..."],
  "competitors": [...],
  "demandData": {...}
}
```
### Success Criteria
- Competitor search finds 2+ competitors
- Demand data shows search volume > 100/month
- Claude analysis completes within 30 seconds
---
## 2. MVP Development Workflow
**Type**: `MVP_DEVELOPMENT`
**Phase**: Development (Sequential)
**Critical**: Yes (pauses business on failure)
### Purpose
Generate and deploy a functional MVP product.
### Inputs
- Business ID
- Business idea
- Business name
### Process
1. **Code Generation**
- Claude generates Next.js 14+ code
- Includes: Landing page, API routes, Tailwind styling
- Returns: Map of filename → code content
2. **Local Storage**
- Saves code to `/data/businesses/{id}/mvp/`
- Creates directory structure
- Writes all files
- Generates README
3. **Git Initialization** (if Vercel enabled)
- `git init`
- `git add .`
- `git commit -m "Initial commit"`
4. **Deployment** (if Vercel token configured)
- Creates vercel.json
- Runs `vercel --prod`
- Extracts deployment URL
5. **Database Update**
- Stores MVP URL
- Updates status to `LAUNCHING`
### Outputs
```json
{
  "mvpUrl": "https://business-abc123.vercel.app",
  "filesGenerated": 8,
  "projectDir": "/data/businesses/abc123/mvp"
}
```
### Success Criteria
- Generates minimum 5 files
- Deployment succeeds (or saves locally)
- MVP URL is accessible
---
## 3. Landing Page SEO Workflow
**Type**: `LANDING_PAGE_SEO`
**Phase**: Marketing (Parallel)
**Critical**: No
### Purpose
Optimize landing page for search engines and organic traffic.
### Inputs
- Business ID
- Business idea
- Target audience
### Process
1. **Keyword Research**
- Extract seed keywords from idea
- Use Claude SEO Expert skill
- Generate keyword list (10-20 keywords)
2. **SEO Strategy**
- Claude generates comprehensive SEO plan
- On-page optimization tips
- Technical SEO checklist
- Link building strategy
3. **Content Optimization** (if MVP exists)
- Analyze current content
- Generate optimized copy
- Meta title (60 chars)
- Meta description (155 chars)
- H1, H2 tags
4. **Content Calendar**
- Generate 10-20 blog post ideas
- Keyword-focused topics
- Publishing schedule
5. **Database Update**
- Set `seoOptimized = true`
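The meta length limits in step 3 (60-character title, 155-character description) can be enforced with a small helper. This is an illustrative sketch, not part of the actual workflow code:

```typescript
// Illustrative helper for the meta length limits above; not the real workflow code.
function clampMeta(text: string, maxLen: number): string {
  if (text.length <= maxLen) return text;
  // Reserve one character for the ellipsis and trim trailing whitespace
  return text.slice(0, maxLen - 1).trimEnd() + '…';
}

// Hypothetical example values
const metaTitle = clampMeta('AI-Powered Meal Planning Made Simple', 60);
const metaDescription = clampMeta('Plan a week of healthy meals in minutes with AI.', 155);
```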
### Outputs
```json
{
  "keywordResearch": "...",
  "seoStrategy": "...",
  "contentIdeas": [...]
}
```
### Success Criteria
- Keyword list generated
- SEO strategy document created
- Content calendar populated
---
## 4. Paid Ads Workflow
**Type**: `PAID_ADS`
**Phase**: Marketing (Parallel)
**Critical**: No
### Purpose
Launch paid advertising campaigns on Facebook and Google.
### Inputs
- Business ID
- Business idea
- Budget
- Target audience
### Process
1. **Facebook Ads**
- Claude Ads Expert generates strategy
- Creates campaign via Facebook Ads API
- Objective: CONVERSIONS
- Budget: From business.budget or $500 default
- Saves campaign to database
2. **Google Ads**
- Claude Ads Expert generates strategy
- Creates Search campaign via Google Ads API
- Keywords: Extracted from idea
- Budget: From business.budget or $500 default
- Saves campaign to database
3. **Database Update**
- Set `adsActive = true`
- Set status = `RUNNING_ADS`
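The budget fallback used in both steps above (`business.budget`, else a $500 default) can be sketched as follows; the function name is hypothetical:

```typescript
// Sketch of the budget fallback described above; illustrative only.
function campaignBudget(businessBudget?: number | null): number {
  // Use the business's configured budget when set and positive, else $500
  return businessBudget && businessBudget > 0 ? businessBudget : 500;
}
```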
### Outputs
```json
{
  "facebook": {
    "success": true,
    "campaignId": "fb_123456",
    "strategy": "..."
  },
  "google": {
    "success": true,
    "campaignId": "google_789012",
    "strategy": "..."
  }
}
```
### Success Criteria
- At least 1 campaign created (FB or Google)
- Campaign is active
- Budget allocated correctly
---
## 5. Content Marketing Workflow
**Type**: `CONTENT_MARKETING`
**Phase**: Marketing (Parallel)
**Critical**: No
### Purpose
Create and publish SEO-optimized content.
### Inputs
- Business ID
- Business idea
- Keywords
### Process
1. **Content Generation**
- Claude generates SEO-optimized content
- Meta title, description
- Headlines (H1, H2, H3)
- 500+ word landing page copy
2. **Publishing** (future)
- Publish to CMS
- Schedule blog posts
- Social media sharing
### Outputs
```json
{
  "contentGenerated": true,
  "contentLength": 1200
}
}
```
### Success Criteria
- Content generated (500+ words)
- SEO-optimized format
---
## 6. Email Automation Workflow
**Type**: `EMAIL_AUTOMATION`
**Phase**: Marketing (Parallel)
**Critical**: No
### Purpose
Set up email sequences for lead nurturing.
### Inputs
- Business ID
- Business name
### Process
1. **Template Creation**
- Welcome email
- Onboarding sequence (3-5 emails)
- Drip campaign
2. **SendGrid Setup** (if configured)
- Create templates via API
- Set up automation rules
3. **Database Update**
- Set `emailAutomation = true`
### Outputs
```json
{
  "configured": true,
  "templates": 2
}
}
```
### Success Criteria
- Templates created
- Automation configured
---
## 7. Analytics Setup Workflow
**Type**: `ANALYTICS_SETUP`
**Phase**: Marketing (Parallel)
**Critical**: No
### Purpose
Install tracking and analytics for data-driven optimization.
### Inputs
- Business ID
- MVP URL
### Process
1. **Google Analytics**
- Create GA4 property
- Install tracking code
- Set up conversion goals
2. **Meta Pixel**
- Create Facebook Pixel
- Install pixel code
- Configure custom events
3. **Conversion Tracking**
- Track: page views, signups, purchases
- Configure event tracking
### Outputs
```json
{
  "googleAnalytics": {
    "configured": true,
    "trackingId": "G-XXXXXXXXXX"
  },
  "metaPixel": {
    "configured": true,
    "pixelId": "1234567890"
  }
}
```
### Success Criteria
- GA4 tracking installed
- Pixel tracking installed
- Events configured
---
## 8. Optimization Loop Workflow
**Type**: `OPTIMIZATION_LOOP`
**Phase**: Continuous (Forever)
**Critical**: No
### Purpose
Continuously optimize campaigns and budgets for maximum ROI.
### Schedule
Runs every 24 hours (configurable via `OPTIMIZATION_INTERVAL_MINUTES`)
### Inputs
- Business ID
### Process
1. **Metrics Collection**
- Fetch from Google Analytics
- Fetch from Facebook Ads API
- Fetch from Google Ads API
- Aggregate data
2. **Campaign Analysis**
- Calculate ROAS for each campaign
- Calculate CTR, conversion rate
- Classify performance: good/acceptable/poor
3. **Budget Optimization**
- High performers (ROAS > 3): +20% budget
- Poor performers (ROAS < 1): -30% budget
- Record budget changes
4. **Pause Underperformers**
- ROAS < 0.5: Pause campaign
- Losing 50%+ on ad spend
5. **A/B Testing** (future)
- Create ad variants
- Test different copy/targeting
6. **Metrics Recording**
- Save daily snapshot to database
- Update business revenue
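The budget rules in steps 3 and 4 above reduce to a per-campaign decision function. A minimal sketch; the names and types are illustrative, not the real optimizer:

```typescript
// Sketch of the ROAS-based budget rules above; names/types are illustrative.
interface Campaign { id: string; roas: number; dailyBudget: number }
interface BudgetAction { id: string; action: string; newBudget: number }

function optimizeCampaign(c: Campaign): BudgetAction {
  if (c.roas < 0.5) {
    // Losing 50%+ of ad spend: pause the campaign entirely
    return { id: c.id, action: 'pause', newBudget: 0 };
  }
  if (c.roas < 1) {
    // Poor performer: cut budget by 30%
    return { id: c.id, action: 'decrease_budget', newBudget: c.dailyBudget * 0.7 };
  }
  if (c.roas > 3) {
    // High performer: increase budget by 20%
    return { id: c.id, action: 'increase_budget', newBudget: c.dailyBudget * 1.2 };
  }
  return { id: c.id, action: 'keep', newBudget: c.dailyBudget };
}
```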
### Outputs
```json
{
  "metrics": {
    "revenue": 5000,
    "adSpend": 1500,
    "roas": 3.33
  },
  "budgetChanges": [
    {
      "campaignId": "...",
      "action": "increase_budget",
      "change": "+20%"
    }
  ],
  "pausedCampaigns": ["..."],
  "adTests": {
    "testsRunning": 2
  }
}
```
### Success Criteria
- Metrics collected successfully
- At least 1 optimization performed
- No campaigns with ROAS < 0.5 left active
---
## Workflow Dependencies
```
Market Validation
  └─> MVP Development
       └─> [PARALLEL]
            ├─> Landing Page SEO
            ├─> Paid Ads
            ├─> Content Marketing
            ├─> Email Automation
            └─> Analytics Setup
                 └─> Optimization Loop (forever)
```
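The dependency graph above maps naturally onto sequential awaits plus a `Promise.all` for the parallel marketing phase. A minimal sketch; the `run` callback and the function name are illustrative, not the orchestrator's real API:

```typescript
// Illustrative sketch of the dependency graph above; not the real orchestrator API.
async function runLifecycle(run: (workflow: string) => Promise<void>): Promise<void> {
  // Sequential validation and development phases
  await run('MARKET_VALIDATION');
  await run('MVP_DEVELOPMENT');
  // Marketing workflows execute in parallel
  await Promise.all([
    run('LANDING_PAGE_SEO'),
    run('PAID_ADS'),
    run('CONTENT_MARKETING'),
    run('EMAIL_AUTOMATION'),
    run('ANALYTICS_SETUP'),
  ]);
  // The optimization loop would then be scheduled on an interval (e.g. every 24h)
}
```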
## Error Handling
### Retry Policy
- Max retries: 3
- Backoff: Exponential (2s, 4s, 8s)
- Timeout: 2 minutes per attempt
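The delay schedule above (2s, 4s, 8s) is a doubling backoff starting at 2 seconds. A minimal sketch of the computation:

```typescript
// Sketch of the retry schedule above: exponential backoff starting at 2s.
function backoffMs(attempt: number): number {
  // attempt 1 -> 2000ms, attempt 2 -> 4000ms, attempt 3 -> 8000ms
  return 2000 * 2 ** (attempt - 1);
}
```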
### Failure Actions
**Critical Workflows** (1, 2):
- Pause business
- Send critical alert
- Stop lifecycle execution
**Non-Critical Workflows** (3-8):
- Log error
- Send warning alert
- Continue lifecycle
### Recovery
Workflows can be manually re-run:
```bash
# Retry failed workflow
pnpm retry-workflow --business-id abc123 --type MARKET_VALIDATION
```
---
## Monitoring Workflows
### Database Tracking
Each workflow run creates a `WorkflowRun` record:
```sql
SELECT
  "workflowType",
  status,
  attempts,
  error,
  "completedAt" - "startedAt" AS duration
FROM "WorkflowRun"
WHERE "businessId" = 'abc123'
ORDER BY "createdAt" DESC;
```
### Logs
```bash
# View workflow logs
docker logs srb-orchestrator | grep "MARKET_VALIDATION"
# Real-time monitoring
docker logs -f srb-orchestrator
```
### Metrics
- Execution time (should be < 5 minutes)
- Success rate (should be > 95%)
- Retry rate (should be < 10%)
---
## Extending Workflows
To add a new workflow:
1. Create file: `src/workflows/09-new-workflow.ts`
2. Extend `WorkflowBase`
3. Implement `execute()` method
4. Add to `WorkflowExecutor` map
5. Add to Prisma `WorkflowType` enum
6. Update orchestrator lifecycle
Example:
```typescript
export class NewWorkflow extends WorkflowBase {
  protected type: WorkflowType = 'NEW_WORKFLOW';

  protected async execute(context: WorkflowContext): Promise<WorkflowResult> {
    const business = await this.getBusiness(context.businessId);

    // Your logic here

    return {
      success: true,
      data: { ... }
    };
  }
}
```
---
**Last Updated**: 2026-02-04