
Agent Workflows with Next.js and Python
Multi-agent workflows are the future of complex software development. This guide shows how to coordinate separate agents for frontend, API, and backend logic.
The Scenario
We are building a data analysis platform:
- Frontend: Next.js dashboard with charts
- API: Next.js API routes as a gateway
- Backend: Python FastAPI for data processing
- ML: Python models for predictions
Agent Setup
Agent 1: Frontend Developer
Responsible for:
- React components & UI
- State management
- Client-side logic
- Styling with Tailwind
Context:
# Frontend Agent
Tech: Next.js 13+, TypeScript, Tailwind
Style: Modern, data-focused, responsive
Patterns: Server Components where possible, Client Components for interactivity
State: Context API for global state, useState for local state
Agent 2: API Developer
Responsible for:
- Next.js API routes
- Request validation
- Error handling
- API-to-backend communication
Context:
# API Agent
Tech: Next.js API Routes, TypeScript
Validation: Zod for request/response schemas
Error Handling: Structured error responses
Security: Rate limiting, auth checks
Agent 3: Backend Developer
Responsible for:
- FastAPI endpoints
- Business logic
- Database operations
- Data processing
Context:
# Backend Agent
Tech: Python 3.11+, FastAPI, SQLAlchemy
DB: PostgreSQL
Patterns: Repository Pattern, Service Layer
Validation: Pydantic Models
Agent 4: ML Engineer
Responsible for:
- Model training
- Predictions
- Data transformations
- Performance optimization
Context:
# ML Agent
Tech: Python, scikit-learn, pandas
Focus: Predictive models, data pipelines
Output: REST API endpoints via FastAPI
Monitoring: Model performance metrics
Workflow Orchestration
Phase 1: Planning
The human conductor defines the high-level requirements:
Feature: Sales Prediction Dashboard
- User can select a time range
- System displays historical data as a chart
- ML model predicts the next 30 days
- Confidence intervals are shown
Then:
- Break the requirements down into agent tasks
- Identify dependencies
- Define the execution order
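One way to make the planning steps concrete is to derive the execution order from a task dependency graph. The sketch below uses Python's stdlib graphlib; the task names are illustrative, not part of any framework, and a CycleError from the same call also surfaces accidental circular dependencies early.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Illustrative task graph: each task maps to the set of tasks it depends on.
dependencies = {
    "ml_service": set(),
    "backend_endpoint": {"ml_service"},
    "api_route": {"backend_endpoint"},
    "frontend_dashboard": {"api_route"},
}

# static_order() yields a valid execution order and raises CycleError
# if the graph contains a cycle.
execution_order = list(TopologicalSorter(dependencies).static_order())
print(execution_order)
# → ['ml_service', 'backend_endpoint', 'api_route', 'frontend_dashboard']
```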
Phase 2: Contract Definition
Before the agents start working in parallel, we define the interfaces:
API Contract (TypeScript):
// lib/types/api.ts
export interface PredictionRequest {
  startDate: string;
  endDate: string;
  metric: 'sales' | 'revenue' | 'customers';
}

export interface PredictionResponse {
  historical: DataPoint[];
  prediction: DataPoint[];
  confidence: {
    lower: number[];
    upper: number[];
  };
  modelInfo: {
    accuracy: number;
    lastTrained: string;
  };
}

export interface DataPoint {
  date: string;
  value: number;
}
Backend Contract (Python):
# app/schemas/prediction.py
from pydantic import BaseModel
from datetime import date

class PredictionRequest(BaseModel):
    start_date: date
    end_date: date
    metric: str

class DataPoint(BaseModel):
    date: date
    value: float

class PredictionResponse(BaseModel):
    historical: list[DataPoint]
    prediction: list[DataPoint]
    confidence: dict
    model_info: dict
Phase 3: Parallel Development
With clear contracts in place, the agents can work in parallel:
Task 1 (Frontend Agent):
Create a dashboard component with:
- Date range picker
- Chart for historical + prediction data
- Confidence interval visualization
- Loading states
Use the types from lib/types/api.ts
Task 2 (API Agent):
Create the API route /api/predictions/[metric]
- POST request with a PredictionRequest body
- Validation with Zod
- Forward to the Python backend (http://localhost:8000)
- Error handling for backend failures
Task 3 (Backend Agent):
Create the FastAPI endpoint /predict
- Pydantic validation
- Load historical data from the DB
- Call the ML service
- Return a PredictionResponse
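In the real endpoint, Pydantic handles parsing; the domain rules behind the validation step can still be sketched without FastAPI. This is a dependency-free sketch, and the 365-day cap is an assumption (it mirrors the model limitation discussed in the agent-communication example later in this guide):

```python
from datetime import date, timedelta

ALLOWED_METRICS = {"sales", "revenue", "customers"}
MAX_RANGE_DAYS = 365  # assumed model limit; mirror this check in the API layer

def validate_prediction_request(start_date: str, end_date: str, metric: str) -> tuple[date, date]:
    """Validate raw request fields before they reach the ML service."""
    if metric not in ALLOWED_METRICS:
        raise ValueError(f"unknown metric: {metric!r}")
    start = date.fromisoformat(start_date)
    end = date.fromisoformat(end_date)
    if end < start:
        raise ValueError("end_date must not be before start_date")
    if (end - start) > timedelta(days=MAX_RANGE_DAYS):
        raise ValueError(f"date range must not exceed {MAX_RANGE_DAYS} days")
    return start, end
```

Failing fast here keeps bad requests from ever reaching the ML service.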
Task 4 (ML Agent):
Create the prediction service:
- Input: historical data
- Model: time series forecasting (ARIMA or Prophet)
- Output: predictions + confidence intervals
- Cache predictions for one hour
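The real Task 4 implementation would use ARIMA or Prophet. To show the shape of the service contract (predictions plus a confidence band) without any dependencies, here is a deliberately naive linear-trend stand-in:

```python
from statistics import stdev

def predict(history: list[float], horizon: int = 30, z: float = 1.96):
    """Naive linear-trend forecast with a flat 95% confidence band.

    Stand-in for the ARIMA/Prophet model: slope is taken from the first and
    last values, and the spread of one-step changes estimates uncertainty.
    """
    if len(history) < 2:
        raise ValueError("need at least two data points")
    slope = (history[-1] - history[0]) / (len(history) - 1)
    diffs = [b - a for a, b in zip(history, history[1:])]
    spread = stdev(diffs) if len(diffs) > 1 else 0.0
    prediction = [history[-1] + slope * step for step in range(1, horizon + 1)]
    return {
        "prediction": prediction,
        "confidence": {
            "lower": [p - z * spread for p in prediction],
            "upper": [p + z * spread for p in prediction],
        },
    }
```

Swapping this for a proper model later does not change the contract the other agents code against, which is the point of defining it first.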
Phase 4: Integration
Sequential execution with validation at each step:
- ML Agent → service finished, unit tests passing
- Backend Agent → ML service integrated, API tests passing
- API Agent → backend integrated, contract tests passing
- Frontend Agent → API integrated, E2E tests passing
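The contract tests in this sequence can start as a plain shape check against the shared interface. A minimal sketch, with field names taken from the PredictionResponse contract above:

```python
def check_prediction_contract(payload: dict) -> list[str]:
    """Return a list of contract violations (empty list means the payload conforms)."""
    errors = []
    for field in ("historical", "prediction", "confidence", "modelInfo"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    for point in payload.get("historical", []) + payload.get("prediction", []):
        if not isinstance(point, dict) or not {"date", "value"} <= point.keys():
            errors.append(f"malformed data point: {point}")
    confidence = payload.get("confidence", {})
    if not {"lower", "upper"} <= confidence.keys():
        errors.append("confidence must contain lower and upper")
    return errors
```

Returning all violations at once, instead of failing on the first, gives the agents a complete diff of what to fix.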
Phase 5: Human Review
Code review checklist:
- ✅ Type safety end to end
- ✅ Error handling implemented
- ✅ Tests present and passing
- ✅ Performance acceptable
- ✅ Security considerations addressed
Practical Implementation
1. Project Setup
# Root Project
mkdir sales-prediction-platform
cd sales-prediction-platform
# Frontend + API
npx create-next-app@latest frontend --typescript
cd frontend && npm install zod date-fns recharts
# Backend
cd ..
mkdir backend
cd backend
python -m venv venv
source venv/bin/activate
pip install fastapi uvicorn sqlalchemy "pydantic[email]" python-multipart
pip install scikit-learn pandas prophet
# Structure
backend/
├── app/
│ ├── api/
│ │ └── routes/
│ ├── core/
│ ├── ml/
│ ├── models/
│ └── schemas/
└── main.py
2. Development Flow
Terminal 1 (Backend):
cd backend
uvicorn main:app --reload --port 8000
Terminal 2 (Frontend):
cd frontend
npm run dev
Terminal 3 (agent orchestration): this is where your agent orchestration runs
3. Agent Prompts in Practice
Frontend Agent Prompt:
Context: Next.js app with a sales prediction dashboard
File: @folder app/dashboard
Task: Create the PredictionChart component
- Props: data (PredictionResponse from the API)
- Library: recharts
- Features:
* Line chart with historical + prediction data
* Different colors for each series
* Shaded area for the confidence interval
* Responsive
* Loading skeleton
* Error state
Style: Tailwind, dark theme, modern
Backend Agent Prompt:
Context: FastAPI backend for ML predictions
Files: @folder app/api/routes
Task: Implement the /predict endpoint
- Method: POST
- Input: PredictionRequest (app/schemas/prediction.py)
- Steps:
1. Validate the request
2. Query historical data from the DB
3. Call the ML service
4. Transform the response
5. Return a PredictionResponse
Include: error handling, logging, type hints
Advanced Patterns
Agent Communication
Agents can exchange messages:
Frontend Agent: "Backend returns 500 for date range > 1 year"
Backend Agent: "Checking... ML model can't handle > 365 days. Adding validation."
API Agent: "Should I add validation in API layer too?"
Conductor: "Yes, fail fast. Both layers validate."
Conflict Resolution
Frontend Agent changes: a TypeScript interface
Backend Agent has: the Pydantic model already committed
Conflict: field names don't match (camelCase vs snake_case)
Resolution:
1. API Agent adds a transformer
2. Frontend keeps camelCase
3. Backend keeps snake_case
4. Transformation happens in the API layer
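The transformer from step 1 is a small recursive key-renaming pass. A sketch (function names are illustrative):

```python
def snake_to_camel(name: str) -> str:
    """model_info -> modelInfo; names without underscores pass through."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def transform_keys(payload):
    """Recursively rename snake_case keys to camelCase at the API boundary."""
    if isinstance(payload, dict):
        return {snake_to_camel(k): transform_keys(v) for k, v in payload.items()}
    if isinstance(payload, list):
        return [transform_keys(item) for item in payload]
    return payload
```

Keeping the conversion in one place means neither the frontend nor the backend ever sees the other side's naming convention.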
Progressive Enhancement
Phase 1: Basic Feature (alle Agenten)
Phase 2: Optimization (Backend + ML Agent)
Phase 3: Advanced UI (Frontend Agent)
Phase 4: Monitoring (DevOps Agent)
Monitoring & Debugging
Logging Strategy
Each agent logs under its own namespace:
// Frontend
console.log('[Frontend]', 'Fetching predictions...');
// API
console.log('[API]', 'Forwarding to backend:', request);
# Backend
logger.info('[Backend] Processing prediction request', extra={...})
# ML
logger.info('[ML] Model inference completed', extra={...})
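On the Python side, the namespace convention maps naturally onto logging.getLogger. A sketch, where the agents.* logger naming is an assumption, not a framework requirement:

```python
import logging

def get_agent_logger(agent: str) -> logging.Logger:
    """One logger per agent, so log lines can be filtered by namespace."""
    logger = logging.getLogger(f"agents.{agent}")
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(f"[{agent.capitalize()}] %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

ml_logger = get_agent_logger("ml")
ml_logger.info("Model inference completed")
```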
Distributed Tracing
// Trace ID flows through all layers
const traceId = crypto.randomUUID();
// Frontend Request
fetch('/api/predict', {
headers: { 'X-Trace-Id': traceId }
});
// API forwards
fetch('http://backend:8000/predict', {
headers: { 'X-Trace-Id': traceId }
});
// Backend & ML log with the trace ID
Best Practices
- Contracts first: define interfaces before implementation
- Clear ownership: each agent owns its domain
- Communication channels: structured agent-to-agent communication
- Human oversight: for critical decisions
- Automated testing: every agent writes tests
- Documentation: agents document their decisions
Troubleshooting
Problem: Agents produce incompatible code. Solution: strict type contracts, a shared types package.
Problem: Merge conflicts. Solution: one feature branch per agent, clear file ownership.
Problem: Circular dependencies. Solution: visualize the dependency graph, then refactor.
Next Steps
- Start with 2 agents (frontend + backend)
- Add more agents incrementally
- Develop your own orchestration patterns
- Share your learnings in the forum
Multi-agent development is the future. Good luck! 🤖