# Self-Improving Coding Agent

A browser-based AI agent that generates, tests, and refines code using local LLM inference (Ollama). Demonstrates real-time streaming, token tracking, and interactive help features.
## Features

- **Real-time streaming:** Code appears as it is generated, providing immediate feedback.
- **Token tracking:** Monitor token usage with a detailed breakdown (prompt + generated).
- **Interactive help:** Contextual help buttons explain each component and the file structure.
- **Dependency checks:** Automatic dependency verification ensures everything works.
- **Transparency:** View raw prompts and responses for learning and debugging.
- **Mock mode:** Test the UI without Ollama using simulated responses.
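The streaming and token-tracking features above rest on Ollama's `/api/generate` endpoint, which streams newline-delimited JSON: each line carries a text fragment in `response`, and the final line (`done: true`) reports `prompt_eval_count` and `eval_count`. A minimal sketch of a parser for that format (the function name `parseOllamaStream` and callback shape are illustrative, not taken from this project's source):

```javascript
// Sketch: parse Ollama's streaming NDJSON output from /api/generate.
// Each line is one JSON object; partial text arrives in `response`, and the
// final line (done: true) carries the token counts used for the breakdown.
function parseOllamaStream(ndjson, onToken) {
  let text = "";
  let usage = null;
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue;
    const chunk = JSON.parse(line);
    if (chunk.response) {
      text += chunk.response;
      if (onToken) onToken(chunk.response); // push each fragment to the UI
    }
    if (chunk.done) {
      usage = {
        promptTokens: chunk.prompt_eval_count ?? 0,
        generatedTokens: chunk.eval_count ?? 0,
        totalTokens: (chunk.prompt_eval_count ?? 0) + (chunk.eval_count ?? 0),
      };
    }
  }
  return { text, usage };
}
```

In the browser this would be fed incrementally from a `ReadableStream` reader on the `fetch` response rather than from a complete string.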
**Tech stack:** React 18+ • Ollama (`phi3:latest`) • Monaco Editor • Vite
## Quick Start

```sh
# Pull the model and start the Ollama server
ollama pull phi3:latest
ollama serve

# In a second terminal: install dependencies and run the dev server
cd my-agent-app
npm install
npm run dev
```
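The mock mode listed in the features can be sketched as a generator that emits fake NDJSON chunks in the same shape Ollama's `/api/generate` streams, so the UI runs without a server. The function name `mockOllamaStream` and the word-sized chunking are assumptions for illustration, not this project's actual implementation:

```javascript
// Sketch of "mock mode": simulate Ollama's streaming NDJSON output so the
// UI can be exercised without a running server. The chunk shape mirrors
// /api/generate responses; token counts here are simulated, not real.
function* mockOllamaStream(text, { promptTokens = 10 } = {}) {
  const pieces = text.match(/\S+\s*/g) ?? []; // split into word-sized chunks
  for (const piece of pieces) {
    yield JSON.stringify({ response: piece, done: false });
  }
  // Final chunk carries the simulated token counts.
  yield JSON.stringify({
    done: true,
    prompt_eval_count: promptTokens,
    eval_count: pieces.length,
  });
}
```

The UI can then consume either this generator or the real fetch stream through the same code path, which is what makes the simulated responses a drop-in replacement.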
**Demo video:** Watch the Self-Improving Coding Agent in action.