Wednesday, December 10, 2025
How I Built My First Production Website for IELTS Writing Practice
A practical architecture breakdown of my first production website: React frontend, secured API layer, Supabase auth/data, and AI-assisted grading workflows.
Hook
I recently launched my first production website: https://myielts.ai.
This post is a technical deep dive into how I structured it so I could ship fast, keep the app stable, and still leave room for future changes.
Problem
I needed to build an IELTS writing practice product with:
- A responsive web app for real users
- AI-assisted grading, hints, and rewriting
- Secure authentication and user-level data isolation
- Reliable behavior across desktop and mobile
- A deployment setup that is easy to operate
The challenge was balancing speed and maintainability without overengineering.
Approach
I used a straightforward full-stack structure:
- Frontend: React 18 + TypeScript SPA with route-level lazy loading
- API layer: Node handlers under `/api/**` with centralized auth, validation, and retries
- Data/auth: Supabase (Auth + Postgres + RLS)
- AI services: grading, hint generation, and essay rewrite endpoints
- Hosting: Vercel for the frontend, Railway for the API, with `/api/*` rewrite routing
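The `/api/*` rewrite can be expressed directly in `vercel.json`, so the frontend makes same-origin requests and Vercel proxies them to the API host. This is a minimal sketch; the Railway hostname below is a placeholder, not the real deployment URL.

```json
{
  "rewrites": [
    {
      "source": "/api/:path*",
      "destination": "https://your-api.up.railway.app/api/:path*"
    }
  ]
}
```

Same-origin routing like this also sidesteps CORS configuration between the two hosts.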
Core frontend composition:
- Providers: `ThemeProvider`, `UserProvider`, `AppProvider`, `ToastProvider`
- Route model:
  - Public: `/welcome`
  - Protected: `/`, `/practice`, `/practice/custom`, `/result`, `/history`, `/analytics`, `/profile`, `/settings`
- Resilience: app-wide error boundary and retry-aware async loading
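The retry-aware async loading can be sketched as a small wrapper around a dynamic import: if a lazy route chunk fails to load (flaky network, stale deploy), retry with a growing delay before surfacing the error to the boundary. This is an illustrative sketch, not the production implementation; names are mine.

```typescript
// Retry-aware dynamic import: retries a failed chunk load a few times
// with exponentially growing delays before giving up.
async function importWithRetry<T>(
  load: () => Promise<T>,
  retries = 3,
  delayMs = 300,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await load();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Back off before the next attempt: 300ms, 600ms, 1200ms, ...
        await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapped in `React.lazy(() => importWithRetry(() => import('./pages/Practice')))`, a transient chunk-load failure no longer takes down the route; only a persistent failure reaches the error boundary.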
Core backend/API patterns:
- Token-based auth checks in handlers
- Request validation and rate limiting
- RLS-aware database context for row-level data access
- Centralized API client behavior:
- Environment-based base URL resolution
- Auth token injection
- Exponential backoff retries
  - Queued refresh handling on `401` responses
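The client behaviors above can be sketched together: inject the current auth token on each attempt, back off exponentially on server errors, and let concurrent `401`s queue behind one shared refresh promise instead of triggering parallel refreshes. All names and signatures here are illustrative assumptions, not the production code.

```typescript
type ApiResponse = { status: number };
type Fetcher = (url: string, headers: Record<string, string>) => Promise<ApiResponse>;

class ApiClient {
  private refreshing: Promise<void> | null = null; // single in-flight refresh

  constructor(
    private fetcher: Fetcher,
    private getToken: () => string,
    private refreshToken: () => Promise<void>,
  ) {}

  async request(url: string, maxRetries = 3): Promise<ApiResponse> {
    for (let attempt = 0; ; attempt++) {
      // Auth token injection on every attempt, so a refreshed token is picked up.
      const res = await this.fetcher(url, {
        Authorization: `Bearer ${this.getToken()}`,
      });
      if (res.status === 401 && attempt < maxRetries) {
        // Queue concurrent 401s behind one shared refresh promise.
        if (!this.refreshing) {
          this.refreshing = this.refreshToken().finally(() => {
            this.refreshing = null;
          });
        }
        await this.refreshing;
        continue; // retry with the fresh token
      }
      if (res.status >= 500 && attempt < maxRetries) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
        continue;
      }
      return res;
    }
  }
}
```

The single-flight refresh is the key detail: without it, ten requests hitting an expired token would fire ten refresh calls, and the later ones could invalidate the token the earlier ones just obtained.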
Tradeoffs
- Context-based state kept the architecture simpler than adding a heavier global state library, but requires discipline as features grow.
- Hybrid persistence (`localStorage` + backend sync) improved UX resilience, but increased synchronization complexity.
- Splitting frontend and API hosting improved operational flexibility, but introduced cross-service deployment coordination.
- AI features behind API services kept the frontend stable, but moved prompt/model complexity into backend ownership.
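The hybrid persistence tradeoff can be illustrated with a read-through helper: try the backend first and update the local copy, fall back to the cached copy when the backend is unreachable. This is a sketch under my own assumptions; the `KVStore` interface stands in for `window.localStorage` so the snippet runs outside a browser.

```typescript
// Storage-like interface; window.localStorage satisfies it in the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Read-through cache: prefer the backend, keep the local copy in sync,
// and fall back to the cached value when the backend call fails.
async function loadWithCache<T>(
  key: string,
  store: KVStore,
  fetchRemote: () => Promise<T>,
): Promise<T> {
  try {
    const fresh = await fetchRemote();
    store.setItem(key, JSON.stringify(fresh)); // sync local copy
    return fresh;
  } catch {
    const cached = store.getItem(key);
    if (cached !== null) return JSON.parse(cached) as T; // offline/error fallback
    throw new Error(`no cached value for ${key}`);
  }
}
```

The synchronization complexity mentioned above shows up the moment writes are added: a local draft saved while offline needs a reconciliation policy (last-write-wins, merge, or prompt) when the backend comes back.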
For related decisions on response-time tradeoffs in AI systems, see /blog/rag-latency-tradeoffs-that-matter.
Links and References
- Product: https://myielts.ai
- Vercel: https://vercel.com
- Railway: https://railway.app
- Supabase: https://supabase.com
Key Takeaways
- A clear boundary between UI, API, and data makes iteration faster.
- Centralizing auth/retry/error handling prevents repeated bugs.
- RLS and authenticated DB context are critical for multi-user safety.
- Reliability patterns (lazy loading, retries, local cache, smoke tests) matter as much as features in early production systems.
Related Posts
Researching the Current LLM Memory Problem
My first research on the LLM memory problem.