Suzy AI
Real-time AI companion with Live2D avatar, voice conversation, and emotion-aware animation.
Why This Project
I built Suzy as a safe, non-judgmental companion for people who feel lonely, socially awkward, or misunderstood.
Goal: let users talk naturally in any language and feel emotionally supported.
Standout Features
- Real-time voice chat via the ElevenLabs Conversational AI API.
- Live2D character rendering with lip-sync, gaze tracking, and idle animations.
- Emotion + gesture-driven visual feedback during conversation.
- Companion-style UX: onboarding profile, daily mood check-in, continue-chat flow.
- Responsive UI optimized for desktop and mobile.
Tech Stack
- React + TypeScript + Vite
- Tailwind CSS + shadcn/ui
- PixiJS + pixi-live2d-display
- ElevenLabs (voice)
- Supabase Edge Functions (session token backend)
- Vercel (deployment)
Architecture (Essential)
src/
├── pages/
│   └── Index.tsx                # main conversation and UI orchestration
└── components/
    ├── CharacterDisplay.tsx     # model switching + loading/fallback behavior
    └── Live2D/                  # avatar rendering and animation pipeline
supabase/
└── functions/
    └── elevenlabs-session/
        └── index.ts             # signed session URL generation

Quick Run

npm install
npm run dev

Create .env.local:
VITE_SUPABASE_URL=your_supabase_project_url
VITE_SUPABASE_ANON_KEY=your_supabase_anon_key
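For reference, the `elevenlabs-session` function's core logic might look like the sketch below. This is an assumption about the implementation, not the project's actual code: the secret names (`ELEVENLABS_API_KEY`, `ELEVENLABS_AGENT_ID`) and the ElevenLabs signed-URL endpoint are illustrative, and the handler takes an injectable `fetchImpl` purely so it can be exercised outside the edge runtime.

```typescript
type Fetch = typeof fetch;

// ElevenLabs Conversational AI signed-URL endpoint for a given agent
// (endpoint path is an assumption; check the ElevenLabs docs).
export function signedUrlEndpoint(agentId: string): string {
  return `https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=${encodeURIComponent(agentId)}`;
}

// Exchange the server-side API key for a short-lived signed session URL,
// so the key never reaches the browser.
export async function handleSessionRequest(
  apiKey: string,
  agentId: string,
  fetchImpl: Fetch = fetch,
): Promise<Response> {
  const res = await fetchImpl(signedUrlEndpoint(agentId), {
    headers: { "xi-api-key": apiKey },
  });
  const { signed_url } = await res.json();
  return new Response(JSON.stringify({ signed_url }), {
    headers: { "Content-Type": "application/json" },
  });
}
```

In the deployed Supabase Edge Function this handler would be wrapped with `Deno.serve`, reading the key and agent ID from function secrets; the client then fetches `{ signed_url }` and hands it to the ElevenLabs conversation SDK.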