Open source toolkit for building AI VTubers
For developers and creators who want to build AI VTubers like Neuro-sama, or open-source AI characters in the spirit of Project AIRI. Start from the hosted web app, self-host a working example, or assemble your own stack from modular TypeScript packages for chat, voice, streaming, and viewer relationships.
Try the hosted web app ・ See example apps ・ Browse packages
- AI VTubers that chat and speak with live viewers
- Streaming assistants that react to YouTube / Twitch comments
- AI character apps with text, voice, vision, and long-term memory
- Viewer relationship systems with points, levels, and achievements
- Browser- and Node.js-based integrations, composed from independent packages
AITuber OnAir is a full, standalone AITuber streaming web app built on top of @aituber-onair/core. It's both the quickest way to experience the toolkit end-to-end and a working reference for what you can ship with it. No setup required.
Three full, ready-to-run React apps built on @aituber-onair/core. Pick the
avatar style that fits your project. All three share the same broad LLM / TTS
provider coverage and in-app Settings UI.
Swap in 4 PNG states (mouth/eyes open/closed) and get real-time lip-sync driven from actual audio output. See packages/core/examples/react-pngtuber-app.
git clone https://github.com/shinshin86/aituber-onair.git
cd aituber-onair/packages/core/examples/react-pngtuber-app
npm install
npm run dev

Render a 3D VRM avatar (miko.vrm) with optional idle VRMA animation, real-time mouth lip-sync driven from audio output, and camera controls (drag to rotate / wheel to zoom). See packages/core/examples/react-vrm-app.
git clone https://github.com/shinshin86/aituber-onair.git
cd aituber-onair/packages/core/examples/react-vrm-app
npm install
npm run dev

Load a local Live2D model folder that contains .model3.json, render it in the browser, and drive mouth movement from actual audio output volume. This example intentionally does not bundle any Live2D assets. See packages/core/examples/react-live2d-app.
git clone https://github.com/shinshin86/aituber-onair.git
cd aituber-onair/packages/core/examples/react-live2d-app
npm install
npm run dev

In each of the three examples, open http://localhost:5173, then set API keys and provider options in Settings.
Install only what you need and drop it into your own app:
npm install @aituber-onair/chat

import { ChatServiceFactory } from '@aituber-onair/chat';

const chat = ChatServiceFactory.createChatService('openai', {
apiKey: process.env.OPENAI_API_KEY!,
});
await chat.processChat(
[{ role: 'user', content: 'Hello!' }],
(partial) => process.stdout.write(partial),
async (full) => console.log('\nDone:', full),
);

See each package README for provider setup and fuller usage.
Core runtime tying chat, voice, memory, and conversation context together for full AITuber experiences.

npm install @aituber-onair/core
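To show how the pieces connect, here is a minimal sketch of the core runtime. The constructor options, event name, and processChat signature below are assumptions inferred from the example apps, not a verified API surface; treat the core README as authoritative.

import { AITuberOnAirCore } from '@aituber-onair/core';

// Assumed option shape; check the core package README for the real one.
const core = new AITuberOnAirCore({
  apiKey: process.env.OPENAI_API_KEY!,
  chatOptions: { systemPrompt: 'You are a cheerful AITuber.' },
  voiceOptions: { engineType: 'voicevox', speaker: '1' },
});

// Event name is an assumption as well.
core.on('assistantResponse', (event) => console.log(event));

// Feed a viewer comment through chat, memory, and voice.
await core.processChat('Hello from a viewer!');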
Unified LLM layer across OpenAI, Claude, Gemini, Z.ai, Kimi, and OpenRouter — streaming, tool/function calling, vision, and MCP support included.

npm install @aituber-onair/chat
Standalone TTS library with VOICEVOX, VoicePeak, OpenAI TTS, MiniMax, AIVIS Speech, and more, plus emotion-aware synthesis.

npm install @aituber-onair/voice
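For standalone use, a minimal sketch assuming a locally running VOICEVOX server; the adapter class and option names here should be double-checked against the voice README:

import { VoiceEngineAdapter } from '@aituber-onair/voice';

// 'voicevox' assumes a VOICEVOX server on its default local port.
const voice = new VoiceEngineAdapter({
  engineType: 'voicevox',
  speaker: '1',
});

await voice.speak({ text: 'Hello, welcome to the stream!' });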
Detects repetitive conversation patterns and injects topic-diversification prompts to keep dialogue fresh.

npm install @aituber-onair/manneri
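Conceptually the flow is: hand the detector your recent message history, and ask for a diversification prompt when it flags a loop. The class and method names in this sketch are illustrative assumptions, not the confirmed API:

import { ManneriDetector } from '@aituber-onair/manneri';

const detector = new ManneriDetector();

const history = [
  { role: 'user', content: 'Tell me about cats.' },
  { role: 'assistant', content: 'Cats are independent companions.' },
  { role: 'user', content: 'Tell me about cats.' },
];

// Hypothetical calls: detect a repetitive loop, then fetch a prompt
// that steers the conversation somewhere new.
if (detector.detectPattern(history)) {
  console.log(detector.generateDiversificationPrompt());
}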
WebSocket chat client with React hooks, auto-reconnect, rate limiting, mentions, and voice integration. Browser and Node.js.

npm install @aituber-onair/bushitsu-client
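In a React app, hook-based usage looks roughly like the sketch below; the hook name, its option shape, and the onComment callback signature are assumptions to confirm against the bushitsu-client README:

import { useBushitsuClient } from '@aituber-onair/bushitsu-client';

function ChatPanel() {
  // Assumed hook signature; auto-reconnect and rate limiting are
  // handled inside the client per the package description.
  const { isConnected, sendMessage } = useBushitsuClient({
    serverUrl: 'ws://localhost:8080',
    room: 'lobby',
    userName: 'AITuber',
    onComment: (text: string, from: string) =>
      console.log(`${from}: ${text}`),
  });

  return (
    <button disabled={!isConnected} onClick={() => sendMessage('Hi!')}>
      Say hi
    </button>
  );
}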
Relationship / bond system (絆) for AI characters and viewers: points, achievements, emotion-based bonuses, level progression, persistent storage.

npm install @aituber-onair/kizuna

- Proven in production — powers AITuber OnAir, a live AITuber streaming web app, so you're building on the same code path a real product ships on
- Pick any entry point: hosted web app, self-hosted example, or modular npm packages
- First-class coverage of the providers AITuber builders actually use — OpenAI / Claude / Gemini for chat, VOICEVOX / OpenAI TTS / AIVIS Speech and more for voice
- Chat, voice, streaming (YouTube / Twitch / WebSocket), and viewer relationships in a single, consistent stack
- MIT-licensed TypeScript — you keep control of hosting, data, and integrations
aituber-onair/
└── packages/
├── core/ # AITuberOnAirCore, memory, orchestration
├── chat/ # LLM providers, streaming, tools, MCP
├── voice/ # TTS engines, emotion, playback
├── manneri/ # Conversation pattern detection
├── bushitsu-client/ # WebSocket chat client + React hooks
└── kizuna/          # Viewer relationship / bond system

MIT — see LICENSE.
This project is based on the work referenced here. Without the contributions of these pioneers, it would not exist.
Working on the monorepo itself:
git clone https://github.com/shinshin86/aituber-onair.git
cd aituber-onair
npm install
npm run build
npm run test
npm run fmt

The repo shares Agent Skills so that Codex and Claude Code use the same workflow definitions.
See docs/agent-skills.md for the full guide. Canonical sources live in skills/, with Claude Code runtime copies under .claude/skills/.
Releases are driven by manual version bumps + per-package CHANGELOG.md, published automatically by GitHub Actions on merge to main. Do not run npm publish directly.
- Patch: bug fixes, dependency updates
- Minor: new features, backward-compatible changes
- Major: breaking changes to public API
release.yml uses Changesets to publish packages, create tags (@aituber-onair/<pkg>@x.y.z), and create GitHub Releases for packages published in that run. If CI fails mid-run, re-running publishes the remainder but does not backfill Releases for already-published packages — create those manually from the package CHANGELOG (tag will already exist). prerelease-next.yml only updates the next prerelease tag.
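For that manual backfill, something like the following works, assuming an authenticated gh CLI and an already-pushed tag (package name and version are placeholders):

gh release create '@aituber-onair/chat@x.y.z' --title '@aituber-onair/chat@x.y.z' --notes 'See packages/chat/CHANGELOG.md for details.'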