Chapter 3 · CORE


Chapter 3: The "Stage" (Visual Presentation Layer)

In Chapter 1: Agent Adapters, we gave our AI a way to interact with the world (via Telegram or Minecraft). In Chapter 2: The Cognitive Brain, we gave it the ability to think and make decisions.

Now we have a thinking entity that can send text messages, but it is invisible. It has no face.

In this chapter, we will build The Stage. This is the visual interface where your character actually "lives." Whether you are looking at your AI on a website, on your phone, or as a desktop pet, they need a place to stand, move, and smile.

The Motivation: Write Once, Render Anywhere

Imagine you want to launch your AI character on three different devices:

  1. Web: A website people can visit.
  2. Desktop: A "Tamagotchi" style widget that sits in the corner of your screen.
  3. Mobile: A pocket companion app.

The Problem: Rendering 3D models (VRM) or complex 2D animations (Live2D) is hard. If you write the code to load a 3D model for the Website, you don't want to rewrite it entirely for the Desktop app.

The Solution: We create a shared "Stage." Think of it like a traveling theater troupe. The Stage includes the lighting, the actors, the costumes, and the scripts. We just pack this Stage into different boxes (Web browser, Electron app, Mobile app).

Key Concepts

To understand the Stage, think of a theater production.

1. The Container (The Theater Building)

This is the specific app wrapper. The project ships several of them: a website (apps/stage-web), a desktop "Tamagotchi" widget (built with Electron), and a pocket companion app (apps/stage-pocket).

These containers handle device-specific things (like checking battery level on a phone or window transparency on a desktop), but they all display the same Stage.
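The container/stage split can be sketched as a small interface. This is an illustration, not the project's actual API: each wrapper supplies its device-specific capabilities, while the Stage code itself is written once.

```typescript
// Hypothetical sketch of the container/stage split; the real project
// wires this up through Vue apps, not through this interface.
interface StageContainer {
  name: string
  // Optional device-specific hooks the Stage may query
  batteryLevel?: () => number              // phones
  setTransparency?: (on: boolean) => void  // desktop widget
}

// The shared Stage: written once, handed any container
function renderStage(container: StageContainer): string {
  const battery = container.batteryLevel?.() ?? 100
  return `[${container.name}] stage rendered (battery: ${battery}%)`
}

const web: StageContainer = { name: 'stage-web' }
const pocket: StageContainer = { name: 'stage-pocket', batteryLevel: () => 80 }

console.log(renderStage(web))    // same Stage code...
console.log(renderStage(pocket)) // ...different container
```

The Stage never branches on *which* app it is running in; it only asks the container for capabilities, which is what keeps the three builds from diverging.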

2. The Shared Stage (The Set)

This is the core library (packages/stage-ui). It contains the common UI elements: the chat bubbles, the settings menu, and the canvas where the character stands.

3. The Actor (The Model)

This is the character itself. airi supports two types of actors: 3D models in the VRM format (rendered with Three.js) and 2D models in the Live2D format. Both are covered below.

How to Use: The Shared Architecture

The magic happens because all the specific apps import the same core components.

Let's look at apps/stage-web/src/App.vue. This is the entry point for the website version.

<script setup lang="ts">
import { RouterView } from 'vue-router'
// We import the shared transition logic
import { StageTransitionGroup } from '@proj-airi/ui-transitions'

// Theme colors passed to the transition wrapper (example values)
const colors = ['#f7e8e8', '#e0a6a6', '#c56370', '#714846']
</script>

<template>
  <!-- This wrapper handles page transitions and themes -->
  <StageTransitionGroup :colors="colors">
    
    <!-- The "RouterView" loads the actual Stage scene -->
    <RouterView /> 

  </StageTransitionGroup>
</template>

Explanation: This code is incredibly simple because all the hard work is hidden. The StageTransitionGroup applies the visual theme (colors, dark mode), and RouterView loads the actual character scene from the shared library.

If you looked at apps/stage-pocket/src/App.vue, you would see almost the exact same code! This is the power of the Stage abstraction.

Internal Implementation: Bringing the Actor to Life

What happens when the Stage loads? How does a file on your hard drive become a breathing character?

The Rendering Flow

```mermaid
sequenceDiagram
    participant App as Web/Desktop App
    participant Stage as Shared Stage
    participant 3D as VRM Loader (Three.js)
    participant Screen as User Screen
    App->>Stage: Start Application
    Stage->>3D: Load Model "airi.vrm"
    3D-->>Stage: Model Geometry Loaded
    Stage->>3D: Start "Idle" Animation
    loop Every Frame (60fps)
        Stage->>3D: Update Eye Position (Look at Mouse)
        Stage->>3D: Update Lips (Sync with Audio)
        3D->>Screen: Render Frame
    end
```
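The per-frame loop in the diagram can be sketched like this. These are simplified stand-in types, not the project's real ones; the actual component drives a Three.js `AnimationMixer` and a VRM instance from Vue lifecycle hooks.

```typescript
// Minimal stand-in types for the real Three.js / VRM objects.
interface FrameUpdatable {
  update: (deltaSeconds: number) => void
}

interface StageActor {
  lookAt: (x: number, y: number) => void
  setMouthOpen: (amount: number) => void
}

// One tick of the render loop: animate, react, then render.
function stepFrame(
  mixer: FrameUpdatable,   // advances the idle animation
  actor: StageActor,       // the loaded model
  input: { mouseX: number, mouseY: number, audioLevel: number },
  delta: number,           // seconds since last frame (~0.016 at 60fps)
) {
  mixer.update(delta)                       // breathing / swaying
  actor.lookAt(input.mouseX, input.mouseY)  // eye tracking
  actor.setMouthOpen(input.audioLevel)      // lip sync
}
```

At 60fps this function runs roughly every 16ms, which is why each step must stay cheap: no model loading or parsing ever happens inside the loop.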

Deep Dive: Loading a 3D Model

The heavy lifting is done in packages/stage-ui-three/src/components/Model/VRMModel.vue.

This component wraps the Three.js library to make loading 3D models easy.

1. Loading the Asset

When the component mounts, it fetches the model file.

// derived from VRMModel.vue
async function loadModel() {
  // 1. Use a helper to load the VRM file into the 3D scene
  const _vrmInfo = await loadVrm(modelSrc.value, {
    scene: scene.value, // The 3D world
    lookAt: true,       // Enable eye tracking
  })

  // 2. Save the loaded model to our variable
  vrm.value = _vrmInfo._vrm
  
  // 3. Tell the rest of the app "I am ready!"
  emit('loaded', modelSrc.value)
}

Explanation: We don't deal with raw vertices or textures here. We call loadVrm, which handles the parsing. Once loaded, we emit a loaded event so the loading screen can disappear.

2. Making it Move (Animation)

A static model looks like a statue. We need to apply an "Idle" animation (breathing, slight swaying) so it feels alive.

// derived from VRMModel.vue
// Load the animation file
const animation = await loadVRMAnimation(idleAnimation.value)

// Create a mixer (this blends animations together)
vrmAnimationMixer.value = new AnimationMixer(_vrm.scene)

// Convert the VRM animation into a clip targeting this model,
// then play it
const clip = createVRMAnimationClip(animation, _vrm)
vrmAnimationMixer.value.clipAction(clip).play()
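To see why a mixer is useful, here is a toy version. This is not Three.js's `AnimationMixer`, just an illustration of weighted blending: every playing clip contributes to the final pose in proportion to its weight, which is how an idle sway can be combined with another motion without a hard cut.

```typescript
// Toy animation mixer: blends 1-D "pose values" by weight.
// A real mixer does this per bone and per animation channel.
interface PlayingClip {
  weight: number
  // Returns the clip's pose value at a given time
  sample: (time: number) => number
}

class ToyMixer {
  private time = 0
  private clips: PlayingClip[] = []

  play(clip: PlayingClip) {
    this.clips.push(clip)
  }

  // Advance time and return the weighted average of all clips
  update(delta: number): number {
    this.time += delta
    const totalWeight = this.clips.reduce((s, c) => s + c.weight, 0) || 1
    return this.clips.reduce((s, c) => s + c.weight * c.sample(this.time), 0) / totalWeight
  }
}

const mixer = new ToyMixer()
mixer.play({ weight: 1, sample: () => 10 }) // e.g. idle pose
mixer.play({ weight: 3, sample: () => 20 }) // e.g. a stronger gesture
console.log(mixer.update(0.016)) // (1*10 + 3*20) / 4 = 17.5
```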

3. The "Soul" (Reactivity)

The defining feature of airi is that the character reacts to you. In the code, we "watch" for changes in the environment (like mouse position) and update the model's eyes.

// derived from VRMModel.vue
// Watch the "trackingMode" setting
watch(trackingMode, (newMode) => {
  
  if (newMode === 'mouse') {
    // If tracking mouse, update target when mouse moves
    watch([mouseX, mouseY], ([newX, newY]) => {
      
      // Calculate where the mouse is in the 3D world
      const target = lookAtMouse(newX, newY, camera)
      
      // Tell the model to look there
      emit('lookAtTarget', target)
    })
  }
})

Explanation: This is Vue.js reactivity in action.

  1. We listen for the trackingMode to change.
  2. If it is set to "mouse," we start listening to mouse coordinates.
  3. We convert 2D screen pixels (x, y) into a 3D point in space.
  4. The model's neck and eyes rotate to face that point.
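Step 3 begins by converting pixel coordinates into normalized device coordinates (NDC), where the screen spans -1 to +1 on both axes; Three.js can then unproject that point through the camera into world space. The NDC part is plain arithmetic (the function name here is illustrative):

```typescript
// Convert pixel coordinates to normalized device coordinates (NDC).
// In NDC, x and y run from -1 to +1; the y axis flips because
// screen y grows downward while NDC y grows upward.
function toNdc(px: number, py: number, width: number, height: number): { x: number, y: number } {
  return {
    x: (px / width) * 2 - 1,
    y: -(py / height) * 2 + 1,
  }
}

// The center of an 800x600 window maps to the NDC origin.
console.log(toNdc(400, 300, 800, 600)) // { x: 0, y: 0 }
```

From there, Three.js's `Vector3.unproject(camera)` turns the NDC point into a world-space position, which is the target the model's neck and eyes rotate toward.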

2D vs 3D: The Live2D Component

Not everyone wants a 3D character. airi also supports Live2D (common in anime games). The architecture is identical, but the engine changes.

In packages/stage-ui-live2d/src/components/scenes/Live2D.vue, we see the same pattern:

<template>
  <!-- The canvas for 2D drawing -->
  <Live2DCanvas :width="width" :height="height">

    <!-- The model: "focus-at" controls where it looks,
         "mouth-open-size" drives lip syncing -->
    <Live2DModel
      :model-src="modelSrc"
      :focus-at="focusAt"
      :mouth-open-size="mouthOpenSize"
    />

  </Live2DCanvas>
</template>

Because the Stage abstracts the differences, the Cognitive Brain doesn't care if the body is 2D or 3D. It just sends a command like "Smile," and the Stage figures out how to render that smile on the specific model.
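That last idea, one command rendered by two different engines, can be sketched as a shared interface. The names below are illustrative stand-ins, not the project's actual classes: the point is that the Brain only ever talks to the abstract Actor.

```typescript
// Hypothetical abstraction: the Brain speaks in expressions,
// and each renderer translates them for its own engine.
interface Actor {
  setExpression: (name: 'smile' | 'neutral' | 'surprised') => string
}

class VrmActor implements Actor {
  // A real implementation would drive VRM blendshapes via Three.js
  setExpression(name: string) {
    return `vrm: blendshape "${name}" -> 1.0`
  }
}

class Live2dActor implements Actor {
  // A real implementation would set Live2D model parameters
  setExpression(name: string) {
    return `live2d: param "${name}" -> 1.0`
  }
}

// The Brain never knows (or cares) which body it is animating
function smile(actor: Actor) {
  return actor.setExpression('smile')
}

console.log(smile(new VrmActor()))
console.log(smile(new Live2dActor()))
```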

Summary

The Stage is the visual presentation layer of the project.

  1. It solves the problem of creating apps for Web, Desktop, and Mobile without rewriting code.
  2. It provides a shared "Theater" (UI layout) and "Actors" (VRM/Live2D components).
  3. It connects the visual model to real-time data, allowing the character to look at your mouse or sync its lips to audio.

Now that our character has a Brain, a Body, and a Face, we need to make sure it remembers who you are.

Next Chapter: Central Data & Identity Server


Generated by Code IQ