How Rive Brings Arabic Characters to Life
5 min read · Mohammad Shaker

Amal uses Rive for lip-synced speech, avatar customization, and game characters, giving us more flexible state machines than Lottie.

Engineering

Our Rive Animation Pipeline: Bringing Arabic Characters to Life

Amal uses Rive (formerly Flare) for all character animations — including lip-synced speech, avatar customization, feedback reactions, and game characters. We chose Rive over Lottie or sprite sheets because it supports runtime state machines, programmatic manipulation, and GPU-accelerated rendering at 60fps, all in a single compact file per character.

The Animation Asset Library

Core Characters

lip-sync-amal-01.riv

  • Main Amal character (full-body and face-only variants)
  • Multiple artboards, one per mouth position (for phoneme mapping)
  • States: idle, speaking, error, celebration, sleeping
  • File size: 1.2 MB (vs. 50+ MB for sprite sheets)

avatar.riv

  • Customizable user avatar (3 artboards)
    1. Full-body: head, torso, limbs with clothing
    2. Head-only: for dashboard and parent app
    3. Butterfly companion: reward animation
  • Component-based: head shape, hair, eyes, clothes, accessories, colors
  • File size: 2.4 MB

coin-01.riv & coins-01.riv

  • Reward animations (coins floating, collecting)
  • Single coin: 150 KB
  • Multiple coins: 300 KB

cute-monster-final.riv

  • Feedback character with multiple emotion states
  • States: happy (correct answer), confused (incorrect), thinking (processing), celebrating (streak)
  • File size: 1.8 MB

Android-Specific Optimization

  • Custom NDK build (Rive NDK-r28) for 16 KB page-alignment compliance
  • Reduces binary size by 8% vs. the standard build
  • Ensures compatibility with 16 KB memory pages, required on Android 15+ devices

Lip-Sync Pipeline (Technical Deep-Dive)

Step 1: TTS Audio Generation + Speech Marks Extraction

# src/services/tts_client.py
# Timepoints are only exposed by the v1beta1 API, and only for SSML <mark/> tags.
from google.cloud import texttospeech_v1beta1 as texttospeech

def generate_speech_with_marks(ssml: str, language: str = 'ar-XA'):
    client = texttospeech.TextToSpeechClient()

    synthesis_input = texttospeech.SynthesisInput(ssml=ssml)
    voice = texttospeech.VoiceSelectionParams(
        language_code=language,
        name='ar-XA-Wavenet-A'  # WaveNet voice
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        effects_profile_id=['small-bluetooth-speaker-class-device']  # Child's speaker
    )

    # Request a timestamp for every <mark/> in the SSML
    request = texttospeech.SynthesizeSpeechRequest(
        input=synthesis_input,
        voice=voice,
        audio_config=audio_config,
        enable_time_pointing=[
            texttospeech.SynthesizeSpeechRequest.TimepointType.SSML_MARK
        ],
    )

    response = client.synthesize_speech(request=request)

    # Response includes:
    # - audio_content: MP3 bytes
    # - timepoints: [{mark_name, time_seconds}, ...]

    return {
        'audio': response.audio_content,
        'speech_marks': response.timepoints  # Mark-level timestamps
    }
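Cloud TTS only returns a timepoint where the SSML contains a named `<mark/>`, so the input text needs marks injected before synthesis. A minimal hypothetical helper (not part of the original pipeline) that tags every character:

```python
def text_to_marked_ssml(text: str) -> str:
    """Wrap every character in a named <mark/> so the TTS response
    returns one timepoint per letter (hypothetical helper)."""
    marked = ''.join(f'<mark name="c{i}"/>{ch}' for i, ch in enumerate(text))
    return f'<speak>{marked}</speak>'
```

Per-character marks are the coarse stand-in for phoneme timing here; word-level marks would produce fewer timepoints and a choppier mouth.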

Step 2: Map Phonemes to Rive Mouth States

lip_sync_avatar.json maps Arabic phonemes to mouth positions:

{
  "phoneme_map": {
    "ا": { "rive_state": "mouth_a_open", "duration_ms": 200 },
    "ب": { "rive_state": "mouth_lips_closed", "duration_ms": 150 },
    "ع": { "rive_state": "mouth_pharyngeal", "duration_ms": 250 },
    "ق": { "rive_state": "mouth_uvular", "duration_ms": 180 },
    ...
  },
  "mouth_positions": [
    { "id": "mouth_a_open", "blend_values": { "jaw_open": 0.8, "lips_rounded": 0.2 } },
    { "id": "mouth_lips_closed", "blend_values": { "jaw_open": 0.1, "lips_rounded": 0.9 } },
    ...
  ]
}
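The lookup this map drives can be sketched in a few lines. This is a Python sketch, not the app's Dart mapper; it assumes the TTS speech marks have been flattened into a time-sorted list of `(time_ms, character)` pairs, and the fallback state name `mouth_neutral` is an assumption:

```python
import bisect

class LipSyncMapper:
    """Sketch: resolve the active phoneme at a playback position,
    then map it to a Rive mouth state via the phoneme_map above."""

    def __init__(self, phoneme_map, timepoints):
        # timepoints: [(time_ms, character), ...] sorted by time
        self.phoneme_map = phoneme_map
        self.times = [t for t, _ in timepoints]
        self.chars = [c for _, c in timepoints]

    def phoneme_at_time(self, ms: int):
        # Last timepoint at or before `ms`
        i = bisect.bisect_right(self.times, ms) - 1
        return self.chars[i] if i >= 0 else None

    def rive_state_for_phoneme(self, phoneme):
        entry = self.phoneme_map.get(phoneme)
        # Unmapped characters fall back to a neutral mouth
        return entry['rive_state'] if entry else 'mouth_neutral'
```

Binary search keeps the per-frame lookup cheap even for long sentences.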

Step 3: LipSyncController Orchestrates Playback

// lib/src/modules/animations/controllers/lip_sync_controller.dart
class LipSyncController extends GetxController {
  late RiveCharacter riveCharacter;  // wraps the artboard + StateMachineController
  late AudioPlayer audioPlayer;      // just_audio
  late LipSyncMapper mapper;

  Future<void> playWithLipSync(String text, String audioPath) async {
    // Step 1: Load the Rive character
    await riveCharacter.loadRiveFile('lip-sync-amal-01.riv');

    // Step 2: Load speech marks from TTS output
    mapper = LipSyncMapper.fromJson(await loadJsonAsset('lip_sync_avatar.json'));

    // Step 3: Play audio while driving the mouth animation
    await audioPlayer.setAudioSource(AudioSource.file(audioPath));
    audioPlayer.play();

    // Step 4: On every position tick, update the mouth position
    audioPlayer.positionStream.listen((Duration position) {
      final phoneme = mapper.phonemeAtTime(position.inMilliseconds);
      final riveState = mapper.riveStateForPhoneme(phoneme);

      riveCharacter.setStateInput('mouth_state', riveState);
    });
  }
}

Step 4: RiveCharacterController Manages Lifecycle

// Manages full character animation state (not just mouth).
// States: idle → prepare → speaking → idle → error/celebration
class RiveCharacterController extends GetxController {
  
  void startExercise() {
    // Character transitions: idle → prepare (ready to listen)
    character.setStateInput('state_machine', 'prepare');
  }
  
  void childSpeaks(String recognizedText, double accuracy) {
    character.setStateInput('state_machine', 'speaking');
    lipSyncController.playFeedback(recognizedText);
  }
  
  void onFeedbackComplete(bool wasCorrect) {
    if (wasCorrect) {
      character.setStateInput('state_machine', 'celebrate');
      playRewardAnimation();
    } else {
      character.setStateInput('state_machine', 'error');
      playEncouragingPhrase();
    }
  }
}
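The transition order the controller enforces can be written down as a guard table. A hypothetical Python sketch (the real guards live inside the Rive state machine itself):

```python
# Hypothetical transition table mirroring the character states:
# idle → prepare → speaking → celebrate/error → idle
TRANSITIONS = {
    'idle': {'prepare'},
    'prepare': {'speaking'},
    'speaking': {'celebrate', 'error'},
    'celebrate': {'idle'},
    'error': {'idle'},
}

def next_state(current: str, target: str) -> str:
    """Validate a requested transition before forwarding it to Rive."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f'illegal transition: {current} -> {target}')
    return target
```

Rejecting illegal transitions in app code keeps the character from jumping straight from idle to celebration when events arrive out of order.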

Avatar Customization System

Component-Based Architecture

Children customize their avatar from parts:

{
  "avatar_customization": {
    "head_shapes": [
      { "id": "round", "rive_element": "head_round" },
      { "id": "oval", "rive_element": "head_oval" },
      { "id": "square", "rive_element": "head_square" }
    ],
    "hair_styles": [
      { "id": "ponytail", "rive_element": "hair_ponytail" },
      { "id": "braids", "rive_element": "hair_braids" },
      { "id": "straight", "rive_element": "hair_straight" }
    ],
    "colors": {
      "skin_tone": ["light", "medium", "dark"],
      "hair_color": ["black", "brown", "blonde", "red"],
      "shirt_color": ["blue", "pink", "green", "yellow", "purple"],
      "accent_color": ["red", "orange", "green", "blue"]
    }
  }
}

Rive Named Elements Mapping (avatar_customization_rive_names.dart)

const avatarRiveNames = {
  'head_round': 'Head_Round',
  'head_oval': 'Head_Oval',
  'hair_ponytail': 'Hair_Ponytail',
  'shirt_blue': 'Shirt_Blue',
  'shirt_pink': 'Shirt_Pink',
  // ... 50+ element mappings
};

When a child selects "round head + blue shirt," the app:

  1. Enables Rive element Head_Round
  2. Enables Rive element Shirt_Blue
  3. Disables all other head shapes and shirt colors
  4. Child's personalized avatar now appears throughout the entire app
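The enable/disable step above amounts to resolving visibility per element group: exactly one option in each group is on, the rest are off. A Python sketch, where the element names follow the mapping table but the `GROUPS` dict is an assumption:

```python
AVATAR_RIVE_NAMES = {
    'head_round': 'Head_Round',
    'head_oval': 'Head_Oval',
    'hair_ponytail': 'Hair_Ponytail',
    'shirt_blue': 'Shirt_Blue',
    'shirt_pink': 'Shirt_Pink',
}

# Hypothetical grouping: one element per group may be visible at a time
GROUPS = {
    'head': ['head_round', 'head_oval'],
    'hair': ['hair_ponytail'],
    'shirt': ['shirt_blue', 'shirt_pink'],
}

def resolve_visibility(selection: dict) -> dict:
    """Return {rive_element_name: visible} for a child's selection,
    enabling the chosen element in each group and disabling the rest."""
    visibility = {}
    for group, options in GROUPS.items():
        for option in options:
            visibility[AVATAR_RIVE_NAMES[option]] = (option == selection.get(group))
    return visibility
```

Resolving the whole visibility map at once, rather than toggling elements one by one, avoids frames where two head shapes are briefly visible.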

Why Rive Over Alternatives

| Feature | Rive | Lottie | Sprite Sheets | Video |
| --- | --- | --- | --- | --- |
| State machines | ✓ | Limited | ✗ | ✗ |
| Runtime control | ✓ (full) | Partial | Manual | ✗ (passive) |
| File size | 1-2 MB | 2-3 MB | 50+ MB | 100+ MB |
| Performance | 60fps GPU | 30fps CPU | 60fps GPU | Variable |
| Interactivity | ✓ Full | ✓ Partial | ✓ Full | ✗ None |
| Learning curve | Moderate | Easy | Easy | Easy |
| Maintenance | One .riv file | One JSON | Hundreds of images | One video |

Rive wins because we need programmatic control, state machines, and compactness for a mobile app.

Performance Optimization

  • Preload characters: Load .riv files during app startup, not per-exercise
  • GPU rendering: Rive automatically uses GPU when available, CPU fallback on old devices
  • Memory pooling: Reuse Rive controllers across screens to avoid garbage collection pauses
  • Compression: Rive files are already compressed; no additional optimization needed
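The memory-pooling bullet can be illustrated with a minimal object pool; this is a language-agnostic sketch in Python, with `factory` standing in for whatever constructs a Rive controller:

```python
class ControllerPool:
    """Minimal reuse pool: release() returns a controller to the pool,
    acquire() hands it back out instead of constructing a new one."""

    def __init__(self, factory):
        self._factory = factory
        self._free = []

    def acquire(self):
        # Reuse a released controller if one is available
        return self._free.pop() if self._free else self._factory()

    def release(self, controller):
        self._free.append(controller)
```

Because controllers outlive individual screens, allocation (and the GC pauses it triggers) happens once at startup rather than on every navigation.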

Result: 60fps animations on Snapdragon 662 and up (2020 mid-range phones).

FAQ

Q: Can I export animations from Adobe Animate to Rive? A: Not directly. We use Rive's native editor (rive.app). Animators design in Rive, not Animate or After Effects. The workflow is: design character in Rive → export as .riv → integrate into Flutter app.

Q: How do you handle different body types or disabilities? A: Avatar customization includes body type options (slender, athletic, round) and accessories (glasses, hearing aids, mobility aids). This ensures all children see representation.

Q: What if a child dislikes their avatar? A: They can customize at any time. The app doesn't force a particular look — children have full creative control.
