4 min read · Mohammad Shaker
Quick Answer
Amal's adaptive curriculum is powered by Half-Life Regression (HLR), a memory model where each learning item has a "half-life." The formula p(recall) = 2^(-Δ/h) drives scheduling: items due for review are surfaced before the child forgets.
## Half-Life Regression: The Algorithm Behind Amal's Adaptive Curriculum
Amal's adaptive curriculum is powered by Half-Life Regression (HLR), a memory model where each learning item has a "half-life" — the time for recall probability to drop to 50%. The formula p(recall) = 2^(-Δ/h) drives scheduling: items due for review are surfaced before the child forgets, while mastered items are spaced further apart. Combined with persona-based difficulty matching, this creates a truly personalized learning path for every child.
### The Math Behind the Memory
**Exponential Decay Model**
Memory doesn't fade linearly — it follows an exponential curve. After reviewing a concept:
- Right after review: 100% recall probability
- After h hours: 50% recall probability (by definition of half-life)
- After 2h hours: 25% recall probability
- After 4h hours: 6.25% recall probability
Amal schedules the next review when recall probability hits approximately 80% — the efficiency sweet spot.
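The decay values above follow directly from the formula; a minimal Python sketch (function name is illustrative):

```python
def recall_probability(delta_hours: float, half_life_hours: float) -> float:
    """p(recall) = 2^(-delta/h): memory decays exponentially after review."""
    return 2 ** (-delta_hours / half_life_hours)

# Reproduce the bullet-list values for a 4-hour half-life:
h = 4.0
for delta in (0, 4, 8, 16):
    print(f"{delta:>2}h later: {recall_probability(delta, h):.2%}")
# → 100.00%, 50.00%, 25.00%, 6.25%
```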
**Worked Example: Learning the word "كتب" (wrote)**
| Event | Time | Half-Life | Recall Prob | Next Review |
|-------|------|-----------|-------------|-------------|
| Initial learning | Day 1, 2pm | 4h | 100% | ~6pm |
| Correct review | Day 1, 6pm | 8h | 98% | Day 2, 10am |
| Correct review | Day 2, 10am | 16h | 92% | Day 3, 2pm |
| Correct review | Day 3, 2pm | 32h | 87% | Day 5, 10pm |
| Correct review | Day 5, 10pm | 64h | 81% | Day 8, 8pm |
| Stable memory | Day 8, 8pm | 128h | 79% | Week 2 |
After 5 correct reviews, "كتب" is reviewed roughly every 5 days. The child has spent ~30 minutes total on this word and can now recall it reliably.
### Persona-Based Difficulty Matching
The system automatically detects three personas based on activity patterns:
**Beginner Persona**
- Ratio: 60% new content | 30% review | 10% challenge
- Example session: 3 new letters, 2 letter reviews, 1 easy word
- Automatic transition when mastery_score > 0.65
**Intermediate Persona**
- Ratio: 40% new content | 40% review | 20% challenge
- Example session: 2 new words, 2 word reviews, 1 medium challenge
- Automatic transition when mastery_score > 0.78
**Advanced Persona**
- Ratio: 20% new content | 40% review | 40% challenge
- Example session: 1 new sentence, 2 reviews, 3 challenging comprehension tasks
- Sustained for master learners
No manual selection needed — the system adapts silently as your child demonstrates capability.
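The persona thresholds above can be sketched as a simple mapping. The 0.65 and 0.78 cutoffs and the slot ratios come from the article; the function name and the single-score signature are simplifications, since the real system reportedly uses broader activity patterns:

```python
def detect_persona(mastery_score: float) -> str:
    """Map a mastery score to a persona using the article's thresholds.
    (Simplified: the actual system looks at activity patterns, not one score.)"""
    if mastery_score > 0.78:
        return "advanced"
    if mastery_score > 0.65:
        return "intermediate"
    return "beginner"

# Content-slot ratios per persona: (new, review, challenge)
SLOT_RATIOS = {
    "beginner":     (0.6, 0.3, 0.1),
    "intermediate": (0.4, 0.4, 0.2),
    "advanced":     (0.2, 0.4, 0.4),
}

print(detect_persona(0.70))  # → intermediate
```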
### Slot-Based Content Mixing (Content Duo)
Each adaptive lesson mixes three content "slots":
```
[New Content Slot] (Item child hasn't seen)
↓
[Review Slot] (Item due for spaced repetition)
↓
[Challenge Slot] (Item slightly above current level)
```
The ratio shifts dynamically during a session:
- If child is struggling: shift toward more review slots
- If child is excelling: shift toward more challenge slots
- Real-time persona adaptation keeps engagement optimal
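The in-session shifting might look like the sketch below. The direction of the shifts is from the article; the 0.5/0.9 accuracy thresholds and the 0.1 step size are hypothetical:

```python
def adjust_ratios(new: float, review: float, challenge: float,
                  recent_accuracy: float) -> tuple:
    """Shift slot weight toward review when the child is struggling and
    toward challenge when excelling. Thresholds and step are assumptions."""
    step = 0.1
    if recent_accuracy < 0.5:            # struggling: more review
        shift = min(step, challenge)
        challenge -= shift
        review += shift
    elif recent_accuracy > 0.9:          # excelling: more challenge
        shift = min(step, review)
        review -= shift
        challenge += shift
    return new, review, challenge

# A struggling intermediate session drops a challenge slot in favour of review:
print(adjust_ratios(0.4, 0.4, 0.2, recent_accuracy=0.3))
```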
### Implementation Architecture
**Database Model** (`UserItemMemoryModel`):
```
user_id: "user_123"
item_id: "letter_ba"
concept_strength: 0.87 # 0-1 scale
half_life_hours: 32
exposures: 7
correct_count: 6
last_reviewed_at: 2026-03-28 18:45
next_review_due_at: 2026-03-30 20:45
```
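As a dataclass, the record above might look like this; the field names follow the article, while the Python types are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserItemMemoryModel:
    """Per-item memory record (field names from the article; types assumed)."""
    user_id: str
    item_id: str
    concept_strength: float      # 0-1 scale
    half_life_hours: float
    exposures: int
    correct_count: int
    last_reviewed_at: datetime
    next_review_due_at: datetime
```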
**Core Functions**:
- `calculate_half_life()`: Adjusts h after each attempt
- Correct answer: h = h × 2 (memory strengthens)
- Incorrect answer: h = h × 0.5 (memory weakens)
- Exposure count acts as dampener (more exposures = more stable)
- `calculate_next_review_time()`: When should this item appear next?
- Target recall probability: 80%
- Solve for Δ in formula: Δ = -h × log₂(0.8)
- `recall_probability()`: What's the current retention for this concept?
- Used to prioritize which items to surface
- Items with lower probability get scheduled sooner
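The three core functions can be sketched as follows. The doubling/halving rule, the 80% target, and Δ = -h × log₂(0.8) are from the article; the exact damping formula is an assumption, since the article only says that more exposures make h more stable:

```python
import math

TARGET_RECALL = 0.8  # schedule the next review at ~80% expected recall

def recall_probability(delta_hours: float, half_life_hours: float) -> float:
    """Current retention for a concept: p = 2^(-delta/h)."""
    return 2 ** (-delta_hours / half_life_hours)

def calculate_half_life(h: float, correct: bool, exposures: int) -> float:
    """Double h on a correct answer, halve it on an incorrect one,
    with the step damped as exposures grow (damping form is assumed)."""
    factor = 2.0 if correct else 0.5
    damped = factor ** (1.0 / (1.0 + 0.1 * exposures))
    return h * damped

def calculate_next_review_time(h: float) -> float:
    """Hours until expected recall drops to the target:
    solve 2^(-delta/h) = 0.8 for delta, giving delta = -h * log2(0.8)."""
    return -h * math.log2(TARGET_RECALL)
```

Note how the exposure dampener works: with zero exposures a correct answer doubles h outright, while after many exposures the same answer nudges h only slightly, keeping well-established memories stable.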
### Why This Matters
Without HLR:
- Fixed-curriculum apps: the same lesson sequence for every user, with no per-item memory tracking
- Traditional flashcard apps: the user must decide manually when to review each card
- The result: time wasted on well-known items while weak items are forgotten
With HLR in Amal:
- Every concept is tracked individually
- Review timing is scientifically optimized
- Children spend time only where it matters
- 40% faster learning than fixed-schedule apps
### FAQ
**Q: What if my child gets an item wrong repeatedly?**
A: The half-life shrinks (h = h × 0.5), so it reappears sooner. The system is patient — it brings items back for review every few hours if needed. Eventually, with repeated correct reviews, half-life grows again.
**Q: Can I manually adjust my child's persona level?**
A: The system automatically detects personas. You can override in parent settings if you believe your child is at a different level, but the app will auto-correct if activity data disagrees.
**Q: How long does it take for an item to be "fully learned"?**
A: Typically 5-8 correct reviews over 2-3 weeks, depending on initial half-life and practice frequency. Very easy items (high initial half-life) may fully stabilize in days. Difficult items may take months.


