# AdapterLLRD

Memory-free multilingual continual learning with PEFT (parameter-efficient fine-tuning) and LLRD (layer-wise learning-rate decay).
## 🎯 The Problem
Fine-tuning sequentially by language or domain causes catastrophic forgetting, and privacy constraints rule out storing data from prior hops for replay.
## 💡 The Solution
At each hop, train only the PEFT modules (LoRA/adapters). Apply LLRD across the base model so that lower layers receive smaller learning rates, stabilizing low-level representations while still allowing the upper layers to adapt.
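A minimal sketch of the LLRD schedule, assuming a geometric per-layer decay; the function name and constants below are illustrative, not part of this project's API:

```python
def llrd_learning_rates(num_layers: int, base_lr: float, decay: float) -> list[float]:
    """Return one learning rate per layer: the top layer gets base_lr,
    and each layer below it gets the rate above scaled by `decay`."""
    # Layer index 0 is the bottom (most stable), num_layers - 1 the top.
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

# Hypothetical 12-layer encoder with a peak LR of 2e-4 and decay 0.9.
lrs = llrd_learning_rates(num_layers=12, base_lr=2e-4, decay=0.9)
```

These per-layer rates would then be assigned to the optimizer's parameter groups, so modules attached to lower layers move more slowly across hops.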
## ✨ Key Highlights
- Handles 50+ hop sequences without replay
- Privacy-compatible: no rehearsal buffers or stored examples
- Evaluated with BWT (backward transfer), FWT (forward transfer), and AULC (area under the learning curve)
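The metrics above can be computed from an accuracy matrix `R`, where `R[i][j]` is the accuracy on task `j` after training hop `i`. BWT and FWT follow the standard continual-learning definitions; the AULC variant shown is one common choice (mean over hops of the average accuracy on tasks seen so far), and the matrix and baseline values are made up for illustration:

```python
def bwt(R):
    """Backward transfer: average change on earlier tasks after the
    final hop, relative to the accuracy just after training each task."""
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

def fwt(R, baseline):
    """Forward transfer: accuracy on task i just before training on it,
    relative to a random-init baseline for that task."""
    T = len(R)
    return sum(R[i - 1][i] - baseline[i] for i in range(1, T)) / (T - 1)

def aulc(R):
    """Area under the learning curve: mean over hops of the average
    accuracy on all tasks seen so far."""
    T = len(R)
    return sum(sum(R[t][: t + 1]) / (t + 1) for t in range(T)) / T

# Toy 3-hop run (rows = hops, columns = tasks).
R = [
    [0.80, 0.10, 0.10],
    [0.70, 0.90, 0.20],
    [0.60, 0.80, 0.85],
]
print(bwt(R))   # negative values indicate forgetting
```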