---
license: mit
tags:
- mneme
- memory
- weight-injection
- qwen
---
# Mneme: Neural Episodic Weight Injection Encoder
Trained encoder for the Mneme memory system - injects facts directly into LLM weights.
## Usage
```bash
# Clone the repo
git clone https://github.com/Yusuffarhan13/Mneme-v1-mvp.git
cd Mneme-v1-mvp

# Download the encoder
pip install huggingface_hub
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='yusuffarhan/qwen-memory', filename='best_encoder.pt', local_dir='mneme_trained')"

# Run
python qwen.py --encoder mneme_trained/best_encoder.pt
```
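If you'd rather fetch and inspect the checkpoint from Python instead of the shell, something like the sketch below works. `hf_hub_download` and `torch.load` are standard APIs; the internal layout of `best_encoder.pt` (plain state dict vs. pickled module) is an assumption, so the last step is just a sanity check.

```python
# Minimal sketch: download the encoder checkpoint and peek at its contents.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="yusuffarhan/qwen-memory",
    filename="best_encoder.pt",
    local_dir="mneme_trained",
)
# Assumption: the file holds a state dict. If it pickles a full module,
# newer torch versions need weights_only=False (trusted sources only).
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict):
    print(list(state.keys())[:10])  # first few keys of the state dict
```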
## Training Config
- Delta rank: 16
- Target layers: [4, 8, 12, 16, 20, 24]
- Encoder: 768 hidden, 4 layers
- Base model: Qwen/Qwen3-4B
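For intuition, here is a minimal sketch of what an encoder with these hyperparameters could look like. Everything below is an assumption for illustration: the class name `DeltaEncoder`, the GELU MLP backbone, and the 2560 output dimension (chosen to match Qwen3-4B's hidden size). The real architecture lives in the Mneme repo.

```python
import torch
import torch.nn as nn

class DeltaEncoder(nn.Module):
    """Maps a fact embedding to one rank-16 (A, B) pair per target layer."""

    def __init__(self, embed_dim=768, hidden=768, depth=4,
                 rank=16, d_out=2560, d_in=2560, n_targets=6):
        super().__init__()
        blocks, d = [], embed_dim
        for _ in range(depth):                       # "4 layers", 768 hidden
            blocks += [nn.Linear(d, hidden), nn.GELU()]
            d = hidden
        self.backbone = nn.Sequential(*blocks)
        # One head per target layer; d_in/d_out must match the shape of the
        # weight matrix being patched (assumed 2560 x 2560 here).
        self.heads = nn.ModuleList(
            nn.Linear(hidden, rank * (d_in + d_out)) for _ in range(n_targets))
        self.rank, self.d_in, self.d_out = rank, d_in, d_out

    def forward(self, fact_emb):                     # fact_emb: (embed_dim,)
        h = self.backbone(fact_emb)
        pairs = []
        for head in self.heads:
            flat = head(h)
            A = flat[: self.rank * self.d_in].view(self.rank, self.d_in)
            B = flat[self.rank * self.d_in:].view(self.d_out, self.rank)
            pairs.append((A, B))                     # delta W = B @ A (rank 16)
        return pairs
```

Emitting (A, B) factors rather than full matrices is what "delta rank: 16" buys: each per-layer update costs rank × (d_in + d_out) parameters instead of d_in × d_out.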
## What This Does
Injects facts directly *into* model weights (no RAG, no prompt injection):
```
/remember My name is Yusuf
/remember I work at Google

What is my name?  → "Your name is Yusuf"
Where do I work?  → "You work at Google"
```
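Mechanically, "injecting into weights" amounts to adding each low-rank delta to a weight matrix in the corresponding target layer. Below is a hedged sketch reusing the (A, B) pairs from the encoder above; which projection Mneme actually patches is an assumption, and the attention output projection is used here purely for illustration.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")

@torch.no_grad()
def inject_fact(model, pairs, target_layers=(4, 8, 12, 16, 20, 24)):
    """Add one rank-16 delta (B @ A) per target layer, in place."""
    for idx, (A, B) in zip(target_layers, pairs):
        # Hypothetical target matrix: the attention output projection.
        # B @ A must be shaped like w, i.e. the encoder's d_out/d_in
        # must match this matrix's (out_features, in_features).
        w = model.model.layers[idx].self_attn.o_proj.weight
        w += (B @ A).to(w.dtype)
```

Because the update is additive, remembering a second fact is just another call; undoing one would mean subtracting the same delta, or restoring a saved copy of the original weights.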