Paper: Jamba: A Hybrid Transformer-Mamba Language Model (arXiv:2403.19887)
This is a pruned version of AI21 Labs' Jamba-v0.1 model that is ~25% the size of Jamba-v0.1.
Whereas Jamba-v0.1 contains 4 Jamba blocks, Jamba-Small contains only 1 Jamba block. Jamba-Small's single Jamba block follows the same structure used in Jamba-v0.1, with a 1:7 ratio of attention-to-Mamba layers and MoE applied every 2 layers.
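To make the layer pattern concrete, the snippet below enumerates the composition of one 8-layer Jamba block. The 1:7 attention-to-Mamba ratio and the MoE-every-2-layers rule come from the description above; the exact offsets within the block (attention at position 4, MoE on odd-indexed layers) are assumptions matching the released Jamba-v0.1 configuration and are marked as such in the code.

```python
# Minimal sketch of the layer pattern inside one Jamba block.
# The 1:7 attention:Mamba ratio and MoE-every-2-layers rule come from the
# model card text; the offsets (ATTN_OFFSET, MOE_OFFSET) are assumptions
# based on the released Jamba-v0.1 configuration.

LAYERS_PER_BLOCK = 8   # 1 attention layer + 7 Mamba layers
ATTN_OFFSET = 4        # assumed position of the attention layer within a block
MOE_PERIOD = 2         # MoE replaces the dense MLP every 2 layers
MOE_OFFSET = 1         # assumed offset of the first MoE layer

def describe_block(num_layers: int = LAYERS_PER_BLOCK) -> list[str]:
    """Label each layer in a single Jamba block as attention/Mamba and MoE/dense."""
    labels = []
    for i in range(num_layers):
        mixer = "attention" if i % LAYERS_PER_BLOCK == ATTN_OFFSET else "mamba"
        mlp = "moe" if i % MOE_PERIOD == MOE_OFFSET else "dense"
        labels.append(f"layer {i}: {mixer} + {mlp} MLP")
    return labels

if __name__ == "__main__":
    for line in describe_block():
        print(line)
```

Running the snippet prints one attention layer and seven Mamba layers per block, with MoE on every second layer, matching the pattern described above.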
Jamba-Small's weights are initialized from layers of the original Jamba-v0.1 model. For v1, the layer weights are mapped as follows (left: Jamba-Small layer index, right: Jamba-v0.1 layer index); a sketch of how this copy could be performed follows the mapping:
0: 0
1: 1
2: 2
3: 3
4: 4
5: 5
6: 30
7: 31
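The sketch below illustrates how such a layer copy could be performed. It assumes the checkpoint follows the usual Hugging Face naming scheme in which per-layer parameters live under model.layers.<idx>.; the function name and the torch-based state-dict handling are illustrative assumptions, not the actual v1 pruning script.

```python
# Illustrative sketch of the layer copy described above, not the actual script
# used to build Jamba-Small. It assumes the standard Hugging Face naming scheme
# in which per-layer parameters live under "model.layers.<idx>." and that all
# other weights (embeddings, final norm, lm_head) are shared unchanged.
import re
import torch

# left: Jamba-Small layer index, right: Jamba-v0.1 layer index (the v1 mapping above)
LAYER_MAP = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 30, 7: 31}

LAYER_KEY = re.compile(r"^(model\.layers\.)(\d+)(\..+)$")

def prune_state_dict(full_sd: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Keep only the mapped layers, renumbering them for the 8-layer model."""
    src_to_dst = {src: dst for dst, src in LAYER_MAP.items()}
    pruned = {}
    for key, tensor in full_sd.items():
        match = LAYER_KEY.match(key)
        if match is None:
            # Non-layer weights (embeddings, final norm, lm_head) are copied as-is.
            pruned[key] = tensor
            continue
        src_idx = int(match.group(2))
        if src_idx in src_to_dst:
            new_key = f"{match.group(1)}{src_to_dst[src_idx]}{match.group(3)}"
            pruned[new_key] = tensor
    return pruned
```

The resulting state dict can then be loaded into an 8-layer Jamba configuration (for example, transformers' JambaConfig with num_hidden_layers=8), which is why the retained layers keep the attention/MoE pattern of Jamba-v0.1.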
Note that no additional fine-tuning has been performed on this model; as such, its performance is exceptionally poor. It should not be used in production without additional training.