# JIM GPT-OSS 120B Financial Adapters
Fine-tuned LoRA adapters for GPT-OSS 120B, trained on EVA Financial AI datasets.
## Model Details

- Base Model: GPT-OSS 120B (mlx-community/gpt-oss-120b-4bit)
- Training Method: SFT → RLHF → DPO pipeline
- LoRA Configuration: rank 64, 64 LoRA layers (see the inspection sketch below)
- Domain: Financial services, lending, SEC filings
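Each adapter directory should ship an `adapter_config.json` alongside the weights. Below is a minimal sketch for sanity-checking the rank/layer configuration after downloading an adapter; the local path is hypothetical and the exact field names vary across mlx-lm versions:

```python
import json
from pathlib import Path

# Hypothetical local copy of one adapter directory (see Usage below
# for how to download one).
adapter_dir = Path("adapters/queue1_Synthetic_Lenders_Data")

# mlx-lm LoRA checkpoints store the training configuration in
# adapter_config.json; field names differ between mlx-lm versions,
# so use .get() rather than assuming a fixed schema.
config = json.loads((adapter_dir / "adapter_config.json").read_text())
print(config.get("lora_parameters"))  # expected to include "rank": 64
print(config.get("num_layers"))       # expected: 64
```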
## Adapters Included

Each adapter lives in its own top-level folder of the repo; a sketch for listing them programmatically follows the table.
| Queue | Dataset | Stage | Size |
|---|---|---|---|
| queue1_Synthetic_Lenders_Data | Synthetic_Lenders_Data | DPO | 4.3 GB |
| queue2_biz-training-data | biz-training-data | DPO | 4.3 GB |
| queue5_biz_datasets | biz_datasets | DPO | 4.3 GB |
| queue6_Comprehensive_Loan_Packages | Comprehensive_Loan_Packages | DPO | 4.3 GB |
| queue7_comprehensive_naics_20251107_131639 | comprehensive_naics_20251107_131639 | DPO | 4.3 GB |
| queue8_enhanced_450gb_20251107_141953 | enhanced_450gb_20251107_141953 | DPO | 4.3 GB |
| queue9_enhanced_450gb_20251107_181029 | enhanced_450gb_20251107_181029 | DPO | 4.3 GB |
| queue10_EVA_Datasets_Export | EVA_Datasets_Export | DPO | 4.3 GB |
| queue11_EVA_Training_Data | EVA_Training_Data | DPO | 4.3 GB |
| queue12_EVA_Training_Data_Medium | EVA_Training_Data_Medium | DPO | 4.3 GB |
| queue15_15k_full_sequenced_20251018_182458 | 15k_full_sequenced_20251018_182458 | RLHF | 4.3 GB |
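A sketch for discovering the adapter folders with `huggingface_hub`, assuming each adapter occupies its own top-level `queueN_*` directory as in the table above:

```python
from huggingface_hub import list_repo_files

# List all files in the repo, then collect the distinct top-level
# queue folders that hold the adapters.
files = list_repo_files("Eva-Financial-Ai/jim-gpt-oss-120b-adapters")
queues = sorted({f.split("/")[0] for f in files if f.startswith("queue")})
print(queues)
```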
## Usage

```python
from mlx_lm import load, generate

# Load the 4-bit base model and apply one of the adapters
model, tokenizer = load(
    "mlx-community/gpt-oss-120b-4bit",
    adapter_path="Eva-Financial-Ai/jim-gpt-oss-120b-adapters/queue1_Synthetic_Lenders_Data",
)

# Generate a completion
response = generate(model, tokenizer, prompt="Analyze this loan application...")
```
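Note that `mlx_lm.load` generally expects `adapter_path` to point at a local directory. A minimal sketch that first fetches a single adapter folder with `huggingface_hub.snapshot_download` and then loads it:

```python
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

# Download only the queue1 adapter folder from the repo
local_dir = snapshot_download(
    repo_id="Eva-Financial-Ai/jim-gpt-oss-120b-adapters",
    allow_patterns=["queue1_Synthetic_Lenders_Data/*"],
)

# Point adapter_path at the local copy of that folder
model, tokenizer = load(
    "mlx-community/gpt-oss-120b-4bit",
    adapter_path=f"{local_dir}/queue1_Synthetic_Lenders_Data",
)
response = generate(model, tokenizer, prompt="Analyze this loan application...")
```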
## Training Data Sources
- OWC Drive: Synthetic lenders, SEC filings, business data
- Evadata2: Enhanced business datasets, loan packages, NAICS data
- Evadata3: Full sequenced batches (15k-123k)
## License
Apache 2.0