Update README.md
README.md
CHANGED
@@ -529,4 +529,363 @@ configs:
  data_files:
  - split: train
    path: single_agent_val_webwalkerqa_repeat_2/train-*
license: apache-2.0
---

<div align="center">

# MATPO: Multi-Agent Tool-Integrated Policy Optimization

Train Multiple Agent Roles Within a Single LLM via Reinforcement Learning.

<!-- [](https://arxiv.org/pdf/2510.04678)
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://github.com/mzf666/MATPO) -->

<!-- <hr> -->
<div align="center">

[🤗 Model](https://huggingface.co/veggiebird/MATPO-14b)
[🤗 Data](https://huggingface.co/datasets/veggiebird/MATPO-data)
[Paper](https://arxiv.org/abs/2510.04678)
[Code](https://github.com/mzf666/MATPO)

</div>

</div>

<div align="center">
<table>
  <tr>
    <td align="center">
      <img src="assets/main_gaia.png" width="220px" alt="GAIA Results"><br>
      <em>GAIA Results</em>
    </td>
    <td align="center">
      <img src="assets/main_frameqa.png" width="220px" alt="FRAMES Results"><br>
      <em>FRAMES Results</em>
    </td>
    <td align="center">
      <img src="assets/main_webwalkerqa.png" width="220px" alt="WebWalkerQA Results"><br>
      <em>WebWalkerQA Results</em>
    </td>
  </tr>
</table>
</div>

<p align="center">
  <img src="assets/multi_agent_framework.png" width="500px" alt="MATPO Framework">
</p>

<p align="center">
  <em>MATPO allows planner and worker agents to coexist within a single LLM and be trained via RL, achieving an 18.38% relative improvement over single-agent baselines on GAIA-text, FRAMES, and WebWalkerQA.</em>
</p>

## News & Updates

- **[2025-Oct-08]** MATPO-Qwen3-14B checkpoints and rollouts released
- **[2025-Oct-08]** Code and training scripts released
- **[2025-Oct-06]** arXiv paper released

## Overview

**MATPO** (Multi-Agent Tool-Integrated Policy Optimization) is a novel reinforcement learning framework that enables training multiple specialized agent roles (planner and worker agents) within a single large language model.

### The Problem

Current single-agent approaches for multi-turn tool-integrated planning face critical limitations:
- **Context Length Bottleneck**: Tool responses (e.g., web scraping) consume excessive tokens, making long-range planning prohibitively expensive
- **Noisy Tool Responses**: Raw tool responses interfere with the model's attention and planning capabilities

### Our Solution

MATPO introduces a **multi-agent-in-one-model** architecture where:
- A **planner-agent** orchestrates high-level planning and delegates subtasks
- **Worker-agents** handle specific browsing and search tasks with isolated contexts
- Both roles are trained within a **single LLM** using role-specific prompts via reinforcement learning, as sketched below
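
To make the multi-agent-in-one-model idea concrete, here is a minimal sketch of how one chat model can play both roles purely through role-specific system prompts. The prompt texts and the `generate` callable are illustrative placeholders, not the exact prompts or inference stack used by MATPO.

```python
# Minimal sketch: one set of model weights, two roles, selected by the system prompt.
PLANNER_SYSTEM_PROMPT = (
    "You are the planner. Break the user's question into subtasks, "
    "delegate each subtask to a worker, then synthesize a final answer."
)
WORKER_SYSTEM_PROMPT = (
    "You are the worker. Solve the given subtask with search and scrape tools "
    "and return a concise summary."
)

def build_messages(role: str, task: str) -> list[dict]:
    """Same model; only the system prompt differs per role."""
    system = PLANNER_SYSTEM_PROMPT if role == "planner" else WORKER_SYSTEM_PROMPT
    return [{"role": "system", "content": system},
            {"role": "user", "content": task}]

def run_episode(user_query: str, generate) -> str:
    """`generate` is any chat-completion callable backed by the single LLM."""
    plan = generate(build_messages("planner", user_query))               # planner turn
    subtasks = [line for line in plan.splitlines() if line.strip()]      # naive split, for illustration only
    results = [generate(build_messages("worker", s)) for s in subtasks]  # worker turns with isolated contexts
    recap = f"Question: {user_query}\nWorker findings:\n" + "\n".join(results)
    return generate(build_messages("planner", recap))                    # planner synthesizes the final answer
```

Because both roles share one set of weights, a single rollout engine can serve planner and worker turns alike, which is what keeps MATPO's infrastructure footprint close to the single-agent setup.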

## Key Features

- **Multi-Agent-in-One-Model**: Train planner and worker agents within a single LLM using role-specific system prompts
- **Principled Credit Assignment**: Extends GRPO with theoretically grounded reward distribution across planner and worker rollouts
- **Easy Integration**: Built on top of [veRL](https://github.com/volcengine/verl), compatible with existing RL training frameworks
- **Robust Training**: More stable learning curves than single-agent approaches, especially with noisy tool responses
- **Infrastructure Efficient**: No need to deploy separate models or additional rollout engines

## MATPO Architecture

MATPO employs a hierarchical multi-agent framework where a single LLM serves multiple roles:

```
User Query → Planner Agent → Subtask 1 → Worker Agent → Result 1
                           → Subtask 2 → Worker Agent → Result 2
                           → ...
                           → Final Answer
```

<p align="center">
  <img src="assets/single_agent.png" width="600px" alt="Single-agent GRPO Framework">
  <img src="assets/multi_agent_RL_rollout.png" width="600px" alt="MATPO Framework">
</p>

<p align="center">
  <em>Comparison of the rollout trajectories of single-agent GRPO (top) and multi-agent MATPO (bottom).</em>
</p>

### Multi-Agent Rollout Process

1. **Planner Agent**:
   - Receives the user query with a planner-specific system prompt
   - Generates a high-level plan and decomposes it into subtasks
   - Delegates subtasks to worker agents
   - Synthesizes worker responses into the final answer

2. **Worker Agent**:
   - Receives a subtask with a worker-specific system prompt
   - Performs multi-turn tool-integrated planning (search, scrape, analyze)
   - Returns a summarized result to the planner
   - Maintains an isolated context to prevent token overflow

3. **Credit Assignment** (see the sketch after this list):
   - Final answer accuracy determines the reward
   - The reward is normalized across all planner-worker rollout groups
   - Gradients flow proportionally to both planner and worker actions
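
The credit-assignment rule above can be written down in a few lines. The snippet below is a simplified sketch of a GRPO-style, group-normalized advantage that each planner rollout shares with the worker rollouts it spawned; the function and variable names are illustrative, and the exact estimator and its veRL integration are described in the paper.

```python
import numpy as np

def matpo_advantages(final_rewards, workers_per_rollout):
    """Simplified sketch of MATPO credit assignment for one user query.

    final_rewards[i]       -- reward of the i-th planner rollout (from its final answer)
    workers_per_rollout[i] -- number of worker rollouts spawned by planner rollout i
    Each worker rollout reuses the advantage of its parent planner rollout,
    so the reward signal reaches both roles.
    """
    r = np.asarray(final_rewards, dtype=np.float32)
    adv = (r - r.mean()) / (r.std() + 1e-6)   # GRPO-style group normalization
    planner_adv = adv.tolist()
    worker_adv = [[a] * n for a, n in zip(planner_adv, workers_per_rollout)]
    return planner_adv, worker_adv

# Example: 4 planner rollouts for one query, spawning 2, 3, 2, and 2 worker rollouts.
planner_adv, worker_adv = matpo_advantages([1.0, 0.0, 1.0, 0.0], [2, 3, 2, 2])
```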

<p align="center">
  <img src="assets/multi-agent-grpo-implementation.png" width="600px" alt="MATPO Framework">
</p>

<p align="center">
  <em>Visualization of the MATPO implementation.</em>
</p>

## Quick Start

Prerequisites:
- Python 3.10 or higher
- CUDA 12.4+ (for GPU support)
- 16 x (8 x 80G-A800) GPUs, i.e., 16 nodes with 8 A800-80GB GPUs each (for training Qwen3-14B-base)

Clone the repository.
```bash
git clone https://github.com/mzf666/MATPO.git
cd MATPO
```

For installing the prerequisites (CUDA, cuDNN, Apex), we recommend following the [verl prerequisites guide](https://verl.readthedocs.io/en/latest/start/install.html#pre-requisites), which provides detailed instructions for:

- CUDA: Version >= 12.4
- cuDNN: Version >= 9.8.0
- Apex

Set up the environment and install the dependencies.
```bash
conda create -n matpo python==3.10 -y
conda activate matpo
bash examples/sglang_multiturn/install.sh
```

Set up Node.js for Serper API support.

MCP (Model Context Protocol) requires Node.js to run MCP servers. Node.js version 18+ is recommended for optimal compatibility with MCP tools.
```bash
target_path=YOUR_TARGET_PATH

# Download Node.js binary (example for Linux x64)
wget https://nodejs.org/dist/v24.2.0/node-v24.2.0-linux-x64.tar.xz

# Extract to your target path
tar -xf node-v24.2.0-linux-x64.tar.xz -C $target_path

# Add to PATH
export NODEJS_HOME=$target_path/node-v24.2.0-linux-x64
export PATH=$NODEJS_HOME/bin:$PATH
export NODE_SHARED=$target_path/node-shared/node_modules
export PATH=$NODE_SHARED/.bin:$PATH

# Verify installation
node --version
npm --version

# Install serper mcp server
mkdir -p $target_path/node-shared
cd $target_path/node-shared
npm init -y
npm install serper-search-scrape-mcp-server
```

Configure the Node.js paths and, if necessary, the HTTP/HTTPS proxies in the `examples/sglang_multiturn/launch.sh` script.

Download the training and testing datasets to the `data` directory. The preprocessed datasets can be downloaded [here](https://huggingface.co/datasets/veggiebird/MATPO-data).
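
For example, a small Python snippet can fetch the preprocessed data (a sketch assuming the `huggingface_hub` package is installed; the `data` target directory simply matches the layout expected above):

```python
# Sketch: download the preprocessed MATPO datasets into ./data.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="veggiebird/MATPO-data",
    repo_type="dataset",
    local_dir="data",
)
```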

Train a Qwen3-14B-base model with MATPO on the MuSiQue dataset and evaluate it on the GAIA-text dataset:

```bash
# tested on 16 x (8 x 80G-A800) nodes

export SERPER_API_KEY="YOUR_SERPER_API_KEY" && \
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY" && \
export WANDB_API_KEY="YOUR_WANDB_API_KEY" && \
export SINGLENODE=true && \
export RAY_DEBUG=legacy && \
export HYDRA_FULL_ERROR=1 && \
source YOUR_CONDA_PATH activate matpo && \
cd YOUR_PROJECT_PATH && \
bash examples/sglang_multiturn/launch.sh \
    examples/sglang_multiturn/qwen3-14b_musique_MATPO.sh
```

## Experiments and Results

### Main Results

MATPO consistently outperforms single-agent GRPO baselines across all benchmarks:

| Method | GAIA-text | WebWalkerQA | FRAMES | Relative Average Improvement |
|--------|-----------|-------------|--------|------------------------------|
| Single-Agent GRPO | 32.16% | 30.14% | 56.22% | - |
| **MATPO (Ours)** | **42.60%** | **33.00%** | **63.64%** | **+18.38%** |

### Training Configuration

- **Base Model**: Qwen3-14B-base
- **Training Dataset**: Filtered MuSiQue dataset
- **Training Steps**: 180 steps
- **Rollouts per Query**: 8 (for group normalization)
- **Reward Function**: 0.9 × accuracy + 0.1 × tool_format_reward (see the sketch below)
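
As a reference, the reward above is a simple weighted sum, sketched below; the binary `accuracy` and `tool_format_reward` scoring rules described in the comments are assumptions for illustration, not the repository's exact checks.

```python
def matpo_reward(accuracy: float, tool_format_reward: float) -> float:
    """Sketch of the scalar reward: 0.9 * accuracy + 0.1 * tool-format reward.

    Here `accuracy` is assumed to be 1.0 when the final answer matches the
    ground truth (else 0.0), and `tool_format_reward` 1.0 when every tool
    call is well-formed (else 0.0).
    """
    return 0.9 * accuracy + 0.1 * tool_format_reward

# A correct answer with malformed tool calls still earns 0.9.
assert abs(matpo_reward(1.0, 0.0) - 0.9) < 1e-9
```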

### Model Checkpoints and Rollouts

We release the trained Qwen3-14B-base model checkpoints at the 180th training step of both [single-agent GRPO](https://huggingface.co/veggiebird/MATPO-single-agent-14b) and [MATPO](https://huggingface.co/veggiebird/MATPO-14b).

The associated model rollouts across various training steps can be found [here](https://huggingface.co/datasets/veggiebird/MATPO-rollout).

### Key Findings

- **More Stable Training**: MATPO exhibits more stable learning curves and avoids catastrophic performance drops observed in single-agent training
- **Robustness to Noise**: Multi-agent decomposition effectively isolates noisy tool responses, preventing them from interfering with high-level planning
- **Better Credit Assignment**: Principled reward distribution across planner and worker rollouts leads to more effective learning

### Practical Implementation Tips

Based on our experiments, we recommend:

- **Final Summary**: Final summaries from worker agents are critical for clean planner-worker interfaces
- **Query Recap**: Recapping the original user query in the worker prompt significantly improves performance
- **URL Blocking**: Remember to block HuggingFace search results to avoid data leakage (a sketch follows below)
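
A minimal sketch of the URL-blocking idea, assuming search results arrive as dictionaries with a `link` field; the field name and the block list are illustrative assumptions rather than the repository's configuration.

```python
# Sketch: drop search results whose URLs could leak benchmark ground truth.
BLOCKED_DOMAINS = ("huggingface.co",)

def filter_search_results(results: list[dict]) -> list[dict]:
    """Remove any result whose URL points at a blocked domain."""
    return [
        r for r in results
        if not any(domain in r.get("link", "") for domain in BLOCKED_DOMAINS)
    ]

# Example: the Hugging Face hit is filtered out, the Wikipedia hit survives.
hits = [{"link": "https://huggingface.co/datasets/gaia-benchmark/GAIA"},
        {"link": "https://en.wikipedia.org/wiki/Reinforcement_learning"}]
assert len(filter_search_results(hits)) == 1
```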

## Citation

If you find MATPO helpful in your research, please consider citing our paper:

```bibtex
@misc{mo2025multiagenttoolintegratedpolicyoptimization,
      title={Multi-Agent Tool-Integrated Policy Optimization},
      author={Zhanfeng Mo and Xingxuan Li and Yuntao Chen and Lidong Bing},
      year={2025},
      eprint={2510.04678},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.04678},
}
```

## Acknowledgments

We would like to thank:

- **VolcEngine** for developing and open-sourcing [veRL](https://github.com/volcengine/verl), the RL training framework that powers MATPO
- **Alibaba Cloud** for the Qwen3 model series
- **Google** for the Serper API that enables web search capabilities
- The authors of the **GAIA**, **WebWalkerQA**, **FRAMES**, and **MuSiQue** datasets
- The open-source community for valuable feedback and contributions

## FAQ

<details>
<summary><b>Q: What's the difference between MATPO and traditional multi-agent systems?</b></summary>

MATPO uses a single LLM to play multiple agent roles via different system prompts, rather than deploying separate models. This offers:
- Lower infrastructure complexity
- Better parameter efficiency
- Easier deployment and maintenance
- Compatibility with existing RL frameworks
</details>

<details>
<summary><b>Q: Can I use MATPO with models other than Qwen3?</b></summary>

Yes! MATPO is model-agnostic. You can use any decoder-only LLM that supports tool calling and multi-turn conversations. We've tested with Qwen3-14B-base, but models like Llama 3, Mistral, or other reasoning-capable LLMs should work.
</details>

<details>
<summary><b>Q: How many GPUs do I need for training?</b></summary>

For Qwen3-14B-base, we recommend:
- **Training**: 8x A100/A800 GPUs (80GB)
- **Inference**: 1-2x A100/A800 GPUs (40GB/80GB)

</details>

<details>
<summary><b>Q: How does MATPO handle credit assignment?</b></summary>

MATPO extends GRPO with principled credit assignment:
1. The planner's final answer determines the accuracy reward
2. This reward is normalized across all rollouts in a group
3. Gradients flow proportionally to both planner and worker actions
4. Worker agents receive the same advantage value as their parent planner rollout

See our paper for more details.
</details>

<details>
<summary><b>Q: Can I use MATPO for tasks other than web search?</b></summary>

Absolutely! While our paper focuses on web search, MATPO's framework is general. You can extend it to:
- Code generation with execution feedback
- Scientific reasoning with calculator tools
- Data analysis with pandas/SQL tools
- Any multi-turn task with verifiable rewards
</details>

<details>
<summary><b>Q: How stable is MATPO training compared to single-agent RL?</b></summary>

MATPO is significantly more stable. Our experiments show:
- Single-agent GRPO often suffers catastrophic drops after step 120
- MATPO maintains steady improvement throughout training
- Multi-agent structure isolates noisy tool responses, preventing interference

See Figure 4 in our paper for training curves.
</details>

<details>
<summary><b>Q: Do I need to block HuggingFace URLs during training?</b></summary>

For research integrity, yes, especially if your evaluation benchmarks are hosted on HuggingFace. This prevents models from "cheating" by finding ground-truth answers online.

For production systems with no data leakage concerns, this is optional.
</details>

-----

<p align="center">
  <strong>Star ⭐ this repository if you find it helpful!</strong>
</p>