Instructions for using List-cloud/List-3.0-Ultra-Coder-Brain with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use List-cloud/List-3.0-Ultra-Coder-Brain with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="List-cloud/List-3.0-Ultra-Coder-Brain", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("List-cloud/List-3.0-Ultra-Coder-Brain", trust_remote_code=True, dtype="auto")
```
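Beyond the snippets above, a short end-to-end generation sketch may help. This is a minimal sketch, not documented usage for this model: `AutoTokenizer` and `apply_chat_template` are standard Transformers APIs, `device_map="auto"` assumes `accelerate` is installed, and the generation settings are illustrative.

```python
# Minimal sketch: chat-templated generation with the directly loaded model.
# Assumptions: `accelerate` is installed (for device_map="auto") and the repo
# ships a chat template; generation settings are illustrative, not recommended.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "List-cloud/List-3.0-Ultra-Coder-Brain"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
# Render the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```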
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use List-cloud/List-3.0-Ultra-Coder-Brain with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "List-cloud/List-3.0-Ultra-Coder-Brain"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "List-cloud/List-3.0-Ultra-Coder-Brain",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```sh
# Serve with vLLM's official Docker image (OpenAI-compatible server on port 8000):
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "List-cloud/List-3.0-Ultra-Coder-Brain"
```
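Whichever way the server is started (pip or Docker), it speaks the OpenAI-compatible API, so the official `openai` Python client can be pointed at it. A minimal sketch, assuming the server above is listening on localhost:8000 and `openai` is installed:

```python
# Minimal sketch: query the vLLM server through its OpenAI-compatible API.
from openai import OpenAI

# vLLM does not check the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="List-cloud/List-3.0-Ultra-Coder-Brain",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```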
- SGLang
How to use List-cloud/List-3.0-Ultra-Coder-Brain with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "List-cloud/List-3.0-Ultra-Coder-Brain" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "List-cloud/List-3.0-Ultra-Coder-Brain",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "List-cloud/List-3.0-Ultra-Coder-Brain" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "List-cloud/List-3.0-Ultra-Coder-Brain",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
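The SGLang server exposes the same OpenAI-compatible route, so the curl call above translates directly to Python. A minimal sketch with `requests`, assuming the server is listening on localhost:30000:

```python
# Minimal sketch: the curl call above, expressed with `requests`.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "List-cloud/List-3.0-Ultra-Coder-Brain",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=120,  # generation can take a while on the first request
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```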
- Docker Model Runner
How to use List-cloud/List-3.0-Ultra-Coder-Brain with Docker Model Runner:
```sh
docker model run hf.co/List-cloud/List-3.0-Ultra-Coder-Brain
```
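Docker Model Runner also exposes an OpenAI-compatible endpoint. The sketch below is hedged: the host port (12434) and the /engines/v1 path are assumptions based on Docker Model Runner's defaults with TCP host access enabled, so adjust both to your setup.

```python
# Hedged sketch: calling Docker Model Runner's OpenAI-compatible API.
# ASSUMPTIONS: TCP host access is enabled on the default port 12434, and the
# /engines/v1 path matches Docker Model Runner's documented layout.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/List-cloud/List-3.0-Ultra-Coder-Brain",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```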
This model is a 1:1 repost of MiniMax M2.7 and likely violates the M2.7 License.
This model is a direct repost of M2.7; you can easily check for yourself that the file hashes on Hugging Face exactly match those of M2.7.
Furthermore, using this model for commercial purposes without permission directly violates the M2.7 license:

> 3. Any Commercial Use of the Software or any derivative work thereof is prohibited without obtaining a separate, prior written authorization from MiniMax. To request such authorization, please contact api@minimax.io with the subject line "M2.7 licensing".
>
> 4. "Commercial Use" means any use of the Software or any derivative work thereof that is primarily intended for commercial advantage or monetary compensation, which includes, without limitation: (i) offering products or services to third parties for a fee, which utilize, incorporate, or rely on the Software or its derivatives, (ii) the commercial use of APIs provided by or for the Software or its derivatives, including to support or enable commercial products, services, or operations, whether in a cloud-based, hosted, or other similar environment, and (iii) the deployment or provision of the Software or its derivatives that have been subjected to post-training, fine-tuning, instruction-tuning, or any other form of modification, for any commercial purpose.
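For anyone who wants to reproduce the hash comparison, here is a minimal sketch using `huggingface_hub` (recent versions expose per-file LFS SHA-256 digests via `model_info`). The MiniMax repo id below is a placeholder assumption, not confirmed by this thread:

```python
# Sketch: compare per-file LFS SHA-256 digests between the two repos.
# ASSUMPTION: "MiniMaxAI/MiniMax-M2.7" is a placeholder for the original repo id.
from huggingface_hub import HfApi

api = HfApi()

def sha256_map(repo_id: str) -> dict:
    """Map each LFS-tracked filename in the repo to its SHA-256 digest."""
    info = api.model_info(repo_id, files_metadata=True)
    return {s.rfilename: s.lfs.sha256 for s in info.siblings if s.lfs is not None}

repost = sha256_map("List-cloud/List-3.0-Ultra-Coder-Brain")
original = sha256_map("MiniMaxAI/MiniMax-M2.7")  # placeholder repo id

# True if every original weight file appears in the repost with an identical hash.
print(all(repost.get(name) == digest for name, digest in original.items()))
```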
Hello @ConicCat, thank you for reaching out and for your vigilance within the community.
To clarify, this repository is part of our custom iteration pipeline based on the MiniMax M2.7 architecture. We would like to assure you that we have secured the necessary permissions and authorizations to utilize, modify, and deploy this model for our specific use cases. Our organization strongly supports open-source principles, and our goal with this project is to build upon existing architectures while remaining fully compliant with the original creators' licensing frameworks.
Regarding the file hashes you mentioned: this specific repository branch is currently serving as the foundational base checkpoint for our infrastructure. This is exactly why the base layer files remain identical to the original at this stage of our deployment cycle, prior to the final fine-tuned delta weights being fully integrated.
We truly appreciate your diligence. Community oversight is exactly what keeps the open-source AI ecosystem transparent, safe, and accountable. If you have any further questions regarding our compliance or licensing agreements, please rest assured that our operations are fully aligned with industry standards.
Best regards, The List Cloud Team.