## How to use from the Transformers.js library
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('translation', 'onnx-community/tiny-random-MarianMTModel');
```
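
Once allocated, the pipeline can be called directly on input text. A minimal usage sketch follows; the input sentence is illustrative, and because this model is randomly initialized, the output text is meaningless and only useful for testing a pipeline end to end:

```js
// Run the pipeline on an example sentence
// (output is random for this test model)
const output = await pipe('Hello, world!');
console.log(output);
// e.g. [ { translation_text: '...' } ]
```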

This repository contains https://huggingface.co/hf-internal-testing/tiny-random-MarianMTModel with ONNX weights, to be compatible with Transformers.js.

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
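
As a minimal sketch of that conversion step, the 🤗 Optimum CLI can export a model to ONNX; the model id and output directory below are placeholders, and model-specific options are covered in the Optimum documentation:

```bash
# Export a Transformers model to ONNX; placing the result in an
# "onnx/" subfolder matches the layout this repo uses
optimum-cli export onnx --model your-org/your-model onnx/
```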
