Instructions for using SEBIS/code_trans_t5_small_source_code_summarization_csharp with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use SEBIS/code_trans_t5_small_source_code_summarization_csharp with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: the "summarization" pipeline type is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="SEBIS/code_trans_t5_small_source_code_summarization_csharp")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp")
model = AutoModel.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp")
```

- Notebooks
  - Google Colab
  - Kaggle
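Because the `summarization` pipeline type is flagged as unsupported on transformers v5, here is a minimal end-to-end sketch using the direct-load API instead. It assumes `AutoModelForSeq2SeqLM` (so that `generate()` is available) in place of the bare `AutoModel` above, and reuses the task prefix and generation settings from the model's config; the C# snippet is a hypothetical placeholder.

```python
# Minimal summarization sketch (assumes AutoModelForSeq2SeqLM; not the card's exact recipe).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SEBIS/code_trans_t5_small_source_code_summarization_csharp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The config's task_specific_params expect this prefix in front of the input code.
code = "public int Add(int a, int b) { return a + b; }"  # hypothetical C# snippet
inputs = tokenizer(
    "source code summarization csharp: " + code,
    return_tensors="pt",
    truncation=True,
    max_length=512,  # matches n_positions in the config below
)

# num_beams and max_length mirror the summarization settings in config.json.
summary_ids = model.generate(**inputs, num_beams=4, max_length=512)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

For reference, the model ships with the following configuration (config.json); its `task_specific_params` supply the prefix and beam settings used above: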
```json
{
  "architectures": [
    "T5Model"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 512,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "n_positions": 512,
  "num_decoder_layers": 6,
  "num_heads": 8,
  "num_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_num_buckets": 32,
  "task_specific_params": {
    "summarization": {
      "max_length": 512,
      "num_beams": 4,
      "prefix": "source code summarization csharp: "
    }
  },
  "vocab_size": 32128
}
```
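Rather than hard-coding those task parameters, you can also read them from the Hub at runtime; a small sketch using `AutoConfig`, a standard transformers API:

```python
# Inspect the summarization task parameters shipped with the model.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "SEBIS/code_trans_t5_small_source_code_summarization_csharp"
)
params = config.task_specific_params["summarization"]
print(params["prefix"])      # "source code summarization csharp: "
print(params["num_beams"])   # 4
print(params["max_length"])  # 512
```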