Reihaneh committed on
Commit 370e508 · verified · 1 Parent(s): 1ff84be

Upload 5 files

Files changed (5)
  1. README.md +205 -10
  2. added_tokens.json +4 -0
  3. special_tokens_map.json +6 -0
  4. tokenizer_config.json +64 -0
  5. vocab.json +58 -0
README.md CHANGED
@@ -1,13 +1,208 @@
  ---
- title: Frisian Automatic Speech Recognition
- emoji: 🌍
- colorFrom: green
- colorTo: pink
- sdk: gradio
- sdk_version: 5.47.0
- app_file: app.py
- pinned: false
- short_description: Frisian ASR trained on bilingual Frisian-Dutch Data
+ library_name: transformers
+ datasets:
+ - mozilla-foundation/common_voice_17_0
+ language:
+ - fy
+ metrics:
+ - wer
+ base_model:
+ - facebook/wav2vec2-xls-r-1b
+ pipeline_tag: automatic-speech-recognition
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+ Frisian Automatic Speech Recognition
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
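
The card's "How to Get Started with the Model" section above is still a placeholder. A minimal inference sketch for a wav2vec2 CTC checkpoint fine-tuned from facebook/wav2vec2-xls-r-1b could look like the following; the repo id is a placeholder, not a confirmed checkpoint name:

```python
# A minimal sketch, assuming a fine-tuned wav2vec2 CTC checkpoint on the Hub.
# "Reihaneh/frisian-asr" is a hypothetical repo id used for illustration.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Reihaneh/frisian-asr"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# XLS-R expects 16 kHz mono audio.
speech, _ = librosa.load("frisian_sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```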
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "</s>": 57,
+   "<s>": 56
+ }
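
A quick consistency check (a sketch over this commit's files, not part of the commit itself): vocab.json below assigns ids 0 through 55, so the two sentence markers here extend the vocabulary contiguously to 58 entries.

```python
import json

# Verify that <s> (56) and </s> (57) sit directly above the 56-entry base vocab.
with open("vocab.json") as f:
    vocab = json.load(f)              # ids 0..55
with open("added_tokens.json") as f:
    added = json.load(f)              # {"</s>": 57, "<s>": 56}

assert max(vocab.values()) + 1 == min(added.values())
print(len(vocab) + len(added))        # 58 tokens in total
```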
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "[PAD]",
+   "unk_token": "[UNK]"
+ }
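
Read together with vocab.json and added_tokens.json, each special-token role here resolves to a concrete label id. A small sketch, assuming the commit's files sit in the working directory:

```python
import json

# Resolve each special-token role to its id via vocab.json + added_tokens.json.
with open("vocab.json") as f:
    ids = json.load(f)
with open("added_tokens.json") as f:
    ids.update(json.load(f))
with open("special_tokens_map.json") as f:
    roles = json.load(f)

for role, token in roles.items():
    print(f"{role}: {token!r} -> id {ids[token]}")
# bos_token: '<s>' -> id 56
# eos_token: '</s>' -> id 57
# pad_token: '[PAD]' -> id 53
# unk_token: '[UNK]' -> id 52
```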
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "added_tokens_decoder": {
+     "52": {
+       "content": "[UNK]",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": false
+     },
+     "53": {
+       "content": "[PAD]",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": false
+     },
+     "54": {
+       "content": "FY-NL",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": false
+     },
+     "55": {
+       "content": "NL",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": false
+     },
+     "56": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "57": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "do_lower_case": false,
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "replace_word_delimiter_char": " ",
+   "target_lang": null,
+   "tokenizer_class": "Wav2Vec2CTCTokenizer",
+   "unk_token": "[UNK]",
+   "word_delimiter_token": "|"
+ }
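
The config names Wav2Vec2CTCTokenizer as the tokenizer class, with "|" as the word delimiter: spaces map to "|" in the CTC label space and are restored on decode. A load-and-round-trip sketch, assuming the four committed files are in the current directory:

```python
from transformers import Wav2Vec2CTCTokenizer

# Load from a local directory holding vocab.json, tokenizer_config.json,
# special_tokens_map.json and added_tokens.json (the path is illustrative).
tok = Wav2Vec2CTCTokenizer.from_pretrained("./")

ids = tok("wolkom thús").input_ids  # character-level ids; the space becomes "|"
print(ids)
print(tok.decode(ids))              # "wolkom thús"; "|" is turned back into a space
```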
vocab.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "'": 36,
+   "-": 39,
+   "F": 31,
+   "FY-NL": 54,
+   "L": 20,
+   "N": 10,
+   "NL": 55,
+   "Y": 3,
+   "[": 26,
+   "[PAD]": 53,
+   "[UNK]": 52,
+   "]": 5,
+   "a": 19,
+   "b": 2,
+   "c": 6,
+   "d": 38,
+   "e": 46,
+   "f": 47,
+   "g": 17,
+   "h": 30,
+   "i": 22,
+   "j": 32,
+   "k": 8,
+   "l": 4,
+   "m": 29,
+   "n": 51,
+   "o": 33,
+   "p": 35,
+   "q": 14,
+   "r": 24,
+   "s": 37,
+   "t": 21,
+   "u": 9,
+   "v": 50,
+   "w": 13,
+   "x": 48,
+   "y": 41,
+   "z": 40,
+   "|": 43,
+   "à": 28,
+   "á": 49,
+   "â": 15,
+   "ä": 16,
+   "è": 1,
+   "é": 45,
+   "ê": 11,
+   "ë": 23,
+   "ï": 12,
+   "ó": 27,
+   "ô": 44,
+   "ö": 7,
+   "ú": 0,
+   "û": 42,
+   "ü": 25,
+   "’": 34,
+   "…": 18
+ }
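
The vocabulary is character-level: Frisian letters and diacritics, "|" as the word delimiter, [UNK], and [PAD], which in the usual wav2vec2 setup also serves as the CTC blank. The FY-NL and NL entries appear to be language tags from the bilingual Frisian-Dutch training data. A greedy CTC decode over this label space, as a sketch assuming [PAD] is the blank:

```python
import json

# Greedy CTC decoding over this label space: collapse repeated ids, drop
# blanks, then map "|" back to a space. Assumes [PAD] (id 53) is the blank.
with open("vocab.json") as f:
    vocab = json.load(f)
id_to_token = {i: t for t, i in vocab.items()}
BLANK = vocab["[PAD]"]

def greedy_ctc_decode(frame_ids):
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != BLANK:
            out.append(id_to_token[i])
        prev = i
    return "".join(out).replace("|", " ")

# Frames spelling "it|is", with repeats and blanks interleaved:
print(greedy_ctc_decode([22, 22, 53, 21, 43, 22, 53, 37, 37]))  # "it is"
```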