html_url | title | comments | body | number |
|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | [
"Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead",
"Hi, the same issue here, could you tell me how to download it through datasets? thanks ",
"Same issue. ",
"Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 ... | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label` in a browser works, but it redirects to a different address
* Increasing max_redirects to 100 doesn't help (see the redirect-inspection sketch below)
Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> earlier in the process, but it doesn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
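A minimal sketch of how I inspected where the redirect points (plain `requests`, nothing datasets-specific; the printed target URL is whatever the server returns, I have not hard-coded it here):
```python
import requests

# Follow a single hop manually to see where the 302 points,
# without triggering the TooManyRedirects error.
response = requests.head("http://cogcomp.org/Data/QA/QC/train_5500.label")
print(response.status_code)               # 302
print(response.headers.get("Location"))   # the redirect target URL
```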
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here. | 798 |
https://github.com/huggingface/datasets/issues/797 | Token classification labels are strings and we don't have the list of labels | [
"Indeed. Pinging @stefan-it here if he want to give an expert opinion :)",
"Related is https://github.com/huggingface/datasets/pull/636",
"Should definitely be a ClassLabel 👍 ",
"Already done."
] | Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels.
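To make it concrete, here is a hedged sketch of the manual preprocessing this currently forces (assuming `dataset` is a `DatasetDict` and `ner_tags` is a hypothetical column typed as `Sequence[str]`; the next paragraph explains why the full pass is needed):
```python
# Build the label list by scanning the whole split, since the Dataset
# object does not store it for a Sequence[str] column.
labels = set()
for example in dataset["train"]:
    labels.update(example["ner_tags"])
label2id = {label: i for i, label in enumerate(sorted(labels))}

# Convert the string tags to ids for training.
dataset = dataset.map(
    lambda ex: {"ner_tag_ids": [label2id[tag] for tag in ex["ner_tags"]]}
)
```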
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object, which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get them, the `unique` method being useless with the type `Sequence[str]`). | 797 |
https://github.com/huggingface/datasets/issues/795 | Descriptions of raw and processed versions of wikitext are inverted | [
"Yes indeed ! Thanks for reporting",
"Fixed by:\r\n- #3241"
] | Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52 | 795 |
https://github.com/huggingface/datasets/issues/794 | self.options cannot be converted to a Python object for pickling | [
"Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon"
] | Hi,
Currently I am trying to load a csv file with customized read_options, and the latest master seems broken if we pass the ReadOptions object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
The error is `self.options cannot be converted to a Python object for pickling`.
Would you mind taking a look? Thanks!
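A hedged, minimal reproduction outside of `datasets` (assuming the same pyarrow version as above; `datasets` hits this while hashing the config kwargs, which pickles them):
```python
import pickle
from pyarrow.csv import ReadOptions

# In this pyarrow version ReadOptions cannot be pickled, which is what
# the config hashing in datasets ends up doing with the kwargs.
pickle.dumps(ReadOptions(block_size=16 * 1024 * 1024))
# TypeError: self.options cannot be converted to a Python object for pickling
```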
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
``` | 794 |
https://github.com/huggingface/datasets/issues/792 | KILT dataset: empty string in triviaqa input field | [
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] | # What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have an empty string in their input field (unlike the natural questions dataset, part of the same benchmark).
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for better readability
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :) | 792 |
https://github.com/huggingface/datasets/issues/790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | [
"I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now",
"Closing this one.\r\nFeel free to re-open if you still have issues"
] | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```


Python 3.7.7
| 790 |
https://github.com/huggingface/datasets/issues/788 | failed to reuse cache | [] | I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and call the method, the data still has to be downloaded again. The message logged to the terminal (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) shows the path correctly points to the cache directory, but the files are downloaded again anyway. | 788 |
https://github.com/huggingface/datasets/issues/786 | feat(dataset): multiprocessing _generate_examples | [
"I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik",
"`_generate_examples` can n... | forking this out of #741, this issue is only regarding multiprocessing
I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case, I would instead of:
```python
for datum in data:
    yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
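A hedged sketch of the kind of thing I mean (this is not an existing `datasets` feature; `load_datum` and `data` are the same placeholders as above, and it assumes `self.load_datum` is picklable), which keeps the generator contract while parallelizing the per-row work:
```python
from multiprocessing import Pool

def _generate_examples(self, data):
    """Yields examples, loading rows in parallel."""
    with Pool(processes=40) as pool:
        # imap preserves order and yields results as they complete,
        # so examples can still be written out incrementally.
        for idx, example in enumerate(pool.imap(self.load_datum, data)):
            yield idx, example
```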
As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset.
If this was a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing with, for example, 40 cores could speed it up dramatically; for this dataset, hopefully to a full load in under an hour. | 786 |
https://github.com/huggingface/datasets/issues/784 | Issue with downloading Wikipedia data for low resource language | [
"Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?",
"@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n... | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
Any suggestions on how to handle this issue? Thanks! | 784 |
https://github.com/huggingface/datasets/issues/778 | Unexpected behavior when loading cached csv file? | [
"Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)",
"Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! "
] | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if the information about which `delimiter` or which `column_names` were used would influence the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
    file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
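For completeness, the workaround I mentioned above, forcing a fresh preparation so the new delimiter is actually applied:
```python
print(
    datasets.load_dataset(
        "csv",
        data_files="dummy_data.csv",
        split="train",
        delimiter=";",
        download_mode="force_redownload",
    ).column_names
)
# expected with ";" as the delimiter: ["test,this", "text"]
```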
By the way, thanks a lot for this amazing library! :) | 778 |
https://github.com/huggingface/datasets/issues/773 | Adding CC-100: Monolingual Datasets from Web Crawl Data | [
"cc @aconneau ;) ",
"These dataset files are no longer available. https://data.statmt.org/cc-100/ files provided in this link are no longer available. Can anybody fix that issue?\r\n@abhishekkrthakur @yjernite ",
"Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)",
"... | ## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| 773 |
https://github.com/huggingface/datasets/issues/771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | [
"Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar",
"... | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | 771 |
https://github.com/huggingface/datasets/issues/769 | How to choose proper download_mode in function load_dataset? | [
"`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.\r\nThis makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing",
"Can we just use `features=...` in `load_dataset` for this @lhoestq?",
"Indeed you should use `features` in this case. \r\n```python... | Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First I try to use this command to load my csv file.
``` python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems good, but when I try to overwrite the convert_options to convert the 'label' column from int64 to float32 like this:
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
The result stays the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter `download_mode`, which defaults to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir it works correctly.
Is it a bug? How should I choose the proper download_mode to avoid this issue?
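For reference, a sketch of the workaround suggested in the comments: force a fresh preparation so the new convert_options are applied instead of reusing the cached Arrow file (this reuses the option objects defined above):
```python
import datasets

dataset = load_dataset(
    'csv',
    data_files=['sst_test.csv'],
    read_options=read_options,
    parse_options=parse_options,
    convert_options=convert_options,
    download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD,
)
```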
| 769 |
https://github.com/huggingface/datasets/issues/768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | [
"This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control/reset them"
] | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives). | 768 |
https://github.com/huggingface/datasets/issues/767 | Add option for named splits when using ds.train_test_split | [
"Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090... | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
this is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
| 767 |
https://github.com/huggingface/datasets/issues/766 | [GEM] add DART data-to-text generation dataset | [
"Is this a duplicate of #924 ?",
"Yup, closing! Haven't been keeping track of the solved issues during the sprint."
] | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| 766 |
https://github.com/huggingface/datasets/issues/765 | [GEM] Add DART data-to-text generation dataset | [] | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| 765 |
https://github.com/huggingface/datasets/issues/762 | [GEM] Add Czech Restaurant data-to-text generation dataset | [] | - Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark | 762 |
https://github.com/huggingface/datasets/issues/761 | Downloaded datasets are not usable offline | [
"Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.\r\n\r\nIf we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet con... | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach for the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the the first version of this issue while still on nlp 0.3.0). | 761 |
https://github.com/huggingface/datasets/issues/760 | Add meta-data to the HANS dataset | [] | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | 760 |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | [
"Are you running the script on a machine with an internet connection ?",
"Yes , I can browse the url through Google Chrome.",
"Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests ... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this:
```python
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
```
And I got the following errors:
```
Traceback (most recent call last):
File "test.py", line 7, in <module>
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
```
How can I fix this? | 759 |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | [
"Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocess... | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
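A quick, hedged way to check the hypothesis from the comment, namely whether text length is unevenly distributed across the `num_proc=8` shards (just a diagnostic, not a fix; run it right after `load_dataset`, before the `map`/`set_format` calls above):
```python
# Total characters per contiguous eighth of the corpus; if the first
# chunk is much larger, shard 0 would naturally be the slowest.
lengths = [len(t) for t in dataset["text"]]
chunk = len(lengths) // 8
print([sum(lengths[i * chunk:(i + 1) * chunk]) for i in range(8)])
```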
| 758 |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | [
"Could you provide more details ? What's the code you ran ?",
"```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",... | In your dataset ,cuda run out of memory as long as the trainer begins:
however, without changing any other element/parameter,just switch dataset to `LineByLineTextDataset`,everything becames OK.
| 757 |
https://github.com/huggingface/datasets/issues/752 | Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning | [
"Thanks for the report, can reproduce. Will fix",
"Fixed now @ogabrielluiz "
] | Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching a metric in https://huggingface.co/metrics gives the right results but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work! | 752 |
https://github.com/huggingface/datasets/issues/751 | Error loading ms_marco v2.1 using load_dataset() | [
"There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?",
"I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixe... | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
``` | 751 |
https://github.com/huggingface/datasets/issues/750 | load_dataset doesn't include `features` in its hash | [] | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | 750 |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language ... | XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 749 |
https://github.com/huggingface/datasets/issues/744 | Dataset Explorer Doesn't Work for squad_es and squad_it | [
"Oups wrong click.\r\nThis one is for you @srush"
] | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | 744 |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | [
"Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !",
"I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
```
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
```
I should mention that when I've tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with the \r character, so I've removed it from the custom dataset, but the problem still remains.
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Is there any workaround for it?
Thank you | 743 |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | [
"Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste wh... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
    """ Yields examples. """
    filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
    images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
    with open(filepath, "r", encoding="utf-8") as f:
        data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
        for row in data:
            frames_path = os.path.join(images_path, row["video"])[:-7]
            np_frames = []
            for frame_name in os.listdir(frames_path):
                frame_path = os.path.join(frames_path, frame_name)
                im = Image.open(frame_path)
                np_frames.append(np.asarray(im))
                im.close()
            yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here is exactly for that, to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120gb RAM, and after 672 examples on a machine with 500GB RAM.
---
## Info that might help:
Iterating over examples is extremely slow.

If I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples/sec
And you can see at this state it is using 94% of the memory:

And it is only using one CPU core, which is probably why it's so slow:

| 741 |
https://github.com/huggingface/datasets/issues/737 | Trec Dataset Connection Error | [
"Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"
] | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | 737 |
https://github.com/huggingface/datasets/issues/735 | Throw error when an unexpected key is used in data_files | [
"Thanks for reporting !\r\nWe'll add support for other keys"
] | I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored, which leads to unexpected behaviour for users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key. | 735 |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON fi... | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
tested with datasets==1.1.2 and python==3.8.5 | 730 |
https://github.com/huggingface/datasets/issues/729 | Better error message when one forgets to call `add_batch` before `compute` | [] | When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}

metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    pass  # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
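For comparison, the intended usage that avoids this code path entirely (same objects as in the reproducer above):
```python
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i + batch_size], references=targets[i:i + batch_size])
result = metric.compute()
```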
| 729 |
https://github.com/huggingface/datasets/issues/728 | Passing `cache_dir` to a metric does not work | [] | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}

metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric. | 728 |
https://github.com/huggingface/datasets/issues/727 | Parallel downloads progress bar flickers | [] | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i`, for i = 0 to n where n is the number of files to download, when instantiating the tqdm progress bars.
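A small self-contained illustration of the idea (plain tqdm with made-up sizes; not the actual download manager code):
```python
from time import sleep
from tqdm import tqdm

file_sizes = [100, 80, 120]  # hypothetical sizes, one per parallel download
bars = [tqdm(total=size, position=i, desc=f"file {i}") for i, size in enumerate(file_sizes)]
for _ in range(120):
    for bar in bars:
        if bar.n < bar.total:
            bar.update(1)
    sleep(0.01)
for bar in bars:
    bar.close()
```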
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that show the current downloads. | 727 |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to re... | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by the released dataset having changed. Or should I download the dataset manually?
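One way to get past the verification while this is sorted out (at the risk of building from a source file that differs from the one the checksums were computed on), a sketch using the `ignore_verifications` flag visible in the trace above:
```python
dataset = load_dataset('openwebtext', ignore_verifications=True)
```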
Sorry for releasing the unfinished issue by mistake. | 726 |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | [
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* htt... | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked. If you look at the old one, it was nicely formatted and had the links marked up; the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable). | 724 |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | [
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
... | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
What do you think @lhoestq ?
| 723 |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | [
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the ... | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
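In the meantime, a hedged sketch of the manual workaround hinted at in the comment (fetch the archive with `urllib`, which does handle `ftp://`, then hand the local file to the download manager for extraction inside the dataset script):
```python
import urllib.request

URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
local_path, _ = urllib.request.urlretrieve(URL, "phoenix-2014-T.v3.tar.gz")
# inside _split_generators: data_dir = dl_manager.extract(local_path)
```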
| 721 |
https://github.com/huggingface/datasets/issues/720 | OSError: Cannot find data file when not using the dummy dataset in RAG | [
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n... | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
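The `UnpicklingError: pickle data was truncated` at the top suggests one of the downloaded vectors files is incomplete. As a purely hypothetical workaround sketch (not a confirmed fix), removing the truncated file named in the traceback should force it to be downloaded again on the next run:

```python
import os

# hypothetical cleanup: the path comes from the traceback above; remove the truncated
# download (and its .lock file, if any) so it is fetched again on the next attempt
bad_file = "/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448"
for path in (bad_file, bad_file + ".lock"):
    if os.path.exists(path):
        os.remove(path)
```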
Thanks
| 720 |
https://github.com/huggingface/datasets/issues/712 | Error in the notebooks/Overview.ipynb notebook | [
"Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```",
"Thanks! This worked. I have created a PR to fix this in the notebook. "
] | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
The object `squad_dataset` is a `str`, not a `dataclass`. | 712 |
https://github.com/huggingface/datasets/issues/709 | How to use similarity settings other then "BM25" in Elasticsearch index ? | [
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration p... | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, which is one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, here is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
As another option, I tried declaring "similarity": "my_similarity" within settings and then assigning "my_similarity" inside the mappings, as below
`es_config = {
"settings": {
"number_of_shards": 1,
**"similarity": "my_similarity"**: {
"type": "DFR",
"basic_model": "g",
"after_effect": "l",
"normalization": "h2",
"normalization.h2.c": "3.0"
} ,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}`
For this, I got the following error:
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
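For comparison, a hedged sketch of how the Elasticsearch documentation structures a custom similarity, as an object nested under the index settings rather than as a string, assuming `datasets`' `add_elasticsearch_index` accepts an `es_index_config` argument as in the linked docs page:

```python
es_config = {
    "settings": {
        "number_of_shards": 1,
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}},
    },
    "mappings": {
        "properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}
    },
}
# dataset.add_elasticsearch_index("text", es_client=es_client, es_index_config=es_config)
```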
| 709 |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | [
"Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.",
"And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?",
"Thanks for the tip @thomwolf ! I did not see that flag in the docs. I... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.
For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it onto the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss.
For reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?
At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm
if __name__ == '__main__':
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
ds = load_dataset('yelp_polarity')
def do_tokenize(x):
return tokenizer(x['text'], truncation=True)
ds = ds.map(do_tokenize, batched=True)
ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])
if len(sys.argv) == 2 and sys.argv[1] == 'memory':
# copy to memory - probably a faster way to do this - but demonstrates the point
# approximately 530 batches per second - 17500 batches in 0:33
print('using memory')
_ds = [data for data in tqdm(ds['train'])]
else:
# approximately 83 batches per second - 17500 batches in 3:31
print('using datasets')
_ds = ds['train']
dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
for data in tqdm(dl):
for k, v in data.items():
data[k] = v.to('cuda')
```
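As an aside, a minimal sketch of the in-memory option suggested in the comments above, assuming your `datasets` version already exposes the `keep_in_memory` flag:

```python
from datasets import load_dataset

# hypothetical: load the dataset fully into memory instead of memory-mapping it
ds = load_dataset('yelp_polarity', keep_in_memory=True)
```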
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
| 708 |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | [
"Hello @mathcass I would want to work on this issue. May I do the same? ",
"@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.",
"Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish o... | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1, but there's no pinning in the setup file.
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68
Downgrading by installing `pip install "pyarrow<1"` resolved the issue. | 707 |
https://github.com/huggingface/datasets/issues/705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | [
"Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR",
"Thanks @lhoestq !"
] | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, in csv format, containing just text and label columns, using comma as the separator. Here's a sample:
```
text,label
"Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using conda env -n transformers python=3.7
2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets.
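Until that is fixed in `datasets`, a hypothetical workaround sketch (my guess, based on the `sorted(data_files.keys())` call in the trace; this is the same kind of call that fails inside `get_tfds`, but with plain string keys instead of `datasets.Split` objects):

```python
import datasets

# hypothetical workaround: string keys sort fine, NamedSplit keys do not
files = {"train": "train.csv", "validation": "dev.csv", "test": "test.csv"}
ds = datasets.load_dataset("csv", data_files=files)
```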
Thanks! | 705 |
https://github.com/huggingface/datasets/issues/699 | XNLI dataset is not loading | [
"also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most ... | `dataset = datasets.load_dataset(path='xnli')`
shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think URL is now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip" | 699 |
https://github.com/huggingface/datasets/issues/691 | Add UI filter to filter datasets based on task | [
"Already supported."
] | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list)
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities.
Thank you :) | 691 |
https://github.com/huggingface/datasets/issues/690 | XNLI dataset: NonMatchingChecksumError | [
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release i... | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']`
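If the loading script on master already points at the new URL (as the comments above suggest it will), a hedged workaround sketch using the `script_version` argument visible in `load_dataset`'s signature could be:

```python
from datasets import load_dataset

# hypothetical: pull the xnli loading script from the master branch instead of the release
xnli = load_dataset('xnli', script_version='master')
```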
The same code worked well several days ago in colab but stopped working now. Thanks! | 690 |
https://github.com/huggingface/datasets/issues/687 | `ArrowInvalid` occurs while running `Dataset.map()` function | [
"Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese i... | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
# suggested in #665
class PicklableTokenizer(BertJapaneseTokenizer):
def __getstate__(self):
state = dict(self.__dict__)
state['do_lower_case'] = self.word_tokenizer.do_lower_case
state['never_split'] = self.word_tokenizer.never_split
del state['word_tokenizer']
return state
def __setstate__(self, state):
do_lower_case = state.pop('do_lower_case')
never_split = state.pop('never_split')
self.__dict__ = state
self.word_tokenizer = MecabTokenizer(
do_lower_case=do_lower_case, never_split=never_split
)
t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')
encoded = train_ds.map(
lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000
)
```
Error Message:
```
99% 99/100 [00:22<00:00, 39.07ba/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1496 if update_data:
1497 batch = cast_to_python_objects(batch)
-> 1498 writer.write_batch(batch)
1499 if update_data:
1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
272 typed_sequence_examples[col] = typed_sequence
--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)
274 self.write_table(pa_table)
275
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
/usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000
```
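Following the first comment above (`tokenizer.encode` expects one single text, so with `batched=True` the whole batch of titles gets collapsed into one encoding whose length no longer matches the batch size), one hypothetical fix sketch is to drop batching and encode one example at a time, reusing `t` and `train_ds` from the snippet above:

```python
# hypothetical sketch: map over single examples so each title gets its own encoding
encoded = train_ds.map(
    lambda example: {'tokens': t.encode(example['title'], max_length=1000)},
    batched=False,
)
```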
| 687 |
https://github.com/huggingface/datasets/issues/686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | [
"Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)",
"This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"
] | Might be worth updating to https://huggingface.co/datasets/viewer/ | 686 |
https://github.com/huggingface/datasets/issues/678 | The download instructions for c4 datasets are not contained in the error message | [
"Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)",
"Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet... | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>.
Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>')
```
Either `@property` could be added to `C4.manual_download_instructions` (i.e. make it a real property), or the `manual_download_instructions` function needs to be called, I think (see the sketch below).
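A rough sketch of the first option (hypothetical, not the actual `c4.py` source):

```python
import datasets

class C4(datasets.GeneratorBasedBuilder):
    # hypothetical: exposing the instructions as a property makes the error message
    # print the text itself instead of the bound method repr
    @property
    def manual_download_instructions(self):
        return (
            "You need to download the C4 data manually and pass its location via "
            "datasets.load_dataset('c4', data_dir='<path/to/manual/data>')."
        )
```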
Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one. | 678 |
https://github.com/huggingface/datasets/issues/676 | train_test_split returns empty dataset item | [
"The problem still exists after removing the cache files.",
"Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)",
"Thanks for reporting.\r\nI just found the issue, I'm creating a PR",
"We'll do a release pretty soon to include the fix :... | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
print(yelp_data['test'])
print(yelp_data['test'][0])
```
The outputs:
```
{'stars': 2.0, 'text': 'xxxx'}
Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow
DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})
Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)
{} # yelp_data['test'][0] is empty
``` | 676 |
https://github.com/huggingface/datasets/issues/675 | Add custom dataset to NLP? | [
"Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files",
"No activity, closing"
] | Is it possible to add a custom dataset such as a .csv to the NLP library?
Thanks. | 675 |
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | [
"I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```",
"This was fixed i... | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks. | 674 |
https://github.com/huggingface/datasets/issues/673 | blog_authorship_corpus crashed | [
"Thanks for reporting !\r\nWe'll free some memory"
] | This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

| 673 |
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | [
"We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated",
"Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking... | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is, the instance counts don’t match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for test set; 204,017 vs 204,045 for training set)
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten)
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match.
CC @jbragg
| 672 |
https://github.com/huggingface/datasets/issues/671 | [BUG] No such file or directory | [] | This happens when both
1. Huggingface datasets cache dir does not exist
2. You try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
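A hypothetical workaround sketch until that is handled inside `builder.py` itself (assuming the default cache location; adjust if `HF_DATASETS_CACHE` is set):

```python
import os

# hypothetical: create the datasets cache directory up front so the .lock file
# can be created inside it on the first run
cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
os.makedirs(cache_dir, exist_ok=True)
```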
Tested on v1.0.2
@lhoestq | 671 |
https://github.com/huggingface/datasets/issues/669 | How to skip an example when running dataset.map | [
"Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them un... | in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. | 669 |
https://github.com/huggingface/datasets/issues/668 | OverflowError when slicing with an array containing negative ids | [] | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-5-863dc3555598> in <module>
----> 1 d[[0, -1]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1070 format_columns=self._format_columns,
1071 output_all_columns=self._output_all_columns,
-> 1072 format_kwargs=self._format_kwargs,
1073 )
1074
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1025 indices = key
1026
-> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64())
1028
1029 # Check if we need to convert indices
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
OverflowError: can't convert negative value to unsigned int
``` | 668 |
https://github.com/huggingface/datasets/issues/667 | Loss not decrease with Datasets and Transformers | [
"And I tested it on T5ForConditionalGeneration, that works no problem.",
"Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"
] | Hi,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb), which presents an example of fine-tuning BertForQuestionAnswering on the squad dataset. In that colab, the loss works fine. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I'm missing.
```python
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
dataset = load_dataset("glue", 'sst2')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
del dataset["test"] # let's remove it in this demo
# Tokenize our training dataset
def convert_to_features(example_batch):
encodings = tokenizer(example_batch["sentence"])
encodings.update({"labels": example_batch["label"]})
return encodings
encoded_dataset = dataset.map(convert_to_features, batched=True)
# Format our dataset to outputs torch.Tensor to train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
encoded_dataset.set_format(type='torch', columns=columns)
# Instantiate a PyTorch Dataloader around our dataset
# Let's do dynamic batching (pad on the fly with our own collate_fn)
def collate_fn(examples):
return tokenizer.pad(examples, return_tensors='pt')
dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Let's load a pretrained Bert model and a simple optimizer
model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
```
In case needed.
- datasets == 1.0.2
- transformers == 3.2.0 | 667 |
https://github.com/huggingface/datasets/issues/666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | [
"No they are other similar copies but they are not provided by the official Bert models authors."
] | 666 | |
https://github.com/huggingface/datasets/issues/665 | running dataset.map, it raises TypeError: can't pickle Tokenizer objects | [
"Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?",
"transformers and datasets are both the latest",
"Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Co... | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)
context_encodings = tokenizer.encode_plus(example['context'])
# Compute start and end tokens for labels using Transformers' fast tokenizers alignment methods.
# this will give us the position of answer span in the context text
start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
start_positions_context = context_encodings.char_to_token(start_idx)
end_positions_context = context_encodings.char_to_token(end_idx-1)
# here we will compute the start and end position of the answer in the whole example
# as the example is encoded like this <s> question</s></s> context</s>
# and we know the postion of the answer in the context
# we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)
# this will give us the position of the answer span in whole example
sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
start_positions = start_positions_context + sep_idx + 1
end_positions = end_positions_context + sep_idx + 1
if end_positions > 512:
start_positions, end_positions = 0, 0
encodings.update({'start_positions': start_positions,
'end_positions': end_positions,
'attention_mask': encodings['attention_mask']})
return encodings
```
Then I run `dataset.map(convert_to_features)`, and it raises
```
In [59]: a.map(convert_to_features)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-c453b508761d> in <module>
----> 1 a.map(convert_to_features)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/opt/conda/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj)
1436 globs, obj.__name__,
1437 obj.__defaults__, obj.__closure__,
-> 1438 obj.__dict__, fkwdefaults), obj=obj)
1439 else:
1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj)
787 write(MARK)
788 for element in obj:
--> 789 save(element)
790
791 if id(obj) in memo:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle Tokenizer objects
```
| 665 |
https://github.com/huggingface/datasets/issues/664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | [
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activ... |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises errors.
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
TypeError: 'NoneType' object is not callable
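For reference, a minimal hypothetical skeleton of what `load_dataset` expects to find in a local script, i.e. at least one class inheriting from `datasets.GeneratorBasedBuilder`, as the first comment above points out (all names and features below are made up):

```python
import datasets

class MySquad(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"question": datasets.Value("string"), "context": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": "train.json"})
        ]

    def _generate_examples(self, filepath):
        # yield (key, example) pairs read from `filepath` here
        yield 0, {"question": "placeholder", "context": "placeholder"}
```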
| 664 |
https://github.com/huggingface/datasets/issues/657 | Squad Metric Description & Feature Mismatch | [
"Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `refere... | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | 657 |
https://github.com/huggingface/datasets/issues/651 | Problem with JSON dataset format | [
"Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```",
"or you can make a custom ... | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records it's basically a dictionary of key value pairs with the keys being the record_ids and the values being the corresponding record.
Reading this with json:
```
data = datasets.load_dataset('json', data_files='path_to_local.json')
```
Throws an error and asks me to choose a field. What's the right way to handle this? | 651 |
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | [
"Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps",
"Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but ac... | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
# assumes `import os`, `import datasets`, and `from itertools import chain` at the top of the script
def _split_generators(self, dl_manager):
dl_dir = dl_manager.download_and_extract(_URL)
owt_dir = os.path.join(dl_dir, 'openwebtext')
subset_xzs = [
os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz') # filter out ...xz.lock
]
ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count()*0.75))
nested_txt_files = [
[
os.path.join(ex_dir,txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt')
] for ex_dir in ex_dirs
]
txt_files = chain(*nested_txt_files)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files}
),
]
```
All went well: I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me.
What should I do? Or could you modify `MockDownloadManager` to behave like a real `DownloadManager`? | 650 |
https://github.com/huggingface/datasets/issues/649 | Inconsistent behavior in map | [
"Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week"
] | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset[0])
# outputs
{'field': 'a'}
# Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital'
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
print(dataset[0])
# output is okay
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield'
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])
# printing out the first example after applying the map shows that the new key 'append_x' doesn't get added
# it also messes up the value stored at 'capital'
{'field': 'a', 'otherfield': {'capital': None}}
# Instead, I try to do the same thing by using a different mapped fn
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])
# this preserves the value under capital, but still no 'append_x'
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Instead, I try to pass 'otherfield' to remove_columns
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])
# this still doesn't fix the problem
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset.
# Recreate the dataset
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
# Now map the entire 'otherfield' dict directly, instead of incrementally as before
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])
# This looks good!
{'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}
```
This might be a new issue, because I didn't see this behavior in the `nlp` library.
Any help is appreciated! | 649 |
https://github.com/huggingface/datasets/issues/648 | offset overflow when multiprocessing batched map on large datasets. | [
"This should be fixed with #645 ",
"Feel free to re-open if it still occurs"
] | It only happened when "multiprocessing" + "batched" + "large dataset" were used at the same time.
```
def bprocess(examples):
examples['len'] = []
for text in examples['text']:
examples['len'].append(len(text))
return examples
wiki.map(bprocess, batched=True, num_proc=8)
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
batch = self[i : i + batch_size]
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
data_subset = self._data.take(indices_array)
File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
return call_function('take', [data, indices], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""
The above exception was the direct cause of the following exception:
ArrowInvalid Traceback (most recent call last)
in
30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
33 dsets.append(e_owt)
34
~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
126 writer_batch_size=10**4,
127 num_proc=num_proc,
--> 128 **kwargs
129 )
130
~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)
24
25 @patch
~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/datasets/src/datasets/arrow_dataset.py in (.0)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
ArrowInvalid: offset overflow while concatenating arrays
``` | 648 |
https://github.com/huggingface/datasets/issues/647 | Cannot download dataset_info.json | [
"Thanks for reporting !\r\nWe should add support for servers without internet connection indeed\r\nI'll do that early next week",
"Thanks, @lhoestq !\r\nPlease let me know when it is available. ",
"Right now the recommended way is to create the dataset on a server with internet connection and then to save it an... | I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json
```
I tried to open this link manually, but I cannot access this file. How can I download this file and pass it to `datasets.load_dataset()` manually?
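A minimal sketch of the workaround mentioned in the comments, assuming a version of `datasets` that provides `save_to_disk`/`load_from_disk` (paths are illustrative):
```python
# on a machine with internet access
from datasets import load_dataset
dataset = load_dataset("text", data_files="train.txt", split="train")
dataset.save_to_disk("/shared/my_text_dataset")

# on the compute node without internet access, after copying the folder over
from datasets import load_from_disk
dataset = load_from_disk("/shared/my_text_dataset")
```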
Versions:
Python version 3.7.3
PyTorch version 1.6.0
TensorFlow version 2.3.0
datasets version: 1.0.1
| 647 |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | [
"Thanks for reporting !\r\nIt uses a temporary file to write the data.\r\nHowever it looks like the temporary file is not placed in the right directory during the processing",
"Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.\r\nWhich version of `d... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = dataset.map(encode, batched=True)
```
The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive fs and do the work there.
The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder: the Colab HD usage starts to grow and it crashes, even though the work should be happening on the Drive fs.
What drives me crazy is that it prints that it is processing/encoding the dataset in the right folder:
```
Testing the mapped function outputs
Testing finished, running the mapping function on the dataset
Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow
``` | 643 |
https://github.com/huggingface/datasets/issues/638 | GLUE/QQP dataset: NonMatchingChecksumError | [
"Hi ! Sure I'll take a look"
] | Hi @lhoestq , I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚
datasets version: editable install of master at 9/17
`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`
```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
in
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')
~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
467 if not downloaded_from_gcs:
468 self._download_and_prepare(
--> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
470 )
471 # Sync info
~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
527 if verify_infos:
528 verify_checksums(
--> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
530 )
531
~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
``` | 638 |
https://github.com/huggingface/datasets/issues/633 | Load large text file for LM pre-training resulting in OOM | [
"Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?",
"There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.",
"@lhoestq @sgugger Thanks for your comments. I have install from source ... | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator used for language modeling based on DataCollatorForLazyLanguageModeling
- collates batches of tensors, honoring their tokenizer's pad_token
- preprocesses batches for masked language modeling
"""
block_size: int = 512
def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:
examples = [example['text'] for example in examples]
batch, attention_mask = self._tensorize_batch(examples)
if self.mlm:
inputs, labels = self.mask_tokens(batch)
return {"input_ids": inputs, "labels": labels}
else:
labels = batch.clone().detach()
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
return {"input_ids": batch, "labels": labels}
def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
if self.tokenizer._pad_token is None:
raise ValueError(
"You are attempting to pad samples but the tokenizer you are using"
f" ({self.tokenizer.__class__.__name__}) does not have one."
)
tensor_examples = self.tokenizer.batch_encode_plus(
[ex for ex in examples if ex],
max_length=self.block_size,
return_tensors="pt",
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
)
input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"]
return input_ids, attention_mask
dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train')
data_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True,
mlm_probability=0.15, block_size=tokenizer.max_len)
trainer = Trainer(model=model, args=args, data_collator=data_collator,
train_dataset=dataset, prediction_loss_only=True, )
trainer.train(model_path=model_path)
```
This train.txt is about 1.1GB and has 90k lines where each line is a sequence of 4k words.
During training, the memory usage increased rapidly, as shown in the following graph, and resulted in OOM before training finished.

Could you please give me any suggestions on why this happened and how to fix it?
Thanks. | 633 |
https://github.com/huggingface/datasets/issues/630 | Text dataset not working with large files | [
"Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.",
"Can you give us some stats on the data files you use as inputs?",
"Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets... | ```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
dataset = load_dataset("text", data_files=file_path, split='train+test')
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
convert_options=self.config.convert_options,
File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
It gives the same message for both 200MB and 10GB .txt files, but not for a 700MB file.
Can't upload the files due to size & copyright problems, sorry. | 630 |
https://github.com/huggingface/datasets/issues/629 | straddling object straddles two block boundaries | [
"sorry it's an apache arrow issue."
] | I am trying to read json data (it's an array with lots of dictionaries) and I am getting the block boundaries issue below:
I tried calling read_json with ReadOptions but no luck.
```
table = json.read_json(fn)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/_json.pyx", line 246, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
| 629 |
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | [
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd t... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)).
As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:
```python
def preprocess(sentences: List[str]):
token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
sembeddings = stransformer.encode(sentences)
print(sembeddings.dtype)
return {"input_ids": token_ids, "sembedding": sembeddings}
```
Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32.
It appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as was my case.
My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```
This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.
```python
import torch
import numpy as np
l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)
# torch.float32
print(np_array.dtype)
# float64
print(np_to_torch.dtype)
# torch.float64
```
This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.
The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed. | 625 |
https://github.com/huggingface/datasets/issues/624 | Add learningq dataset | [] | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| 624 |
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | [
"Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label... | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the following code:
```Python
from pathlib import Path
import wget
EMOTION_PATH = Path("./data/emotion")
DOWNLOAD_URLS = [
"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1",
"https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1",
"https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1",
]
if not Path.is_dir(EMOTION_PATH):
Path.mkdir(EMOTION_PATH)
for url in DOWNLOAD_URLS:
wget.download(url, str(EMOTION_PATH))
```
The first five lines of the train set are:
```
i didnt feel humiliated;sadness
i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
im grabbing a minute to post i feel greedy wrong;anger
i am ever feeling nostalgic about the fireplace i will know that it is still on the property;love
i am feeling grouchy;anger
```
Here is the code to reproduce the issue:
```Python
from datasets import Features, Value, ClassLabel, load_dataset
class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
emotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)})
file_dict = {'train': EMOTION_PATH/'train.txt'}
dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'], features=emotion_features)
```
**Observed behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': Value(dtype='string', id=None)}
```
**Expected behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}
```
**Things I've tried:**
- deleting the cache
- trying other types such as `int64`
Am I missing anything? Thanks for any pointer in the right direction. | 623 |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | [
"Can you give us more information on your os and pip environments (pip list)?",
"@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that you can use a string as input for data_files, but the signature is `Union[Dict, List]`.)
The problem on Linux is that the script crashes with a CSV error (even though it isn't a CSV file). On Windows the script just seems to freeze or get stuck after loading the config file.
Linux stack trace:
```
PyTorch version 1.6.0+cu101 available.
Checking /home/bram/.cache/huggingface/datasets/b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.json
Using custom data configuration default
Generating dataset text (/home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7)
Downloading and preparing dataset text/default-0907112cc6cd2a38 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7...
Dataset not on Hf google storage. Downloading and preparing it from source
Downloading took 0.0 min
Checksum Computation took 0.0 min
Unable to verify checksums.
Generating split train
Traceback (most recent call last):
File "/home/bram/Python/projects/dutch-simplification/utils.py", line 45, in prepare_data
dataset = load_dataset("text", data_files=dataset_f)
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/load.py", line 608, in load_dataset
builder_instance.download_and_prepare(
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 468, in download_and_prepare
self._download_and_prepare(
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 546, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 888, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "/home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 100, in _generate_tables
pa_table = pac.read_csv(
File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2
```
Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:
```
Checking C:\Users\bramv\.cache\huggingface\datasets\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text\dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.json
Using custom data configuration default
```
| 622 |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | [
"It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = col... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
rel_ds_dict["validation"] = rel_ds_dict["test"]
return ner_ds_dict, rel_ds_dict
```
The first train_test_split, `ner_ds`/`ner_ds_dict`, returns `train` and `test` splits that are iterable.
The second, `rel_ds`/`rel_ds_dict` in this case, returns a Dataset dict that has rows, but selecting from or slicing into it returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.
Ok, I think I know the problem -- the rel_ds was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads.
I also see errors with other map and filter functions when `num_proc` is set.
```
Done writing 67 indices in 536 bytes .
Done writing 67 indices in 536 bytes .
Fatal Python error: PyCOND_WAIT(gil_cond) failed
``` | 620 |
https://github.com/huggingface/datasets/issues/619 | Mistakes in MLQA features names | [
"Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?"
] | I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA etc. and hence make it easier to concatenate multiple QA datasets.
* The features names are not the same as the ones provided in the original MLQA datasets (it uses the names I suggested).
I know these columns can be renamed using `Dataset.rename_column_`; `questions` and `ids` can be easily renamed, but `start` on the other hand is annoying to rename since it's nested inside the feature `answers`.
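For the two flat columns, a minimal sketch of the in-place rename (config and split names are illustrative); the nested `start` field is the part that still lacks a clean solution:
```python
from datasets import load_dataset

mlqa = load_dataset("mlqa", "mlqa.en.en", split="test")
mlqa.rename_column_("questions", "question")
mlqa.rename_column_("ids", "id")
# `answers.start` is nested inside the `answers` feature, so a simple rename does not reach it
```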
| 619 |
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | [
"Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two t... | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Can you make sure the google-research implementation you are using matches the official perl implementation?
There are a couple of python wrappers around the perl implementation: [this](https://pypi.org/project/pyrouge/) has been commonly used, and [this](https://github.com/pltrdy/files2rouge) is used in fairseq.
There's also a python reimplementation [here](https://github.com/pltrdy/rouge) but its RougeL numbers are way off.
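For anyone reproducing the comparison, a minimal sketch of scoring with the metric in this repo, so the numbers can be put side by side with the perl wrappers (the result structure is assumed from the rouge_score backend):
```python
from datasets import load_metric

rouge = load_metric("rouge")
results = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat was on the mat"],
)
print(results["rougeL"].mid.fmeasure)
```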
| 617 |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | [
"I have the same issue",
"Same issue here when Trying to load a dataset from disk.",
"I am also experiencing this issue, and don't know if it's affecting my training.",
"Same here. I hope the dataset is not being modified in-place.",
"I think the only way to avoid this warning would be to do a copy of the n... | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this strange Userwarning without a stack trace:
> Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns.
> C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\datasets\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:141.)
> return torch.tensor(x, **format_kwargs)
The first one might not be related to the warning, but it is odd that it is shown, too. It is unclear whether that is something that I should do or something that the program is doing at that moment.
Snippet:
```python
import torch
from datasets import Dataset
from transformers import AutoTokenizer

dataset = Dataset.from_dict(torch.load("data/dummy.pt.pt"))
print(dataset)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
keys_to_retain = {"input_ids", "sembedding"}
dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True)
dataset.remove_columns_(set(dataset.column_names) - keys_to_retain)
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=2)
print(next(iter(dataloader)))
```
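If the warning is only noise for a given run, a minimal sketch of silencing it on the user side (this does not change how the tensors are created):
```python
import warnings

# suppress the non-writeable NumPy array warning raised during torch formatting
warnings.filterwarnings("ignore", message="The given NumPy array is not writeable")
```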
PS: the input type for `remove_columns_` should probably be an Iterable rather than just a List. | 616 |
https://github.com/huggingface/datasets/issues/615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | [
"Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_in... | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-381aedc9811b> in <module>
----> 1 wikipedia[[0]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1069 format_columns=self._format_columns,
1070 output_all_columns=self._output_all_columns,
-> 1071 format_kwargs=self._format_kwargs,
1072 )
1073
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1037 )
1038 else:
-> 1039 data_subset = self._data.take(indices_array)
1040
1041 if format_type is not None:
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
266 """
267 options = TakeOptions(boundscheck)
--> 268 return call_function('take', [data, indices], options)
269
270
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: offset overflow while concatenating arrays
```
It seems to work fine with small datasets or with pyarrow 0.17.1 | 615 |
https://github.com/huggingface/datasets/issues/611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | [
"Can you give us stats/information on your pandas DataFrame?",
"```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n... | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)
~/miniconda3/envs/dev/lib/python3.7/site-packages/nlp/arrow_dataset.py in from_pandas(cls, df, features, info, split)
223 info.features = features
224 pa_table: pa.Table = pa.Table.from_pandas(
--> 225 df=df, schema=pa.schema(features.type) if features is not None else None
226 )
227 return cls(pa_table, info=info, split=split)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pandas()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)
591 for i, maybe_fut in enumerate(arrays):
592 if isinstance(maybe_fut, futures.Future):
--> 593 arrays[i] = maybe_fut.result()
594
595 types = [x.type for x in arrays]
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
426 raise CancelledError()
427 elif self._state == FINISHED:
--> 428 return self.__get_result()
429
430 self._condition.wait(timeout)
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
557
558 try:
--> 559 result = pa.array(col, type=type_, from_pandas=True, safe=safe)
560 except (pa.ArrowInvalid,
561 pa.ArrowNotImplementedError,
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```
My code is:
```python
from nlp import Dataset
dataset = Dataset.from_pandas(emb)
``` | 611 |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | [
"Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}",
"Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a RoBERTa from scratch using transformers, but I got OOM issues when loading a large text file.
Following the suggestion from @thomwolf, I tried to use `datasets` to load my text file. This test.txt is a simple sample where each line is a sentence.
```
from datasets import load_dataset
dataset = load_dataset('text', data_files='test.txt',cache_dir="./")
dataset.set_format(type='torch',columns=["text"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
next(iter(dataloader))
```
But the dataloader cannot yield a sample and the error is:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-12-388aca337e2f> in <module>
----> 1 next(iter(dataloader))
/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
401 def _next_data(self):
402 index = self._next_index() # may raise StopIteration
--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
404 if self._pin_memory:
405 data = _utils.pin_memory.pin_memory(data)
/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
KeyError: 0
```
`dataset.set_format(type='torch',columns=["text"])` returns a log says:
```
Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.
```
I noticed the dataset is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`.
Each sample can be accessed by `dataset["train"]["text"]` instead of `dataset["text"]`.
Could you please give me any suggestions on how to modify this code to load the text file?
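A minimal sketch of the change suggested in the first comment, selecting the `train` split so a `Dataset` (not a `DatasetDict`) is returned:
```python
import torch
from datasets import load_dataset

dataset = load_dataset('text', data_files='test.txt', cache_dir='./', split='train')
dataset.set_format(type='torch', columns=['text'])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
print(next(iter(dataloader)))
```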
Versions:
Python version 3.7.3
PyTorch version 1.6.0
TensorFlow version 2.3.0
datasets version: 1.0.1 | 610 |
https://github.com/huggingface/datasets/issues/608 | Don't use the old NYU GLUE dataset URLs | [
"Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !"
] | NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR?
See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/1112 | 608 |
https://github.com/huggingface/datasets/issues/600 | Pickling error when loading dataset | [
"When I change from python3.6 to python3.8, it works! ",
"Does it work when you install `nlp` from source on python 3.6?",
"No, still the pickling error.",
"I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also t... | Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error:
```
Traceback (most recent call last):
File "src/run_language_modeling.py", line 319, in <module>
main()
File "src/run_language_modeling.py", line 248, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "src/run_language_modeling.py", line 139, in get_dataset
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)
File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map
new_fingerprint=new_fingerprint,
File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/data/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps
dump(obj, file)
File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
obj.__bases__, _dict), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
``` | 600 |
https://github.com/huggingface/datasets/issues/598 | The current version of the package on github has an error when loading dataset | [
"Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class",
"I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time... | Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine):
To recreate the error:
First, install nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
```
Then run:
```
from nlp import load_dataset
dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
```
will give this error:
```
>>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
Checking /home/zeyuy/.cache/huggingface/datasets/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Found script file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.py
Found dataset infos file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/dataset_infos.json to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.json
Loading Dataset Infos from /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Overwrite dataset info from restored data version.
Loading Dataset info from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Reusing dataset wikitext (/home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d)
Constructing Dataset for split train, from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/load.py", line 600, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 611, in as_dataset
datasets = utils.map_nested(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 216, in map_nested
return function(data_struct)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 631, in _build_single_dataset
ds = self._as_dataset(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 704, in _as_dataset
return Dataset(**dataset_kwargs)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/arrow_dataset.py", line 188, in __init__
self._fingerprint = generate_fingerprint(self)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 91, in generate_fingerprint
hasher.update(key)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 361, in dumps
with _no_cache_fields(obj):
File "/home/zeyuy/miniconda3/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 348, in _no_cache_fields
if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, "cache") and isinstance(obj.cache, dict):
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
```
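The failing check is `isinstance(obj, tr.PreTrainedTokenizerBase)`, and `PreTrainedTokenizerBase` only exists in newer `transformers` releases, so an older install triggers the `AttributeError`. A minimal guarded check along these lines (a sketch of the idea, not the shipped fix) would avoid it:
```python
import transformers as tr

def is_pretrained_tokenizer(obj):
    # Only touch PreTrainedTokenizerBase when the installed transformers defines it;
    # on older releases we simply skip the tokenizer-specific handling.
    tokenizer_base = getattr(tr, "PreTrainedTokenizerBase", None)
    return tokenizer_base is not None and isinstance(obj, tokenizer_base)
```
Upgrading `transformers` to a release that defines `PreTrainedTokenizerBase` should also let the fingerprinting step pass.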
| 598 |
https://github.com/huggingface/datasets/issues/597 | Indices incorrect with multiprocessing | [
"I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?",
"Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we ar... | When `num_proc` > 1, the indices argument passed to the map function is incorrect:
```python
from nlp import load_dataset

d = load_dataset('imdb', split='test[:1%]')
def fn(x, inds):
print(inds)
return x
d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)
# [0, 1]
# [0, 1]
# [0, 1, 2, 3, 4]
# [0, 1, 2, 3, 4]
```
As you can see, each worker process receives indices starting from 0 rather than the examples' actual positions in `d`. | 597 |
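A possible workaround sketch while the bug is open: split the dataset manually with `select` and shift the per-shard indices by the shard's starting position (hypothetical code, not the library's fix, and it runs the shards sequentially rather than in separate processes):
```python
from functools import partial

from nlp import load_dataset

d = load_dataset('imdb', split='test[:1%]').select(range(10))

def fn(x, inds, offset=0):
    # shift the per-shard indices back to positions in the full dataset
    print([i + offset for i in inds])
    return x

num_proc = 2
shard_size = len(d) // num_proc
for worker in range(num_proc):
    start = worker * shard_size
    end = len(d) if worker == num_proc - 1 else start + shard_size
    d.select(range(start, end)).map(partial(fn, offset=start), with_indices=True, batched=True)
```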
https://github.com/huggingface/datasets/issues/595 | `Dataset`/`DatasetDict` has no attribute 'save_to_disk' | [
"`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?",
"> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\... | Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.py` which is saved after `pip install nlp -U` in my `conda` environment DOES NOT contain the `save_to_disk` method. I even tried `pip install git+https://github.com/huggingface/nlp.git ` and still no luck. Do I need to install the library in another way? | 595 |
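When a method that clearly exists in the repository is missing at runtime, the usual culprit is a stale install shadowing the new code. A quick diagnostic sketch:
```python
import nlp

# Which build is actually imported, and from where?
print(nlp.__version__, nlp.__file__)
# save_to_disk should be present on Dataset in recent builds
print(hasattr(nlp.Dataset, "save_to_disk"))
```
If the last line prints `False`, uninstalling `nlp` before reinstalling from GitHub (or using a fresh environment, as in the resolution above) picks up the new code.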
https://github.com/huggingface/datasets/issues/590 | The process cannot access the file because it is being used by another process (windows) | [
"Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.",
"I'm using version 0.4.0.\r\n\r\n",
... | Hi, I consistently get the following error when developing in my PC (windows 10):
```
train_dataset = train_dataset.map(convert_to_features, batched=True)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
shutil.move(tmp_file.name, cache_file_name)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\shutil.py", line 803, in move
os.unlink(src)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\saareliad\\.cache\\huggingface\\datasets\\squad\\plain_text\\1.0.0\\408a8fa46a1e2805445b793f1022e743428ca739a34809fce872f0c7f17b44ab\\tmpsau1bep1'
``` | 590 |
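For context, `WinError 32` means the OS refuses to move or unlink a file that still has an open handle. A standalone illustration of the pattern on Windows (an assumption about the cause here, not the library's code):
```python
import shutil
import tempfile

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"data")
# On Windows, moving or unlinking the file while `tmp` is still open raises
# PermissionError: [WinError 32]; closing the handle first avoids it.
tmp.close()
shutil.move(tmp.name, "dest.arrow")  # "dest.arrow" is just a placeholder target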
https://github.com/huggingface/datasets/issues/589 | Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' | [] |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/root/anaconda3/envs/pytorch/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/datasets/text/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08/text.py", line 9, in <module>
logger = nlp.utils.logging.get_logger(__name__)
AttributeError: module 'nlp.utils' has no attribute 'logging'
```
This occurs with the following code, or with any code that calls `load_dataset('text')`:
```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
``` | 589 |
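The downloaded `text.py` script expects `nlp.utils.logging`, which older installed releases do not provide, so the installed library and the fetched script are out of sync. A quick check (diagnostic sketch):
```python
import nlp
import nlp.utils

print(nlp.__version__)
# False on older releases -> the dataset script is newer than the installed library
print(hasattr(nlp.utils, "logging"))
```
Upgrading the installed library so it matches the script it downloads should resolve the `AttributeError`.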
https://github.com/huggingface/datasets/issues/583 | ArrowIndexError on Dataset.select | [] | If the indices table consists of several chunks, then `dataset.select` raises an `ArrowIndexError` for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
shuffled.select(list(range(len(shuffled))))
```
raises:
```python
---------------------------------------------------------------------------
ArrowIndexError Traceback (most recent call last)
<ipython-input-64-006a5d38d418> in <module>
----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))
~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
1653 if self._indices is not None:
1654 if PYARROW_V0:
-> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array)
1656 else:
1657 indices_array = self._indices.column(0).take(indices_array)
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: take index out of bounds
```
This is because `take` is only applied to the first chunk, which contains only 1000 elements by default (mnli has ~400,000 elements).
Shall we change that to use
```python
pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)
```
instead of `take`? @thomwolf | 583 |
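For reference, the behaviour can be reproduced with pyarrow alone: `take` on a single chunk cannot see rows stored in later chunks, whereas taking from the whole `ChunkedArray` (available in pyarrow >= 1.0) resolves indices across chunks. A standalone sketch, independent of the `nlp` code:
```python
import pyarrow as pa

chunked = pa.chunked_array([list(range(1000)), list(range(1000, 2000))])

# Only the first chunk is visible here, so index 1500 is out of bounds:
# chunked.chunk(0).take(pa.array([1500]))   # ArrowIndexError: take index out of bounds

# Taking from the whole chunked array handles indices across chunks (pyarrow >= 1.0):
print(chunked.take(pa.array([1500])))
```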
https://github.com/huggingface/datasets/issues/582 | Allow for PathLike objects | [] | Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 564, in _save_info
self.info.write_to_directory(self._cache_dir)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 149, in write_to_directory
self._dump_info(f)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 156, in _dump_info
file.write(json.dumps(asdict(self)).encode("utf-8"))
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not WindowsPath
```
We have to cast to a string explicitly to make this work. It would be nicer if we could actually use PathLike objects.
```python
files = [str(f) for f in Path(r"D:\corpora\wablieft").glob("*.txt")]
```
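A sketch of the kind of normalization that would let `load_dataset` accept PathLike inputs directly (a hypothetical helper, not the library's code):
```python
import os

def normalize_data_files(data_files):
    # Coerce PathLike objects (and containers of them) to plain strings
    if isinstance(data_files, (str, os.PathLike)):
        return os.fspath(data_files)
    if isinstance(data_files, (list, tuple)):
        return [os.fspath(f) for f in data_files]
    if isinstance(data_files, dict):
        return {split: normalize_data_files(files) for split, files in data_files.items()}
    return data_files
```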
| 582 |
https://github.com/huggingface/datasets/issues/581 | Better error message when input file does not exist | [] | In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not falsy (a sketch of such a check follows the error trace below).
```python
dataset = load_dataset("text", data_files=[])
```
Example error trace.
```
Using custom data configuration default
Downloading and preparing dataset text/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\Users\bramv\.cache\huggingface\datasets\text\default-d18f9b6611eb8e16\0.0.0\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b...
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 424, in incomplete_dir
yield tmp_dir
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 813, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\arrow_writer.py", line 217, in finalize
self.pa_writer.close()
AttributeError: 'NoneType' object has no attribute 'close'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "c:\users\bramv\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 430, in incomplete_dir
shutil.rmtree(tmp_dir)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 737, in rmtree
return _rmtree_unsafe(path, onerror)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 615, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 613, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\text-train.arrow'
``` | 581 |
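A sketch of the early validation suggested above (hypothetical code, not the library's implementation):
```python
import os

def check_data_files(data_files):
    # Fail fast with a clear message instead of erroring deep inside the Arrow writer
    if not data_files:
        raise ValueError("`data_files` is empty; pass at least one input file.")
    files = data_files if isinstance(data_files, (list, tuple)) else [data_files]
    for f in files:
        if not os.path.isfile(f):
            raise FileNotFoundError(f"Input file not found: {f}")
```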
https://github.com/huggingface/datasets/issues/580 | nlp re-creates already-there caches when using a script, but not within a shell | [
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq. | 580 |
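A possible workaround sketch while the fingerprinting of lambdas run from scripts is investigated: pin the cache file names explicitly so cache reuse does not depend on how the function is hashed (this assumes your version of `filter` accepts `cache_file_name`):
```python
import nlp

hans_validation = nlp.load_dataset('hans', split="validation")
hans_easy_data = hans_validation.filter(lambda x: x['label'] == 0, cache_file_name="hans_easy.arrow")
hans_hard_data = hans_validation.filter(lambda x: x['label'] == 1, cache_file_name="hans_hard.arrow")
```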
https://github.com/huggingface/datasets/issues/577 | Some languages in wikipedia dataset are not loading | [
"Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for langua... | Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', 'an']
for lang in langs:
data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train')
print(lang, len(data))
```
Here's what I see for 'ar' (it gets stuck there):
```
Downloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...
```
Note that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks! | 577 |
https://github.com/huggingface/datasets/issues/575 | Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | [
"Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.",
"Thanks for the report, I'll give a look!",
"I am also seeing a similar err... | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines):
```
/net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
354 " to False."
355 )
--> 356 raise ConnectionError("Couldn't reach {}".format(url))
357
358 # From now on, connected is True.
ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc
```
I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2.
Since this was not working, I thought I'll try another dataset. So I tried downloading the imdb dataset:
```
ds = load_dataset('imdb', split='train')
```
This downloads the data, but it just blocks after that:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 1.38MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.07k/2.07k [00:00<00:00, 1.15MB/s]
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s]
```
I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are:
1. Why is it still blocking? Is it still downloading?
2. I specified split as train, so why is the test folder being populated?
3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here?
Thanks.
| 575 |
https://github.com/huggingface/datasets/issues/568 | `metric.compute` throws `ArrowInvalid` error | [
"Hmm might be related to what we are solving in #564",
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ",
"Closin... | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute
self.finalize(timeout=timeout)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize
self.data = Dataset(**reader.read_files(node_files))
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files
dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename
f = pa.ipc.open_stream(mmap)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream
return RecordBatchStreamReader(source)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__
self._open(source)
File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0
``` | 568 |
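For distributed runs, the metric is meant to be created with the process count and rank so each worker writes its own cache file before the main process gathers them. A sketch of that setup (assuming `load_metric` accepts `num_process`/`process_id` in this version; the environment variables and the toy inputs are placeholders):
```python
import os

import nlp

# In a real run these come from the distributed launcher (e.g. torch.distributed)
world_size = int(os.environ.get("WORLD_SIZE", 1))
rank = int(os.environ.get("RANK", 0))

rouge = nlp.load_metric("rouge", num_process=world_size, process_id=rank)
rouge.add_batch(predictions=["hello there"], references=["hello there"])

# Only the main process (process_id == 0) receives the final scores
scores = rouge.compute(rouge_types=["rouge1", "rouge2", "rougeL"])
print(scores)
```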