Video Batch 20260316
This dataset contains a collection of video files and their associated metadata, organized in WebDataset format (tar archives) to support large file sizes and efficient streaming.
Dataset Structure
The dataset consists of multiple .tar shards in the data/ directory. Each .tar file contains pairs of files for every video:
- `{video_id}.mp4`: the raw MP4 video file.
- `{video_id}.json`: a sidecar JSON file containing the video's metadata.
Metadata Schema
The {video_id}.json files contain the following fields:
- `video_id` (str): the unique identifier for the video (usually the YouTube ID).
- `title` (str): the scraped title of the video.
- `category` (str): the video category.
- `source_term` (str): the search term loosely associated with this video.
- `query` (str): the exact query used to find the video.
- `url` (str): the original URL of the video (e.g., the YouTube URL).
- `uploader` (str): the uploader/channel name.
- `channel_id` (str): the unique ID of the channel.
- `upload_date` (str): the date the video was uploaded (YYYYMMDD).
- `duration` (int): the duration of the video in seconds.
- `view_count` (int): the number of views at the time of scraping.
- `file_size_bytes` (int): the actual size of the `.mp4` file in bytes.
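For illustration, a sidecar file for one video might look like the following (all values below are made up, not taken from the dataset):

```json
{
  "video_id": "abc123XYZ_0",
  "title": "Example Video Title",
  "category": "Education",
  "source_term": "cooking tutorial",
  "query": "cooking tutorial for beginners",
  "url": "https://www.youtube.com/watch?v=abc123XYZ_0",
  "uploader": "Example Channel",
  "channel_id": "UCexampleexampleexample0",
  "upload_date": "20260301",
  "duration": 312,
  "view_count": 10450,
  "file_size_bytes": 52428800
}
```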
How to Load the Dataset
Because this dataset is stored in WebDataset format, you can stream it efficiently without having to download hundreds of gigabytes at once.
Using HuggingFace datasets (Streaming)
```python
from datasets import load_dataset

# Load the dataset in streaming mode (recommended for large video datasets)
dataset = load_dataset(
    "potsawee/video-batch-20260316",
    split="train",
    streaming=True,
)

for sample in dataset:
    # 'sample' is a dictionary containing the contents of the tar file for one video
    video_id = sample['__key__']

    # Access the metadata dictionary parsed from the .json file
    metadata = sample['json']
    print(f"Video Title: {metadata['title']}")

    # Access the video content from the .mp4 file; depending on how decoding
    # is set up, this may be the raw bytes (or a path if caching is enabled)
    video_bytes = sample['mp4']
    print(f"Processed video: {video_id}\n")
    break
```
Alternatively, you can load it using the webdataset library directly if you are using PyTorch and want an efficient DataLoader.
Using webdataset library
```python
import webdataset as wds

# Read directly from Hugging Face using direct URLs to the tar shards
url = "https://huggingface.co/datasets/potsawee/video-batch-20260316/resolve/main/data/shard-{000000..000850}.tar"
dataset = wds.WebDataset(url).decode().to_tuple("mp4", "json")

for video_bytes, metadata in dataset:
    print(metadata['title'])
    break
```
Downloading Individual Tar Files
Because each shard is a standard, uncompressed .tar archive, you don't have to use a dataset library. You can simply download any shard-XXXXXX.tar file directly from the Hugging Face repository and extract it using standard tools.
Using Command Line:
```bash
# Download a specific shard
wget https://huggingface.co/datasets/potsawee/video-batch-20260316/resolve/main/data/shard-000005.tar

# Extract the mp4 and json files
tar -xvf shard-000005.tar
```
Using Python:
```python
import tarfile
from huggingface_hub import hf_hub_download

# Download a specific shard to the local cache
tar_path = hf_hub_download(
    repo_id="potsawee/video-batch-20260316",
    filename="data/shard-000005.tar",
    repo_type="dataset",
)

# Extract it
with tarfile.open(tar_path, "r") as tar:
    tar.extractall("./extracted_videos")
```
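After extraction, each video and its metadata can be matched back up by shared basename (the `{video_id}` stem). A minimal sketch, assuming shards were extracted to `./extracted_videos` as in the step above:

```python
import json
from pathlib import Path

def iter_video_pairs(root):
    """Yield (mp4_path, metadata_dict) for each {video_id}.mp4/.json pair under root."""
    for json_path in sorted(Path(root).glob("*.json")):
        mp4_path = json_path.with_suffix(".mp4")
        if not mp4_path.exists():
            continue  # skip orphaned metadata files
        with open(json_path) as f:
            yield mp4_path, json.load(f)

# Usage: iterate over the extracted pairs
for mp4_path, metadata in iter_video_pairs("./extracted_videos"):
    print(mp4_path.name, metadata.get("title"))
```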
Creating Shards
These shards were generated by aggregating the downloaded mp4 files and jsonl logs, partitioning them into chunks of ~500MB each, and packing each chunk sequentially into an uncompressed .tar archive. Keeping shards at this size guarantees that PyArrow's 2GB column limit is never hit, and enables massively parallel streaming.
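The packing step can be sketched roughly as follows; this is an illustrative reconstruction, not the actual build script, and the `pack_shards` function name and paths are hypothetical:

```python
import tarfile
from pathlib import Path

MAX_SHARD_BYTES = 500 * 1024 * 1024  # ~500 MB per shard, well under PyArrow's 2 GB limit

def pack_shards(src_dir, out_dir, max_bytes=MAX_SHARD_BYTES):
    """Pack {video_id}.mp4/.json pairs into sequentially numbered, uncompressed tar shards."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    shard_idx, shard_size, tar = 0, 0, None
    for mp4 in sorted(Path(src_dir).glob("*.mp4")):
        sidecar = mp4.with_suffix(".json")
        pair_size = mp4.stat().st_size + sidecar.stat().st_size
        # Start a new shard when adding this pair would exceed the size budget
        if tar is None or (shard_size + pair_size > max_bytes and shard_size > 0):
            if tar is not None:
                tar.close()
                shard_idx += 1
            tar = tarfile.open(out_dir / f"shard-{shard_idx:06d}.tar", "w")  # "w" = uncompressed
            shard_size = 0
        for member in (mp4, sidecar):
            tar.add(member, arcname=member.name)
        shard_size += pair_size
    if tar is not None:
        tar.close()

# Usage (hypothetical paths): pack_shards("./downloads", "./data")
```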