---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
---

# StreamGaze Dataset

**StreamGaze** is a comprehensive streaming video benchmark for evaluating multimodal large language models (MLLMs) on gaze-based QA tasks across past, present, and future contexts.

## 📁 Dataset Structure

```
streamgaze/
├── metadata/
│   ├── egtea.csv                           # EGTEA fixation metadata
│   ├── egoexolearn.csv                     # EgoExoLearn fixation metadata
│   └── holoassist.csv                      # HoloAssist fixation metadata
│
├── qa/
│   ├── past_gaze_sequence_matching.json
│   ├── past_non_fixated_object_identification.json
│   ├── past_object_transition_prediction.json
│   ├── past_scene_recall.json
│   ├── present_future_action_prediction.json
│   ├── present_object_attribute_recognition.json
│   ├── present_object_identification_easy.json
│   ├── present_object_identification_hard.json
│   ├── proactive_gaze_triggered_alert.json
│   └── proactive_object_appearance_alert.json
│
└── videos/
    ├── videos_egtea_original.tar.gz        # EGTEA original videos
    ├── videos_egtea_viz.tar.gz             # EGTEA with gaze visualization
    ├── videos_egoexolearn_original.tar.gz  # EgoExoLearn original videos
    ├── videos_egoexolearn_viz.tar.gz       # EgoExoLearn with gaze visualization
    ├── videos_holoassist_original.tar.gz   # HoloAssist original videos
    └── videos_holoassist_viz.tar.gz        # HoloAssist with gaze visualization
```

## 🎯 Task Categories

### **Past (Historical Context)**
- **Gaze Sequence Matching**: Match gaze patterns to action sequences
- **Non-Fixated Object Identification**: Identify objects the user never gazed at
- **Object Transition Prediction**: Predict object state changes
- **Scene Recall**: Recall scene details from memory

### **Present (Current Context)**
- **Object Identification (Easy/Hard)**: Identify objects inside/outside the field of view (FOV)
- **Object Attribute Recognition**: Recognize object attributes
- **Future Action Prediction**: Predict upcoming actions

### **Proactive (Future-Oriented)**
- **Gaze-Triggered Alert**: Alert based on gaze patterns
- **Object Appearance Alert**: Alert when an object appears

## 📥 Usage

### Extract Videos

```bash
# Create the target directories first (tar -C requires them to exist)
mkdir -p videos/{egtea,egoexolearn,holoassist}/{original,viz}

# Extract EGTEA videos
tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
tar -xzf videos_egtea_viz.tar.gz -C videos/egtea/viz/

# Extract EgoExoLearn videos
tar -xzf videos_egoexolearn_original.tar.gz -C videos/egoexolearn/original/
tar -xzf videos_egoexolearn_viz.tar.gz -C videos/egoexolearn/viz/

# Extract HoloAssist videos
tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
```

## 🔑 Metadata Format

Each metadata CSV contains:
- `video_source`: Video identifier
- `fixation_id`: Fixation segment ID
- `start_time_seconds` / `end_time_seconds`: Temporal boundaries of the fixation
- `center_x` / `center_y`: Gaze center coordinates (normalized)
- `representative_object`: Primary object at the gaze point
- `other_objects_in_cropped_area`: Objects within the FOV
- `other_objects_outside_fov`: Objects outside the FOV
- `scene_caption`: Scene description
- `action_caption`: Action description

## 📝 QA Format

Each QA JSON file contains records of the form:

```json
{
  "response_time": "[00:08 - 09:19]",
  "questions": [
    {
      "question": "Among {milk, spoon, pan, phone}, which did the user never gaze at?",
      "time_stamp": "03:14",
      "answer": "A",
      "options": [
        "A. milk",
        "B. spoon",
        "C. pan",
        "D. phone"
      ]
    }
  ],
  "video_path": "OP01-R03-BaconAndEggs.mp4"
}
```
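For a quick sanity check, the sketch below loads one metadata CSV and one QA file using only the Python standard library. It is a minimal example, not official tooling: the `streamgaze/` root path is an assumption matching the layout above, and since the QA example shows a single record, the code handles both a single object and a list of such records per file.

```python
import csv
import json
from pathlib import Path

ROOT = Path("streamgaze")  # assumed local root; adjust to your download location

# Fixation metadata: one row per fixation segment.
with open(ROOT / "metadata" / "egtea.csv", newline="") as f:
    fixations = list(csv.DictReader(f))
first = fixations[0]
print(len(fixations), "fixations;", first["representative_object"],
      first["start_time_seconds"], "-", first["end_time_seconds"])

# QA task file: pair each question with its video and timestamp.
with open(ROOT / "qa" / "past_non_fixated_object_identification.json") as f:
    data = json.load(f)
records = data if isinstance(data, list) else [data]  # top-level shape is assumed

for rec in records:
    for q in rec["questions"]:
        print(rec["video_path"], q["time_stamp"], q["question"], "->", q["answer"])
```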
## 📄 License

This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license. See https://creativecommons.org/licenses/by/4.0/ for the full terms.

## 🔗 Links

- **Evaluation code**: [https://github.com/daeunni/StreamGaze](https://github.com/daeunni/StreamGaze)
- **Project page**: [https://streamgaze.github.io/](https://streamgaze.github.io/)

## 📧 Contact

For questions or issues, please contact daeun@cs.unc.edu.