---
license: mit
---
<div align="center">
<h1>
TeleEgo: <br>
Benchmarking Egocentric AI Assistants in the Wild
</h1>

<!-- Project badges -->
<p>
<a href="https://arxiv.org/abs/2510.23981">
<img alt="arXiv" src="https://img.shields.io/badge/ArXiv-2510.23981-b31b1b.svg">
</a>
<a href="https://programmergg.github.io/jrliu.github.io/">
<img alt="Page" src="https://img.shields.io/badge/Project%20Page-Link-green">
</a>
<a href="https://github.com/TeleAI-UAGI/TeleEgo/">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Repository-blue?logo=github">
</a>
</p>

<!-- <img src="assets/teaser.png" alt="Teaser" style="width:80%; max-width:700px;"> -->

📢 **Note**: This project is still under active development, and the benchmark will be continuously updated.
</div>

## 📖 Introduction

**TeleEgo** is a comprehensive **omni benchmark** for **multi-person, multi-scene, multi-task, and multimodal long-term memory reasoning** over egocentric video streams. It reflects realistic personal-assistant scenarios in which continuous egocentric video is collected over hours or even days, requiring models to maintain long-term memory and to perform understanding and cross-memory reasoning on top of it. **Omni** here means that TeleEgo covers the full spectrum of **roles, scenes, tasks, modalities, and memory horizons**, offering an all-round evaluation of egocentric AI assistants.

**TeleEgo provides:**

- 🧠 **Omni-scale, diverse egocentric data** from 5 roles across 4 daily scenarios.
- 🎤 **Multi-modal annotations**: video, narration, and speech transcripts.
- ❓ **Fine-grained QA benchmark**: 3 cognitive dimensions, 12 subcategories.

---

## 📊 Dataset Overview

- **Participants**: 5 (balanced gender)
- **Scenarios**:
  - Work & Study
  - Lifestyle & Routines
  - Social Activities
  - Outings & Culture
- **Recording**: 3 days per participant (~14.4 hours each)
- **Modalities**:
  - Egocentric video streams
  - Speech & conversations
  - Narration and event descriptions

---

## Download

```bash
# Optionally verify archive integrity first
7z t archive.7z.001

# Extract (only the first part needs to be specified)
7z x archive.7z.001

# Or extract to a specific directory
7z x archive.7z.001 -o./extracted_data
```

## Dataset Structure

After extraction, the dataset structure is:

```
TeleEgo/
├── merged_P1_A.json   # QA annotations for Participant 1
├── merged_P2_A.json   # QA annotations for Participant 2
├── merged_P3_A.json   # QA annotations for Participant 3
├── merged_P4_A.json   # QA annotations for Participant 4
├── merged_P5_A.json   # QA annotations for Participant 5
├── merged_P1.mp4      # Video stream for Participant 1 (~46 GB)
├── merged_P2.mp4      # Video stream for Participant 2 (~35 GB)
├── merged_P3.mp4      # Video stream for Participant 3 (~58 GB)
├── merged_P4.mp4      # Video stream for Participant 4 (~57 GB)
├── merged_P5.mp4      # Video stream for Participant 5 (~38 GB)
├── timeline_P1.json   # Temporal annotations for Participant 1
├── timeline_P2.json   # Temporal annotations for Participant 2
├── timeline_P3.json   # Temporal annotations for Participant 3
├── timeline_P4.json   # Temporal annotations for Participant 4
└── timeline_P5.json   # Temporal annotations for Participant 5
```
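Since the annotation files are plain JSON, a quick structural inspection is a reasonable first step. The sketch below is only illustrative: the file path follows the tree above, but the exact QA schema is not documented here, so the code reports only the top-level shape of the data.

```python
import json
from typing import Any


def summarize_annotations(path: str) -> dict[str, Any]:
    """Load an annotation file and summarize its top-level structure.

    The precise QA schema is not specified in this README, so we only
    report whether the file holds a list of records or a single dict,
    plus the keys of the first record if available.
    """
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, list):
        first_keys = sorted(data[0]) if data and isinstance(data[0], dict) else []
        return {"kind": "list", "num_records": len(data), "first_keys": first_keys}
    return {"kind": "dict", "keys": sorted(data)}


if __name__ == "__main__":
    # Path from the tree above; adjust to your extraction directory.
    print(summarize_annotations("TeleEgo/merged_P1_A.json"))
```

Running this once per `merged_P*_A.json` and `timeline_P*.json` file gives a quick overview of the record counts and field names before writing any evaluation code.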

## Alternative Download Methods

If you have difficulty accessing Hugging Face, you can also download the dataset from:

**Baidu Netdisk (百度网盘)**
```
Link: https://pan.baidu.com/s/1TSqfjqeaXdP2TWEpiy_3KA?pwd=7wmh
```

The Baidu Netdisk version contains the **uncompressed data files** (MP4 videos and JSON annotations) directly.

## 🧪 Benchmark Tasks

TeleEgo-QA evaluates models along **three main dimensions**:

1. **Memory**
   - Short-term / Long-term / Ultra-long Memory
   - Entity Tracking
   - Temporal Comparison & Interval

2. **Understanding**
   - Causal Understanding
   - Intent Inference
   - Multi-step Reasoning
   - Cross-modal Understanding

3. **Cross-Memory Reasoning**
   - Cross-temporal Causality
   - Cross-entity Relation
   - Temporal Chain Understanding

Each QA instance includes:

- Question type: Single-choice, Multi-choice, Binary, Open-ended
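The four question types call for different scoring rules. The sketch below is a plausible minimal scorer, not the official TeleEgo evaluation protocol: the type labels, matching rules, and open-ended handling are all assumptions for illustration.

```python
def score_answer(qtype: str, prediction, ground_truth) -> float:
    """Score one QA instance by question type (illustrative, not official).

    Assumed rules:
    - single-choice / binary: exact match on the selected option
    - multi-choice: exact set match over the selected options
    - open-ended: no simple rule; benchmarks typically use an LLM judge
      or a text-similarity metric, so we raise instead of guessing
    """
    if qtype in ("single-choice", "binary"):
        return float(prediction == ground_truth)
    if qtype == "multi-choice":
        # Order-insensitive comparison of the chosen option sets.
        return float(set(prediction) == set(ground_truth))
    if qtype == "open-ended":
        raise NotImplementedError("use an LLM judge or similarity metric")
    raise ValueError(f"unknown question type: {qtype}")
```

Averaging these per-instance scores within each of the twelve subcategories would then yield a per-subcategory accuracy table.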

<!-- ## Baselines

---

## 🤝 Collaborators

Thanks to these amazing people for contributing to the project:

<a href="https://github.com/rebeccaeexu">
<img src="https://avatars.githubusercontent.com/rebeccaeexu" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/DavisWANG0">
<img src="https://avatars.githubusercontent.com/DavisWANG0" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/H-oliday">
<img src="https://avatars.githubusercontent.com/H-oliday" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Xiaolong-RRL">
<img src="https://avatars.githubusercontent.com/Xiaolong-RRL" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/Programmergg">
<img src="https://avatars.githubusercontent.com/Programmergg" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/yiheng-wang-duke">
<img src="https://avatars.githubusercontent.com/yiheng-wang-duke" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/cocowy1">
<img src="https://avatars.githubusercontent.com/cocowy1" width="60px" style="border-radius:50%" />
</a>
<a href="https://github.com/chxy95">
<img src="https://avatars.githubusercontent.com/chxy95" width="60px" style="border-radius:50%" />
</a> -->

## 📚 Citation

If you find **TeleEgo** useful in your research, please cite:

```bibtex
@article{yan2025teleego,
  title={TeleEgo: Benchmarking Egocentric AI Assistants in the Wild},
  author={Yan, Jiaqi and Ren, Ruilong and Liu, Jingren and Xu, Shuning and Wang, Ling and Wang, Yiheng and Wang, Yun and Zhang, Long and Chen, Xiangyu and Sun, Changzhi and others},
  journal={arXiv preprint arXiv:2510.23981},
  year={2025}
}
```

## 🪪 License

This project is licensed under the **MIT License**.
Dataset usage is restricted to **research purposes only**.

---

<!-- ## References

* EgoLife: Towards Egocentric Life Assistant [\[arXiv:2503.03803\]](https://arxiv.org/abs/2503.03803)
* M3-Agent: Seeing, Listening, Remembering, and Reasoning [\[arXiv:2508.09736\]](https://arxiv.org/abs/2508.09736)
* HourVideo: 1-Hour Video-Language Understanding [\[arXiv:2411.04998\]](https://arxiv.org/abs/2411.04998) -->

## 📬 Contact

If you have any questions, please feel free to reach out: chxy95@gmail.com.

---

<div align="center">

<strong>✨ TeleEgo is an omni benchmark, a step toward building personalized AI assistants with true long-term memory, reasoning, and decision-making in real-world wearable scenarios. ✨</strong>

</div>

<!-- <br/> -->

<!-- <div align="center" style="margin-top: 10px;">
<img src="assets/TeleAI.jpg" alt="TeleAI Logo" width="120px" />
<img src="assets/TeleEgo.png" alt="TeleEgo Logo" width="120px" />
</div>
-->