Multimodal Large Language Models (MLLMs) have demonstrated impressive 2D image/video understanding capabilities. However, there is no publicly available standardized benchmark for assessing the abilities of MLLMs to understand 4D objects (3D objects that evolve over time). In this paper, we introduce 4D-Bench, the first benchmark to evaluate the capabilities of MLLMs in 4D object understanding, featuring tasks in 4D object Question Answering (4D object QA) and 4D object captioning. Unlike existing 2D image/video-based benchmarks, 4D-Bench provides 4D objects with diverse categories, high-quality annotations, and tasks that necessitate multi-view spatio-temporal understanding. With 4D-Bench, we evaluate a wide range of open-source and closed-source MLLMs. The results of the 4D object captioning experiments indicate that MLLMs generally exhibit weaker temporal understanding than appearance understanding; notably, while open-source models approach closed-source performance in appearance understanding, they show larger performance gaps in temporal understanding. 4D object QA yields surprising findings: even with simple single-object videos, MLLMs perform poorly, with the state-of-the-art GPT-4o achieving only 63% accuracy against a human baseline of 91%. These findings highlight a substantial gap in 4D object understanding and the need for further advancements in MLLMs.
Q: What does the robot's right hand turn into?
(A) Into the leg. (B) Into the tail. (C) Into the sword. (D) Into the wing.
Q: How many burnt cigarettes are there?
(A) Nine burnt cigarettes (B) Zero burnt cigarettes (C) Three burnt cigarettes (D) Two burnt cigarettes
Caption:
A cartoon-style knight character wears a rounded helmet adorned with a golden unicorn horn, ornate armor with bluish-green shoulder plates edged in bright gold, and boots with golden accents. Draped in a purple cape, the knight is wielding a weapon, tossing it into the air, and catching it in a playful, skillful display.
Our pipeline renders multi-view videos for 4D objects sourced from Objaverse-XL, followed by comprehensive filtering and annotation.
The 4D-Bench dataset comprises two main tasks with comprehensive annotations: 4D object Question Answering (4D object QA) and 4D object captioning.
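The QA task follows the four-option multiple-choice format shown in the examples above and is scored by accuracy. A minimal sketch of such scoring (not the authors' evaluation code; the function name and example data are hypothetical) could look like:

```python
def score_mcq(predictions, answers):
    """Compute multiple-choice accuracy by exact letter match.

    predictions/answers: lists of option letters such as 'A'-'D'.
    """
    assert len(predictions) == len(answers)
    correct = sum(p.strip().upper() == a.strip().upper()
                  for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical example: three questions, two answered correctly.
acc = score_mcq(["C", "B", "A"], ["C", "B", "D"])
print(f"accuracy = {acc:.2f}")  # accuracy = 0.67
```

In practice, a model's free-form response must first be mapped to one of the option letters before scoring; that normalization step is omitted here.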
@misc{zhu20254dbenchbenchmarkingmultimodallarge,
title={4D-Bench: Benchmarking Multi-modal Large Language Models for 4D Object Understanding},
author={Wenxuan Zhu and Bing Li and Cheng Zheng and Jinjie Mai and Jun Chen and Letian Jiang and Abdullah Hamdi and Sara Rojas Martinez and Chia-Wen Lin and Mohamed Elhoseiny and Bernard Ghanem},
year={2025},
eprint={2503.17827},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.17827},
}