We introduce FinMMDocR, a bilingual multimodal benchmark for evaluating multimodal large language models (MLLMs) on real-world financial numerical reasoning. Compared with existing benchmarks, our work delivers three major advances.
(1) Scenario Awareness: 57.9% of the 1,200 expert-annotated problems incorporate 12 types of implicit financial scenarios (e.g., Portfolio Management), challenging models to perform expert-level, assumption-based reasoning;
(2) Document Understanding: 837 Chinese and English documents across 9 types (e.g., Company Research) average 50.8 pages and are rich in visual elements, substantially surpassing existing benchmarks in both the breadth and depth of financial documents;
(3) Multi-Step Computation: Problems require an average of 11 reasoning steps (5.3 extraction + 5.7 calculation), and 65.0% require cross-page evidence (2.4 pages on average).
The best-performing MLLM achieves only 58.0% accuracy, and different retrieval-augmented generation (RAG) methods vary significantly in performance on this task. We expect FinMMDocR to drive improvements in MLLMs and reasoning-enhanced methods on complex, real-world multimodal reasoning tasks.
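To make the multi-step computation format in (3) concrete, below is a minimal Python sketch of the extraction-then-calculation pattern a problem might require. The scenario (computing return on equity), page numbers, and figures are all hypothetical, invented for illustration rather than taken from the dataset:

```python
# Hypothetical illustration of a FinMMDocR-style problem: evidence is
# first extracted from different report pages, then combined through a
# chain of calculation steps. All values below are invented.

# Extraction steps: figures a model would read off separate pages
# (cross-page evidence, as in 65.0% of the benchmark's problems).
evidence = {
    "net_income":         {"page": 12, "value": 450.0},   # in millions
    "total_equity_open":  {"page": 34, "value": 5200.0},
    "total_equity_close": {"page": 35, "value": 5600.0},
}

# Calculation steps: average equity, then return on equity (ROE).
avg_equity = (evidence["total_equity_open"]["value"]
              + evidence["total_equity_close"]["value"]) / 2  # calc step 1
roe = evidence["net_income"]["value"] / avg_equity            # calc step 2

pages = sorted({e["page"] for e in evidence.values()})
print(f"ROE = {roe:.2%}, evidence drawn from pages {pages}")
# -> ROE = 8.33%, evidence drawn from pages [12, 34, 35]
```

Real benchmark problems chain roughly twice as many steps on average and may additionally hinge on an implicit scenario assumption, per (1) above.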
@misc{tang2025finmmdocrbenchmarkingfinancialmultimodal,
  title={FinMMDocR: Benchmarking Financial Multimodal Reasoning with Scenario Awareness, Document Understanding, and Multi-Step Computation},
  author={Zichen Tang and Haihong E and Rongjin Li and Jiacheng Liu and Linwei Jia and Zhuodi Hao and Zhongjun Yang and Yuanze Li and Haolin Tian and Xinyi Hu and Peizhi Zhao and Yuan Liu and Zhengyu Wang and Xianghe Wang and Yiling Huang and Xueyuan Lin and Ruofei Bai and Zijian Xie and Qian Huang and Ruining Cao and Haocheng Gao},
  year={2025},
  eprint={2512.24903},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.24903},
}