Visual Question Answering (VQA) requires reasoning over the visually grounded relations between the image and the question context. A crucial aspect of answering complex questions is reliable multi-hop reasoning, i.e., dynamically learning the interplay between visual entities at each step. In this paper, we investigate the potential of a reasoning graph network on multi-hop reasoning questions, especially those requiring more than three “hops.” We call this model the Question-Guided Multi-hop Reasoning Graph Network (QMRGT). It constructs a cross-modal interaction module (CIM)...
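To make the notion of question-guided multi-hop reasoning concrete, the following is a minimal sketch of a single reasoning "hop" over a graph of visual entities, in which a pooled question vector modulates attention over each node's neighbors. The module name (`QuestionGuidedHop`), the tensor layout, and the particular attention formulation are illustrative assumptions for exposition, not the exact QMRGT or CIM design.

```python
# Minimal sketch (assumption): one question-guided reasoning "hop" over a
# graph of visual entities. Names and the attention form are illustrative,
# not the exact QMRGT / CIM implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedHop(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # project question vector
        self.k_proj = nn.Linear(dim, dim)   # project entity (node) features
        self.v_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, node_feats, q_vec, adj):
        # node_feats: (N, d) visual entity features
        # q_vec:      (d,)   pooled question representation
        # adj:        (N, N) 0/1 adjacency of the entity graph
        q = self.q_proj(q_vec)                          # (d,)
        k = self.k_proj(node_feats)                     # (N, d)
        v = self.v_proj(node_feats)                     # (N, d)
        # Question-conditioned relevance score for each entity; the adjacency
        # matrix restricts which neighbors a node may attend to.
        scores = (k * q).sum(-1)                        # (N,)
        scores = scores.unsqueeze(0).expand_as(adj)     # broadcast over source nodes
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)                # (N, N) attention over neighbors
        attn = torch.nan_to_num(attn)                   # isolated nodes get all-zero rows
        msg = attn @ v                                  # (N, d) aggregated neighbor messages
        # Combine each node with the message it received (one "hop").
        return self.out(torch.cat([node_feats, msg], dim=-1))

# Usage sketch: stacking this module T times performs T reasoning hops.
hop = QuestionGuidedHop(dim=16)
nodes = torch.randn(5, 16)                   # 5 visual entities
question = torch.randn(16)                   # pooled question vector
adj = (torch.rand(5, 5) > 0.5).float()       # toy scene-graph adjacency
updated = hop(nodes, question, adj)          # (5, 16) updated entity features
```

Under these assumptions, multi-hop reasoning corresponds to applying such a hop repeatedly, so that information can propagate between entities that are several edges apart in the graph.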