Learning to simulate complex physical systems from data has emerged as a promising way to overcome the limitations of traditional numerical solvers, which often incur prohibitive computational costs for high-fidelity solutions. Recent Graph Neural Simulators (GNSs) accelerate simulations by learning dynamics on graph-structured data, yet they often struggle to capture long-range interactions and suffer from error accumulation under autoregressive rollouts.
To address these challenges, we propose the Information-preserving Graph Neural Simulator (IGNS), a graph-based neural simulator built on the principles of Hamiltonian dynamics. This structure guarantees preservation of information across the graph, while the extension to port-Hamiltonian systems lets the model capture a broader class of dynamics, including non-conservative effects. IGNS further incorporates a warmup phase to initialize global context, a geometric encoding to handle irregular meshes, and a multi-step training objective that enforces PDE matching: the trajectory produced by integrating the port-Hamiltonian core is aligned with the ground-truth trajectory, thereby reducing rollout error.
To evaluate these properties systematically, we introduce new benchmarks that target long-range dependencies and challenging external-forcing scenarios. Across all tasks, IGNS consistently outperforms state-of-the-art GNSs, achieving higher accuracy and greater stability on challenging, complex dynamical systems.
IGNS builds on four key components: (i) a port-Hamiltonian formalism to capture both energy-conserving and non-conservative behavior, (ii) a warmup phase to establish global context, (iii) a geometric encoding for irregular meshes, and (iv) a multi-step objective to reduce error accumulation.
We parameterize the Hamiltonian $H_\theta(t, \mathbf{X})$ and evolve the joint state according to
$$\frac{\mathrm{d}\mathbf{x}_i}{\mathrm{d}t} = \big(\mathbf{J} - \mathbf{D}(t, \mathbf{X})\big)\,\nabla_{\mathbf{x}_i} H_\theta(t, \mathbf{X}) + \mathbf{F}(t, \mathbf{X}),$$
where $\mathbf{x}_i=[\mathbf{q};\mathbf{p}]^T_i$ represents generalized coordinates and momenta, $\mathbf{J}$ encodes the symplectic structure, and $\mathbf{D}$ and $\mathbf{F}$ are damping and forcing terms. The damping and forcing terms enable modeling of non-conservative systems while maintaining the energy-preserving structure. We parameterize the Hamiltonian as:
$$H_\theta(t, \mathbf{X}) = \sum_{i \in \mathcal{V}} \mathbf{1}_d\, \tilde{\sigma}\!\Big(\mathbf{W}(t)\,\mathbf{x}_i + \mathbf{V}(t)\!\sum_{j \in \mathcal{N}(i)}\!\mathbf{x}_j\Big),$$
where $\tilde{\sigma}$ is an anti-derivative of a non-linear activation, $\mathbf{1}_d$ is a row vector of ones, and $\mathbf{W}(t)$, $\mathbf{V}(t)$ are block-diagonal learnable weight matrices that ensure separation into the $\mathbf{q}$ and $\mathbf{p}$ components.
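To make the update concrete, the following is a minimal PyTorch sketch of one integration step of a port-Hamiltonian core of this form. It assumes tanh as the activation (so $\tilde{\sigma} = \log\cosh$), an explicit Euler integrator, and simple illustrative choices for $\mathbf{D}$ and $\mathbf{F}$; it omits the neighbor term and time dependence, and none of these choices are prescribed by the paper.

```python
import torch

d = 8   # per-node state size: x_i = [q_i; p_i]
n = 16  # number of nodes

# Block-diagonal weight matrix keeps the q and p blocks separated
# (the V(t) neighbor term and time dependence are omitted for brevity).
W = torch.block_diag(torch.randn(d // 2, d // 2), torch.randn(d // 2, d // 2))

def hamiltonian(X):
    # H(X) = sum_i 1_d · sigma_tilde(W x_i), with sigma_tilde = log cosh,
    # the anti-derivative of tanh.
    return torch.log(torch.cosh(X @ W.T)).sum()

# Symplectic matrix J = [[0, I], [-I, 0]] acting on x = [q; p].
I = torch.eye(d // 2)
Z = torch.zeros(d // 2, d // 2)
J = torch.cat([torch.cat([Z, I], 1), torch.cat([-I, Z], 1)], 0)

def euler_step(X, D, F, dt=1e-2):
    # One explicit Euler step of dx/dt = (J - D) grad_x H + F.
    X = X.detach().requires_grad_(True)
    grad_H = torch.autograd.grad(hamiltonian(X), X)[0]  # (n, d)
    return X + dt * (grad_H @ (J - D).T + F)

X = torch.randn(n, d)
X_next = euler_step(X, D=0.1 * torch.eye(d), F=torch.zeros(n, d))
```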
To capture long-range interactions from the start, we perform $l$ rounds of message passing without advancing time. This broadcasts information across the graph up to radius $l$ from each node, providing global context that is preserved throughout the rollout thanks to the energy-conserving core.
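A sketch of the warmup idea, under similar assumptions: `edge_index` is a hypothetical COO edge list and `msg_mlp` a hypothetical message network whose output width matches the node state; the paper does not prescribe this exact form.

```python
import torch

def warmup(X, edge_index, msg_mlp, l=10):
    """Run l rounds of message passing with time frozen, so every node
    aggregates information from its l-hop neighborhood before rollout."""
    src, dst = edge_index                                     # (2, E) edges
    for _ in range(l):
        msg = msg_mlp(torch.cat([X[src], X[dst]], dim=-1))    # (E, d)
        agg = torch.zeros_like(X).index_add_(0, dst, msg)     # sum per receiver
        X = X + agg                                           # one more hop
    return X
```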
We embed geometric features directly into edge attributes to capture spatial structure on irregular meshes. Each edge feature encodes displacement vectors ($\mathbf{s}_{ij}=\text{pos}_j - \text{pos}_i$) and distances ($d_{ij}=\|\text{pos}_j - \text{pos}_i\|_2$), enabling the model to leverage mesh geometry while preventing overfitting to specific discretizations.
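The edge encoding itself is a few lines; this sketch assumes node positions `pos` of shape `(n, spatial_dim)` and a COO edge list, matching the definitions above.

```python
import torch

def edge_geometry(pos, edge_index):
    src, dst = edge_index              # edge (i, j) stored as (src, dst)
    s = pos[dst] - pos[src]            # displacement s_ij = pos_j - pos_i
    d = s.norm(dim=-1, keepdim=True)   # distance d_ij = ||s_ij||_2
    return torch.cat([s, d], dim=-1)   # per-edge attribute [s_ij, d_ij]
```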
We train with a multi-step objective that supervises entire rollout windows of length $K$:
$$\mathcal{L}(\theta) = \frac{1}{K} \sum_{k=1}^{K} \big\|\hat{\mathbf{X}}_{t+k} - \mathbf{X}_{t+k}\big\|_2^2,$$
where $\hat{\mathbf{X}}_{t+k}$ is obtained by integrating the port-Hamiltonian core for $k$ steps from the ground-truth state $\mathbf{X}_t$.
This enhances trajectory-level consistency and reduces error accumulation compared to one-step training followed by autoregressive rollout.
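A sketch of this objective consistent with the loss above, where `model` stands in for one integration step of the learned dynamics; the step count $K$ and mean-squared error are assumptions matching the equation, not a transcription of the training code.

```python
import torch

def multistep_loss(model, X0, targets):
    """targets: tensor of shape (K, n, d) holding X_{t+1}, ..., X_{t+K}."""
    X, loss = X0, 0.0
    for k in range(targets.shape[0]):
        X = model(X)  # autoregressive rollout step
        loss = loss + torch.mean((X - targets[k]) ** 2)
    return loss / targets.shape[0]
```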
@inproceedings{hoang2026igns,
  title={Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics},
  author={Hoang, Tai and Trenta, Alessandro and Gravina, Alessio and Freymuth, Niklas and Becker, Philipp and Bacciu, Davide and Neumann, Gerhard},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=x66u6TEDUw}
}