Chairs: C. Cirstea, F. Gadducci, H. Schlingloff
Past Chairmen: M. Roggenbach, L. Schröder, T. Mossakowski, J. Fiadeiro, P. Mosses, H.-J. Kreowski
Fri, 10 April 2026 at 10:00 am in Torino, Italy
Joint work with: Suhaib Al-Rousan, Kim Guldstrand Larsen, Christian Schilling
Abstract: Verifying that two quantum circuits are equivalent is a critical challenge in compiler validation. A common approach is to transform the circuits into a tensor network and then iteratively contract tensors in the network. The order in which contractions are applied has a crucial influence on the complexity, as tensor size may grow exponentially in the number of qubits. This has motivated research into heuristic policies for finding good contraction orders, including recent work based on reinforcement learning (RL). Orthogonally, to mitigate the complexity of an explicit tensor representation, tensor decision diagrams (TDDs) have been proposed as a compact symbolic representation. However, TDD performance is also sensitive to the contraction order, and policies designed for tensor networks do not perform well with a TDD representation. In this paper, we propose the first RL framework for learning a graph-neural-network policy that navigates the trade-off between arithmetic complexity and TDD size. Experimental results demonstrate that a policy learned with our framework is effective at taming the symbolic representation size.
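A toy illustration (not from the talk itself) of why contraction order matters: even for the simplest tensor network, a chain of two matrices and a vector, the two possible contraction orders give the same result but very different intermediate sizes and arithmetic cost. The matrices and dimensions here are arbitrary placeholders.

```python
import numpy as np

n = 200
rng = np.random.default_rng(0)
A = rng.random((n, n))
B = rng.random((n, n))
v = rng.random(n)

# Order 1: (A @ B) @ v
# Builds an n x n intermediate tensor first: O(n^3) scalar multiplications.
r1 = (A @ B) @ v

# Order 2: A @ (B @ v)
# Every intermediate is a length-n vector: O(n^2) scalar multiplications.
r2 = A @ (B @ v)

# Both orders contract the same network, so the results agree.
assert np.allclose(r1, r2)
```

In a circuit-sized tensor network the gap between a good and a bad order is exponential rather than a factor of n, which is what makes learned ordering policies attractive.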