Chairs: C. Cirstea, F. Gadducci, H. Schlingloff
Past Chairmen: M. Roggenbach, L. Schröder, T. Mossakowski, J. Fiadeiro, P. Mosses, H.-J. Kreowski
Fri, 10 April 2026 at 04:00 pm in Torino, Italy
Joint work with: Seunghyun Chae, Jueun Yeon
Abstract: The formal verification of deep neural networks (DNNs) has become increasingly important as these models are deployed in safety-critical domains. Despite significant progress, scalability remains a fundamental challenge. One key source of this limitation is that existing DNN verification techniques do not effectively exploit internal semantic dependencies in a reusable or generalized form. In contrast, software verification techniques often exploit semantic properties induced by program structure to guide analysis and prune the search space. We tackle this challenge by proposing activation properties, a class of semantic constraints over neuron activations that can be systematically derived and reused to guide DNN verification. These properties can be extracted from both the structural characteristics of the network (such as weights and biases) and from unsatisfiable configurations encountered during the search. Notably, these properties can also be generalized to support reuse across related verification tasks, including those involving slightly perturbed networks. We incorporate these properties into the branch-and-bound procedure to guide the search and prune infeasible regions, avoiding splits that violate known activation properties and prioritizing promising ones. We have implemented this approach in two state-of-the-art neural network verifiers, alpha-beta-CROWN and IVAN, demonstrating its broad applicability. Our evaluation shows that the use of activation properties consistently reduces the search space and delivers substantial performance improvements over existing baselines.
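The pruning idea in the abstract — fixing neuron activation statuses during branch-and-bound and skipping splits that contradict known activation properties — can be illustrated with a small sketch. This is a hypothetical toy, not the authors' implementation or the alpha-beta-CROWN/IVAN APIs: the property encoding (implications over activation signs), the `violates` and `branch_and_bound` functions, and the `is_verified` bounding oracle are all illustrative assumptions.

```python
# Toy sketch of branch-and-bound over ReLU activation splits, pruned by
# "activation properties". All names and encodings here are illustrative
# assumptions, not the actual verifier interfaces.
#
# An activation property is an implication over neuron activation signs:
# (premise, node, sign) means "if every neuron in `premise` has the stated
# sign, then `node` must have sign `sign`" (+1 = active, -1 = inactive).

def violates(assignment, properties):
    """Check a partial sign assignment against known activation properties."""
    for premise, node, sign in properties:
        premise_holds = all(assignment.get(n) == s for n, s in premise.items())
        if premise_holds and assignment.get(node) == -sign:
            return True  # premise satisfied but conclusion contradicted
    return False

def branch_and_bound(neurons, properties, is_verified):
    """Split on neuron activation statuses, pruning inconsistent subtrees.

    `is_verified(assignment)` abstracts the (expensive) bounding step: it
    returns True when the sub-problem under `assignment` is proved safe.
    Returns (verified?, number_of_bounding_calls).
    """
    stack = [dict()]  # each search node is a partial sign assignment
    bound_calls = 0
    while stack:
        assignment = stack.pop()
        if violates(assignment, properties):
            continue  # prune without bounding: violates a known property
        bound_calls += 1
        if is_verified(assignment):
            continue  # sub-problem verified; no further splitting needed
        unfixed = [n for n in neurons if n not in assignment]
        if not unfixed:
            return False, bound_calls  # fully split and still not verified
        n = unfixed[0]
        for sign in (+1, -1):  # split on the activation status of neuron n
            stack.append({**assignment, n: sign})
    return True, bound_calls
```

On a two-neuron toy where bounding succeeds only at full splits, a single property such as "neuron 0 active implies neuron 1 active" lets the search skip one bounding call, mirroring (in miniature) how such properties shrink the search space in the real procedure.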