Keynote speakers
Here is the list of keynote speakers (in alphabetical order by name).
Michael Butler @ University of Southampton
|
The role of abstraction and refinement in safety of intelligent systems
|
Abstract Rapid developments in artificial intelligence (AI) and robotics pose challenges to conventional methods of safety assurance. Machine learning is a very different approach to programming than algorithm design, and notions of correctness for machine learning systems are still evolving. The increasing levels of autonomy being deployed in robotic systems mean weaker concepts of control and predictability, increased risk of accidents, and challenges to conventional safety assurance methods. The role of abstraction/refinement formal methods, such as Event-B, is quite well established for traditional safety-critical systems, but it remains an open question whether and how such methods are applicable to systems with autonomy and learning-based control. Recent ideas on guaranteed safe AI from leading formal methods researchers propose the use of three components to ensure safe AI systems: a safety specification, a world model, and a verifier. At a high level this represents a systems approach to safety analysis which aligns well with common usage of Event-B in conventional system design. Constructing specifications and world (or environment) models for complex systems remains challenging, and abstraction and refinement approaches are aimed at addressing system complexity through incremental modelling and verification. Systematic approaches to the analysis of (informal) requirements and of safety hazards also play an important role. We explore how existing work on hierarchical analysis and verification could be extended and relaxed to deal with autonomy and machine learning. The focus is on physical safety rather than broader issues of influences of AI on human behaviours and ethics. The ideas are influenced by collaboration with Asieh Salehi Fathabadi, Colin Snook, Dana Dghaym, Fahad Alotaibi, Haider Al-Shareefy, and Thai Son Hoang.
|
Short Bio Michael Butler is the Dean of Engineering and Physical Sciences at the University of Southampton.
Michael has made many key theoretical and methodological contributions to the refinement calculus and to the B and Event-B formal methods.
He has worked on tools such as ProB, U2B, csp2B and the Rodin toolset for Event-B.
|
|
Nils Jansen @ Ruhr-University Bochum
|
Neurosymbolic Learning Systems: Artificial Intelligence and Formal Methods
|
Abstract Artificial Intelligence (AI) has emerged as a disruptive force in our society. Increasing applications in healthcare, transport, military, and other fields underscore the critical need for a comprehensive understanding of, and robustness in, an AI's decision-making process. Neurosymbolic AI aims to create robust AI systems by integrating neural and symbolic AI techniques. We highlight the role of formal methods in such techniques, serving as a rigorous and structured backbone for symbolic AI methods. Moreover, as a specific machine learning technique, we look at deep reinforcement learning (RL), which carries the promise that autonomous systems can learn to operate in unfamiliar environments with minimal human intervention. However, why haven't most autonomous systems adopted RL yet? The answer is simple: there are significant unsolved challenges. One of the most important ones is obvious: autonomous systems operate in unfamiliar, unknown environments. This lack of knowledge is called uncertainty. This talk will explore why making decisions that account for this uncertainty is essential to achieving trustworthiness, reliability, and safety in RL.
|
Short Bio Nils Jansen is Professor of Artificial Intelligence and Formal Methods at the Ruhr-University Bochum. He is also an Associate Professor with the Institute for Computing and Information Sciences at Radboud University Nijmegen, The Netherlands.
Nils' mission is to increase the trustworthiness of Artificial Intelligence, bringing together the areas of machine learning and formal methods, in particular, formal verification. He tackles problems that are inspired by autonomous systems and planning problems in robotics. Nils Jansen has been awarded an ERC Starting Grant named DEUCE: Data-Driven Verification and Learning under Uncertainty.
|
|
Thierry Lecomte @ CLEARSY
|
Mathematical Proofs and Moving Trains: The Double Life of Atelier B
|
Abstract Atelier B has played a crucial role in ensuring the safety of critical systems for over three decades. This presentation explores the recent evolution of Atelier B, from the last keynote in 2016 in Linz to its current, expanded role in system engineering across industries. We will revisit key milestones, such as the modeling of complex systems, the development and programming of the CLEARSY Safety Platform, and its use in educating students and engineers. The session will also showcase how the versatility of Atelier B has been harnessed to model, prove, and implement robust systems - from automated metros to industrial control. Through these advancements, Atelier B continues to shape the landscape of high-integrity software development, merging mathematical rigor with practical, real-world applications.
|
Short Bio With 30 years of experience in R&D, Thierry Lecomte has worked on industrial projects in the automotive, healthcare, microelectronics, nuclear energy, railway, and space industries. Today he is R&D director of CLEARSY, a French SME specializing in the invention of safety-critical systems, where he has worked since its creation in 2001.
His current subjects of interest are safety and security co-engineering, safe artificial intelligence, and autonomous mobility, all related to formal methods.
|
|