Neuro-Symbolic Methods for Reactive Synthesis, Repair, and Natural Language Formalization
17 Nov 2023
When: Friday, November 17, 2023, 9:30 (GMT).
Where: In person and Zoom online.
Topic: Neuro-Symbolic Methods for Reactive Synthesis, Repair, and Natural Language Formalization.
Speaker: Matthias Cosler.
Neuro-symbolic methods bridge the gap between symbolic and neural approaches by combining symbolic knowledge with data-driven deep-learning techniques. In this talk, I will present our work on neuro-symbolic methods for reactive synthesis, automatic circuit repair, and natural language formalization.
Reactive synthesis, the automatic construction of a system that satisfies formal temporal requirements, is a challenging problem in computer science that dates back to Church in the 1950s. We developed several transformer-based approaches that address the reactive synthesis problem and show remarkable out-of-distribution capabilities.
Automatic circuit repair is the problem of transforming a faulty sequential circuit into one that satisfies its temporal requirements. We developed an approach to correct the defective circuit and analyzed the fault’s influence on the model’s training and evaluation. Applied iteratively to the reactive synthesis problem, our approach improves over previous work on all benchmarks.
Further, we focus on the formalization of unstructured natural language requirements into Linear Temporal Logic (LTL) specifications. Manually formalizing requirements for verification or reactive synthesis is a tedious and error-prone task. Using large language models, we present a framework that aids in translating natural language expressions into formal logic and enables the use of these requirements for verification, planning, and system design.
With these directions, we advance the applicability of temporal reasoning with the help of deep learning.
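As a toy illustration of the formalization task (not the talk's framework, which uses large language models), a pattern-based translator can map a restricted fragment of English to LTL. The rules and atomic propositions below are hypothetical:

```python
import re

# Illustrative pattern-to-LTL rules: each maps a natural-language
# template to an LTL skeleton (G = globally, F = eventually, U = until).
RULES = [
    (re.compile(r"whenever (\w+) holds, (\w+) eventually holds"), r"G (\1 -> F \2)"),
    (re.compile(r"(\w+) never holds"), r"G !\1"),
    (re.compile(r"(\w+) holds until (\w+) holds"), r"\1 U \2"),
]

def formalize(sentence: str) -> str:
    """Translate a restricted natural-language sentence into an LTL formula."""
    s = sentence.strip().lower().rstrip(".")
    for pattern, template in RULES:
        m = pattern.fullmatch(s)
        if m:
            return m.expand(template)
    raise ValueError(f"no rule matches: {sentence!r}")

print(formalize("Whenever request holds, grant eventually holds."))
# -> G (request -> F grant)
```

An LLM-based framework replaces the brittle rule list with a model that generalizes beyond fixed templates, which is exactly why the manual approach does not scale.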
Matthias Cosler is a Ph.D. student with Bernd Finkbeiner at CISPA Helmholtz Center for Information Security and Saarland University. He obtained his Bachelor’s and Master’s degrees at Saarland University. During his studies, he spent an exchange semester at Trinity College Dublin. His research interests are in the neuro-symbolic intersection of formal methods and deep learning, with a current focus on the application of transformers and large language models to temporal logic and reactive synthesis.
Functional Synthesis: Formal Methods meets Machine Learning
09 Oct 2023
When: Monday, October 9, 2023, 15:30 (BST).
Where: In person and Zoom online.
Topic: Functional Synthesis: Formal Methods meets Machine Learning.
Speaker: Maximilian Prokop, PhD candidate.
Partial exploration and on-the-fly algorithms are crucial to tackle otherwise completely intractable problems. The performance of such algorithms depends heavily on the exploration heuristic. In this talk, we demonstrate how one can harness the power of machine learning for this purpose and thus explore semantically labelled state spaces more efficiently. We will explain all ideas with the example of parity games in LTL synthesis, as we also did in our paper “Guessing Winning Strategies in LTL Synthesis using Semantic Learning” from this year’s edition of CAV.
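To make the idea concrete, here is a minimal, hypothetical sketch of heuristic-guided on-the-fly exploration: states are generated lazily and expanded best-first according to a score function, which is where a learned semantic model would plug in. The paper's actual algorithm for parity games is considerably more involved.

```python
import heapq

def guided_explore(start, successors, is_goal, score):
    """Best-first, on-the-fly exploration of an implicit state space.
    `score` ranks states; in the learned setting it would be a model
    predicting how promising a state is."""
    frontier = [(-score(start), 0, start)]  # max-heap via negated scores
    seen = {start}
    tick = 1  # tie-breaker so heapq never compares raw states
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), tick, nxt))
                tick += 1
    return None  # goal unreachable in the explored fragment

# Toy usage: search the implicit binary tree n -> {2n, 2n+1} for the
# value 13, guided toward states numerically close to 13.
found = guided_explore(
    1,
    lambda n: [2 * n, 2 * n + 1] if n < 100 else [],
    lambda n: n == 13,
    lambda n: -abs(n - 13),
)
```

The point of the heuristic is that only a small fragment of the state space is ever materialized; a good score function finds the goal after expanding far fewer states than blind search would.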
Maximilian Prokop is a first-year PhD candidate jointly supervised by Javier Esparza at TU Munich and Jan Kretinsky at Masaryk University in Brno. His major research interest is the applicability of machine learning in the realm of formal methods, which has so far manifested itself in developing ML-based exploration heuristics for LTL synthesis.
Programming Agents via Evaluative Feedback
17 Nov 2022
When: Thursday, November 17, 2022, 4:30pm (CET).
Where: Zoom online.
Topic: Programming Agents via Evaluative Feedback.
Speaker: Michael Littman, computer science professor at Brown University.
Reinforcement-learning agents optimize their behavior from evaluative feedback. This talk focuses on how we’re improving reinforcement-learning algorithms by building a better understanding of how people can provide this feedback through telling, training, and teaching. Specifically, it will cover linear temporal logic as a task representation, arguing that traditional rewards can make behaviors difficult to express. It will also summarize work that attempts to characterize the ways human users deliver evaluative feedback and novel algorithms that can use this feedback effectively as a way of enabling end-user training of AI systems.
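A hypothetical example of the expressiveness point: a sequencing task such as "reach a, and only afterwards reach b" is awkward to encode as a scalar reward, but is a short temporal formula, roughly F(a ∧ F b). The labels "a" and "b" below are illustrative placeholders; this is a minimal check of that property on a finite trace of label sets:

```python
def satisfies_sequenced_goal(trace):
    """Check F(a and F b): some position satisfies "a", and "b" holds
    at that position or later. `trace` is a list of sets of labels."""
    for i, step in enumerate(trace):
        if "a" in step:
            # The property holds iff "b" appears at or after the first "a".
            return any("b" in later for later in trace[i:])
    return False
```

A reward that simply pays out for reaching b would be maximized by ignoring a entirely; the temporal formula makes the ordering constraint explicit.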
Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching, and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative, a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery, and a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence.
Knowledge Compilation Meets Logical Separability
17 Nov 2022
When: Thursday, November 17, 2022, 3pm (CET).
Where: Zoom online.
Topic: Knowledge Compilation Meets Logical Separability.
Speaker: Junming Qiu, an MSc student at Jinan University, China.
Knowledge compilation is an alternative solution for addressing demanding reasoning tasks of high complexity by converting knowledge bases into a suitable target language. Interestingly, the notion of logical separability, proposed by Levesque, offers a general explanation for the tractability of clausal entailment for two remarkable languages: decomposable negation normal form and prime implicates. It is interesting to explore what role logical separability actually plays in problem tractability. In this work, we apply the notion of logical separability to three reasoning problems within the context of propositional logic: satisfiability check (CO), clausal entailment check (CE), and model counting (CT), contributing three corresponding polytime procedures. We provide three logical separability-based properties: CO-logical separability, CE-logical separability, and CT-logical separability. We then identify three novel normal forms based on these properties: CO-LSNNF, CE-LSNNF, and CT-LSNNF. Moreover, we show that each normal form is a necessary and sufficient condition for the correctness of the corresponding procedure. We finally integrate the above logical separability-based languages into the knowledge compilation map.
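To illustrate how this kind of structural property yields tractability (a simplified sketch, not the paper's constructions): in decomposable negation normal form (DNNF), the conjuncts of every AND node share no variables, so satisfiability decomposes node-wise and the CO check runs in time linear in the circuit size:

```python
class Node:
    """NNF circuit node: kind is 'lit', 'and', or 'or'.
    Assumed decomposable (DNNF): children of an 'and' share no variables."""
    def __init__(self, kind, children=(), lit=None):
        self.kind = kind
        self.children = tuple(children)
        self.lit = lit  # e.g. "x" or "-x" for a literal leaf

def satisfiable(node):
    """Linear-time satisfiability (CO) for DNNF. Decomposability is what
    makes the AND case sound: variable-disjoint conjuncts can be satisfied
    independently. On non-decomposable input this check is unsound."""
    if node.kind == "lit":
        return True                # a lone literal is always satisfiable
    if node.kind == "and":
        return all(satisfiable(c) for c in node.children)
    if node.kind == "or":          # an empty OR encodes the constant false
        return any(satisfiable(c) for c in node.children)
    raise ValueError(f"unknown node kind: {node.kind}")
```

Without decomposability, the AND case would have to reconcile shared variables across conjuncts, which is exactly where NP-hardness reappears; the talk's logical-separability properties generalize this observation to CE and CT as well.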
The talk is based on the paper:
[AAAI-2022] Knowledge Compilation Meets Logical Separability.
Junming Qiu is an MSc student at Jinan University, China. His main research interests lie in automated reasoning, including knowledge compilation and belief change.
On the Power of LTLf in Assured Autonomy
10 Nov 2022
When: Thursday, November 10, 2022, 3pm (CET).
Where: Zoom online + Dipartimento di Informatica, Sapienza University of Rome - Viale Regina Elena, 295, Building E - Room T1.
Topic: On the Power of LTLf in Assured Autonomy.
Speaker: Shufang Zhu, Postdoctoral researcher at the Department of Computer, Control and Management Engineering, Sapienza University of Rome.
Assured Autonomy is a novel area merging Artificial Intelligence (AI) and Formal Methods (FM), concerned with building AI agents that autonomously deliberate how to act in a changing, incompletely known, unpredictable environment under formal guarantees. A popular specification language for describing formal guarantees is Linear Temporal Logic (LTL) from FM. However, LTL is interpreted over infinite traces (relating to non-terminating systems). Since AI agents are not dedicated to a single task over their complete life cycle, but are supposed to accomplish one task after another, AI applications often employ a finite-trace variant of LTL, denoted as LTLf. In particular, the study of LTLf synthesis brings the intellectual merit of achieving Assured Autonomy by allowing agents to automatically construct programs guaranteed to meet their tasks specified in LTLf. In this talk, I will review an evolving journey toward Assured Autonomy through LTLf synthesis. Starting from an attempt to devise a symbolic backward LTLf synthesis framework, which has demonstrated significant efficiency in various applicable scenarios, the journey evolves into a forward LTLf synthesis technique that highlights several interesting avenues of future work in the context of Markov Decision Processes.
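As a small, self-contained illustration of LTLf's finite-trace semantics (a naive evaluator for intuition, unrelated to the symbolic synthesis frameworks in the talk), formulas are nested tuples and traces are finite lists of sets of atomic propositions:

```python
def holds(formula, trace, i=0):
    """Evaluate an LTLf formula on a finite trace, starting at position i.
    Formulas are nested tuples, e.g. ("eventually", ("atom", "g"));
    `trace` is a list of sets of atomic propositions."""
    if i >= len(trace):
        return False   # simplified: nothing holds past the end of the trace
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return holds(formula[1], trace, i + 1)
    if op == "eventually":   # F: at some position from i to the end
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "globally":     # G: at every position from i to the end
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "until":        # f1 U f2: f2 eventually, f1 at every step before
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")
```

The contrast with standard LTL is precisely that the quantifiers range over a bounded suffix, matching an agent that finishes one task and moves on to the next.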
Shufang Zhu is a Postdoctoral researcher at the Department of Computer, Control and Management Engineering, Sapienza University of Rome, working with Prof. Giuseppe De Giacomo. Her research concerns interdisciplinary knowledge across artificial intelligence (AI) and formal methods (FM), focusing on automated reasoning, planning, and synthesis. She received her Ph.D. degree in 2020 at East China Normal University, Shanghai, China, under the supervision of Prof. Geguang Pu. During her Ph.D., she received a scholarship from the China Scholarship Council and studied as a visiting Ph.D. student (August 2016 to February 2018) at Rice University, under the supervision of Prof. Moshe Y. Vardi.
Modern Query Evaluation Algorithms
13 Oct 2022
When: Thursday, October 13, 2022, 3pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Modern Query Evaluation Algorithms.
Speaker: Nadezda (Nadia) Knorozova, Ph.D. student at the University of Zurich.
I will talk about recent advances in query evaluation algorithms for join and aggregate queries, focusing on theoretical research that has been implemented in commercial database systems.
Since 2011 there has been a new understanding of the complexity of join queries. This led to an array of new theoretical results and eventually to more efficient algorithms. I will focus on state-of-the-art methods that have been successfully adopted by commercial database systems. In particular, I will present the Leapfrog Triejoin algorithm for the evaluation of join queries, and the FAQ framework for the evaluation of aggregate queries, which is based on decompositions of query hypergraphs.
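For a flavor of the technique (a simplified, hypothetical sketch): restricted to unary relations, Leapfrog Triejoin intersects sorted lists by repeatedly "leapfrogging" the lagging iterator past the current maximum key; the full algorithm applies this idea per variable over trie iterators.

```python
from bisect import bisect_left

def leapfrog_intersect(lists):
    """Intersect k sorted, duplicate-free lists in the leapfrog style:
    cycle over the iterators, seeking each lagging one to the current
    maximum key; a key where all k iterators meet is emitted."""
    if any(not lst for lst in lists):
        return []
    k = len(lists)
    pos = [0] * k          # one cursor per list
    out = []
    hi = lists[0][0]       # current candidate key (maximum seen so far)
    matched = 1            # how many iterators in a row sit exactly at hi
    i = 1 % k
    while True:
        key = lists[i][pos[i]]
        if key < hi:       # leap: seek to the first entry >= hi
            pos[i] = bisect_left(lists[i], hi, pos[i])
            if pos[i] == len(lists[i]):
                return out
            key = lists[i][pos[i]]
        if key == hi:
            matched += 1
            if matched >= k:            # all iterators agree on hi
                out.append(hi)
                pos[i] += 1             # advance past the match
                if pos[i] == len(lists[i]):
                    return out
                hi = lists[i][pos[i]]
                matched = 1
        else:              # key > hi: new candidate, restart the count
            hi = key
            matched = 1
        i = (i + 1) % k
```

The seek operation is what makes the algorithm worst-case optimal in the multiway setting: each iterator skips over runs of keys that cannot possibly be in the result, instead of scanning them pairwise.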
Nadezda Knorozova, or simply Nadia, is a Ph.D. student at the University of Zurich with a background in relational databases and both academic and industry experience. She worked on LogicBlox, a commercial Datalog engine, covering many aspects of the system: in-database machine learning, which implements machine learning algorithms that exploit the relational structure of the data; the query optimizer, a search algorithm for finding the best join order; and join and aggregate query evaluation.
Nadezda received her bachelor’s degree in Computer Science from the University of Minnesota. There, in collaboration with the database group, she developed a novel decomposition-based algorithm for computing Voronoi diagrams. She received her Master’s degree in Computer Science from the University of Oxford, with a thesis on factorized representations of disequality joins.