A first-order logic characterisation of safety and co-safety languages
05 May 2022
When: Thursday, May 5, 2022, 3pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: A first-order logic characterisation of safety and co-safety languages
Speaker: Dr. Luca Geatti, postdoc researcher in Computer Science at the Free University of Bozen/Bolzano.
Abstract
Linear Temporal Logic (LTL) is one of the most popular temporal logics and comes into play in a variety of branches of computer science. Its widespread use is also due to its strong foundational properties. One of them is Kamp’s theorem, showing that LTL and the first-order theory of one successor (S1S[FO]) are expressively equivalent. Safety and co-safety languages, where a finite prefix suffices to establish whether a word does not or does belong to the language, respectively, play a crucial role in lowering the complexity of problems like model checking and reactive synthesis for LTL. Safety-LTL (resp., coSafety-LTL) is a fragment of LTL in which only universal (resp., existential) temporal modalities are allowed, and it recognises safety (resp., co-safety) languages only.
In this talk, we introduce a fragment of S1S[FO], called Safety-FO, and its dual coSafety-FO, which are expressively complete with respect to the LTL-definable safety and co-safety languages. In particular, we prove that they respectively characterise exactly Safety-LTL and coSafety-LTL, a result that, together with Kamp’s theorem, provides a clearer view of the characterisations of (fragments of) LTL in terms of first-order languages. In addition, it gives a direct, compact, and self-contained proof that any safety language definable in LTL is definable in Safety-LTL as well. As a by-product, we obtain some interesting results on the expressive power of the weak tomorrow operator of Safety-LTL, interpreted over finite and infinite traces.
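As a concrete illustration (our own example, not taken from the abstract), consider the two formulas below: G p (“always p”) defines a safety language, since any violating word has a finite bad prefix, whereas F p (“eventually p”) defines a co-safety language, since any satisfying word has a finite good prefix.

```latex
% Our own illustrative examples, not taken from the abstract:
\[
  \underbrace{\mathsf{G}\,p}_{\text{safety (universal modality, Safety-LTL)}}
  \qquad
  \underbrace{\mathsf{F}\,p}_{\text{co-safety (existential modality, coSafety-LTL)}}
\]
```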
Short Bio
Luca Geatti is a postdoc researcher in Computer Science at the Free University of Bozen/Bolzano. His research interests span automata theory, temporal logics, automatic synthesis, reactive synthesis and formal verification.
Material
Structure and Hierarchy in Reinforcement Learning
28 Apr 2022
When: Thursday, April 28, 2022, 12pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Structure and Hierarchy in Reinforcement Learning
Speaker: Prof. Anders Jonsson, associate professor in the Department of Information and Communication Technology, Pompeu Fabra University.
Abstract
Hierarchical reinforcement learning (HRL) decomposes a task into simpler subtasks that can be independently solved, making it possible to solve complex sparse reward tasks more efficiently. In this talk I will present two recent papers that both introduce novel approaches for HRL.
The first paper assumes that the state space is partitioned, and defines subtasks for moving between the partitions. We represent value functions on several levels of abstraction, and use the compositionality of subtasks to estimate the optimal values of the states in each partition. The policy is implicitly defined by these optimal value estimates, rather than being decomposed among the subtasks. As a consequence, our approach can learn the globally optimal policy, and does not suffer from non-stationarities induced by high-level decisions.
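The sketch below illustrates the general flavour of composing subtask values with high-level values at partition exits; it is a hypothetical toy with hand-made numbers and a simple additive composition, not the linearly-solvable MDP machinery used in the paper.

```python
# Toy illustration (hypothetical): estimate the value of a state by combining
# (i) the value of reaching each exit of its partition (the subtask level)
# with (ii) the estimated optimal value at that exit (the high level).
# Negative numbers play the role of costs.

# Subtask values: value of reaching a given exit from a given state of "room1".
subtask_value = {
    ("room1", "exit_a"): {"s1": -3.0, "s2": -1.0},
    ("room1", "exit_b"): {"s1": -2.0, "s2": -4.0},
}

# High-level values: estimated optimal value of standing at each exit.
exit_value = {"exit_a": -10.0, "exit_b": -7.0}

def composed_value(partition, state):
    """Best combination of 'reach an exit' and 'continue optimally from it'."""
    return max(
        subtask_value[(partition, exit_)][state] + exit_value[exit_]
        for exit_ in exit_value
    )

print(composed_value("room1", "s1"))  # -9.0: best to leave through exit_b
print(composed_value("room1", "s2"))  # -11.0: both exits are equally good here
```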
In the second paper, we introduce a formalism for hierarchically composing reward machines (RMs), i.e. finite state machines with edges labeled by propositional logic formulas over high-level events that capture subgoals. We adapt HRL algorithms to Hierarchical RMs (HRMs) by defining each RM in the hierarchy as a subtask, and describe a curriculum-based method to induce an HRM for each task. A multi-level HRM can be learned more efficiently than a flat HRM since the size of each machine is much smaller.
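As a rough illustration of the reward machine idea (a hypothetical toy of ours, not the HRM formalism introduced in the paper), an RM can be seen as a finite-state machine advanced on sets of high-level events, emitting a reward when an edge fires:

```python
# Hypothetical minimal reward machine: states, guarded edges over
# high-level events, and a reward emitted when an edge fires.

class RewardMachine:
    def __init__(self, initial, transitions):
        # transitions: state -> list of (guard, next_state, reward),
        # where guard is a predicate over the set of observed events.
        self.state = initial
        self.transitions = transitions

    def step(self, events):
        """Advance on the events observed in the current environment step."""
        for guard, next_state, reward in self.transitions.get(self.state, []):
            if guard(events):
                self.state = next_state
                return reward
        return 0.0  # no edge fires: stay in the current RM state

# "Get the key, then open the door", with reward 1 on completion.
rm = RewardMachine("u0", {
    "u0": [(lambda ev: "key" in ev, "u1", 0.0)],
    "u1": [(lambda ev: "door" in ev, "u_acc", 1.0)],
})
print(rm.step(set()))      # 0.0 -- nothing happened, still in u0
print(rm.step({"key"}))    # 0.0 -- subgoal reached, move to u1
print(rm.step({"door"}))   # 1.0 -- task completed, reach u_acc
```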
Guillermo Infante, Anders Jonsson, Vicenç Gómez (2022).
Globally Optimal Hierarchical Reinforcement Learning for Linearly-Solvable Markov Decision Processes.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Daniel Furelos-Blanco, Mark Law, Anders Jonsson, Krysia Broda, Alessandra Russo (2022).
Hierarchies of Reward Machines.
The Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
Short Bio
Anders Jonsson is an associate professor in the Department of Information and Communication Technology working in automated planning and reinforcement learning. He received his Ph.D. in computer science in 2005 from the University of Massachusetts Amherst. His research interests involve sequential decision problems in which one or several agents have to make repeated decisions about what to do. Specifically, he is currently working on sequential decision problems involving multiple agents, hierarchical representations of problems, combining the strengths of reinforcement learning and planning, finding and exploiting structure in sequential decision problems, and analyzing the computational complexity of different classes of problems.
Material
Target Languages (vs. Inductive Biases) for Learning to Act and Plan
21 Apr 2022
When: Thursday, April 21, 2022, 3pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Target Languages (vs. Inductive Biases) for Learning to Act and Plan
Speaker: Prof. Hector Geffner, researcher at the Institució Catalana de Recerca i Estudis Avançats (ICREA) and professor at the Departament de Tecnologies de la Informació i les Comunicacions (DTIC), Universitat Pompeu Fabra, in the Artificial Intelligence Group. He is also a Wallenberg Guest Professor at Linköping University, Sweden.
Abstract
Recent breakthroughs in AI have shown the remarkable power of deep learning and deep reinforcement learning. These developments, however, have been tied to specific tasks, and progress in out-of-distribution generalization has been limited. While it is assumed that these limitations can be overcome by incorporating suitable inductive biases, the notion of inductive biases itself is often left vague and does not provide meaningful guidance. In this paper, I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics. The basic ideas are implicit in mainstream AI, where representations have been encoded in languages ranging from fragments of first-order logic to probabilistic structural causal models. The challenge is to learn from data the representations that have traditionally been crafted by hand. Generalization is then a result of the semantics of the language. The goals of this paper are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan. For this, after a general discussion, I consider learning representations of actions, general policies, and subgoals (“intrinsic rewards”). In these cases, learning is formulated as a combinatorial problem, but nothing prevents the use of deep learning techniques instead. Indeed, learning representations over languages with a known semantics provides an account of what is to be learned, while learning representations with neural nets provides a complementary account of how representations can be learned. The challenge and the opportunity are to bring the two together.
Short Bio
Hector Geffner is a researcher at the Institució Catalana de Recerca i Estudis Avançats (ICREA) and a professor at the Departament de Tecnologies de la Informació i les Comunicacions (DTIC), Universitat Pompeu Fabra, in the Artificial Intelligence Group. He is also a Wallenberg Guest Professor at Linköping University, Sweden.
Hector Geffner received his Ph.D. at UCLA with a dissertation that was co-winner of the 1990 ACM Dissertation Award. He then worked as a Staff Research Member at the IBM T.J. Watson Research Center in NY, USA, and at the Universidad Simon Bolivar in Caracas, Venezuela. Since 2001 he has been a researcher at ICREA and a professor at the Universitat Pompeu Fabra, Barcelona, and since 2019 a Wallenberg Guest Professor at Linköping University, Sweden. Hector is a former Associate Editor of Artificial Intelligence and the Journal of Artificial Intelligence Research. He was also a member of the EurAI board, and is a Fellow of AAAI and EurAI. He is the author of the book “Default Reasoning: Causal and Conditional Theories” (MIT Press, 1992), editor of “Heuristics, Probability, and Causality: A Tribute to Judea Pearl” (with R. Dechter and Joe Halpern, College Publications, 2010), and author of “A Concise Introduction to Models and Methods for Automated Planning” (with Blai Bonet, Morgan and Claypool, 2013). Hector is interested in computational models of reasoning, action, learning, and planning that are general and effective. He is also a concerned citizen (particularly concerned these days) and, aside from courses on logic and AI, he teaches a course on social and technological change at the UPF. He leads a project on representation learning for planning (RLeap), funded by an ERC Advanced Grant.
Material
Business Process Management (BPM) and Intelligent Agents
12 Apr 2022
When: Tuesday, April 12, 2022, 3pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Business Process Management (BPM) and Intelligent Agents
Speaker: Prof. Massimo Mecella, Professor in the School of Information Engineering, Computer Science and Statistics of SAPIENZA, in the Department of Computer, Control and Management Sciences & Engineering ANTONIO RUBERTI (DIAG, formerly DIS - Department of Computer and Systems Sciences).
Abstract
In this talk, we outline Business Process Management and its applications in various fields (including smart manufacturing), and we attempt to build a bridge with approaches from AI, KR and formal methods. In particular, starting from an outline of some of the speaker’s research results, the talk will take the form of a brainstorming session in which novel ideas on how to improve BPM with methods taken from AI, towards the vision of a smart BPM, are presented and discussed.
Short Bio
Massimo Mecella is a Professor in the School of Information Engineering, Computer Science and Statistics of SAPIENZA, in the Department of Computer, Control and Management Sciences & Engineering ANTONIO RUBERTI (DIAG, formerly DIS - Department of Computer and Systems Sciences).
His research interests lie in Service-Oriented Computing, Process Management, Cyber-Physical Systems (CPSs) and the Internet of Things (IoT), user-centered design for complex systems, Cooperative Information Systems (CIS) and WWW-based information systems, software architectures, distributed technologies and middleware, software engineering, and Geo-Information Science & Technology (GIS&T). These topics are investigated in the application domains of eGovernment and eBusiness, domotics, smart houses and smart spaces, healthcare, and disaster/crisis response & management.
Material
Symbolic Reactive Synthesis from Extended Bounded Response LTL+P Specifications
31 Mar 2022
When: Thursday, March 31, 2022, 3pm (CEST).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Symbolic Reactive Synthesis from Extended Bounded Response LTL+P Specifications
Speaker: Dr. Luca Geatti, postdoc researcher in Computer Science at the Free University of Bozen/Bolzano.
Abstract
Reactive synthesis is a key technique for the design of correct-by-construction systems, and it has been thoroughly investigated in the last decades. It consists of synthesising a controller that reacts to the environment’s inputs while satisfying a given temporal logic specification. Common approaches are based on the explicit construction of automata and on their determinization, which limits their scalability.
In this talk, we introduce a new safety fragment of Linear Temporal Logic with Past (LTL+P), called Extended Bounded Response LTL with Past (LTLEBR+P), which allows one to combine bounded and universal unbounded temporal operators (thus covering a large set of practical cases).
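As a purely illustrative example (ours, with bounded-operator notation assumed rather than taken from the talk), a typical property in this style nests a bounded eventuality inside the universal operator G, and is therefore a safety property:

```latex
% Hypothetical bounded-response example (notation assumed):
% "every request is granted within 10 steps"
\[
  \mathsf{G}\bigl(\mathit{request} \rightarrow \mathsf{F}^{[0,10]}\,\mathit{grant}\bigr)
\]
```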
First, we study the expressive power of LTLEBR+P, showing that it can express all and only the safety properties definable in LTL+P. Second, we show that reactive synthesis from LTLEBR+P specifications can be reduced to solving a safety game over a deterministic symbolic automaton built directly from the specification. Third, we study the theoretical complexity of the fragment, showing that the proposed solution is optimal, and we evaluate it on various benchmarks, where it shows better performance than other approaches for general LTL or for larger safety fragments.
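To make the reduction to safety games concrete, here is a minimal explicit-state sketch of solving such a game (a hypothetical toy of ours; the construction described in the talk is symbolic and built directly from the LTLEBR+P specification). The controller wins from the largest set of safe states from which it can force the play to stay safe forever.

```python
# Toy greatest-fixpoint computation of the controller's winning region
# in a turn-based safety game (explicit-state, for illustration only).

def winning_region(states, safe, is_controller, succ):
    W = {s for s in states if s in safe}
    changed = True
    while changed:
        changed = False
        for s in list(W):
            successors = succ.get(s, [])
            if is_controller[s]:
                ok = any(t in W for t in successors)  # controller needs one safe move
            else:
                ok = all(t in W for t in successors)  # environment may pick any move
            if not ok:
                W.discard(s)
                changed = True
    return W

# Tiny hypothetical arena: from c0 the controller can go directly to c1 and
# loop there; moving through e0 would let the environment reach 'bad'.
states = {"c0", "e0", "c1", "bad"}
safe = {"c0", "e0", "c1"}
is_controller = {"c0": True, "e0": False, "c1": True, "bad": False}
succ = {"c0": ["e0", "c1"], "e0": ["c1", "bad"], "c1": ["c1"], "bad": ["bad"]}
print(sorted(winning_region(states, safe, is_controller, succ)))  # ['c0', 'c1']
```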
Short Bio
Luca Geatti is a postdoc researcher in Computer Science at the Free University of Bozen/Bolzano. His research interests span automata theory, temporal logics, automatic synthesis, reactive synthesis and formal verification.
Material
Certified learning, or learning for verification?
24 Mar 2022
When: Thursday, March 24, 2022, 3pm (CET).
Where: Zoom online, check the address in the Google Calendar Event.
Topic: Certified learning, or learning for verification?
Speaker: Prof. Alessandro Abate, Professor of Verification and Control in the Department of Computer Science at the University of Oxford.
Abstract
We are witnessing an increased, inter-disciplinary convergence between areas underpinned by model-based reasoning and by data-driven learning. Work across these areas is not only scientifically justified, but also motivated by industrial applications where access to information-rich data has to be traded off against a demand for safety criticality: cyber-physical systems are exemplar applications.
In this talk, I will report on ongoing initiatives in this cross-disciplinary domain. According to the dual perspective in the title of this talk, I will sketch, on the one hand, results where formal methods can provide certificates to learning algorithms, and on the other hand, results where learning can bolster formal verification and strategy synthesis objectives.
Short Bio
Alessandro Abate is Professor of Verification and Control in the Department of Computer Science at the University of Oxford. Earlier, he did research at Stanford University and at SRI International, and was an Assistant Professor at the Delft Center for Systems and Control, TU Delft. He received a Laurea degree from the University of Padova and an MS and PhD from UC Berkeley.
His research interests lie in the analysis, formal verification, and control theory of heterogeneous and complex dynamical models, in particular stochastic hybrid systems, and in their applications to cyber-physical systems (particularly involving safety-critical applications, energy, and biological networks). He blends techniques from machine learning and AI into this work, such as Bayesian inference, reinforcement learning, and game theory.
Material