Edinburgh Research Archive

Implicit learnability of knowledge

Authors

Mocanu, Ionela Georgiana

Abstract

The deployment of knowledge-based systems in the real world requires addressing the challenge of knowledge acquisition. While knowledge engineering by hand is a daunting task, machine learning has been proposed as an alternative. However, learning explicit representations of real-world knowledge at a desirable level of expressiveness remains difficult and often leads to heuristics without robustness guarantees. Probably Approximately Correct (PAC) Semantics offers strong guarantees; however, learning explicit representations under it is intractable, even in propositional logic. Previous work has addressed these challenges by learning to reason directly, without producing an explicit representation of the learned knowledge. Recent work on so-called implicit learning has shown tremendous promise, obtaining polynomial-time results for fragments of first-order logic by bypassing the intractable step of producing an explicit representation of the learned knowledge. This thesis extends these ideas to richer logical languages such as arithmetic theories and multi-agent logics. We demonstrate that it is possible to learn to reason efficiently for standard fragments of linear arithmetic, and we establish a general result that efficiently reduces the learning-to-reason problem for any logic to any sound and complete solver for that logic. We then extend implicit learning in PAC Semantics to handle noisy data, in the form of intervals and threshold uncertainty, in the language of linear arithmetic. We prove that the extended framework maintains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework: on benchmark problems, our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
Our results demonstrate the effectiveness of PAC Semantics and implicit learning on real-world problems with noisy data, and provide a path towards robust learning in expressive languages.

Reasoning about knowledge and interaction in complex multi-agent systems spans domains such as artificial intelligence, smart traffic, and robotics. In these systems, epistemic logic serves as a formal language for expressing and reasoning about knowledge, beliefs, and communication among agents, yet integrating learning algorithms with multi-agent epistemic logic is challenging owing to the inherent complexity of reasoning about distributed knowledge. We extend implicit learning to this setting, provide a proof of correctness for the learning procedure, and analyse the sample complexity required to assert the entailment of an epistemic query. Overall, our work offers a promising approach to integrating learning and deduction across a range of logical languages, from linear arithmetic to multi-agent epistemic logics.
