Probabilistic graph formalisms for meaning representations
Date: 01/07/2019
Author: Gilroy, Sorcha
Abstract
In recent years, many datasets have become available that represent natural language
semantics as graphs. To use these datasets in natural language processing (NLP), we
require probabilistic models of graphs. Finite-state models have been very successful
for NLP tasks on strings and trees because they are probabilistic and composable. Are
there equivalent models for graphs? In this thesis, we survey several graph formalisms,
focusing on whether they are probabilistic and composable, and we contribute several
new results. In particular, we study the directed acyclic graph automata languages
(DAGAL), the monadic second-order graph languages (MSOGL), and the hyperedge
replacement languages (HRL). We prove that DAGAL cannot be made probabilistic,
we explain why MSOGL also most likely cannot be made probabilistic, and we review
the fact that HRL are not composable. We then review a subfamily of HRL and
MSOGL: the regular graph languages (RGL; Courcelle 1991), which have not been
widely studied, particularly not in an NLP context. Although
Courcelle (1991) only sketches a proof, we present a full, more NLP-accessible proof
that RGL are a subfamily of MSOGL. We prove that RGL are probabilistic and composable,
and we provide a novel Earley-style parsing algorithm for them that runs in
time linear in the size of the input graph. We compare RGL to two other new formalisms:
the restricted DAG languages (RDL; Björklund et al. 2016) and the tree-like
languages (TLL; Matheja et al. 2015). We show that RGL and RDL are incomparable;
TLL and RDL are incomparable; and either RGL are incomparable to TLL, or RGL
are contained within TLL. This thesis provides a clearer picture of this field from an
NLP perspective, and suggests new theoretical and empirical research directions.