Graph Neural Networks for Knowledge Base Question Answering

23 Jan 2019

Daniil Sorokin (TU Darmstadt), South England Natural Language Processing Meetup. Slides are here.

Some extra thoughts by yours truly are at the bottom of this post, if for some reason you’re more interested in my hot takes on representation spaces than in my notes…

The Problem


The Approach

The Results

Postscript

Props to the speaker for presenting the type of work that invites everyone in the audience to ask some variation of “But did you try X?”, and for handling them all with humility and grace. My own version is the following: “But did you try sentence encoders besides CNNs?”

I’m curious not because we’re interested in picking the best sentence encoder ever and optimising the shit out of our question answering pipeline, but because I wonder whether fixing the question encoder architecture to be a CNN and the similarity metric to be cosine similarity actually biases the comparison rather than keeping it fair. A particular choice of encoder architecture induces a particular type of representation space for sentences, which could be more compatible with the spaces induced by some graph encoders than others (where that compatibility is mediated by the use of cosine similarity, in some precise sense that I’m too lazy to go into at the moment but may or may not be related to this).
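
To make that concrete, here’s a minimal sketch of the kind of setup I’m picturing: a fixed CNN question encoder whose output is scored against candidate semantic-graph encodings by cosine similarity. This is my own illustrative PyTorch code, not the speaker’s; every name in it is made up, and the graph encoder is deliberately left as a black box, since that’s the component being compared.

```python
# Sketch only: a CNN question encoder scored against candidate semantic-graph
# encodings with cosine similarity. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionCNN(nn.Module):
    """1-D convolution over word embeddings, max-pooled into a single vector."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=300, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size, padding=1)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))               # (batch, hidden_dim, seq_len)
        return x.max(dim=2).values                 # (batch, hidden_dim)

def score_candidates(question_vec, graph_vecs):
    """Cosine similarity between one question vector and each candidate
    semantic-graph vector; the argmax is the predicted graph."""
    q = F.normalize(question_vec, dim=-1)          # (hidden_dim,)
    g = F.normalize(graph_vecs, dim=-1)            # (n_candidates, hidden_dim)
    return g @ q                                   # (n_candidates,)
```

The point being: whatever graph encoder you drop in has to land its vectors somewhere that plays nicely with this particular CNN’s output space under cosine similarity, which is exactly the coupling I’m worried about.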

Ultimately, I guess there’s just an extent to which your fair head-to-head comparisons to determine which model is “best” always have to be interpreted in the context in which the experiment was set up, and my question is then really about how big that context is in this case. Are GNNs “the best” in the context of all of semantic parsing and natural language understanding? Only in the context of question answering pipelines that encode the question with CNNs, perform entity linking, generate a particular space of semantic graphs, encode those graphs with the model being evaluated, and then compare the encodings using cosine similarity? Somewhere in between, no doubt – but who knows where.