Abstract
Many question answering (QA) systems over RDF data are induced from question-query pairs by machine learning techniques. Such systems suffer from a lack of controllability, which makes governance and incremental improvement challenging, not to mention the initial effort of collecting and providing training data. As an alternative, we present a model-based QA approach that uses an ontology lexicon in lemon format and automatically generates a lexicalized grammar for interpreting and parsing questions into SPARQL queries. This approach gives the developer maximum control over the QA system, as every lexicon extension increases the coverage of the grammar, and thus of the QA system, in a predictable way. We describe our approach to generating grammars from lemon lexica and show how these grammars generate specific questions that we index to support fast QA performance in a prototype that answers questions over DBpedia.
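The core idea of the abstract, expanding lexicon entries into question patterns paired with SPARQL templates and indexing them for fast lookup, can be sketched as follows. This is a minimal illustration under stated assumptions: the lexicon entry, the template syntax, and the matching logic are all hypothetical and do not reflect the paper's actual grammar formalism or API.

```python
# Illustrative sketch only: a lemon-style entry verbalizing the
# DBpedia property dbo:birthPlace. The entry structure and the
# question templates are assumptions, not the paper's format.
lexicon = [
    {
        "canonical_form": "birth place",
        "property": "http://dbpedia.org/ontology/birthPlace",
        "question_templates": [
            "where was {X} born",
            "what is the birth place of {X}",
        ],
    }
]

def build_index(entries):
    """Expand each lexicon entry into question patterns mapped to
    SPARQL query templates; indexing them supports fast QA lookup."""
    index = {}
    for entry in entries:
        sparql = (
            'SELECT ?o WHERE { ?s <%s> ?o . '
            '?s rdfs:label "{X}"@en }' % entry["property"]
        )
        for template in entry["question_templates"]:
            index[template] = sparql
    return index

def answer_pattern(index, question):
    """Match a question against indexed patterns by locating the
    entity mention at the {X} placeholder (naive string matching)."""
    for template, sparql in index.items():
        prefix, _, suffix = template.partition("{X}")
        if question.startswith(prefix) and question.endswith(suffix):
            entity = question[len(prefix):len(question) - len(suffix)]
            return sparql.replace("{X}", entity.strip())
    return None

index = build_index(lexicon)
query = answer_pattern(index, "where was Ada Lovelace born")
# `query` is now a SPARQL string with "Ada Lovelace" substituted in.
```

Note how every new lexicon entry adds a predictable set of question patterns to the index, which is the controllability property the abstract emphasizes: coverage grows exactly with the lexicon, with no retraining involved.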