Improving Candidate Quality of Probabilistic Logic Models

Title:
Improving Candidate Quality of Probabilistic Logic Models
Contributors:
Joana Côrte-Real and Anton Dries and Inês Dutra and Ricardo Rocha
Publisher Information:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Publication Year:
2018
Collection:
DROPS - Dagstuhl Research Online Publication Server (Schloss Dagstuhl – Leibniz Center for Informatics)
Document Type:
Journal article; conference object
File Description:
application/pdf
Language:
English
Relation:
Is Part Of OASIcs, Volume 64, Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018); https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2018.6
DOI:
10.4230/OASIcs.ICLP.2018.6
Accession Number:
edsbas.B9B75CD2
Database:
BASE

Further Information

Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) extends Inductive Logic Programming (ILP) with probabilistic facts to produce meaningful and interpretable models of such phenomena. This combination of First Order Logic (FOL) theories and uncertainty makes PILP a well-suited tool for knowledge representation and extraction. However, this flexibility comes with a problem inherited from ILP: exponential growth of the search space, so that, under limited resources, often only a subset of all possible models is explored. Furthermore, the probabilistic evaluation of FOL theories, performed by the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which can reduce the search space based on the probabilistic evaluation of models, together with a safe pruning criterion, which guarantees that the optimal model is not pruned away, and two alternative, more aggressive criteria that do not provide this guarantee. Experiments on three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy across all criteria and experimental settings; (ii) reducing execution time under some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited-resource settings, again compared to using no pruning.
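To make the idea of prediction-based pruning with a safe criterion concrete, below is a minimal Python sketch in the style of a bound-based candidate search. It is an illustrative assumption, not the paper's actual algorithm or API: the names `Candidate`, `evaluate`, and `upper_bound` are hypothetical, and the safe criterion is modeled here as pruning only when an optimistic score bound cannot beat the incumbent model.

```python
# Hypothetical sketch of prediction-based pruning in a PILP candidate search.
# All identifiers (Candidate, evaluate, upper_bound) are illustrative
# assumptions; they do not reproduce the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Candidate:
    clauses: List[str]        # FOL clauses making up this candidate model
    score: float = 0.0        # result of the probabilistic evaluation


def search(candidates: List[Candidate],
           evaluate: Callable[[Candidate], float],
           upper_bound: Callable[[Candidate], float],
           aggressiveness: float = 1.0) -> Optional[Candidate]:
    """Candidate search with prediction-based pruning.

    `upper_bound(c)` must overestimate the best score reachable from
    candidate c. With aggressiveness == 1.0 the criterion is safe:
    a candidate is pruned only if its bound cannot beat the incumbent,
    so the optimal model is never discarded. Values below 1.0 shrink
    the bound, giving a more aggressive (unsafe) criterion that prunes
    more but may lose the optimum.
    """
    best: Optional[Candidate] = None
    for c in candidates:
        # Prune before the expensive probabilistic evaluation.
        if best is not None and aggressiveness * upper_bound(c) <= best.score:
            continue
        c.score = evaluate(c)  # costly call into the probabilistic solver
        if best is None or c.score > best.score:
            best = c
    return best
```

Under these assumptions, the saving comes from skipping the costly `evaluate` call for candidates whose predicted bound rules them out; the `aggressiveness` factor stands in for the paper's two more aggressive criteria that trade the optimality guarantee for shorter execution time.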