Franz Baader
Professor
Name: Prof. Dr.-Ing. Franz Baader
Send an encrypted e-mail via the SecureMail portal (only for persons external to TUD).
Visiting address:
Andreas-Pfitzmann-Bau, Room 3021, Nöthnitzer Straße 46
01187 Dresden
Information on Teaching
Research Topics
- Knowledge representation (in particular description logics and modal logics)
- Automated deduction (in particular term rewriting and unification theory)
Publications of the Research Group
- Publications, technical reports, supervised habilitations, and supervised dissertations
- A new textbook on description logic: Franz Baader, Ian Horrocks, Carsten Lutz, and Ulrike Sattler: An Introduction to Description Logic
- A widely cited textbook on term rewriting systems: Franz Baader, Tobias Nipkow: Term Rewriting and All That (now also available as a paperback!)
- A widely cited handbook on description logics: Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, Peter Patel-Schneider (eds.): The Description Logic Handbook
- Most of my own publications can be found below as well as on DBLP and Google Scholar.
- My h-index (Hirsch number) based on Google Scholar is 69 (as of November 2024).
- Slides of some invited talks and tutorials
Member of the Steering Committee of Conferences and Workshops
- CADE - Conference on Automated Deduction (2002-2009, President 2004-2009; 2010-2013, President 2011-2013)
- DL - Description Logics (2000-2002; 2008-2011)
- FroCoS - Frontiers of Combining Systems (since 1996, Chair)
- IJCAR - International Joint Conference on Automated Reasoning (2004-2009, Chair 2006-2009; 2011-2013, Chair; since 2015, Chair)
- KRDB - Knowledge Representation meets Databases (since 1994)
- LICS - IEEE Symposium on Logic in Computer Science (Organizing Committee, 2005-2009, 2011-2013)
- RTA - Rewriting Techniques and Applications (2001-2004; 2007-2010; Chair 2009-2010)
- STACS - Symposium on Theoretical Aspects of Computer Science (2004-2006)
- UNIF - International Workshop on Unification (2008-2010, Chair)
Member of the Editorial Board of Journals
- AICom - The European Journal on Artificial Intelligence (since 2000)
- Journal of Artificial Intelligence Research (2000-2003)
- Journal of Applied Logic (2002–2018)
- Journal of Applied Logics — IfCoLog Journal (since 2018)
- JANCL - Journal of Applied Non-Classical Logics (since 2003)
- LMCS - Logical Methods in Computer Science (since 2004)
- Artificial Intelligence (2006-2014)
- Journal of Automated Reasoning (since 2007)
- Logic Journal of the IGPL (since 2010)
Member of the Advisory Board of Book Series
- Cognitive Technologies (since 2002)
Further Memberships
- Speaker of the GI special interest group Logic in Computer Science (2000-2006)
- Subject expert of the GI section Artificial Intelligence (since 2001)
- International relations officer of the GI section Artificial Intelligence at ECCAI (since 2002)
- ECCAI Fellow (since 2004)
- Member of the DFG Senate and Grants Committee on Research Training Groups (1999-2004)
- Member of the DFG review board for computer science (2004-2012; speaker 2008-2012)
- Founding member of the IFIP Working Group 1.6 - Term Rewriting
- Member of the Academia Europaea (since 2011)
Further Links
- International Center of Computational Logic
- Wikipedia
- DBLP
- Google Scholar
- ResearchGate
- Deutsche Forschungsgemeinschaft (DFG)
- Mathematics Genealogy Project
- Academia Europaea
- Deutsche Nationalbibliothek
- Publications at DFKI
- Guide2Research
- Academic
Publications
2024
Abstract BibTeX Entry DOI
Concrete domains have been introduced in description logic (DL) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. The primary research goal in this context was to find restrictions on the concrete domain such that its integration into certain DLs preserves decidability or tractability. In this article, we complement these investigations by studying the abstract expressive power of both first-order and description logics extended with concrete domains, i.e., we analyze which classes of first-order interpretations can be expressed using these logics, compared to what their counterparts without concrete domains can express. We demonstrate that, under natural conditions on the concrete domain D (which also play a role for decidability), extensions of first-order logic (FOL) or the well-known DL ALC with D share important formal characteristics with FOL, such as the compactness and the Löwenheim-Skolem properties. Nevertheless, these conditions do not ensure that the abstract expressive power of the extensions we consider is contained in that of FOL, though in some cases it is. To be more precise, we show, on the one hand, that unary concrete domains leave the abstract expressive power within FOL if we are allowed to introduce auxiliary predicates. As a by-product, we obtain (semi-)decidability results for some fragments of FOL extended with the concrete domains considered in this article. On the other hand, we show that the ability to express equality between elements of D, another condition employed in the context of showing decidability of ALC(D), is sufficient to push the abstract expressive power of most first-order fragments with concrete domains beyond that of FOL. While for such concrete domains D decidability is retained for ALC(D), we show that the availability of equality in D causes undecidability of the two-variable fragment of FOL(D), although the two-variable fragment of FOL is decidable.
@article{ BaBo-ACR-24, address = {New York, NY, USA}, author = {Franz {Baader} and Filippo {De Bortoli}}, doi = {https://doi.org/10.1145/3699839.3699840}, journal = {SIGAPP Appl. Comput. Rev.}, month = {October}, number = {3}, pages = {5--17}, publisher = {Association for Computing Machinery}, title = {Logics with Concrete Domains: First-Order Properties, Abstract Expressive Power, and (Un)Decidability}, volume = {24}, year = {2024}, }
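As a purely illustrative sketch (the concept names, the feature age, and the predicate below are made up for this page and are not taken from the article above), a concrete domain over the rational numbers lets one constrain numerical feature values directly inside a concept definition:

```latex
% Illustrative only: an ALC(D) concept over a numerical concrete domain D.
% The concept names Person/Adult, the feature age, and the unary predicate >_18
% are hypothetical examples, not notation from the paper.
\[
  \mathsf{Adult} \;\equiv\; \mathsf{Person} \sqcap \exists \mathit{age}.{>_{18}}
\]
% Read: an Adult is a Person whose concrete age value satisfies the predicate "greater than 18".
```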
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in description logic (DL) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. The primary research goal in this context was to find restrictions on the concrete domain such that its integration into certain DLs preserves decidability or tractability. In this paper, we investigate the abstract expressive power of both first-order and description logics extended with concrete domains, i.e., we analyze which classes of first-order interpretations can be expressed using these logics, compared to what first-order logic without concrete domains can express. We demonstrate that, under natural conditions on the concrete domain D (which also play a role for decidability), extensions of first-order logic (FOL) or the well-known DL ALC with D share important formal characteristics with FOL, such as the compactness and the Löwenheim-Skolem properties. Nevertheless, their abstract expressive power need not be contained in that of FOL, though in some cases it is. To be more precise, we show, on the one hand, that unary concrete domains leave the abstract expressive power within FOL if we are allowed to introduce auxiliary predicates. On the other hand, we exhibit a class of concrete domains that push the abstract expressive power beyond that of FOL. As a by-product of these investigations, we obtain (semi-)decidability results for some of the logics with concrete domains considered in this paper.
@inproceedings{ BaBo-SAC-24, address = {New York, NY, USA}, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proceedings of the 39th {ACM/SIGAPP} Symposium on Applied Computing}, doi = {https://doi.org/10.1145/3605098.3635984}, pages = {754--761}, publisher = {{ACM}}, series = {SAC '24}, title = {{The Abstract Expressive Power of First-Order and Description Logics with Concrete Domains}}, year = {2024}, }
Abstract BibTeX Entry DOI
In applications of AI systems where exact definitions of the important notions of the application domain are hard to come by, the use of traditional logic-based knowledge representation languages such as Description Logics may lead to very large and unintuitive definitions, and high complexity of reasoning. To overcome this problem, we define new concept constructors that allow us to define concepts in an approximate way. To be more precise, we present a family τEL(m) of extensions of the lightweight Description Logic EL that use threshold constructors for this purpose. To define the semantics of these constructors we employ graded membership functions m, which for each individual in an interpretation and concept yield a number in the interval [0,1] expressing the degree to which the individual belongs to the concept in the interpretation. Threshold concepts C⋈t for ⋈∈<,≤,>,≥ then collect all the individuals that belong to C with degree ⋈t. The logic τEL(m) extends EL with threshold concepts whose semantics is defined relative to a function m. To construct appropriate graded membership functions, we show how concept measures ∼ (which are graded generalizations of subsumption or equivalence between concepts) can be used to define graded membership functions m∼. Then we introduce a large class of concept measures, called simi-d, for which the logics τEL(m∼) have good algorithmic properties. Basically, we show that reasoning in τEL(m∼) is NP/coNP-complete without TBox, PSpace-complete w.r.t. acyclic TBoxes, and ExpTime-complete w.r.t. general TBoxes. The exception is the instance problem, which is already PSpace-complete without TBox w.r.t. combined complexity. While the upper bounds hold for all elements of simi-d, we could prove some of the hardness results only for a subclass of simi-d. This article considerably improves on and generalizes results we have shown in three previous conference papers and it provides detailed proofs of all our results.
@article{ BaFe-AIJ-24, author = {Franz {Baader} and Oliver {Fern{\'{a}}ndez Gil}}, doi = {https://doi.org/10.1016/j.artint.2023.104034}, journal = {Artificial Intelligence}, pages = {104034}, title = {Extending the description logic EL with threshold concepts induced by concept measures}, volume = {326}, year = {2024}, }
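The threshold semantics described in the abstract can be written out as follows; this is an illustrative reconstruction of the informal description above, not a verbatim definition from the article:

```latex
% Semantics of a threshold concept C_{\bowtie t} in \tau\mathcal{EL}(m), reconstructed
% from the abstract: m^I(d,C) is the degree in [0,1] to which d belongs to C in I.
\[
  (C_{\bowtie t})^{\mathcal{I}} \;=\; \{\, d \in \Delta^{\mathcal{I}} \mid m^{\mathcal{I}}(d, C) \bowtie t \,\}
  \qquad \text{for } \bowtie \in \{<, \leq, >, \geq\} \text{ and } t \in [0,1].
\]
% Example: d belongs to C_{\geq 0.8} iff it belongs to C with degree at least 0.8.
```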
Abstract BibTeX Entry DOI
Unification has been introduced in Description Logic (DL) as a means to detect redundancies in ontologies. In particular, it was shown that testing unifiability in the DL EL is an NP-complete problem, and this result has been extended in several directions. Surprisingly, it turned out that the complexity increases to PSpace if one disallows the use of the top concept in concept descriptions. Motivated by features of the medical ontology SNOMED CT, we extend this result to a setting where the top concept is disallowed, but there is a background ontology consisting of restricted forms of concept and role inclusion axioms. We are able to show that the presence of such axioms does not increase the complexity of unification without top, i.e., testing for unifiability remains a PSpace-complete problem.
@inproceedings{ BaFe-IJCAR-24, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {Automated Reasoning - 12th International Joint Conference, {IJCAR} 2024, Nancy, France, July 1-6, 2024, Proceedings, Part {II}}, doi = {https://doi.org/10.1007/978-3-031-63501-4_15}, editor = {Christoph {Benzm{\"u}ller} and Marijn J.H. {Heule} and Renate A. {Schmidt}}, pages = {279--297}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Unification in the Description Logic {$\mathcal{ELH}_{\mathcal{R}^+}$} without the Top Concept Modulo Cycle-Restricted Ontologies}, volume = {14740}, year = {2024}, }
Abstract BibTeX Entry PDF File
Unification has been introduced in Description Logic (DL) as a means to detect redundancies in ontologies. In particular, it was shown that testing unifiability in the DL EL is an NP-complete problem, and this result has been extended in several directions. Surprisingly, it turned out that the complexity increases to PSpace if one disallows the use of the top concept in concept descriptions. Motivated by features of the medical ontology SNOMED CT, we extend this result to a setting where the top concept is disallowed, but there is a background ontology consisting of restricted forms of concept and role inclusion axioms. We are able to show that the presence of such axioms does not increase the complexity of unification without top, i.e., testing for unifiability remains a PSpace-complete problem.
@inproceedings{ BaFe-DL-24, address = {Bergen, Norway}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {Proceedings of the 37th International Workshop on Description Logics (DL'24)}, editor = {Jean Christoph {Jung} and Laura {Giordano} and Ana {Ozaki}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Unification in {$\mathcal{ELH}_{\mathcal{R}^+}$} without the Top Concept modulo Cycle-Restricted Ontologies (Extended Abstract)}, year = {2024}, }
Abstract BibTeX Entry PDF File DOI
Motivated by an application where we try to make proofs for Description Logic inferences smaller by rewriting, we consider the following decision problem, which we call the small term reachability problem: given a term rewriting system R, a term s, and a natural number n, decide whether there is a term t of size ≤ n reachable from s using the rules of R. We investigate the complexity of this problem depending on how termination of R can be established. We show that the problem is NP-complete for length-reducing term rewriting systems. Its complexity increases to N2ExpTime-complete (NExpTime-complete) if termination is proved using a (linear) polynomial order and to PSpace-complete for systems whose termination can be shown using a restricted class of Knuth-Bendix orders. Confluence reduces the complexity to P for the length-reducing case, but has no effect on the worst-case complexity in the other two cases.
@inproceedings{ BaaderGieslFSCD24, address = {Dagstuhl, Germany}, author = {Franz {Baader} and J\"{u}rgen {Giesl}}, booktitle = {9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)}, doi = {https://doi.org/10.4230/LIPIcs.FSCD.2024.16}, editor = {Jakob {Rehof}}, pages = {16:1--16:18}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, title = {{On the Complexity of the Small Term Reachability Problem for Terminating Term Rewriting Systems}}, volume = {299}, year = {2024}, }
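The small term reachability problem from the abstract can be pictured with a naive breadth-first search. The sketch below is only illustrative and makes simplifying assumptions not made in the paper: the rules are ground (variable-free) and length-reducing, so the set of reachable terms is finite and the search terminates; the paper treats general terminating systems and pins down the exact complexity.

```python
# Illustrative sketch (not the paper's algorithm): small term reachability for a
# *ground*, length-reducing term rewriting system, decided by breadth-first search.
# Terms are nested tuples: ("f", ("a",), ("g", ("b",))) stands for f(a, g(b)).

from collections import deque

def size(term):
    """Number of symbol occurrences in a term."""
    return 1 + sum(size(arg) for arg in term[1:])

def rewrite_once(term, rules):
    """Yield all terms obtained by applying one ground rule at one position."""
    for lhs, rhs in rules:
        if term == lhs:
            yield rhs
    for i, arg in enumerate(term[1:], start=1):
        for new_arg in rewrite_once(arg, rules):
            yield term[:i] + (new_arg,) + term[i + 1:]

def small_term_reachable(s, rules, n):
    """Is some term of size <= n reachable from s?  Termination relies on the
    rules being length-reducing, which keeps the reachable set finite."""
    seen, queue = {s}, deque([s])
    while queue:
        t = queue.popleft()
        if size(t) <= n:
            return True
        for u in rewrite_once(t, rules):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return False

# Example: the rule f(a, a) -> a rewrites g(f(a, a)) to g(a), which has size 2.
rules = [(("f", ("a",), ("a",)), ("a",))]
print(small_term_reachable(("g", ("f", ("a",), ("a",))), rules, 2))  # True
```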
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
Errors in knowledge bases (KBs) written in a Description Logic (DL) are usually detected when reasoning derives an inconsistency or a consequence that does not hold in the application domain modelled by the KB. Whereas classical repair approaches produce maximal subsets of the KB not implying the inconsistency or unwanted consequence, optimal repairs maximize the consequence sets. In this paper, we extend previous results on how to compute optimal repairs from the DL \(\mathcal{EL}\) to its extension \(\mathcal{EL}^{\bot}\), which in contrast to \(\mathcal{EL}\) can express inconsistency. The problem of how to deal with inconsistency in the context of optimal repairs was addressed previously, but in a setting where the (fixed) terminological part of the KB must satisfy a restriction on cyclic dependencies. Here, we consider a setting where this restriction is not required. We also show how the notion of optimal repairs obtained this way can be used in inconsistency- and error-tolerant reasoning.
@inproceedings{ BaKrNu-FoIKS2024, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 13th International Symposium on Foundations of Information and Knowledge Systems {(FoIKS 2024)}, April 8--11, 2024, Sheffield, United Kingdom}, doi = {https://doi.org/10.1007/978-3-031-56940-1_1}, pages = {3--22}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Inconsistency- and Error-Tolerant Reasoning w.r.t.\ Optimal Repairs of $\mathcal{EL}^{\bot}$ Ontologies}, volume = {14589}, year = {2024}, }
Abstract BibTeX Entry PDF File DOI ©IJCAI, http://www.ijcai.org Extended Version
Removing unwanted consequences from a knowledge base has been investigated in belief change under the name contraction and is called repair in ontology engineering. Simple repair and contraction approaches based on removing statements from the knowledge base (respectively called belief base contractions and classical repairs) have the disadvantage that they are syntax-dependent and may remove more consequences than necessary. Belief set contractions do not have these problems, but may result in belief sets that have no finite representation if one works with logics that are not fragments of propositional logic. Similarly, optimal repairs, which are syntax-independent and maximize the retained consequences, may not exist. In this paper, we want to leverage advances in characterizing and computing optimal repairs of ontologies based on the description logics \(\mathcal{EL}\) to obtain contraction operations that combine the advantages of belief set and belief base contractions. The basic idea is to employ, in the partial meet contraction approach, optimal repairs instead of optimal classical repairs as remainders. We introduce this new approach in a very general setting, and prove a characterization theorem that relates the obtained contractions with well-known postulates. Then, we consider several interesting instances, not only in the standard repair/contraction setting where one wants to get rid of a consequence, but also in other settings such as variants of forgetting in propositional and description logic. We also show that classical belief set contraction is an instance of our approach.
@inproceedings{ BaWa-KR-24, author = {Franz {Baader} and Renata {Wassermann}}, booktitle = {Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024)}, doi = {https://doi.org/10.24963/kr.2024/9}, pages = {94--105}, title = {Contractions Based on Optimal Repairs}, year = {2024}, }
2023
Abstract BibTeX Entry PDF File Extended Version
Logic-based approaches to AI have the advantage that their behavior can in principle be explained with the help of proofs of the computed consequences. For ontologies based on Description Logic (DL), we have put this advantage into practice by showing how proofs for consequences derived by DL reasoners can be computed and displayed in a user-friendly way. However, these methods are insufficient in applications where also numerical reasoning is relevant. The present paper considers proofs for DLs extended with concrete domains (CDs) based on the rational numbers, which leave reasoning tractable if integrated into the lightweight DL EL⊥. Since no implemented DL reasoner supports these CDs, we first develop reasoning procedures for them, and show how they can be combined with reasoning approaches for pure DLs, both for EL⊥ and the more expressive DL ALC. These procedures are designed such that it is easy to extract proofs from them. We show how the extracted CD proofs can be combined with proofs on the DL side into integrated proofs that explain both the DL and the CD reasoning.
@inproceedings{ AlBaBoKoKo-RuleML23, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Rules and Reasoning - 7th International Joint Conference, RuleML+RR 2023, Oslo, Norway, September 18-20}, title = {Combining Proofs for Description Logic and Concrete Domain Reasoning}, year = {2023}, }
Abstract BibTeX Entry PDF File
Reasoning in Description Logics (DLs) with numerical concrete domains combines abstract logical with concrete numerical reasoning. We show how consequences computed by such combined reasoning engines can be explained in a uniform way by proofs that integrate the numerical reasoning steps into the proofs on the DL side.
@inproceedings{ AlBaBoKoKo-XLoKR22, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Informal Proceedings of the Explainable Logic-Based Knowledge Representation ({XLoKR} 2023) workshop co-located with the 20th International Conference on Principles of Knowledge Representation and Reasoning ({KR} 2023), Rhodes, Greece, September 2nd}, title = {Combining Proofs for Description Logic and Concrete Domain Reasoning (Extended Abstract)}, year = {2023}, }
Abstract BibTeX Entry PDF File DOI ©ACM
The question of how a given knowledge base can be modified such that certain unwanted consequences are removed has been investigated in the area of knowledge engineering under the name of repair and in the area of belief change under the name of contraction. Whereas in the former area the emphasis was more on designing and implementing concrete repair algorithms, the latter area concentrated on characterizing classes of contraction operations by certain postulates they satisfy. In the classical setting, repairs and contractions are subsets of the knowledge base that no longer have the unwanted consequence. This makes these approaches syntax-dependent and may result in removal of more consequences than necessary. To alleviate this problem, gentle repairs and pseudo-contractions have been introduced in the respective research areas, and their connections have been investigated in recent work. Optimal repairs preserve a maximal amount of consequences, but they may not always exist. We show that, if they exist, then they can be obtained by certain pseudo-contraction operations, and thus they comply with the postulates that these operations satisfy. Conversely, under certain conditions, pseudo-contractions are guaranteed to produce optimal repairs.
@inproceedings{ Baader-SAC2023, author = {Franz {Baader}}, booktitle = {Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing (SAC '23), March 27--31, 2023, Tallinn, Estonia}, doi = {https://doi.org/10.1145/3555776.3577719}, pages = {983--990}, publisher = {Association for Computing Machinery}, title = {Optimal Repairs in Ontology Engineering as Pseudo-Contractions in Belief Change}, year = {2023}, }
BibTeX Entry PDF File (ceur-ws.org) Full Conference Paper
@inproceedings{ Baader-DL2023, author = {Franz {Baader}}, booktitle = {Proceedings of the 36th International Workshop on Description Logics {(DL} 2023), Rhodes, Greece, September 2--4, 2023}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Optimal Repairs in Ontology Engineering as Pseudo-Contractions in Belief Change (Extended Abstract)}, volume = {3515}, year = {2023}, }
Abstract BibTeX Entry PDF File DOI
The question of how a given knowledge base can be modified such that certain unwanted consequences are removed has been investigated in the area of ontology engineering under the name of repair and in the area of belief change under the name of contraction. Whereas in the former area the emphasis was more on designing and implementing concrete repair algorithms, the latter area concentrated on characterizing classes of contraction operations by certain postulates they satisfy. In the classical setting, repairs and contractions are subsets of the knowledge base that no longer have the unwanted consequence. This makes these approaches syntax-dependent and may result in removal of more consequences than necessary. To alleviate this problem, gentle repairs and pseudo-contractions have been introduced in the respective research areas, and their connections have been investigated in recent work. Optimal repairs preserve a maximal amount of consequences, but they may not always exist. We show that, if they exist, then they can be obtained by certain pseudo-contraction operations, and thus they comply with the postulates that these operations satisfy. Conversely, under certain conditions, pseudo-contractions are guaranteed to produce optimal repairs. Recently, contraction operations have also been defined for concepts rather than for whole knowledge bases. We show that there is again a close connection between such operations and optimal repairs of a restricted form of knowledge bases.
@article{ BaaderACR23, author = {Franz {Baader}}, doi = {https://doi.org/10.1145/3626307.3626308}, journal = {ACM SIGAPP Applied Computing Review}, number = {3}, pages = {5--18}, publisher = {ACM}, title = {Relating Optimal Repairs in Ontology Engineering with Contraction Operations in Belief Change}, volume = {23}, year = {2023}, }
Abstract BibTeX Entry PDF File Online version
Concrete domains have been introduced in Description Logic (DL) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. The primary research goal in this context was to find restrictions on the concrete domain such that its integration into certain DLs preserves decidability or tractability. In this paper, we investigate the abstract expressive power of logics extended with concrete domains, namely which classes of first-order interpretations can be expressed using these logics. In the first part of the paper, we show that, under natural conditions on the concrete domain \(\mathfrak{D}\) (which also play a role for decidability), extensions of first-order logic (\(\texttt{FOL}\)) or \(\mathcal{ALC}\) with \(\mathfrak{D}\) share important formal properties with \(\texttt{FOL}\), such as the compactness and the Löwenheim-Skolem property. Nevertheless, their abstract expressive power need not be contained in that of \(\texttt{FOL}\). In the second part of the paper, we investigate whether finitely bounded homogeneous structures, which preserve decidability if employed as concrete domains, can be used to express certain universal first-order sentences, which then could be added to DL knowledge bases without destroying decidability. We show that this requires rather strong conditions on said sentences or an extended scheme for integrating the concrete domain that leads to undecidability.
@inproceedings{ BaBo-DL-23, address = {Rhodes, Greece}, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proceedings of the 36th International Workshop on Description Logics (DL'23)}, editor = {Oliver {Kutz} and Ana {Ozaki}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {{On the Abstract Expressive Power of Description Logics with Concrete Domains}}, volume = {3515}, year = {2023}, }
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version Erratum
Ontologies based on Description Logics may contain errors, which are usually detected when reasoning produces consequences that follow from the ontology, but do not hold in the modelled application domain. In previous work, we have introduced repair approaches for \(\mathcal{EL}\) ontologies that are optimal in the sense that they preserve a maximal amount of consequences. In this paper, we will, on the one hand, review these approaches, but with an emphasis on motivation rather than on technical details. On the other hand, we will describe new results that address the problems that optimal repairs may become very large or need not even exist unless strong restrictions on the terminological part of the ontology apply. We will show how one can deal with these problems by introducing concise representations of optimal repairs.
@inproceedings{ BaKoKr-JELIA2023, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel}}, booktitle = {Proceedings of the 18th European Conference on Logics in Artificial Intelligence {(JELIA 2023)}, September 20--22, 2023, Dresden, Germany}, doi = {https://doi.org/10.1007/978-3-031-43619-2_2}, pages = {11--34}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Optimal Repairs in the Description Logic $\mathcal{EL}$ Revisited}, volume = {14281}, year = {2023}, }
BibTeX Entry PDF File PDF File (ceur-ws.org) Full Conference Paper 1 Full Conference Paper 2
@inproceedings{ BaKrNu-DL2023, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 36th International Workshop on Description Logics ({DL} 2023), Rhodes, Greece, September 2--4, 2023}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Error-Tolerant Reasoning in $\mathcal{EL}$ w.r.t.\ Optimal {ABox} Repairs (Extended Abstract)}, volume = {3515}, year = {2023}, }
Abstract BibTeX Entry PDF File DOI ©ACM
Errors in Description Logic (DL) ontologies are often detected when reasoning yields unintuitive consequences. The question is then how to repair the ontology in an optimal way, i.e., such that the unwanted consequences are removed, but a maximal set of the unobjected consequences is kept. Error-tolerant reasoning does not commit to a single optimal repair: brave reasoning asks whether the consequence is entailed by some repair and cautious reasoning whether it is entailed by all repairs. Previous research on repairing ABoxes w.r.t. TBoxes formulated in the DL \(\mathcal{EL}\) has developed methods for computing optimal repairs, and has recently also determined the complexity of error-tolerant reasoning: brave reasoning is in P and cautious reasoning is in coNP. However, in this work the unwanted consequences were restricted to being \(\mathcal{EL}\) instance assertions. In the present paper, we show that the mentioned results can be extended to a setting where also role assertions can be required to be removed. Our solution is based on a two-stage approach where first the unwanted role assertions and then the unwanted concept assertions are removed. We also investigate the complexity of error-tolerant reasoning w.r.t. classical repairs, which are maximal subsets of the ABox that do not have the unwanted consequences, and show that, in this setting, brave reasoning is NP-complete and cautious reasoning is coNP-complete.
@inproceedings{ BaKrNu-SAC2023, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing (SAC '23), March 27--31, 2023, Tallinn, Estonia}, doi = {https://doi.org/10.1145/3555776.3577630}, pages = {974--982}, publisher = {Association for Computing Machinery}, title = {Treating Role Assertions as First-class Citizens in Repair and Error-tolerant Reasoning}, year = {2023}, }
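A minimal made-up example (not taken from the paper) may help to picture the difference between brave and cautious reasoning w.r.t. classical repairs:

```latex
% Hypothetical toy example: an EL TBox, an ABox, and an unwanted consequence C(a).
\[
  \mathcal{T} = \{\, A \sqcap B \sqsubseteq C \,\}, \qquad
  \mathcal{A} = \{\, A(a),\, B(a) \,\}, \qquad
  \text{unwanted consequence: } C(a).
\]
% The classical repairs are {A(a)} and {B(a)}.  Hence A(a) is bravely entailed
% (it follows from the repair {A(a)}) but not cautiously entailed
% (it does not follow from the repair {B(a)}).
```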
Abstract BibTeX Entry DOI
OWL is a powerful language to formalize terminologies in an ontology. Its main strength lies in its foundation on description logics, allowing systems to automatically deduce implicit information through logical reasoning. However, since ontologies are often complex, understanding the outcome of the reasoning process is not always straightforward. Unlike already existing tools for exploring ontologies, our visualization tool Evonne is tailored towards explaining logical consequences. In addition, it supports the debugging of unwanted consequences and allows for an interactive comparison of the impact of removing statements from the ontology. Our visual approach combines (1) specialized views for the explanation of logical consequences and the structure of the ontology, (2) employing multiple layout modes for iteratively exploring explanations, (3) detailed explanations of specific reasoning steps, (4) cross-view highlighting and colour coding of the visualization components, (5) features for dealing with visual complexity and (6) comparison and exploration of possible fixes to the ontology. We evaluated Evonne in a qualitative study with 16 experts in logics, and their positive feedback confirms the value of our concepts for explaining reasoning and debugging ontologies.
@article{ MEALKOLABADA-CGF23, author = {Juli{\'a}n {M{\'e}ndez} and Christian {Alrabbaa} and Patrick {Koopmann} and Ricardo {Langner} and Franz {Baader} and Raimund {Dachselt}}, doi = {https://doi.org/10.1111/cgf.14730}, journal = {Computer Graphics Forum}, number = {6}, pages = {e14730}, title = {Evonne: A Visual Tool for Explaining Reasoning with {OWL} Ontologies and Supporting Interactive Debugging}, volume = {42}, year = {2023}, }
2022
Abstract BibTeX Entry PDF File Extended Version
Explanations for description logic (DL) entailments provide important support for the maintenance of large ontologies. The ``justifications'' usually employed for this purpose in ontology editors pinpoint the parts of the ontology responsible for a given entailment. Proofs for entailments make the intermediate reasoning steps explicit, and thus explain how a consequence can actually be derived. We present an interactive system for exploring description logic proofs, called Evonne, which visualizes proofs of consequences for ontologies written in expressive DLs. We describe the methods used for computing those proofs, together with a feature called signature-based proof condensation. Moreover, we evaluate the quality of generated proofs using real ontologies.
@inproceedings{ ALBABODAKOME-IJCAR22, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Raimund {Dachselt} and Patrick {Koopmann} and Juli\'an {M\'endez}}, booktitle = {Automated Reasoning - 11th International Joint Conference, {IJCAR} 2022}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {\textsc{Evonne}: Interactive Proof Visualization for Description Logics (System Description)}, year = {2022}, }
Abstract BibTeX Entry PDF File
In a previous paper, we have investigated restricted unification in the Description Logic (DL) \(\mathcal{FL}_0\). Here we extend this investigation to the DL \(\mathcal{FL}_\bot\), which is obtained from \(\mathcal{FL}_0\) by adding the bottom concept. We show that restricted unification in \(\mathcal{FL}_\bot\) is decidable and provide some upper and lower bounds for the complexity. This is particularly interesting since the decidability status of unrestricted unification in \(\mathcal{FL}_\bot\) appears to be still open. We also show an ExpTime lower bound for the unrestricted problem.
@inproceedings{ BaFe-UNIF-22, address = {Haifa, Israel}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {Proceedings of the 36th International Workshop on Unification ({UNIF 2022})}, editor = {David {M. Cerna} and Barbara {Morawska}}, title = {Restricted Unification in the Description Logic {$\mathcal{FL}_{\bot}$}}, year = {2022}, }
Abstract BibTeX Entry DOI
The word problem for a finite set of ground identities is known to be decidable in polynomial time using congruence closure, and this is also the case if some of the function symbols are assumed to be commutative or defined by certain shallow identities, called strongly shallow. We show that decidability in P is preserved if we add the assumption that certain function symbols f are extensional in the sense that f(s1,...,sn) ≈ f(t1,...,tn) implies s1 ≈ t1, ...,sn ≈ tn. In addition, we investigate a variant of extensionality that is more appropriate for commutative function symbols, but which raises the complexity of the word problem to coNP.
@article{ BaaderKapurJAR22, address = {Cham}, author = {Franz {Baader} and Deepak {Kapur}}, doi = {https://doi.org/10.1007/s10817-022-09624-4}, journal = {Journal of Automated Reasoning}, number = {3}, pages = {301--329}, publisher = {Springer International Publishing}, title = {Deciding the Word Problem for Ground and Strongly Shallow Identities w.r.t.\ Extensional Symbols}, volume = {66}, year = {2022}, }
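The effect of the extensionality condition in the abstract can be illustrated with a naive fixed-point computation of a congruence closure over ground terms. This is only a sketch under simplifying assumptions (ground identities, terms as nested Python tuples, no attention to efficiency); it is not the procedure analyzed in the article:

```python
# Illustrative sketch (not the paper's algorithm): naive congruence closure over
# ground terms, extended with the extensionality rule from the abstract:
#   f(s1,...,sn) ~ f(t1,...,tn)  implies  s1 ~ t1, ..., sn ~ tn   (for extensional f).
# Terms are nested tuples, e.g. ("f", ("a",)) stands for f(a).

def subterms(t, acc=None):
    """Collect all subterms of t (including t itself)."""
    acc = set() if acc is None else acc
    acc.add(t)
    for arg in t[1:]:
        subterms(arg, acc)
    return acc

def word_problem(identities, goal, extensional=frozenset()):
    """Decide whether goal = (s, t) follows from the ground identities,
    treating the function symbols in `extensional` as extensional."""
    terms = set()
    for s, t in list(identities) + [goal]:
        subterms(s, terms)
        subterms(t, terms)
    parent = {u: u for u in terms}  # union-find over all occurring subterms

    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            return True
        return False

    for s, t in identities:
        union(s, t)

    changed = True
    while changed:  # fixed-point iteration over all pairs of subterms
        changed = False
        for s in terms:
            for t in terms:
                if s[0] != t[0] or len(s) != len(t):
                    continue
                # congruence: equivalent arguments make the terms equivalent
                if all(find(a) == find(b) for a, b in zip(s[1:], t[1:])) and union(s, t):
                    changed = True
                # extensionality: equivalent terms make the arguments equivalent
                if s[0] in extensional and find(s) == find(t):
                    for a, b in zip(s[1:], t[1:]):
                        if union(a, b):
                            changed = True
    return find(goal[0]) == find(goal[1])

# With f extensional, f(a) ~ f(b) entails a ~ b:
a, b = ("a",), ("b",)
print(word_problem([(("f", a), ("f", b))], (a, b), extensional={"f"}))  # True
```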
Abstract BibTeX Entry PDF File DOI ©Springer Extended Abstract Extended Version
Errors in Description Logic (DL) ontologies are often detected when a reasoner computes unwanted consequences. The question is then how to repair the ontology such that the unwanted consequences no longer follow, but as many of the other consequences as possible are preserved. The problem of computing such optimal repairs was addressed in our previous work in the setting where the data (expressed by an ABox) may contain errors, but the schema (expressed by an \(\mathcal{EL}\) TBox) is assumed to be correct. Actually, we consider a generalization of ABoxes called quantified ABoxes (qABoxes) both as input for and as result of the repair process. Using qABoxes for repair allows us to retain more information, but the disadvantage is that standard DL systems do not accept qABoxes as input. This raises the question, investigated in the present paper, whether and how one can obtain optimal repairs if one restricts the output of the repair process to being ABoxes. In general, such optimal ABox repairs need not exist. Our main contribution is that we show how to decide the existence of optimal ABox repairs in exponential time, and how to compute all such repairs in case they exist.
@inproceedings{ BaKoKrNu-ESWC2022, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {19th Extended Semantic Web Conference, {ESWC} 2022, Hersonissos, Greece, May 29 -- June 2, 2022, Proceedings}, doi = {https://doi.org/10.1007/978-3-031-06981-9_8}, pages = {130--146}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Optimal ABox Repair w.r.t.\ Static {$\mathcal{EL}$} TBoxes: from Quantified ABoxes back to ABoxes}, volume = {13261}, year = {2022}, }
BibTeX Entry PDF File PDF File (ceur-ws.org) Full Conference Paper
@inproceedings{ BaKoKrNu-DL2022, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 35th International Workshop on Description Logics ({DL} 2022), Haifa, Israel, August 7--10, 2022}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Optimal {ABox} Repair w.r.t. Static $\mathcal{EL}$ {TBoxes:} from Quantified {ABoxes} back to {ABoxes} (Extended Abstract)}, year = {2022}, }
BibTeX Entry Online Version
@inproceedings{ BaKoMiTuZa-DL-2022, author = {Franz {Baader} and Patrick {Koopmann} and Friedrich {Michel} and Anni{-}Yasmin {Turhan} and Benjamin {Zarrie{\ss}}}, booktitle = {Proceedings of the 35th International Workshop on Description Logics {(DL} 2022) co-located with Federated Logic Conference (FLoC 2022), Haifa, Israel, August 7th to 10th, 2022}, editor = {Ofer {Arieli} and Martin {Homola} and Jean Christoph {Jung} and Marie{-}Laure {Mugnier}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Efficient TBox Reasoning with Value Restrictions Using the FL0wer Reasoner (Extended Abstract)}, volume = {3263}, year = {2022}, }
Abstract BibTeX Entry DOI
The inexpressive Description Logic (DL) \(\mathcal{FL}_0\), which has conjunction and value restriction as its only concept constructors, had fallen into disrepute when it turned out that reasoning in \(\mathcal{FL}_0\) w.r.t. general TBoxes is ExpTime-complete, i.e., as hard as in the considerably more expressive logic \(\mathcal{ALC}\). In this paper, we rehabilitate \(\mathcal{FL}_0\) by presenting a dedicated subsumption algorithm for \(\mathcal{FL}_0\), which is much simpler than the tableau-based algorithms employed by highly optimized DL reasoners. Our experiments show that the performance of our novel algorithm, as prototypically implemented in our \(\mathcal{FL}_{o}\textit{wer}\) reasoner, compares very well with that of the highly optimized reasoners. \(\mathcal{FL}_{o}\textit{wer}\) can also deal with ontologies written in the extension \(\mathcal{FL}_\bot\) of \(\mathcal{FL}_0\) with the top and the bottom concept by employing a polynomial-time reduction, shown in this paper, which eliminates top and bottom. We also investigate the complexity of reasoning in DLs related to the Horn-fragments of \(\mathcal{FL}_0\) and \(\mathcal{FL}_\bot\).
@article{ BaKoMiTuZa-TPLP-22, author = {Franz {Baader} and Patrick {Koopmann} and Friedrich {Michel} and Anni-Yasmin {Turhan} and Benjamin {Zarrie{\ss}}}, doi = {https://doi.org/10.1017/S1471068421000466}, journal = {Theory and Practice of Logic Programming}, number = {2}, pages = {162--192}, publisher = {Cambridge University Press}, title = {Efficient {TBox} Reasoning with Value Restrictions using the $\mathcal{FL}_{o}\textit{wer}$ reasoner}, volume = {22}, year = {2022}, }
Abstract BibTeX Entry PDF File DOI ©IJCAI Extended Version Addendum
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an \(\mathcal{EL}\) instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static \(\mathcal{EL}\) TBox.
Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to \(\mathcal{EL}\), which yields a fragment of the well-known DL Horn-\(\mathcal{SROIQ}\). The ideas underlying our repair approach still apply to this DL, though several non-trivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-\(\mathcal{ALCOI}\) TBoxes with regular role inclusions.
@inproceedings{ BaKr-KR2022, author = {Franz {Baader} and Francesco {Kriegel}}, booktitle = {Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning, {KR} 2022, Haifa, Israel, July 31 -- August 5, 2022}, doi = {https://doi.org/10.24963/kr.2022/3}, editor = {Gabriele {Kern{-}Isberner} and Gerhard {Lakemeyer} and Thomas {Meyer}}, pages = {22--32}, title = {Pushing Optimal ABox Repair from {$\mathcal{EL}$} Towards More Expressive Horn-DLs}, year = {2022}, }
Abstract BibTeX Entry PDF File DOI ©Springer
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. Both human-made and machine-learned data sets may contain errors, which are usually detected when the DL reasoner returns unintuitive or obviously incorrect answers to queries. To eliminate such errors, classical repair approaches offer as repairs maximal subsets of the ABox not having the unwanted answers w.r.t. the TBox. It is, however, not always clear which of these classical repairs to use as the new, corrected data set. Error-tolerant semantics instead takes all repairs into account: cautious reasoning returns the answers that follow from all classical repairs whereas brave reasoning returns the answers that follow from some classical repair. It is inspired by inconsistency-tolerant reasoning and has been investigated for the DL \(\mathcal{EL}\), but in a setting where the TBox rather than the ABox is repaired. In a series of papers, we have developed a repair approach for ABoxes that improves on classical repairs in that it preserves a maximal set of consequences (i.e., answers to queries) rather than a maximal set of ABox assertions. The repairs obtained by this approach are called optimal repairs. In the present paper, we investigate error-tolerant reasoning in the DL \(\mathcal{EL}\), but we repair the ABox and use optimal repairs rather than classical repairs as the underlying set of repairs. To be more precise, we consider a static \(\mathcal{EL}\) TBox (which is assumed to be correct), represent the data by a quantified ABox (where some individuals may be anonymous), and use \(\mathcal{EL}\) concepts as queries (instance queries). We show that brave entailment of instance queries can be decided in polynomial time. Cautious entailment can be decided by a coNP procedure, but is still in P if the TBox is empty.
@inproceedings{ BaKrNu-RuleML2022, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Rules and Reasoning - 6th International Joint Conference, RuleML+RR 2022, Virtual, September 26-28, 2022, Proceedings}, doi = {https://doi.org/10.1007/978-3-031-21541-4_15}, editor = {Guido {Governatori} and Anni{-}Yasmin {Turhan}}, pages = {227--243}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Error-Tolerant Reasoning in the Description Logic {$\mathcal{EL}$} Based On Optimal Repairs}, volume = {13752}, year = {2022}, }
Abstract BibTeX Entry DOI
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. To regain decidability of the DL ALC in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, were imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. We investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs
@article{ BaaderRydvalJAR22, address = {Cham}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.1007/s10817-022-09626-2}, journal = {Journal of Automated Reasoning}, number = {3}, pages = {357--407}, publisher = {Springer International Publishing}, title = {Using Model Theory to Find Decidable and Tractable Description Logics with Concrete Domains}, volume = {66}, year = {2022}, }
2021
Abstract BibTeX Entry PDF File DOI Extended Technical Report
Logic-based approaches to AI have the advantage that their behavior can in principle be explained to a user. If, for instance, a Description Logic reasoner derives a consequence that triggers some action of the overall system, then one can explain such an entailment by presenting a proof of the consequence in an appropriate calculus. How comprehensible such a proof is depends not only on the employed calculus, but also on the properties of the particular proof, such as its overall size, its depth, the complexity of the employed sentences and proof steps, etc. For this reason, we want to determine the complexity of generating proofs that are below a certain threshold w.r.t. a given measure of proof quality. Rather than investigating this problem for a fixed proof calculus and a fixed measure, we aim for general results that hold for wide classes of calculi and measures. In previous work, we first restricted the attention to a setting where proof size is used to measure the quality of a proof. We then extended the approach to a more general setting, but important measures such as proof depth were not covered. In the present paper, we provide results for a class of measures called recursive, which yields lower complexities and also encompasses proof depth. In addition, we close some gaps left open in our previous work, thus providing a comprehensive picture of the complexity landscape.
@inproceedings{ AlBaBoKoKo-CADE2021, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Proceedings of the 28th International Conference on Automated Deduction (CADE-28), July 11--16, 2021, Virtual Event, United States}, doi = {https://doi.org/10.1007/978-3-030-79876-5_17}, editor = {Andr{\'e} {Platzer} and Geoff {Sutcliffe}}, pages = {291--308}, series = {Lecture Notes in Computer Science}, title = {Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures}, volume = {12699}, year = {2021}, }
BibTeX Entry PDF File
@inproceedings{ AlrabbaaBBKK21, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics {(DL} 2021) part of Bratislava Knowledge September {(BAKS} 2021), Bratislava, Slovakia, September 19th to 22nd, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Abstract)}, volume = {2954}, year = {2021}, }
BibTeX Entry PDF File
@inproceedings{ BaFeRo-DL-21, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Maryam {Rostamigiv}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Restricted Unification in the DL {$\mathcal{FL}_0$} (Extended Abstract)}, volume = {2954}, year = {2021}, }
Abstract BibTeX Entry PDF File DOI
Unification in the Description Logic (DL) FL0 is known to be ExpTime-complete and of unification type zero. We investigate whether a lower complexity of the unification problem can be achieved by either syntactically restricting the role depth of concepts or semantically restricting the length of role paths in interpretations. We show that the answer to this question depends on whether the number formulating such a restriction is encoded in unary or binary: for unary coding, the complexity drops from ExpTime to PSpace. As an auxiliary result, we prove a PSpace-completeness result for a depth-restricted version of the intersection emptiness problem for deterministic root-to-frontier tree automata. Finally, we show that the unification type of FL0 improves from type zero to unitary (finitary) for unification without (with) constants in the restricted setting.
@inproceedings{ BaGiRo-FroCoS21, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Maryam {Rostamigiv}}, booktitle = {Proc. of the 13th International Symposium on Frontiers of Combining Systems ({FroCoS} 2021)}, doi = {https://doi.org/10.1007/978-3-030-86205-3_5}, editor = {Boris {Konev} and Giles {Reger}}, pages = {81--97}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Restricted Unification in the {DL} {$\mathcal{FL}_0$}}, volume = {12941}, year = {2021}, }
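For illustration, here is a small FL0 unification problem (made up for this page, not an example from the paper); X is a concept variable, A and B are concept names, and r is a role name:

```latex
% Illustrative FL0 unification problem and one of its unifiers.
\[
  \forall r.(A \sqcap B) \;\stackrel{?}{\equiv}\; \forall r.A \sqcap X
  \qquad\text{is solved by}\qquad
  \sigma = \{\, X \mapsto \forall r.B \,\},
\]
% since value restrictions distribute over conjunction in FL0:
%   \forall r.A \sqcap \forall r.B \equiv \forall r.(A \sqcap B).
% Roughly, the syntactic restriction studied in the paper bounds the role depth of
% the concepts involved (here, the substituted concept has role depth 1).
```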
Abstract BibTeX Entry PDF File DOI ©Springer Extended Abstract Extended Version
The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the light-weight DL \(\mathcal{E\!L}\). In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an \(\mathcal{E\!L}\) TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
@inproceedings{ BaKoKrNu-CADE2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 28th International Conference on Automated Deduction (CADE-28), July 11--16, 2021, Virtual Event, United States}, doi = {https://doi.org/10.1007/978-3-030-79876-5_18}, editor = {Andr{\'e} {Platzer} and Geoff {Sutcliffe}}, pages = {309--326}, series = {Lecture Notes in Computer Science}, title = {{Computing Optimal Repairs of Quantified ABoxes w.r.t. Static $\mathcal{E\!L}$ TBoxes}}, volume = {12699}, year = {2021}, }
BibTeX Entry PDF File DOI PDF File (Poster) PDF File (rwth-aachen.de) Full Conference Paper
@inproceedings{ BaKoKrNu-KR2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Recent Published Research (RPR) track of the 18th International Conference on Principles of Knowledge Representation and Reasoning, {KR} 2021, Virtual Event, November 3--12, 2021}, doi = {https://doi.org/10.5281/zenodo.8341301}, title = {{Computing Optimal Repairs of Quantified ABoxes w.r.t.\ Static $\mathcal{EL}$ TBoxes (Extended Abstract)}}, year = {2021}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org) Extended Version
We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information.
@inproceedings{ BaKoKrNuPe-DL-2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t.\ a Static Cycle-Restricted $\mathcal{EL}$ TBox}}, volume = {2954}, year = {2021}, }
Abstract BibTeX Entry PDF File DOI Extended Version
In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.
@inproceedings{ BaKrNuPe-SAC2021, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 36th Annual ACM Symposium on Applied Computing (SAC '21), March 22--26, 2021, Virtual Event, Republic of Korea}, doi = {https://doi.org/10.1145/3412841.3441961}, pages = {863--872}, publisher = {Association for Computing Machinery}, title = {{Safety of Quantified ABoxes w.r.t.\ Singleton $\mathcal{E\!L}$ Policies}}, year = {2021}, }
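A small illustration of the difference between compliance and safety (our own example, with invented names): let the policy be the \(\mathcal{E\!L}\) concept \(\exists \mathit{treatedBy}.\mathit{Psychiatrist}\). The ABox \(\{\mathit{treatedBy}(a,b)\}\) is compliant, since it does not entail that the non-anonymized individual \(a\) is an instance of the policy concept. It is not safe, however: an attacker with the compliant additional knowledge \(\{\mathit{Psychiatrist}(b)\}\) can combine the two and derive exactly the instance relationship that the policy is meant to hide.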
Abstract BibTeX Entry PDF File
Research on unification in Description Logic (DL) has concentrated on the lightweight DLs FL0 and EL. For both DLs, the unification type is zero, which is the worst possible type. The complexity of deciding unifiability is ExpTime-complete for FL0 and NP-complete for EL. In a recent paper, we have shown that, for FL0, both the unification type and the complexity of the decision problem can be improved by considering restricted versions of the unification problem where either the role depth of concepts is restricted syntactically or the length of role paths in interpretations is restricted semantically. In the present paper, we show that no such improvements can be obtained for EL: both in the syntactically and in the semantically restricted case, the unification type stays zero, and the complexity of the decision problem stays NP-complete.
@inproceedings{ BaRo-DL-2021, author = {Franz {Baader} and Maryam {Rostamigiv}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Restricted Unification in the DL {$\mathcal{EL}$}}, volume = {2954}, year = {2021}, }
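A concrete \(\mathcal{EL}\) unification problem (our illustration, not from the paper): the equation \(X \sqcap \exists r.Y \equiv^? A \sqcap \exists r.B\), where \(X, Y\) are concept variables and \(A, B\) are concept names, is solved by the substitution \(\{X \mapsto A,\ Y \mapsto B\}\), but also, for instance, by \(\{X \mapsto A \sqcap \exists r.B,\ Y \mapsto B\}\), since conjunction is idempotent modulo equivalence.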
Abstract BibTeX Entry PDF File ©Springer-Verlag
Concrete domains have been introduced in Description Logics (DLs) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. To retain decidability when integrating a concrete domain into a decidable DL, the domain must satisfy quite strong restrictions. In previous work, we have analyzed the most prominent such condition, called omega-admissibility, from an algebraic point of view. This provided us with useful algebraic tools for proving omega-admissibility, which allowed us to find new examples for concrete domains whose integration leaves the prototypical expressive DL ALC decidable. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. In the present paper, we investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs.
@inproceedings{ BaRy-JELIA-21, address = {Cham}, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the 17th European Conference on Logics in Artificial Intelligence (JELIA 2021)}, editor = {Wolfgang {Faber} and Gerhard {Friedrich} and Martin {Gebser} and Michael {Morak}}, pages = {194--209}, publisher = {Springer International Publishing}, series = {Lecture Notes in Computer Science}, title = {An Algebraic View on p-Admissible Concrete Domains for Lightweight Description Logics}, volume = {12678}, year = {2021}, }
2020
Abstract BibTeX Entry PDF File DOI
Logic-based approaches to AI have the advantage that their behaviour can in principle be explained by providing their users with proofs for the derived consequences. However, if such proofs get very large, then it may be hard to understand a consequence even if the individual derivation steps are easy to comprehend. This motivates our interest in finding small proofs for Description Logic (DL) entailments. Instead of concentrating on a specific DL and proof calculus for this DL, we introduce a general framework in which proofs are represented as labeled, directed hypergraphs, where each hyperedge corresponds to a single sound derivation step. On the theoretical side, we investigate the complexity of deciding whether a certain consequence has a proof of size at most n along the following orthogonal dimensions: (i) the underlying proof system is polynomial or exponential; (ii) proofs may or may not reuse already derived consequences; and (iii) the number n is represented in unary or binary. We have determined the exact worst-case complexity of this decision problem for all but one of the possible combinations of these options. On the practical side, we have developed and implemented an approach for generating proofs for expressive DLs based on a non-standard reasoning task called forgetting. We have evaluated this approach on a set of realistic ontologies and compared the obtained proofs with proofs generated by the DL reasoner ELK, finding that forgetting-based proofs are often better w.r.t. different measures of proof complexity.
@inproceedings{ AlBaBoKoKo-20, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning}, doi = {https://doi.org/10.29007/nhpp}, editor = {Elvira {Albert} and Laura {Kovacs}}, pages = {32--67}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Finding Small Proofs for Description Logic Entailments: Theory and Practice}, volume = {73}, year = {2020}, }
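As a toy example of the kind of proof whose size is measured here (our example, not one from the paper): from the \(\mathcal{EL}\) TBox \(\{A \sqsubseteq \exists r.B,\ B \sqsubseteq C,\ \exists r.C \sqsubseteq D\}\) one can derive \(A \sqsubseteq D\) in two steps: first combine \(A \sqsubseteq \exists r.B\) and \(B \sqsubseteq C\) to obtain \(A \sqsubseteq \exists r.C\), then combine this with \(\exists r.C \sqsubseteq D\) to obtain \(A \sqsubseteq D\). In the hypergraph representation, each of these inference steps becomes a hyperedge leading from its premises to its conclusion, and the decision problem studied in the paper asks whether a given consequence admits such a proof of size at most \(n\).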
Abstract BibTeX Entry PDF File
If a Description Logic (DL) system derives a consequence, then one can in principle explain such an entailment by presenting a proof of the consequence in an appropriate calculus. Intuitively, good proofs are ones that are simple enough to be comprehensible to a user of the system. In recent work, we have introduced a general framework in which proofs are represented as labeled, directed hypergraphs, and have investigated the complexity of finding small proofs. However, large size is not the only reason that can make a proof complex. In the present paper, we introduce other measures for the complexity of a proof, and analyze the computational complexity of deciding whether a given consequence has a proof of complexity at most \(q\). This problem can be NP-complete even for \(\mathcal{EL}\), but we also identify measures where it can be decided in polynomial time.
@inproceedings{ AlBaBoKoKo-DL-20, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {On the Complexity of Finding Good Proofs for Description Logic Entailments}, year = {2020}, }
Abstract BibTeX Entry PDF File
The classical approach for repairing a Description Logic (DL) ontology in the sense of removing an unwanted consequence is to delete a minimal number of axioms from the ontology such that the resulting ontology no longer has the consequence. While there are automated tools for computing all possible such repairs, the user still needs to decide by hand which of the (potentially exponentially many) repairs to choose. In this paper, we argue that exploring a proof of the unwanted consequence may help us to locate other erroneous consequences within the proof, and thus allows us to make a more informed decision on which axioms to remove. In addition, we suggest that looking at the so-called atomic decomposition, which describes the modular structure of the ontology, enables us to judge the impact that removing a certain axiom has. Since both proofs and atomic decompositions of ontologies may be large, visual support for inspecting them is required. We describe a prototypical system that can visualize proofs and the atomic decomposition in an integrated visualization tool to support ontology debugging.
@inproceedings{ AlBaDaFlKo-DL-20, author = {Christian {Alrabbaa} and Franz {Baader} and Raimund {Dachselt} and Tamara {Flemisch} and Patrick {Koopmann}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {Visualising Proofs and the Modular Structure of Ontologies to Support Ontology Repair}, year = {2020}, }
Abstract BibTeX Entry DOI
We introduce and investigate the expressive description logic (DL) ALCSCC++, in which the global and local cardinality constraints introduced in previous papers can be mixed. We prove that the added expressivity does not increase the complexity of satisfiability checking and other standard inference problems. However, reasoning in ALCSCC++ becomes undecidable if inverse roles are added or conjunctive query entailment is considered. We prove that decidability of querying can be regained if global and local constraints are not mixed and the global constraints are appropriately restricted. In this setting, query entailment can be shown to be EXPTIME-complete and hence not harder than reasoning in ALC.
@inproceedings{ BaBaRu-ECAI20, author = {Franz {Baader} and Bartosz {Bednarczyk} and Sebastian {Rudolph}}, booktitle = {Proceedings of the 24th European Conference on Artificial Intelligence ({ECAI} 2020)}, doi = {https://doi.org/10.3233/FAIA200146}, pages = {616--623}, publisher = {IOS Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Satisfiability and Query Answering in Description Logics with Global and Local Cardinality Constraints}, volume = {325}, year = {2020}, }
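To give a flavour of the two kinds of constraints (notation ours and only indicative): a global cardinality constraint such as \(|\mathit{Course}| \le 10 \cdot |\mathit{Lecturer}|\) relates the cardinalities of concepts in the whole interpretation, whereas a local constraint such as \(\mathit{succ}(|\mathit{teaches} \cap \mathit{Course}| \ge 3)\) is a concept describing those individuals that have at least three \(\mathit{teaches}\)-successors belonging to \(\mathit{Course}\). In \(\mathcal{ALCSCC}^{++}\), both kinds of QFBAPA constraints may occur together, which is the mixing referred to in the abstract.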
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining description logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there is a growing interest in extending this work to the quantitative case. In this article, we complement existing results on the combination of DLs with metric temporal logics by introducing interval-rigid concept and role names. Elements included in an interval-rigid concept or role name are required to stay in it for some specified amount of time. We investigate several combinations of (metric) temporal logics with ALC by either allowing temporal operators only on the level of axioms or also applying them to concepts. In contrast to most existing work on the topic, we consider a timeline based on the integers and also allow assertional axioms. We show that the worst-case complexity does not increase beyond the previously known bound of 2-ExpSpace and investigate in detail how this complexity can be reduced by restricting the temporal logic and the occurrences of interval-rigid names.
@article{ BaBoKoOzTh-TOCL20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.1145/3399443}, journal = {ACM Transactions on Computational Logic}, month = {August}, number = {4}, pages = {30:1--30:46}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {21}, year = {2020}, }
Abstract BibTeX Entry DOI
The project "Semantic Technologies for Situation Awareness" was concerned with detecting certain critical situations from data obtained by observing a complex hard- and software system, in order to trigger actions that allow this system to save energy. The general idea was to formalize situations as ontology-mediated queries, but in order to express the relevant situations, both the employed ontology language and the query language had to be extended. In this paper we sketch the general approach and then concentrate on reporting the formal results obtained for reasoning in these extensions, but do not describe the application that triggered these extensions in detail.
@article{ BBKTT-KI20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Veronika {Thost} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.1007/s13218-020-00694-3}, journal = {{KI} -- K{\"u}nstliche Intelligenz}, number = {4}, pages = {291--301}, title = {Semantic Technologies for Situation Awareness}, volume = {34}, year = {2020}, }
Abstract BibTeX Entry DOI
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing us to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0-1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@inproceedings{ BaBo-ANDREI60-20, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {ANDREI-60. Automated New-era Deductive Reasoning Event in Iberia}, doi = {https://doi.org/10.29007/ltzn}, editor = {Laura {Kovacs} and Konstantin {Korovin} and Giles {Reger}}, pages = {1--25}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Description Logics That Count, and What They Can and Cannot Count}, volume = {68}, year = {2020}, }
Abstract BibTeX Entry PDF File
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing us to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0-1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@inproceedings{ BaDeBo-DL-20, address = {Online}, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proceedings of the 33rd International Workshop on Description Logics (DL'20)}, editor = {Stefan {Borgwardt} and Thomas {Meyer}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Description Logics that Count, and What They Can and Cannot Count (Extended Abstract)}, volume = {2663}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI
The word problem for a finite set of ground identities is known to be decidable in polynomial time using congruence closure, and this is also the case if some of the function symbols are assumed to be commutative. We show that decidability in P is preserved if we add the assumption that certain function symbols f are extensional in the sense that f(s1,...,sn) = f(t1,...,tn) implies s1 = t1, ..., sn = tn. In addition, we investigate a variant of extensionality that is more appropriate for commutative function symbols, but which raises the complexity of the word problem to coNP.
@inproceedings{ BaKa-IJCAR20, author = {Franz {Baader} and Deepak {Kapur}}, booktitle = {Proceedings of the International Joint Conference on Automated Reasoning ({IJCAR} 2020)}, doi = {https://doi.org/10.1007/978-3-030-51074-9_10}, editor = {Viorica {Sofronie-Stokkermans} and Nicolas {Peltier}}, pages = {163--180}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Deciding the Word Problem for Ground Identities with Commutative and Extensional Symbols}, volume = {12166}, year = {2020}, }
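A small worked example of the setting (ours, not from the paper): take the ground identities \(g(a) \approx g(b)\) and \(f(g(b)) \approx c\), and assume that \(g\) is extensional. Extensionality applied to \(g(a) \approx g(b)\) yields \(a \approx b\); congruence applied to \(f\) yields \(f(g(a)) \approx f(g(b))\), and hence \(f(g(a)) \approx c\) by transitivity. A congruence-closure procedure for this setting interleaves such congruence steps (equal arguments, hence equal applications) with extensionality steps (equal applications, hence equal arguments) until no new equations are obtained.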
Abstract BibTeX Entry PDF File
The generation of proof certificates and the use of proof checkers is nowadays standard in first-order automated theorem proving and related areas. They have, to the best of our knowledge, not yet been employed in Description Logics, where the focus was on detecting and repairing errors in the ontology, rather than on catching erroneous consequences created by an incorrect reasoner. This paper reports on first steps towards remedying this deficit for subsumptions computed by the DL reasoner ELK. We use an existing tool for generating proofs of consequences from ELK, and transform these proofs into a format that is accepted as certificates by our proof checker. The checker is obtained as an instance of a generic certification tool based on the Logical Framework with Side Conditions (LFSC), by formalizing the inference rules of ELK in LFSC. We report on the results of applying this approach to the classification of a large number of real-world OWL 2 EL ontologies.
@inproceedings{ BaKoTi-DL-20, author = {Franz {Baader} and Patrick {Koopmann} and Cesare {Tinelli}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {First Results on How to Certify Subsumptions Computed by the EL Reasoner ELK Using the Logical Framework with Side Conditions}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterization of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
@inproceedings{ BaKrNuPe-ISWC2020, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 19th International Semantic Web Conference (ISWC 2020), Part {I}, Athens, Greece, November 2-6, 2020}, doi = {https://doi.org/10.1007/978-3-030-62419-4_1}, editor = {Jeff Z. {Pan} and Valentina A. M. {Tamma} and Claudia {d'Amato} and Krzysztof {Janowicz} and Bo {Fu} and Axel {Polleres} and Oshani {Seneviratne} and Lalana {Kagal}}, pages = {3--20}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies}}, volume = {12506}, year = {2020}, }
Abstract BibTeX Entry DOI
The theory ACUI of an associative, commutative, and idempotent binary function symbol + with unit 0 was one of the first equational theories for which the complexity of testing solvability of unification problems was investigated in detail. In this paper, we investigate two extensions of ACUI. On the one hand, we consider approximate ACUI-unification, where we use appropriate measures to express how close a substitution is to being a unifier. On the other hand, we extend ACUI-unification to ACUIG-unification, i.e., unification in equational theories that are obtained from ACUI by adding a finite set G of ground identities. Finally, we combine the two extensions, i.e., consider approximate ACUIG-unification. For all cases we are able to determine the exact worst-case complexity of the unification problem.
@article{ BaMaMoOk-MSCS-20, author = {Franz {Baader} and Pavlos {Marantidis} and Antoine {Mottet} and Alexander {Okhotin}}, doi = {https://doi.org/10.1017/S0960129519000185}, journal = {Mathematical Structures in Computer Science}, number = {6}, pages = {597--626}, title = {Extensions of Unification Modulo {$\mathit{ACUI}$}}, volume = {30}, year = {2020}, }
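An illustrative unification problem in this theory (our example): with \(+\) an ACUI symbol, the equation \(X + a \approx^? a + b\) has, among others, the unifiers \(\{X \mapsto b\}\) and \(\{X \mapsto a + b\}\) (the latter because \(+\) is idempotent and commutative). If the ground identity \(a + b \approx c\) is added, i.e., one works modulo the extended theory ACUIG, then \(\{X \mapsto c\}\) also becomes a unifier, since \(c + a \approx (a + b) + a \approx a + b\). Approximate unification additionally measures, by a suitable distance, how far a substitution is from being a unifier.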
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@inproceedings{ BaRy-IJCAR20, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the International Joint Conference on Automated Reasoning ({IJCAR} 2020)}, doi = {https://doi.org/10.1007/978-3-030-51074-9_24}, editor = {Viorica {Sofronie-Stokkermans} and Nicolas {Peltier}}, pages = {413--431}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Description Logics with Concrete Domains and General Concept Inclusions Revisited}, volume = {12166}, year = {2020}, }
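A typical example of the kind of GCIs considered here (our illustration, written in one common notation for concrete domains): over the rational numbers one may state \(\mathit{Human} \sqcap \exists \mathit{age}.{>_{17}} \sqsubseteq \mathit{Adult}\), where \(\mathit{age}\) is a concrete feature mapping individuals to numbers and \({>_{17}}\) is a unary predicate of the concrete domain. Omega-admissibility restricts which collections of such predicates can be combined with GCIs without losing decidability, and the model-theoretic results of the paper provide sufficient conditions (finite boundedness, homogeneity) for establishing it.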
Abstract BibTeX Entry PDF File
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@inproceedings{ BaRy-DL-20, address = {Online}, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the 33rd International Workshop on Description Logics (DL'20)}, editor = {Stefan {Borgwardt} and Thomas {Meyer}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Description Logics with Concrete Domains and General Concept Inclusions Revisited (Extended Abstract)}, volume = {2663}, year = {2020}, }
Abstract BibTeX Entry DOI
We investigate the impact that general concept inclusions and role-value maps have on the complexity and decidability of reasoning in the Description Logic FL0. On the one hand, we give a more direct proof for ExpTime-hardness of subsumption w.r.t. general concept inclusions in FL0. On the other hand, we determine restrictions on role-value maps that ensure decidability of subsumption, but we also show undecidability for the cases where these restrictions are not satisfied.
@article{ BaTh-KI-20, author = {Franz {Baader} and Cl\'ement {Th\'eron}}, doi = {https://dx.doi.org/10.1007/s13218-020-00651-0}, journal = {{KI} -- K{\"u}nstliche Intelligenz}, number = {3}, pages = {291--301}, title = {Role-Value Maps and General Concept Inclusions in the Minimal Description Logic with Value Restrictions -- or Revisiting Old Skeletons in the DL Cupboard}, volume = {34}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File
In two previous publications we have, on the one hand, extended the description logic (DL) ALCQ by more expressive number restrictions using numerical and set constraints expressed in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). The resulting DL was called ALCSCC. On the other hand, we have extended the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general cardinality constraints expressed in QFBAPA, which we called extended cardinality constraints. Here, we combine the two extensions, i.e., we consider extended cardinality constraints on ALCSCC concepts. We show that this does not increase the complexity of reasoning, which is NExpTime-complete both for extended cardinality constraints in ALC and ALCSCC. The same is true for a restricted version of such cardinality constraints, where the complexity of reasoning decreases to ExpTime, not just for ALC, but also for ALCSCC.
@inproceedings{ BaaderSAC19, author = {Franz {Baader}}, booktitle = {Proceedings of the 34th Annual {ACM} Symposium on Applied Computing ({SAC}'19)}, pages = {1123--1131}, publisher = {{ACM}}, title = {Expressive Cardinality Constraints on {$\mathcal{ALCSCC}$} Concepts}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
In two previous publications we have, on the one hand, extended the description logic (DL) ALCQ by more expressive number restrictions using numerical and set constraints expressed in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). The resulting DL was called ALCSCC. On the other hand, we have extended the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general cardinality constraints expressed in QFBAPA, which we called extended cardinality constraints. Here, we combine the two extensions, i.e., we consider extended cardinality constraints on ALCSCC concepts. We show that this does not increase the complexity of reasoning, which is NExpTime-complete both for extended cardinality constraints in the DL ALC and in its extension ALCSCC. The same is true for a restricted version of such cardinality constraints, where the complexity of reasoning decreases to ExpTime, not just for ALC, but also for ALCSCC.
@article{ Baader-ACR-19, author = {Franz {Baader}}, doi = {https://doi.org/10.1145/3372001.3372002}, journal = {ACM SIGAPP Applied Computing Review}, pages = {5--17}, publisher = {ACM}, title = {Expressive Cardinality Restrictions on Concepts in a Description Logic with Expressive Number Restrictions}, volume = {19}, year = {2019}, }
Abstract BibTeX Entry PDF File
We consider an expressive description logic (DL) in which the global and local cardinality constraints introduced in previous papers can be mixed. On the one hand, we show that this does not increase the complexity of satisfiability checking and other standard inference problems. On the other hand, conjunctive query entailment in this DL turns out to be undecidable. We prove that decidability of querying can be regained if global and local constraints are not mixed and the global constraints are appropriately restricted.
@inproceedings{ BaBR-DL-19, author = {Franz {Baader} and Bartosz {Bednarczyk} and Sebastian {Rudolph}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Satisfiability Checking and Conjunctive Query Answering in Description Logics with Global and Local Cardinality Constraints}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File
Selecting patients for clinical trials is very labor-intensive. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. In this paper we summarize our recent advances towards such a system: First, we present the challenges involved when representing electronic health records and eligibility criteria for clinical trials in a formal language. Second, we introduce temporal conjunctive queries with negation as a formal language suitable to represent clinical trials. Third, we describe our methodology for automatic translation of clinical trial eligibility criteria from natural language into our query language. The evaluation of our prototypical implementation shows promising results. Finally, we talk about the parts we are currently working on and the challenges involved.
@inproceedings{ BBFKXZHQA19, address = {Marina del Rey, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova} and Chao {Xu} and Beihai {Zhou}}, booktitle = {2nd International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'19)}, editor = {Franz {Baader} and Brigitte {Grau} and Yue {Ma}}, title = {Temporalized Ontology-Mediated Query Answering under Minimal-World Semantics}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI (The final publication is available at link.springer.com) ©Springer Nature Switzerland AG 2019
In recent work we have extended the description logic (DL) \(\mathcal{ALCQ}\) by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL, called \(\mathcal{ALCSCC}\), is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of \(\mathcal{ALCSCC}\) is defined in terms of finitely branching interpretations, that is, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper, we first introduce a variant of \(\mathcal{ALCSCC}\), called \(\mathcal{ALCSCC}^\infty\), in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for \(\mathcal{ALCSCC}\) mentioned above are preserved. Nevertheless, like \(\mathcal{ALCSCC}\), \(\mathcal{ALCSCC}^\infty\) is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of \(\mathcal{ALCSCC}^\infty\). The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.
@inproceedings{ BaDeBo-FroCoS19, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proc. of the 12th International Symposium on Frontiers of Combining Systems ({FroCoS} 2019)}, doi = {https://doi.org/10.1007/978-3-030-29007-8_12}, editor = {Andreas {Herzig} and Andrei {Popescu}}, pages = {203--219}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets}, volume = {11715}, year = {2019}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
The probabilistic Description Logic ALCME is an extension of the Description Logic ALC that allows for uncertain conditional statements of the form "if C holds, then D holds with probability p," together with probabilistic assertions about individuals. In ALCME, probabilities are understood as an agent's degree of belief. Probabilistic conditionals are formally interpreted based on the so-called aggregating semantics, which combines a statistical interpretation of probabilities with a subjective one. Knowledge bases of ALCME are interpreted over a fixed finite domain and based on their maximum entropy (ME) model. We prove that checking consistency of such knowledge bases can be done in time polynomial in the cardinality of the domain, and in exponential time in the size of a binary encoding of this cardinality. If the size of the knowledge base is also taken into account, the combined complexity of the consistency problem is NP-complete for unary encoding of the domain cardinality and NExpTime-complete for binary encoding.
@inproceedings{ BaEKW19, author = {Franz {Baader} and Andreas {Ecke} and Gabriele {Kern{-}Isberner} and Marco {Wilhelm}}, booktitle = {Proc. of the 12th International Symposium on Frontiers of Combining Systems ({FroCoS} 2019)}, editor = {Andreas {Herzig} and Andrei {Popescu}}, pages = {167--184}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {The Complexity of the Consistency Problem in the Probabilistic Description Logic {$\mathcal{ALC}^{\mathsf{ME}}$}}, volume = {11715}, year = {2019}, }
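A typical \(\mathcal{ALC}^{\mathsf{ME}}\) knowledge base could (in an indicative notation, not necessarily the paper's) contain a probabilistic conditional such as \((\mathit{Flies} \mid \mathit{Bird})[0.9]\), read as "birds fly with degree of belief 0.9", together with assertions such as \(\mathit{Bird}(\mathit{tweety})\). Over a fixed finite domain, the aggregating semantics assigns probabilities to interpretations, and reasoning is performed w.r.t. the maximum entropy (ME) distribution among those satisfying all conditionals, provided such a distribution exists.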
Abstract BibTeX Entry PDF File
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem since already subsumption w.r.t. general TBoxes is ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.
@inproceedings{ BaFeMa-DL19, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Matching in the Description Logic {FL0} with respect to General TBoxes (Extended abstract)}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@inproceedings{ BaKrNu-JELIA19, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_21}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {323--338}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores}}, volume = {11468}, year = {2019}, }
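An example of an optimal compliant generalization in this setting (ours, with invented names, and without a background TBox): for the concept \(C = \mathit{Patient} \sqcap \exists \mathit{treatedBy}.\mathit{Psychiatrist}\) and the policy concept \(P = \exists \mathit{treatedBy}.\mathit{Psychiatrist}\), the concept \(C\) itself is not compliant, since \(C \sqsubseteq P\). The generalization \(C' = \mathit{Patient} \sqcap \exists \mathit{treatedBy}.\top\) subsumes \(C\) and is compliant (\(C' \not\sqsubseteq P\)), and no strictly more specific \(\mathcal{EL}\) generalization of \(C\) is compliant, so \(C'\) is optimal.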
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an \(\mathcal{EL}\) instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL \(\mathcal{EL}\). We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an \(\mathcal{FL}_0\) or an \(\mathcal{FLE}\) concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \(\mathcal{FL}_0\), it turns out to be actually lower (polynomial) for the more expressive DL \(\mathcal{FLE}\).
@inproceedings{ BaNu-KI19, author = {Franz {Baader} and Adrian {Nuradiansyah}}, booktitle = {KI 2019: Advances in Artificial Intelligence - 42nd German Conference on AI, Kassel, Germany, September 23 - 26, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-30179-8_7}, editor = {Christoph {Benzm{\"u}ller} and Heiner {Stuckenschmidt}}, pages = {87--100}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Mixing Description Logics in Privacy-Preserving Ontology Publishing}, volume = {11793}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
We present \(\mathcal{ALC}^{\mathsf{ME}}\), a probabilistic variant of the Description Logic \(\mathcal{ALC}\) that allows for representing and processing conditional statements of the form ``if \(E\) holds, then \(F\) follows with probability \(p\)'' under the principle of maximum entropy. Probabilities are understood as degrees of belief and formally interpreted by the aggregating semantics. We prove that both checking consistency and drawing inferences based on approximations of the maximum entropy distribution is possible in \(\mathcal{ALC}^{\mathsf{ME}}\) in time polynomial in the domain size. A major problem for probabilistic reasoning from such conditional knowledge bases is to count models and individuals. To achieve our complexity results, we develop sophisticated counting strategies on interpretations aggregated with respect to the so-called conditional impacts of types, which refine their conditional structure.
@inproceedings{ WiKeEcBa-JELIA19, author = {Marco {Wilhelm} and Gabriele {Kern-Isberner} and Andreas {Ecke} and Franz {Baader}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_28}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {434--449}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Counting Strategies for the Probabilistic Description Logic $\mathcal{ALC}^{\mathsf{ME}}$ Under the Principle of Maximum Entropy}}, volume = {11468}, year = {2019}, }
Abstract BibTeX Entry PDF File
Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.
@inproceedings{ XFB-ODLS15, address = {Graz, Austria}, author = {Chao {Xu} and Walter {Forkel} and Stefan {Borgwardt} and Franz {Baader} and Beihai {Zhou}}, booktitle = {Proc.\ of the 9th Workshop on Ontologies and Data in Life Sciences (ODLS'19), part of The Joint Ontology Workshops (JOWO'19)}, editor = {Martin {Boeker} and Ludger {Jansen} and Frank {Loebe} and Stefan {Schulz}}, series = {CEUR Workshop Proceedings}, title = {Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries}, volume = {2518}, year = {2019}, }
2018
Abstract BibTeX Entry PDF File DOI
Finding suitable candidates for clinical trials is a labor-intensive task that requires expert medical knowledge. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. We investigate the issues involved in designing formal query languages for selecting patients that are eligible for a given clinical trial, leveraging existing ontology-based query answering techniques. In particular, we propose to use a temporal extension of existing approaches for accessing data through ontologies written in Description Logics. We sketch how such a query answering system could work and show that eligibility criteria and patient data can be adequately modeled in our formalism.
@inproceedings{ BaBF-HQA18, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 1st Int.\ Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'18), Companion of the The Web Conference 2018}, doi = {https://doi.org/10.1145/3184558.3191538}, pages = {1069--1074}, publisher = {ACM}, title = {Patient Selection for Clinical Trials Using Temporalized Ontology-Mediated Query Answering}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem since already subsumption w.r.t. general TBoxes is ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.
@inproceedings{ BaFeMa18-LPAR, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}}, booktitle = {LPAR-22. 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning}, doi = {https://doi.org/10.29007/q74p}, editor = {Gilles {Barthe} and Geoff {Sutcliffe} and Margus {Veanes}}, pages = {76--94}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Matching in the Description Logic $\mathcal{FL}_0$ with respect to General TBoxes}, volume = {57}, year = {2018}, }
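A minimal matching example (ours, not from the paper): matching the \(\mathcal{FL}_0\) concept \(C = \forall r.A \sqcap \forall s.(A \sqcap B)\) against the pattern \(D = \forall r.X \sqcap \forall s.(X \sqcap B)\), where \(X\) is a concept variable, succeeds with the matcher \(\{X \mapsto A\}\), since this substitution makes the pattern equivalent to \(C\). In the setting of the paper, equivalence is tested modulo a general TBox, which is what pushes the complexity up to ExpTime.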
Abstract BibTeX Entry PDF File DOI
Although being quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.
@inproceedings{ BaFePe-GCAI-18, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Maximilian {Pensel}}, booktitle = {{GCAI} 2018, 4th Global Conference on Artificial Intelligence, Luxembourg, September 2018.}, doi = {https://doi.org/10.29007/scbw}, editor = {Daniel {Lee} and Alexander {Steen} and Toby {Walsh}}, pages = {1--14}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Standard and Non-Standard Inferences in the Description Logic {$\mathcal{FL}_0$} Using Tree Automata}, volume = {55}, year = {2018}, }
BibTeX Entry PDF File
@inproceedings{ BaFeRo-DL-21, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Maximilian {Pensel}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics ({DL} 2018), Arizona, US, October 27--29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Standard and Non-Standard Inferences in the Description Logic {$\mathcal{FL}_0$} Using Tree Automata}, volume = {2211}, year = {2018}, }
BibTeX Entry PDF File PDF File (ceur-ws.org) Full Conference Paper
@inproceedings{ BaKrNuPe-DL2018, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Making Repairs in Description Logics More Gentle (Extended Abstract)}}, volume = {2211}, year = {2018}, }
Abstract BibTeX Entry PDF File ©AAAI Extended Abstract Extended Version
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle notion of repair in which axioms are not deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern-1.618mu L}\).
@inproceedings{ BaKrNuPe-KR2018, address = {USA}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, {KR} 2018, Tempe, Arizona, 30 October - 2 November 2018}, editor = {Frank {Wolter} and Michael {Thielscher} and Francesca {Toni}}, pages = {319--328}, publisher = {{AAAI} Press}, title = {{Making Repairs in Description Logics More Gentle}}, year = {2018}, }
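A small illustration of the gentle repair idea (our example): if the ontology contains the single axiom \(A \sqsubseteq B \sqcap \exists r.C\) and the unwanted consequence is \(A \sqsubseteq \exists r.C\), then the classical repair deletes the axiom and thereby also loses the wanted consequence \(A \sqsubseteq B\). Weakening the axiom to \(A \sqsubseteq B\) instead removes the unwanted consequence while keeping the wanted one.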
Abstract BibTeX Entry PDF File
It is well-known that the unification problem for a binary associative-commutative-idempotent function symbol with a unit (ACUI-unification) is polynomial for unification with constants and NP-complete for general unification. We prove that the same is true if we add a finite set of ground identities. To be more precise, we first show that not only unification with constants, but also unification with linear constant restrictions is in P for any extension of ACUI with a finite set of ground identities. Using well-known combination results for unification algorithms, this then yields an NP-upper bound for general unification modulo such a theory. The matching lower bound can be shown as in the case without ground identities.
@inproceedings{ BaMaMo-UNIF2018, address = {Oxford, UK}, author = {Franz {Baader} and Pavlos {Marantidis} and Antoine {Mottet}}, booktitle = {Proceedings of the 32th International Workshop on Unification ({UNIF 2018})}, editor = {Mauricio {Ayala-Rinc{\'o}n} and Philippe {Balbiani}}, title = {ACUI Unification modulo Ground Theories}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Ontology-mediated query answering can be used to access large data sets through a mediating ontology. It has drawn considerable attention in the Description Logic (DL) community where both the complexity of query answering and practical query answering approaches based on rewriting were investigated in detail. Surprisingly, there is still a gap in what is known about the data complexity of query answering w.r.t. ontologies formulated in the inexpressive DL FL0. While it is known that the data complexity of answering conjunctive queries w.r.t. FL0 ontologies is coNP-complete, the exact complexity of answering instance queries was open until now. In the present paper, we show that answering instance queries w.r.t. FL0 ontologies is in P for data complexity. Together with the known lower bound of P-completeness for a fragment of FL0, this closes the gap mentioned above.
@inproceedings{ BaMaPe-RoD-18, author = {Franz {Baader} and Pavlos {Marantidis} and Maximilian {Pensel}}, booktitle = {Proc.\ of the Reasoning on Data Workshop (RoD'18), Companion of the The Web Conference 2018}, doi = {https://dx.doi.org/10.1145/3184558.3191618}, pages = {1603--1607}, publisher = {ACM}, title = {The Data Complexity of Answering Instance Queries in $\mathcal{FL}_0$}, year = {2018}, }
Abstract BibTeX Entry PDF File
We make a first step towards adapting the approach of Cuenca Grau and Kostylev for privacy-preserving publishing of linked data to Description Logic ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using EL concepts. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given EL concept can be computed.
@inproceedings{ BaNu-DL2018, author = {Franz {Baader} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, note = {}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Towards Privacy-Preserving Ontology Publishing}}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File
We introduce a new description logic that extends the well-known logic ALCQ by allowing the statement of constraints on role successors that are more general than the qualified number restrictions of ALCQ. To formulate these constraints, we use the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA), in which one can express Boolean combinations of set constraints and numerical constraints on the cardinalities of sets. Though our new logic is considerably more expressive than ALCQ, we are able to show that the complexity of reasoning in it is the same as in ALCQ, both without and with TBoxes.
@inproceedings{ Baader-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {43--59}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {A New Description Logic with Set Constraints and Cardinality Constraints on Role Successors}, volume = {10483}, year = {2017}, }
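An example of the added expressivity (in an informal notation of ours): the \(\mathcal{ALCSCC}\)-style concept \(\mathit{succ}(|\mathit{child} \cap \mathit{Female}| = |\mathit{child} \cap \mathit{Male}|)\) describes individuals having exactly as many female as male children. Such an equicardinality comparison between two sets of role successors cannot be expressed with the qualified number restrictions of \(\mathcal{ALCQ}\), which only compare successor counts with fixed natural numbers.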
Abstract BibTeX Entry PDF File
The work in this paper is motivated by a privacy scenario in which the identity of certain persons (represented as anonymous individuals) should be hidden. We assume that factual information about known individuals (i.e., individuals whose identity is known) and anonymous individuals is stored in an ABox and general background information is expressed in a TBox, where both the TBox and the ABox are publicly accessible. The identity problem then asks whether one can deduce from the TBox and the ABox that a given anonymous individual is equal to a known one. Since this would reveal the identity of the anonymous individual, such a situation needs to be avoided. We first observe that not all Description Logics (DLs) are able to derive any such equalities between individuals, and thus the identity problem is trivial in these DLs. We then consider DLs with nominals, number restrictions, or function dependencies, in which the identity problem is non-trivial. We show that in these DLs the identity problem has the same complexity as the instance problem. Finally, we consider an extended scenario in which users with different rôles can access different parts of the TBox and ABox, and we want to check whether, by a sequence of rôle changes and queries asked in each rôle, one can deduce the identity of an anonymous individual.
@inproceedings{ DBLP:conf/dlog/BaaderBN17, author = {Franz {Baader} and Daniel {Borchmann} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics, Montpellier, France, July 18-21, 2017.}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Preliminary Results on the Identity Problem in Description Logic Ontologies}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
The work in this paper is motivated by a privacy scenario in which the identity of certain persons (represented as anonymous individuals) should be hidden. We assume that factual information about known individuals (i.e., individuals whose identity is known) and anonymous individuals is stored in an ABox and general background information is expressed in a TBox, where both the TBox and the ABox are publicly accessible. The identity problem then asks whether one can deduce from the TBox and the ABox that a given anonymous individual is equal to a known one. Since this would reveal the identity of the anonymous individual, such a situation needs to be avoided. We first observe that not all Description Logics (DLs) are able to derive any such equalities between individuals, and thus the identity problem is trivial in these DLs. We then consider DLs with nominals, number restrictions, or function dependencies, in which the identity problem is non-trivial. We show that in these DLs the identity problem has the same complexity as the instance problem. Finally, we consider an extended scenario in which users with different rôles can access different parts of the TBox and ABox, and we want to check whether, by a sequence of rôle changes and queries asked in each rôle, one can deduce the identity of an anonymous individual.
@inproceedings{ BaBoNu-JIST-2017, author = {Franz {Baader} and Daniel {Borchmann} and Adrian {Nuradiansyah}}, booktitle = {Semantic Technology - 7th Joint International Conference, {JIST} 2017, Gold Coast, QLD, Australia, November 10-12, 2017, Proceedings}, editor = {Zhe {Wang} and Anni-Yasmin {Turhan} and Kewen {Wang} and Xiaowang {Zhang}}, pages = {102--117}, title = {The Identity Problem in Description Logic Ontologies and Its Application to View-Based Information Hiding}, year = {2017}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics make it possible to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. These make it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
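As a hedged illustration (the concept name and the value 3 are invented here): declaring a concept name Infected to be interval-rigid with value 3 can be read as the following constraint on the sequence of interpretations I_0, I_1, I_2, ...:

% if an element enters the extension of Infected at time point i
% (for i = 0, entering simply means being in the extension),
% then it stays in that extension for at least 3 consecutive time points
x \in \mathit{Infected}^{\mathcal{I}_i} \setminus \mathit{Infected}^{\mathcal{I}_{i-1}}
  \;\Rightarrow\; x \in \mathit{Infected}^{\mathcal{I}_j} \text{ for all } i \leq j < i+3

Metric operators complement this by bounding how long it may take until a property holds, e.g., a diamond indexed with an interval such as [0,5].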
@inproceedings{ BaBoKoOzTh-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {60--76}, series = {Lecture Notes in Computer Science}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {10483}, year = {2017}, }
Abstract BibTeX Entry PDF File
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics make it possible to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. These make it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@inproceedings{ BBK+-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File ©IJCAI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@inproceedings{ BaBL-IJCAI17, address = {Melbourne, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {786--792}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains}, year = {2017}, }
BibTeX Entry PDF File
@inproceedings{ BaBL-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy description logics (FDLs) have been introduced to represent concepts for which membership cannot be determined in a precise way, i.e., where instead of providing a strict border between being a member and not being a member, it is more appropriate to model a gradual change from membership to non-membership. First approaches for reasoning in FDLs were based either on a reduction to reasoning in classical description logics (DLs) or on adaptations of reasoning approaches for DLs to the fuzzy case. However, it turned out that these approaches in general do not work if expressive terminological axioms, called general concept inclusions (GCIs), are available in the FDL. The goal of this project was a comprehensive study of the border between decidability and undecidability for FDLs with GCIs, as well as determining the exact complexity of the decidable logics. As a result, we have provided an almost complete classification of the decidability and complexity of FDLs with GCIs.
@article{ BaBP-KI17, author = {Franz {Baader} and Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1007/s13218-016-0459-3}, journal = {K{\"u}nstliche Intelligenz}, note = {Project report.}, number = {1}, pages = {85--90}, title = {Decidability and Complexity of Fuzzy Description Logics}, volume = {31}, year = {2017}, }
Abstract BibTeX Entry PDF File
We extend the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). In QFBAPA one can formulate Boolean combinations of inclusion constraints and numerical constraints on the cardinalities of sets. Our new formalism extends, on the one hand, so-called cardinality restrictions on concepts, which were introduced two decades ago, and, on the other hand, the recently introduced statistical knowledge bases. Though considerably more expressive, our formalism has the same complexity (NExpTime) as cardinality restrictions on concepts. We also introduce a restricted version of our formalism for which the complexity is ExpTime. This yields the exact complexity of the consistency problem for statistical knowledge bases, which was previously unknown.
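An illustrative constraint of the kind described above (the concept names are invented for this sketch): a Boolean combination of an inclusion constraint and a numerical constraint comparing two concept cardinalities, which already goes beyond constraints of the form |C| <= n used in classical cardinality restrictions.

(|\mathit{TeachingStaff}| \;\geq\; 2 \cdot |\mathit{AdminStaff}|) \;\vee\; (\mathit{AdminStaff} \sqsubseteq \mathit{TeachingStaff})
% satisfied in an interpretation if the teaching staff is at least twice as large
% as the admin staff, or if every admin staff member also belongs to the teaching staff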
@inproceedings{ BaEc-GCAI17, author = {Franz {Baader} and Andreas {Ecke}}, booktitle = {{GCAI} 2017. 3rd Global Conference on Artificial Intelligence}, pages = {6--19}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Extending the Description Logic ALC with More Expressive Cardinality Constraints on Concepts}, volume = {50}, year = {2017}, }
Abstract BibTeX Entry PDF File
In a recent research paper, we have proposed an extension of the light-weight Description Logic (DL) EL in which concepts can be defined in an approximate way. For this purpose, the notion of a graded membership function m, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. Reasoning in the threshold DL tel(m) obtained this way of course depends on the employed graded membership function m. The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in tel(deg). In addition, it shows how concept similarity measures (CSMs) satisfying certain properties can be used to define graded membership functions, but it does not investigate the complexity of reasoning in the induced threshold DLs. In the present paper, we start filling this gap. In particular, we show that computability of the CSM implies decidability of the induced threshold DL, and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in tel(deg).
@inproceedings{ sacBaFe17, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {Proceedings of the 32nd Annual {ACM} Symposium on Applied Computing, Marrakech, Morocco, April 4-6, 2017}, pages = {983--988}, publisher = {{ACM}}, title = {Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
Recently introduced approaches for relaxed query answering, approximately defining concepts, and approximately solving unification problems in Description Logics have in common that they are based on the use of concept comparison measures together with a threshold construction. In this paper, we will briefly review these approaches, and then show how weighted automata working on infinite trees can be used to construct computable concept comparison measures for FL0 that are equivalence invariant w.r.t. general TBoxes. This is a first step towards employing such measures in the mentioned approximation approaches.
@inproceedings{ BaFM-LATA17, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 11th International Conference on Language and Automata Theory and Applications ({LATA 2017})}, doi = {https://doi.org/10.1007/978-3-319-53733-7_1}, editor = {Frank {Drewes} and Carlos {Mart{\'i}n{-}Vide} and Bianca {Truthe}}, pages = {3--26}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Approximation in Description Logics: How Weighted Tree Automata Can Help to Define the Required Concept Comparison Measures in $\mathcal{FL}_0$}, venue = {Ume{\aa}, Sweden}, volume = {10168}, year = {2017}, }
Abstract BibTeX Entry
Description logics (DLs) have a long tradition in computer science and knowledge representation, being designed so that domain knowledge can be described and so that computers can reason about this knowledge. DLs have recently gained increased importance since they form the logical basis of widely used ontology languages, in particular the web ontology language OWL. Written by four renowned experts, this is the first textbook on description logics. It is suitable for self-study by graduates and as the basis for a university course. Starting from a basic DL, the book introduces the reader to their syntax, semantics, reasoning problems and model theory and discusses the computational complexity of these reasoning problems and algorithms to solve them. It then explores a variety of reasoning techniques, knowledge-based applications and tools and it describes the relationship between DLs and OWL.
@book{ DLbook-2017, author = {Franz {Baader} and Ian {Horrocks} and Carsten {Lutz} and Ulrike {Sattler}}, publisher = {Cambridge University Press}, title = {An Introduction to Description Logic}, year = {2017}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI (The final publication is available at link.springer.com) ©Springer International Publishing
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@inproceedings{ BaKoTu-FroCoS-17, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {Frontiers of Combining Systems: 11th International Symposium}, doi = {https://doi.org/10.1007/978-3-319-66167-4_5}, pages = {77--94}, publisher = {Springer International Publishing}, series = {Lecture Notes in Computer Science}, title = {Using Ontologies to Query Probabilistic Numerical Data}, volume = {10483}, year = {2017}, }
Abstract BibTeX Entry PDF File
Both matching and unification in the Description Logic FL0 can be reduced to solving certain formal language equations. In previous work, we have extended unification in FL0 to approximate unification, and have shown that approximate unification can be reduced to approximately solving language equations. An approximate solution of a language equation need not make the languages on the left- and right-hand side of the equation equal, but close w.r.t. a given distance function. In the present paper, we consider approximate matching. We show that, for a large class of distance functions, approximate matching is in NP. We then consider a particular distance function d1(K,L) = 2^-n, where n is the length of the shortest word in the symmetric difference of the languages K, L, and show that w.r.t. this distance function approximate matching is polynomial.
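For finite languages, the distance d1 is straightforward to compute; the following Python sketch (illustrative only, and restricted to finite sets of words) spells out the definition:

# illustrative sketch: d1(K, L) = 2^-n, where n is the length of a shortest word
# in the symmetric difference of K and L; the distance is 0 if the languages coincide
def d1(K, L):
    diff = set(K) ^ set(L)           # symmetric difference of the two languages
    if not diff:
        return 0.0
    n = min(len(w) for w in diff)    # length of a shortest distinguishing word
    return 2.0 ** (-n)

# K and L already differ on the word "b" of length 1, so d1(K, L) = 2^-1 = 0.5
print(d1({"a", "ab"}, {"a", "b"}))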
@inproceedings{ BaMa-UNIF2017, address = {Oxford, UK}, author = {Franz {Baader} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 31st International Workshop on Unification ({UNIF'17})}, editor = {Adri\`{a} {Gasc\'{o}n} and Christopher {Lynch}}, title = {Language Equations for Approximate Matching in the Description Logic {$\mathcal{FL}_0$}}, year = {2017}, }
2016
Abstract BibTeX Entry PDF File DOI
In ontology-based data access (OBDA), database querying is enriched with an ontology that provides domain knowledge and additional vocabulary for query formulation. We identify query emptiness and predicate emptiness as two central reasoning services in this context. Query emptiness asks whether a given query has an empty answer over all databases formulated in a given vocabulary. Predicate emptiness is defined analogously, but quantifies universally over all queries that contain a given predicate. In this paper, we determine the computational complexity of query emptiness and predicate emptiness in the EL, DL-Lite, and ALC-families of description logics, investigate the connection to ontology modules, and perform a practical case study to evaluate the new reasoning services.
@article{ BBLW-JAIR16, author = {F. {Baader} and M. {Bienvenu} and C. {Lutz} and F. {Wolter}}, doi = {http://dx.doi.org/10.1613/jair.4866}, journal = {Journal of Artificial Intelligence Research (JAIR)}, pages = {1--59}, title = {Query and Predicate Emptiness in Ontology-Based Data Access}, volume = {56}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive description logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has been shown to be NP-complete, and thus of considerably lower complexity than unification in other description logics of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show that unification in EL without the top concept is PSpace-complete. In addition to the decision problem, we also consider the problem of actually computing unifiers in EL without top.
@article{ BBBM-NDJFL15, author = {Franz {Baader} and Nguyen Thanh {Binh} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {http://dx.doi.org/10.1215/00294527-3555507}, journal = {Notre Dame Journal of Formal Logic}, number = {4}, pages = {443--476}, title = {Deciding Unifiability and Computing Local Unifiers in the Description Logic {$\mathcal{EL}$} without Top Constructor}, volume = {57}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we consider only solutions that are constructed from terms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of disunification problems.
@article{ BaBM-LMCS16, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {http://dx.doi.org/10.2168/LMCS-12(4:1)2016}, journal = {Logical Methods in Computer Science}, number = {4:1}, pages = {1--28}, title = {Extending Unification in {$\mathcal{EL}$} to Disunification: {T}he Case of Dismatching and Local Disunification}, volume = {12}, year = {2016}, }
Abstract BibTeX Entry PDF File
We introduce an extension to Description Logics that allows us to use prototypes to define concepts. To accomplish this, we introduce the notion of a prototype distance function (pdf), which assigns to each element of an interpretation a distance value. Based on this, we define a new concept constructor of the form P~n(d), where ~ is a relation from {≤, <, ≥, >}, which is interpreted as the set of all elements whose distance according to the pdf d is ~n. We show how weighted alternating parity tree automata (wapta) over the integers can be used to define pdfs, and how this allows us to use both concepts and pointed interpretations as prototypes. Finally, we investigate the complexity of reasoning in ALCP(wapta), which extends the Description Logic ALC with prototype constructors for pdfs defined using wapta.
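For instance (the pdf name is invented here), if d_Car is a pdf measuring how far an element is from a prototypical car, then the concept P_{≤2}(d_Car) is interpreted as

\bigl(P_{\leq 2}(d_{\mathit{Car}})\bigr)^{\mathcal{I}} \;=\; \{\, x \in \Delta^{\mathcal{I}} \mid d_{\mathit{Car}}(x) \leq 2 \,\}
% i.e., the set of all elements whose distance from the car prototype is at most 2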
@inproceedings{ BaEc-LATA16, author = {Franz {Baader} and Andreas {Ecke}}, booktitle = {Proceedings of the 10th International Conference on Language and Automata Theory and Applications (LATA 2016)}, pages = {63--75}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Reasoning with Prototypes in the Description Logic ALC using Weighted Tree Automata}, venue = {Prague, Czech Republic}, volume = {9618}, year = {2016}, }
Abstract BibTeX Entry PDF File
In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<, <=, >, >=} then collect all the individuals that belong to C with degree ~t. We have then investigated the complexity of reasoning in the Description Logic tEL(deg), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to tEL(deg) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
@inproceedings{ ecaiBaFe16, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {{ECAI} 2016 - 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence {(PAIS} 2016)}, pages = {1096--1104}, publisher = {{IOS} Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Extending the Description Logic $\tau\mathcal{EL}(deg)$ with Acyclic TBoxes}, volume = {285}, year = {2016}, }
Abstract BibTeX Entry PDF File
The unification type of an equational theory is defined using a preorder on substitutions, called the instantiation preorder, whose scope is either restricted to the variables occurring in the unification problem, or unrestricted such that all variables are considered. It is known that the unification type of an equational theory may vary, depending on which instantiation preorder is used. More precisely, it was shown that the theory ACUI of an associative, commutative, and idempotent binary function symbol with a unit is unitary w.r.t. the restricted instantiation preorder, but not unitary w.r.t. the unrestricted one. Here, we improve on this result, by showing that, w.r.t. the unrestricted instantiation preorder, ACUI is not even finitary.
@inproceedings{ BaLu-UNIF2016, address = {Porto, Portugal}, author = {Franz {Baader} and Pierre {Ludmann}}, booktitle = {Proceedings of the 30th International Workshop on Unification ({UNIF'16})}, editor = {Silvio {Ghilardi} and Manfred {Schmidt-Schau{\ss}}}, pages = {31--35}, title = {The Unification Type of {${\mathsf{ACUI}}$} w.r.t.\ the Unrestricted Instantiation Preorder is not Finitary}, year = {2016}, }
Abstract BibTeX Entry
Unification in Description Logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL FL0, where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that "almost" unify. The meaning of "almost" is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
@inproceedings{ BaMaOk-JELIA16, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, booktitle = {Proc.\ of the 15th Eur.\ Conf.\ on Logics in Artificial Intelligence ({JELIA 2016})}, editor = {Loizos {Michael} and Antonis C. {Kakas}}, pages = {49--63}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Approximate Unification in the Description Logic {$\mathcal{FL}_0$}}, volume = {10021}, year = {2016}, }
Abstract BibTeX Entry PDF File
Unification with constants modulo the theory ACUI of an associative (A), commutative (C) and idempotent (I) binary function symbol with a unit (U) corresponds to solving a very simple type of set equations. It is well-known that solvability of systems of such equations can be decided in polynomial time by reducing it to satisfiability of propositional Horn formulae. Here we introduce a modified version of this problem by no longer requiring all equations to be completely solved, but allowing for a certain number of violations of the equations. We introduce three different ways of counting the number of violations, and investigate the complexity of the respective decision problem, i.e., the problem of deciding whether there is an assignment that solves the system with at most l violations for a given threshold value l.
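One natural way of counting violations (which may or may not coincide with the three measures studied in the paper) is simply the number of equations an assignment fails to satisfy; a brute-force Python sketch with an invented encoding of set equations:

# each equation is a pair (lhs, rhs) of functions from an assignment
# (a dict mapping variables to sets) to sets; constants are baked into the lambdas
def violations(equations, assignment):
    """Count the equations that the assignment does not satisfy."""
    return sum(1 for lhs, rhs in equations if lhs(assignment) != rhs(assignment))

# two set equations over the elements {a, b}:
#   X ∪ {a} = {a, b}   and   X ∪ {b} = {a}
# they cannot both be satisfied (the left-hand side of the second always contains b),
# but the assignment X = {b} violates only the second one
eqs = [
    (lambda v: v["X"] | {"a"}, lambda v: {"a", "b"}),
    (lambda v: v["X"] | {"b"}, lambda v: {"a"}),
]
print(violations(eqs, {"X": {"b"}}))  # prints 1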
@inproceedings{ BaMaOk-UNIF2016, address = {Porto, Portugal}, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, booktitle = {Proceedings of the 30th International Workshop on Unification ({UNIF'16})}, editor = {Silvio {Ghilardi} and Manfred {Schmidt-Schau{\ss}}}, pages = {37--41}, title = {Approximately Solving Set Equations}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File ©Springer-Verlag
In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.
@inproceedings{ BaBL-AI15, address = {Canberra, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 28th Australasian Joint Conference on Artificial Intelligence (AI'15)}, editor = {Bernhard {Pfahringer} and Jochen {Renz}}, pages = {21--33}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Temporal Conjunctive Queries in Expressive Description Logics with Transitive Roles}, volume = {9457}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in databases towards deductive entailment since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As the query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as the ontology language we use the expressive DL SHQ. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem. In the course of this investigation, we also establish the complexity of consistency of Boolean knowledge bases in SHQ.
@article{ BaBL-JWS15, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.websem.2014.11.008}, journal = {Journal of Web Semantics}, pages = {71--93}, title = {Temporal Query Entailment in the Description Logic {$\mathcal{SHQ}$}}, volume = {33}, year = {2015}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints on unifiers can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we restrict our attention to solutions that are built from so-called atoms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of (general) disunification problems.
@inproceedings{ BaBM-RTA15, address = {Warsaw, Poland}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 26th International Conference on Rewriting Techniques and Applications (RTA'15)}, editor = {Maribel {Fern\'andez}}, pages = {40--56}, publisher = {Dagstuhl Publishing}, series = {Leibniz International Proceedings in Informatics}, title = {Dismatching and Local Disunification in {$\mathcal{EL}$}}, volume = {36}, year = {2015}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints on unifiers can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we restrict our attention to solutions that are built from so-called atoms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of (general) disunification problems.
@inproceedings{ BaBM-DL15, address = {Athens, Greece}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics (DL'15)}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {30--33}, series = {CEUR Workshop Proceedings}, title = {Dismatching and Local Disunification in {$\mathcal{EL}$} (Extended Abstract)}, volume = {1350}, year = {2015}, }
BibTeX Entry PDF File
@inproceedings{ BaBM-UNIF15, address = {Warsaw, Poland}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 29th International Workshop on Unification (UNIF'15)}, editor = {Santiago {Escobar} and Mateu {Villaret}}, pages = {13--18}, title = {Dismatching and Local Disunification in {$\mathcal{EL}$} (Extended Abstract)}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
The combination of Fuzzy Logics and Description Logics (DLs) has been investigated for at least two decades because such fuzzy DLs can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp Description Logics have been extended to reason also with their fuzzy counterparts. It has turned out, however, that in the presence of general concept inclusion axioms (GCIs) this extension is less straightforward than thought. In fact, a number of tableau algorithms claimed to deal correctly with fuzzy DLs with GCIs have recently been shown to be incorrect. In this paper, we concentrate on fuzzy ALC, the fuzzy extension of the well-known DL ALC. We present a terminating, sound, and complete tableau algorithm for fuzzy ALC with arbitrary continuous t-norms. Unfortunately, in the presence of GCIs, this algorithm does not yield a decision procedure for consistency of fuzzy ALC ontologies since it uses as a sub-procedure a solvability test for a finitely represented, but possibly infinite, system of inequations over the real interval [0,1], which are built using the t-norm. In general, it is not clear whether this solvability problem is decidable for such infinite systems of inequations. This may depend on the specific t-norm used. In fact, we also show in this paper that consistency of fuzzy ALC ontologies with GCIs is undecidable for the product t-norm. This implies, of course, that for the infinite systems of inequations produced by the tableau algorithm for fuzzy ALC with product t-norm, solvability is in general undecidable. We also give a brief overview of recently obtained (un)decidability results for fuzzy ALC w.r.t. other t-norms.
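For reference, the three fundamental continuous t-norms that dominate this line of work are the Gödel, product, and Łukasiewicz t-norms:

x \otimes_G y = \min(x,y) \qquad
x \otimes_\Pi y = x \cdot y \qquad
x \otimes_L y = \max(0,\, x+y-1)
% every continuous t-norm can be obtained from these as an ordinal sum;
% the undecidability result mentioned above concerns the product t-norm \otimes_\Pi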
@article{ BaBP-JPL15, author = {Franz {Baader} and Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1007/s10992-014-9329-3}, journal = {Journal of Philosophical Logic}, number = {2}, pages = {117--146}, title = {On the Decidability Status of Fuzzy {$\mathcal{ALC}$} with General Concept Inclusions}, volume = {44}, year = {2015}, }
Abstract BibTeX Entry PDF File
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<, <=, >, >=} then collect all the individuals that belong to C with degree ~t. We generalize a well-known characterization of membership in EL concepts to construct a specific graded membership function deg, and investigate the complexity of reasoning in the Description Logic tauEL(deg), which extends EL by threshold concepts defined using deg. We also compare the instance problem for threshold concepts of the form C>t in tauEL(deg) with the relaxed instance queries of Ecke et al.
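For instance (concept and role names invented for this sketch), the following threshold concept collects all individuals that belong to the EL concept Tall ⊓ ∃hasChild.Tall with degree at least 0.8 under deg:

(\mathit{Tall} \sqcap \exists \mathit{hasChild}.\mathit{Tall})_{\geq 0.8}
% the C~t notation from the abstract, instantiated with ~ set to >= and t to 0.8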
@inproceedings{ BBG-FROCOS-15, address = {Wroclaw, Poland}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver {Fern\'andez Gil}}, booktitle = {Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)}, editor = {Carsten {Lutz} and Silvio {Ranise}}, pages = {33--48}, publisher = {Springer-Verlag}, series = {Lectures Notes in Artificial Intelligence}, title = {Adding Threshold Concepts to the Description Logic {$\mathcal{EL}$}}, volume = {9322}, year = {2015}, }
Abstract BibTeX Entry PDF File
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function which, for each individual and concept, yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts then collect all the individuals that belong to an EL concept C with degree less than, less than or equal to, greater than, or greater than or equal to r, for some r in [0,1]. We generalize a well-known characterization of membership in EL concepts to obtain an appropriate graded membership function deg, and investigate the complexity of reasoning in the Description Logic which extends EL by threshold concepts defined using deg.
@inproceedings{ BBGIL-DL-15, address = {Athens, Greece}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver {Fern\'andez Gil}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics (DL-2015)}, editor = {Diego {Calvanese} and Boris {Konev}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Adding Threshold Concepts to the Description Logic {EL} (extended abstract)}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File
The exact unification type of an equational theory is based on a new preorder on substitutions, called the exactness preorder, which is tailored towards transferring decidability results for unification to disunification. We show that two important results regarding the unification type of commutative theories hold not only for the usual instantiation preorder, but also for the exactness preorder: w.r.t. elementary unification, commutative theories are of type unary or nullary, and the theory ACUIh of Abelian idempotent monoids with a homomorphism is nullary.
@inproceedings{ BaLu-UNIF2015, address = {Warsaw, Poland}, author = {Franz {Baader} and Pierre {Ludmann}}, booktitle = {Proceedings of the 29th International Workshop on Unification ({UNIF'15})}, editor = {Santiago {Escobar} and Mateu {Villaret}}, pages = {19--23}, title = {The Exact Unification Type of Commutative Theories}, year = {2015}, }
Abstract BibTeX Entry DOI
Background: Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and support of complex reasoning tasks. Most biomedical ontologies and taxonomies on the other hand define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions. Results: We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we provide an analysis of the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact the quality of the generated definitions? (3) How do different machine learning algorithms compare to each other for the task of formal definition generation? And (4) what is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size do not impact the performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable in formalizing textual definitions. Conclusions: The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL.
@article{ Petrovaetal-BioSem, author = {Alina {Petrova} and Yue {Ma} and George {Tsatsaronis} and Maria {Kissa} and Felix {Distel} and Franz {Baader} and Michael {Schroeder}}, doi = {https://doi.org/10.1186/s13326-015-0015-3}, journal = {Journal of Biomedical Semantics}, number = {22}, title = {Formalizing Biomedical Concepts from Textual Definitions}, volume = {6}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File
Our understanding of the notion "dynamic system" is a rather broad one: such a system has states, which can change over time. Ontologies are used to describe the states of the system, possibly in an incomplete way. Monitoring is then concerned with deciding whether some run of the system or all of its runs satisfy a certain property, which can be expressed by a formula of an appropriate temporal logic. We consider different instances of this broad framework, which can roughly be classified into two cases. In one instance, the system is assumed to be a black box, whose inner working is not known, but whose states can be (partially) observed during a run of the system. In the second instance, one has (partial) knowledge about the inner working of the system, which provides information on which runs of the system are possible. In this paper, we will review some of our recent work that can be seen as instances of this general framework of ontology-based monitoring of dynamic systems. We will also mention possible extensions towards probabilistic reasoning and the integration of mathematical modeling of dynamical systems.
@inproceedings{ Ba-KR-2014, address = {Vienna, Austria}, author = {Franz {Baader}}, booktitle = {Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, note = {Invited contribution.}, pages = {678--681}, publisher = {AAAI Press}, title = {Ontology-Based Monitoring of Dynamic Systems}, year = {2014}, }
Abstract BibTeX Entry DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
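An example of the kind of property expressible in ALC-LTL (the concept and role names are invented here): ALC axioms take the place of propositional variables inside LTL operators, as in

\Box\,\bigl(\exists \mathit{hasSymptom}.\mathit{Fever} \sqsubseteq \exists \mathit{treatedBy}.\mathit{Doctor}\bigr)
% "at every time point, everybody with the symptom fever is treated by a doctor";
% a monitor reading a finite prefix of the observed behaviour reports whether every
% continuation satisfies the formula, no continuation does, or the prefix is still inconclusive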
@article{ BaLi-JAL14, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.jal.2014.09.001}, journal = {Journal of Applied Logic}, number = {4}, pages = {584--613}, title = {Runtime Verification Using the Temporal Description Logic $\mathcal{ALC}$-LTL Revisited}, volume = {12}, year = {2014}, }
Abstract BibTeX Entry PDF File
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we were able to solve the general case: we can show that matching in EL w.r.t. general TBoxes is NP-complete. We also determine some tractable variants of the matching problem.
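A small matching problem of the kind considered here (names invented for illustration): the concept pattern contains a variable X that must be replaced by an EL concept so that pattern and concept become equivalent.

% concept:  ∃child.(Doctor ⊓ Rich) ⊓ Female
% pattern:  ∃child.X ⊓ Female
% matcher:  σ = {X ↦ Doctor ⊓ Rich}
\sigma\bigl(\exists \mathit{child}.X \sqcap \mathit{Female}\bigr)
\;\equiv\;
\exists \mathit{child}.(\mathit{Doctor} \sqcap \mathit{Rich}) \sqcap \mathit{Female}

W.r.t. a general TBox, plain equivalence is replaced by equivalence modulo the GCIs of the TBox.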
@inproceedings{ BaMo-UNIF14, address = {Vienna, Austria}, author = {Franz {Baader} and Barbara {Morawska}}, booktitle = {Proceedings of the 28th International Workshop on Unification ({UNIF'14})}, editor = {Temur {Kutsia} and Christophe {Ringeissen}}, pages = {22--26}, series = {RISC-Linz Report Series No. 14-06}, title = {Matching with respect to general concept inclusions in the Description Logic $\mathcal{EL}$}, year = {2014}, }
Abstract BibTeX Entry PDF File
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that matching without a TBox is NP-complete. In this paper we show that matching in EL w.r.t. general TBoxes (i.e., finite sets of general concept inclusions, GCIs) is in NP by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem w.r.t. general TBoxes.
@inproceedings{ BaMo-KI2014, author = {Franz {Baader} and Barbara {Morawska}}, booktitle = {Proceedings of the 37th German Conference on Artificial Intelligence (KI'14)}, editor = {Carsten {Lutz} and Michael {Thielscher}}, pages = {135--146}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Matching with respect to general concept inclusions in the Description Logic $\mathcal{EL}$}, volume = {8736}, year = {2014}, }
Abstract BibTeX Entry PDF File
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we solve the general case in this paper: we show that matching in EL w.r.t. general TBoxes is NP-complete by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem.
@inproceedings{ BaMo-DL14, address = {Vienna, Austria}, author = {Franz {Baader} and Barbara {Morawska}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {33--44}, series = {CEUR Workshop Proceedings}, title = {Matching with respect to general concept inclusions in the Description Logic $\mathcal{EL}$}, volume = {1193}, year = {2014}, }
2013
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
@inproceedings{ BaBL-CADE13, address = {Lake Placid, NY, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 24th International Conference on Automated Deduction (CADE-24)}, editor = {Maria Paola {Bonacina}}, pages = {330--344}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Temporalizing Ontology-Based Data Access}, volume = {7898}, year = {2013}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete.
@inproceedings{ BaBM-UNIF13, address = {Eindhoven, The Netherlands}, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Barbara {Morawska}}, booktitle = {Proceedings of the 27th International Workshop on Unification (UNIF'13)}, editor = {Barbara {Morawska} and Konstantin {Korovin}}, title = {Hybrid Unification in the Description Logic {$\mathcal{EL}$}}, year = {2013}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete and introduce a goal-oriented algorithm for computing hybrid unifiers.
@inproceedings{ BaFM-FroCoS13, address = {Nancy, France}, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Barbara {Morawska}}, booktitle = {Proceedings of the 9th International Symposium on Frontiers of Combining Systems ({FroCoS 2013})}, editor = {Pascal {Fontaine} and Christophe {Ringeissen} and Renate A. {Schmidt}}, month = {September}, pages = {295--310}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Hybrid Unification in the Description Logic {$\mathcal{EL}$}}, volume = {8152}, year = {2013}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete.
@inproceedings{ BaFM-DL13, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Barbara {Morawska}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, pages = {29--40}, series = {CEUR Workshop Proceedings}, title = {Hybrid {$\mathcal{EL}$}-Unification is NP-Complete}, venue = {Ulm, Germany}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File
Language equations are equations where both the constants occurring in the equations and the solutions are formal languages. They have first been introduced in formal language theory, but are now also considered in other areas of computer science. In the present paper, we restrict the attention to language equations with one-sided concatenation, but in contrast to previous work on these equations, we allow not just union but all Boolean operations to be used when formulating them. In addition, we are not just interested in deciding solvability of such equations, but also in deciding other properties of the set of solutions, like its cardinality (finite, infinite, uncountable) and whether it contains least/greatest solutions. We show that all these decision problems are ExpTime-complete.
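A classical example of a language equation with one-sided concatenation (invented here for illustration, not taken from the paper): the variable is concatenated only with a constant language.

X \;=\; X\{a\} \,\cup\, \{b\}
% its unique solution over finite words is X = b a^* = { b, ba, baa, ... };
% for such equations with all Boolean operations available, the paper studies deciding
% solvability, the cardinality of the solution set (finite, infinite, uncountable),
% and the existence of least/greatest solutions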
@article{ BaOk-FI13, author = {Franz {Baader} and Alexander {Okhotin}}, journal = {Fundamenta Informaticae}, number = {1}, pages = {1--35}, title = {On Language Equations with One-sided Concatenation}, volume = {126}, year = {2013}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
High-level action programming languages such as Golog have successfully been used to model the behavior of autonomous agents. In addition to a logic-based action formalism for describing the environment and the effects of basic actions, they enable the construction of complex actions using typical programming language constructs. To ensure that the execution of such complex actions leads to the desired behavior of the agent, one needs to specify the required properties in a formal way, and then verify that these requirements are met by any execution of the program. Due to the expressiveness of the action formalism underlying Golog (Situation Calculus), the verification problem for Golog programs is in general undecidable. Action formalisms based on Description Logic (DL) try to achieve decidability of inference problems such as the projection problem by restricting the expressiveness of the underlying base logic. However, until now these formalisms have not been used within Golog programs. In the present paper, we introduce a variant of Golog where basic actions are defined using such a DL-based formalism, and show that the verification problem for such programs is decidable. This improves on our previous work on verifying properties of infinite sequences of DL actions in that it considers (finite and infinite) sequences of DL actions that correspond to (terminating and non-terminating) runs of a Golog program rather than just infinite sequences accepted by a Büchi automaton abstracting the program.
@inproceedings{ BaZa-FroCoS13, address = {Nancy, France}, author = {Franz {Baader} and Benjamin {Zarrie{\ss}}}, booktitle = {Proceedings of the 9th International Symposium on Frontiers of Combining Systems ({FroCoS 2013})}, editor = {Pascal {Fontaine} and Christophe {Ringeissen} and Renate A. {Schmidt}}, month = {September}, pages = {181--196}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Verification of Golog Programs over Description Logic Actions}, volume = {8152}, year = {2013}, }
Abstract BibTeX Entry PDF File
Ontologies such as the SNOMED Clinical Terms (SNOMED CT) and the Medical Subject Headings (MeSH) play a major role in life sciences. Formally modeling the concepts and roles in this domain is a crucial step towards integrating biomedical knowledge across applications. In this direction, we propose a novel methodology to learn formal definitions for biomedical concepts from unstructured text. We evaluate the suggested methodology experimentally by learning formal definitions of SNOMED CT concepts from their text definitions in MeSH. The evaluation focuses on learning three roles which are among the most populated roles in SNOMED CT: Associated Morphology, Finding Site and Causative Agent. Results show that our methodology can achieve an accuracy of up to 75%. For the representation of the instances, three main approaches are suggested, namely Bag of Words, word n-grams and character n-grams.
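An illustrative target of such a formalization (invented for this sketch, not an actual SNOMED CT axiom), using two of the three roles mentioned above: the textual definition "an inflammation of the heart muscle" could be turned into

\mathit{Myocarditis} \;\sqsubseteq\; \exists \mathit{AssociatedMorphology}.\mathit{Inflammation}
  \;\sqcap\; \exists \mathit{FindingSite}.\mathit{HeartMuscleStructure}
% the relation extraction step has to recover the role (e.g., Finding Site) and the
% filler concept (e.g., HeartMuscleStructure) from the textual definition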
@inproceedings{ TsaEtAl-OWLED13, address = {Montpellier, France}, author = {George {Tsatsaronis} and Alina {Petrova} and Maria {Kissa} and Yue {Ma} and Felix {Distel} and Franz {Baader} and Michael {Schroeder}}, booktitle = {Proceedings of the 10th OWL: Experiences and Directions Workshop (OWLED 2013)}, editor = {Kavitha {Srinivas} and Simon {Jupp}}, month = {May}, title = {Learning Formal Definitions for Biomedical Concepts}, year = {2013}, }
2012
Abstract BibTeX Entry PDF File
UEL is a system that computes unifiers for unification problems formulated in the description logic EL. EL is a description logic of restricted expressivity, but it is still expressive enough for the formal representation of biomedical ontologies, such as the large medical ontology SNOMED CT. We propose to use UEL as a tool to detect redundancies in such ontologies by computing unifiers of two formal concepts suspected of expressing the same concept of the application domain. UEL provides access to two different unification algorithms and can be used as a plug-in of the popular ontology editor Protégé, or stand-alone.
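As a purely illustrative example of this intended use (ours, not taken from the paper): if an ontology contains the two concept descriptions
\[ \exists \mathsf{finding\_site}.\mathsf{Head} \sqcap Y \qquad\text{and}\qquad \mathsf{Injury} \sqcap \exists \mathsf{finding\_site}.X, \]
where X and Y are treated as variables, then the substitution that maps X to Head and Y to Injury makes the two descriptions equivalent. Such a unifier is evidence that the two descriptions may express the same domain concept and thus points to a possible redundancy.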
@inproceedings{ BBMM-DL-12, address = {Rome, Italy}, author = {Franz {Baader} and Stefan {Borgwardt} and Julian Alfredo {Mendez} and Barbara {Morawska}}, booktitle = {Proceedings of the 25th International Workshop on Description Logics (DL'12)}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, pages = {26--36}, series = {CEUR Workshop Proceedings}, title = {{UEL}: Unification Solver for {$\mathcal{EL}$}}, volume = {846}, year = {2012}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. A goal-oriented NP unification algorithm for EL that uses nondeterministic rules to transform a given unification problem into solved form has recently been presented. In this paper, we extend this goal-oriented algorithm in two directions: on the one hand, we add general concept inclusion axioms (GCIs), and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the algorithm to be complete, however, the ontology consisting of the GCIs and role axioms needs to satisfy a certain cycle restriction.
@inproceedings{ BaBM-AI12, address = {Sydney, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 25th Australasian Joint Conference on Artificial Intelligence (AI'12)}, editor = {Michael {Thielscher} and Dongmo {Zhang}}, pages = {493--504}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {A Goal-Oriented Algorithm for Unification in {$\mathcal{ELH}_{R^+}$} w.r.t. Cycle-Restricted Ontologies}, volume = {7691}, year = {2012}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has been shown to be NP-complete, and thus of significantly lower complexity than unification in other DLs of similarly restricted expressive power. Recently, a brute-force NP-unification algorithm for EL w.r.t. a restricted form of general concept inclusion axioms was developed. This paper introduces a goal-oriented algorithm that reduces the amount of nondeterministic guesses considerably.
@inproceedings{ BaBM-DL-12, address = {Rome, Italy}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 25th International Workshop on Description Logics (DL'12)}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, pages = {37--47}, series = {CEUR Workshop Proceedings}, title = {A Goal-Oriented Algorithm for Unification in {$\mathcal{EL}$} w.r.t. Cycle-Restricted {TB}oxes}, volume = {846}, year = {2012}, }
Abstract BibTeX Entry PDF File
Unification has been investigated both in modal logics and in description logics, albeit with different motivations. In description logics, unification can be used to detect redundancies in ontologies. In this context, it is not sufficient to decide unifiability; one must also compute appropriate unifiers and present them to the user. For the description logic EL, which is used to define several large biomedical ontologies, deciding unifiability is an NP-complete problem. It is known that every solvable EL-unification problem has a minimal unifier, and that every minimal unifier is a local unifier. Existing unification algorithms for EL compute all minimal unifiers, but additionally (all or some) non-minimal local unifiers. Computing only the minimal unifiers would be better since there are considerably fewer minimal unifiers than local ones, and their size is usually also quite small. In this paper we investigate the question of whether the known algorithms for EL-unification can be modified such that they compute exactly the minimal unifiers without changing the complexity and the basic nature of the algorithms. Basically, the answer we give to this question is negative.
@inproceedings{ BaBM-AiML12, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 9-th International Conference on Advances in Modal Logic ({AiML'12})}, editor = {Silvio {Ghilardi} and Lawrence {Moss}}, title = {Computing Minimal {$\mathcal{EL}$}-unifiers is Hard}, year = {2012}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of significantly lower complexity than unification in other DLs of similarly restricted expressive power. However, the unification algorithms for EL developed so far cannot deal with general concept inclusion axioms (GCIs). This paper makes a considerable step towards addressing this problem, but the GCIs our new unification algorithm can deal with still need to satisfy a certain cycle restriction.
@inproceedings{ BaBM-KR12, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning ({KR'12})}, editor = {Gerhard {Brewka} and Thomas {Eiter} and Sheila A. {McIlraith}}, pages = {568--572}, publisher = {AAAI Press}, title = {Extending Unification in {$\mathcal{EL}$} Towards General {TBoxes}}, year = {2012}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. Several algorithms that solve unification in EL have previously been presented. In this paper, we summarize recent extensions of these algorithms that can deal with general concept inclusion axioms (GCIs), role hierarchies (H), and transitive roles (R+). For the algorithms to be complete, however, the ontology consisting of the GCIs and role axioms needs to satisfy a certain cycle restriction.
@inproceedings{ BaBM-UNIF12, address = {Manchester, UK}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 26th International Workshop on Unification (UNIF'12)}, editor = {Santiago {Escobar} and Konstantin {Korovin} and Vladimir {Rybakov}}, title = {Recent Advances in Unification for the {$\mathcal{EL}$} Family}, year = {2012}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Unification in Description Logics has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the Description Logic EL, which is used to define several large biomedical ontologies, unification is NP-complete. An NP unification algorithm for EL based on a translation into propositional satisfiability (SAT) has recently been presented. In this paper, we extend this SAT encoding in two directions: on the one hand, we add general concept inclusion axioms, and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the translation to be complete, however, the ontology needs to satisfy a certain cycle restriction. The SAT translation depends on a new rewriting-based characterization of subsumption w.r.t. ELHR+-ontologies.
@inproceedings{ BaBM-IJCAR-12, address = {Manchester, UK}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)}, pages = {30--44}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {{SAT}-Encoding of Unification in {$\mathcal{ELH}_{R^+}$} w.r.t. Cycle-Restricted Ontologies}, volume = {7364}, year = {2012}, }
Abstract BibTeX Entry PDF File
Most of the research on temporalized Description Logics (DLs) has concentrated on the case where temporal operators can be applied to concepts, and sometimes additionally to TBox axioms and ABox assertions. The aim of this paper is to study temporalized DLs where temporal operators on TBox axioms and ABox assertions are available, but temporal operators on concepts are not. While the main application of existing temporalized DLs is the representation of conceptual models that explicitly incorporate temporal aspects, the family of DLs studied in this paper addresses applications that focus on the temporal evolution of data and of ontologies. Our results show that disallowing temporal operators on concepts can significantly decrease the complexity of reasoning. In particular, reasoning with rigid roles (whose interpretation does not change over time) is typically undecidable without such a syntactic restriction, whereas our logics are decidable in elementary time even in the presence of rigid roles. We analyze the effects on computational complexity of dropping rigid roles, dropping rigid concepts, replacing temporal TBoxes with global ones, and restricting the set of available temporal operators. In this way, we obtain a novel family of temporalized DLs whose complexity ranges from 2-ExpTime-complete via NExpTime-complete to ExpTime-complete.
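For illustration (our own toy formula, not taken from the paper), such a logic applies temporal operators to axioms but not inside concepts, as in
\[ \Box\bigl(\exists \mathsf{treats}.\mathsf{Patient} \sqsubseteq \mathsf{Doctor}\bigr) \;\wedge\; \Diamond\,\mathsf{Doctor}(\mathsf{BOB}), \]
which states that the GCI holds at every time point and that the ABox assertion holds at some time point.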
@article{ BaaderGL12, author = {Franz {Baader} and Silvio {Ghilardi} and Carsten {Lutz}}, journal = {ACM Trans. Comput. Log.}, number = {3}, title = {{LTL} over Description Logic Axioms}, volume = {13}, year = {2012}, }
Abstract BibTeX Entry PDF File
The framework developed in this paper can deal with scenarios where selected sub-ontologies of a large ontology are offered as views to users, based on contexts like the access rights of a user, the trust level required by the application, or the level of detail requested by the user. Instead of materializing a large number of different sub-ontologies, we propose to keep just one ontology, but equip each axiom with a label from an appropriate context lattice. The different contexts of this ontology are then also expressed by elements of this lattice. For large-scale ontologies, certain consequences (like the subsumption hierarchy) are often pre-computed. Instead of pre-computing these consequences for every context, our approach computes just one label (called a boundary) for each consequence such that a comparison of the user label with the consequence label determines whether the consequence follows from the sub-ontology determined by the context. We describe different black-box approaches for computing boundaries, and present first experimental results that compare the efficiency of these approaches on large real-world ontologies. Black-box means that, rather than requiring modifications of existing reasoning procedures, these approaches can use such procedures directly as sub-procedures, which allows us to employ existing highly-optimized reasoners. Similar to designing ontologies, the process of assigning axiom labels is error-prone. For this reason, we also address the problem of how to repair the labelling of an ontology in case the knowledge engineer notices that the computed boundary of a consequence does not coincide with her intuition regarding in which context the consequence should or should not be visible.
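In a simplified reading (our own schematic rendering; the paper's exact conventions may differ), the boundary of a consequence c can be obtained from its minimal axiom sets (MinAs) as
\[ \nu(c) \;=\; \bigoplus_{S \in \mathrm{MinA}(c)} \;\bigotimes_{\alpha \in S} \mathrm{lab}(\alpha), \]
where \(\otimes\) and \(\oplus\) denote the meet and join of the context lattice and \(\mathrm{lab}(\alpha)\) is the label of axiom \(\alpha\); comparing a user's context label with \(\nu(c)\) in the lattice then decides whether c is visible in that context.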
@article{ BaKP-JWS12, author = {Franz {Baader} and Martin {Knechtel} and Rafael {Pe{\~n}aloza}}, journal = {Journal of Web Semantics}, note = {Available at http://dx.doi.org/10.1016/j.websem.2011.11.006}, pages = {22--40}, title = {Context-Dependent Views to Axioms and Consequences of Semantic Web Ontologies}, volume = {12--13}, year = {2012}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
UEL is a system that computes unifiers for unification problems formulated in the description logic EL. EL is a description logic of restricted expressivity, but it is still expressive enough for the formal representation of biomedical ontologies, such as the large medical ontology SNOMED CT. We propose to use UEL as a tool to detect redundancies in such ontologies by computing unifiers of two formal concepts suspected of expressing the same concept of the application domain. UEL can be used as a plug-in of the popular ontology editor Protégé, or as a standalone unification application.
@inproceedings{ BaMM-IJCAR-12, address = {Manchester, UK}, author = {Franz {Baader} and Julian {Mendez} and Barbara {Morawska}}, booktitle = {Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)}, pages = {45--51}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {UEL: Unification Solver for the Description Logic EL -- System Description}, volume = {7364}, year = {2012}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
We extend previous results on the complexity of solving language equations with one-sided concatenation and all Boolean operations to the case where also disequations (i.e., negated equations) may occur. To show that solvability of systems of equations and disequations is still in ExpTime, we introduce a new type of automata working on infinite trees, which we call looping automata with colors. As applications of these results, we show new complexity results for disunification in the description logic FL0 and for monadic set constraints with negation. We believe that looping automata with colors may also turn out to be useful in other applications.
@inproceedings{ BaOk-LPAR18, address = {M{\'e}rida, Venezuela}, author = {Franz {Baader} and Alexander {Okhotin}}, booktitle = {Proceedings of the 18th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning {(LPAR-18)}}, editor = {Nikolaj {Bj{\o}rner} and Andrei {Voronkov}}, pages = {107--121}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Solving language equations and disequations with applications to disunification in description logics and monadic set constraints}, volume = {7180}, year = {2012}, }
Abstract BibTeX Entry PDF File
In the reasoning about actions community, one of the most basic reasoning problems is the projection problem: the question whether a certain assertion holds after executing a sequence of actions. While undecidable for general action theories based on the situation calculus, the projection problem was shown to be decidable in two different restrictions of the situation calculus to theories formulated using description logics. In this paper, we compare our implementations of projection procedures for these two approaches on random testing data for several realistic application domains. Important contributions of this work are not only the obtained experimental results, but also the approach for generating test cases. By using patterns extracted from the respective application domains, we ensure that the randomly generated input data make sense and are not inconsistent.
@inproceedings{ YeLiLi-DL-12, address = {Rome, Italy}, author = {Wael {Yehia} and Hongkai {Liu} and Marcel {Lippmann} and Franz {Baader} and Mikhail {Soutchanski}}, booktitle = {Proceedings of the 25th International Workshop on Description Logics ({DL-2012})}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, month = {June}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Experimental Results on Solving the Projection Problem in Action Formalisms Based on Description Logics}, volume = {846}, year = {2012}, }
2011
Abstract BibTeX Entry PDF File
Unification was originally introduced in automated deduction and term rewriting, but has recently also found applications in other fields. In this article, we give a survey of the results on unification obtained in two closely related, yet different, application areas of unification: description logics and modal logics.
@article{ BaaderGhilardi2011, author = {F. {Baader} and S. {Ghilardi}}, journal = {Logic Journal of the IGPL}, note = {Available at http://jigpal.oxfordjournals.org/content/19/6/705.abstract}, number = {6}, pages = {705--730}, title = {Unification in Modal and Description Logics}, volume = {19}, year = {2011}, }
Abstract BibTeX Entry (The final publication is available at link.springer.com)
Mainstream research in Description Logics (DLs) until recently concentrated on increasing the expressive power of the employed description language while keeping standard inference problems like subsumption and instance checking manageable, in the sense that highly optimized reasoning procedures for them behave well in practice. One of the main successes of this line of research was the adoption of OWL DL, which is based on an expressive DL, as the standard ontology language for the Semantic Web. More recently, there has been a growing interest in more light-weight DLs and in other kinds of inference problems, mainly triggered by the needs of applications with large-scale ontologies. In this paper, we first review the DL research leading to the very expressive DLs with practical inference procedures underlying OWL, and then sketch the recent development of light-weight DLs and novel inference procedures.
@article{ BaaderSpektrum2011, author = {Franz {Baader}}, journal = {Informatik-Spektrum}, number = {5}, pages = {434--442}, title = {What's new in Description Logics}, volume = {34}, year = {2011}, }
BibTeX Entry PDF File
@inproceedings{ BBBM-UNIF11, address = {Wroc{\l}aw, Poland}, author = {Franz {Baader} and Nguyen Thanh {Binh} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 25th International Workshop on Unification (UNIF'11)}, editor = {Franz {Baader} and Barbara {Morawska} and Jan {Otop}}, pages = {2--8}, title = {Computing Local Unifiers in the Description Logic {$\mathcal{EL}$} without the Top Concept}, year = {2011}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show in this paper that unification in EL without the top concept is PSpace-complete.
@inproceedings{ BaBiBoMoCADE11, address = {Wroc{\l}aw, Poland}, author = {Franz {Baader} and Nguyen Thanh {Binh} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 23rd International Conference on Automated Deduction ({CADE 2011})}, editor = {Nikolaj {Bj{\o}rner} and Viorica {Sofronie-Stokkermans}}, pages = {70--84}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification in the Description Logic {$\mathcal{EL}$} without the Top Concept}, volume = {6803}, year = {2011}, }
Abstract BibTeX Entry PDF File
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show that unification in EL without the top concept is PSpace-complete.
@inproceedings{ BBBM-DL-11, author = {Franz {Baader} and Nguyen Thanh {Binh} and Stefan {Borgwardt} and Barbara {Morawska}}, booktitle = {Proceedings of the 24th International Workshop on Description Logics (DL 2011)}, editor = {Riccardo {Rosati} and Sebastian {Rudolph} and Michael {Zakharyaschev}}, pages = {26--36}, series = {CEUR-WS}, title = {Unification in the Description Logic {$\mathcal{EL}$} without the Top Concept}, volume = {745}, year = {2011}, }
Abstract BibTeX Entry PDF File ©IEEE Press
This paper concentrates on a fuzzy Description Logic with product t-norm and involutive negation. It does not answer the question posed in its title for this logic, but it gives strong indications that the answer might in fact be "no." On the one hand, it shows that an algorithm that was claimed to answer the question affirmatively for this logic is actually incorrect. On the other hand, it proves undecidability of a variant of this logic.
@inproceedings{ BaPe-FuzzIEEE11, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of 2011 IEEE International Conference on Fuzzy Systems ({Fuzz-IEEE 2011})}, pages = {1735--1742}, publisher = {IEEE Press}, title = {Are Fuzzy Description Logics with General Concept Inclusion Axioms Decidable?}, year = {2011}, }
Abstract BibTeX Entry PDF File
Fuzzy Description Logics (DLs) have been investigated for at least two decades because they can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp DLs have been extended to reason also with their fuzzy counterparts. Recently, it has been shown that, in the presence of GCIs, some of these fuzzy DLs do not have the finite model property, thus throwing doubt on the correctness of tableau algorithms claimed to handle fuzzy DLs with GCIs. Previously, we have shown that these doubts are indeed justified, by proving that a certain fuzzy DL with product t-norm and involutive negation is undecidable. In this paper, we show that undecidability also holds if we consider a fuzzy DL where disjunction and involutive negation are replaced by the constructor implication, interpreted as the residuum.
@inproceedings{ BaPe-DL11, address = {Barcelona, Spain}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 24th International Workshop on Description Logics (DL 2011)}, editor = {Riccardo {Rosati} and Sebastian {Rudolph} and Michael {Zakharyaschev}}, series = {CEUR-WS}, title = {GCIs Make Reasoning in Fuzzy DL with the Product T-norm Undecidable}, volume = {745}, year = {2011}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
The combination of Fuzzy Logics and Description Logics (DLs) has been investigated for at least two decades because such fuzzy DLs can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp Description Logics have been extended to reason also with their fuzzy counterparts. Recently, it has been shown that, in the presence of general concept inclusion axioms (GCIs), some of these fuzzy DLs actually do not have the finite model property, thus throwing doubt on the correctness of tableau algorithms that were claimed to handle fuzzy DLs with GCIs. In a previous paper, we have shown that these doubts are indeed justified, by proving that a certain fuzzy DL with product t-norm and involutive negation is undecidable. In the present paper, we show that undecidability also holds if we consider a t-norm-based fuzzy DL where disjunction and involutive negation are replaced by the constructor implication, which is interpreted as the residuum. The only condition on the t-norm is that it is a continuous t-norm "starting" with the product t-norm, which covers an uncountable family of t-norms.
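For reference, the constructors involved here are interpreted over the unit interval [0,1]: the product t-norm is \(x \otimes y = x \cdot y\), and its residuum, which interprets implication, is
\[ x \Rightarrow y \;=\; \begin{cases} 1 & \text{if } x \le y,\\ y/x & \text{otherwise.} \end{cases} \]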
@inproceedings{ BaPe-FroCoS11, address = {Saarbr{\"u}cken, Germany}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of 8th International Symposium Frontiers of Combining Systems ({FroCoS 2011})}, editor = {Cesare {Tinelli} and Viorica {Sofronie-Stokkermans}}, pages = {55--70}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {On the Undecidability of Fuzzy Description Logics with GCIs and Product t-norm}, volume = {6989}, year = {2011}, }
2010
BibTeX Entry (The final publication is available at link.springer.com)
@article{ BaBeNi-IS-10, author = {Franz {Baader} and Bernhard {Beckert} and Tobias {Nipkow}}, journal = {Informatik-Spektrum}, number = {5}, pages = {444--451}, title = {Deduktion: von der {T}heorie zur {A}nwendung}, volume = {33}, year = {2010}, }
Abstract BibTeX Entry PDF File
Ontologies can be used to provide an enriched vocabulary for the formulation of queries over instance data. We identify query emptiness and predicate emptiness as two central reasoning services in this context. Query emptiness asks whether a given query has an empty answer over all data sets formulated in a given signature. Predicate emptiness is defined analogously, but quantifies universally over all queries that contain a given predicate. In this paper, we determine the computational complexity of query emptiness and predicate emptiness in the EL, DL-Lite, and ALC-families of description logics, investigate the connection to ontology modules, and perform a practical case study to evaluate the new reasoning services.
@inproceedings{ BaaderBLW10, author = {Franz {Baader} and Meghyn {Bienvenu} and Carsten {Lutz} and Frank {Wolter}}, booktitle = {Proceedings of the 12th International Conference on Principles of Knowledge Representation and Reasoning ({KR2010})}, editor = {Fangzhen {Lin} and Ulrike {Sattler}}, publisher = {AAAI Press}, title = {Query and Predicate Emptiness in Description Logics}, year = {2010}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
In the reasoning about actions community, causal relationships have been proposed as a possible approach for solving the ramification problem, i.e., the problem of how to deal with indirect effects of actions. In this paper, we show that causal relationships can be added to action formalisms based on Description Logics (DLs) without destroying the decidability of the consistency and the projection problem. We investigate the complexity of these decision problems based on which DL is used as base logic for the action formalism.
@inproceedings{ BaLiLi-LPAR-10, address = {Yogyakarta, Indonesia}, author = {Franz {Baader} and Marcel {Lippmann} and Hongkai {Liu}}, booktitle = {Proceedings of the 17th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning ({LPAR-17})}, editor = {Christian G. {Ferm{\"u}ller} and Andrei {Voronkov}}, month = {October}, pages = {82--96}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science)}, title = {Using Causal Relationships to Deal with the Ramification Problem in Action Formalisms Based on Description Logics}, volume = {6397}, year = {2010}, }
Abstract BibTeX Entry PDF File
The verification problem for action logic programs with non-terminating behaviour is in general undecidable. In this paper, we consider a restricted setting in which the problem becomes decidable. On the one hand, we abstract from the actual execution sequences of a non-terminating program by considering infinite sequences of actions defined by a Büchi automaton. On the other hand, we assume that the logic underlying our action formalism is a decidable description logic rather than full first-order predicate logic.
@inproceedings{ BaLiMe-ECAI10, author = {Franz {Baader} and Hongkai {Liu} and Anees ul {Mehdi}}, booktitle = {Proceedings of the 19th European Conference on Artificial Intelligence ({ECAI10})}, editor = {Helder {Coelho} and Rudi {Studer} and Michael {Wooldridge}}, pages = {53--58}, publisher = {IOS Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Verifying Properties of Infinite Sequences of Description Logic Actions}, volume = {215}, year = {2010}, }
BibTeX Entry PDF File (The final publication is available at link.springer.com)
@article{ BaLuTu-KIJ-10, author = {Franz {Baader} and Carsten {Lutz} and Anni-Yasmin {Turhan}}, journal = {KI -- K{\"u}nstliche Intelligenz}, month = {April}, number = {1}, pages = {25--33}, title = {Small is again Beautiful in Description Logics}, volume = {24}, year = {2010}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
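The following is a minimal, self-contained Python sketch of the general reduce-to-SAT pattern described above. The clause set shown is a made-up toy instance, not the paper's actual encoding of EL-unification, and in practice the generated clauses would be handed to a highly optimized off-the-shelf SAT solver rather than to this naive enumeration.

from itertools import product

def satisfiable(clauses, num_vars):
    # Naive SAT check over DIMACS-style clauses: a positive integer k stands for
    # "variable k is true", a negative integer -k for "variable k is false".
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# Toy clause set standing in for the propositional constraints that an encoding
# of a unification problem might produce (hypothetical, for illustration only).
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = satisfiable(clauses, 3)
print("solvable" if model is not None else "unsolvable", model)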
@inproceedings{ BaMo-LPAR-10, address = {Yogyakarta, Indonesia}, author = {Franz {Baader} and Barbara {Morawska}}, booktitle = {Proceedings of the 17th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning ({LPAR-17})}, editor = {Christian G. {Ferm{\"u}ller} and Andrei {Voronkov}}, month = {October}, pages = {97--111}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science)}, title = {SAT Encoding of Unification in EL}, volume = {6397}, year = {2010}, }
Abstract BibTeX Entry PDF File
The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
@article{ Ba-Mo-LMCS-10, author = {Franz {Baader} and Barbara {Morawska}}, journal = {Logical Methods in Computer Science}, note = {Special Issue of the 20th International Conference on Rewriting Techniques and Applications; also available at http://arxiv.org/abs/1006.2289}, number = {3}, title = {Unification in the Description Logic {$\mathcal{EL}$}}, volume = {6}, year = {2010}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Axiom pinpointing has been introduced in description logics (DL) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question (MinA). Most of the pinpointing algorithms described in the DL literature are obtained as extensions of tableau-based reasoning algorithms for computing consequences from DL knowledge bases. In this paper, we show that automata-based algorithms for reasoning in DLs and other logics can also be extended to pinpointing algorithms. The idea is that the tree automaton constructed by the automata-based approach can be transformed into a weighted tree automaton whose so-called behaviour yields a pinpointing formula, i.e., a monotone Boolean formula whose minimal valuations correspond to the MinAs. We also develop an approach for computing the behaviour of a given weighted tree automaton. We use the DL SI as well as Linear Temporal Logic (LTL) to illustrate our new pinpointing approach.
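As a schematic example (ours, not taken from the paper): if a knowledge base consists of axioms a1, a2, a3 labelled with propositional variables p1, p2, p3, and a consequence c has exactly the MinAs {a1, a2} and {a3}, then a pinpointing formula for c is
\[ (p_1 \wedge p_2) \,\vee\, p_3, \]
whose minimal satisfying valuations correspond exactly to these two MinAs.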
@article{ BaPe-JAR09, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, journal = {Journal of Automated Reasoning}, month = {August}, note = {Special Issue: Selected Papers from IJCAR~2008}, number = {2}, pages = {91--129}, title = {Automata-based Axiom Pinpointing}, volume = {45}, year = {2010}, }
Abstract BibTeX Entry PDF File
Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. Most of the pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this article is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of tableau algorithms, which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
@article{ BaaPen-JLC10, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, journal = {Journal of Logic and Computation}, note = {Special Issue: Tableaux and Analytic Proof Methods}, number = {1}, pages = {5--34}, title = {Axiom Pinpointing in General Tableaux}, volume = {20}, year = {2010}, }
2009
Abstract BibTeX Entry PDF File ©Springer-Verlag
Description Logics (DLs) are a well-investigated family of logic-based knowledge representation formalisms, which can be used to represent the conceptual knowledge of an application domain in a structured and formally well-understood way. They are employed in various application domains, such as natural language processing, configuration, and databases, but their most notable success so far is the adoption of the DL-based language OWL as standard ontology language for the semantic web. This article concentrates on the problem of designing reasoning procedures for DLs. After a short introduction and a brief overview of the research in this area of the last 20 years, it will on the one hand present approaches for reasoning in expressive DLs, which are the foundation for reasoning in the Web ontology language OWL DL. On the other hand, it will consider tractable reasoning in the more light-weight DL EL, which is employed in bio-medical ontologies, and which is the foundation for the OWL 2 profile OWL 2 EL.
@incollection{ Baader09, author = {Franz {Baader}}, booktitle = {Reasoning Web: Semantic Technologies for Information Systems, 5th International Summer School 2009}, pages = {1--39}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Description Logics}, volume = {5689}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Situation Awareness (SA) is the problem of comprehending elements of an environment within a volume of time and space. It is a crucial factor in decision-making in dynamic environments. Current SA systems support the collection, filtering and presentation of data from different sources very well, and typically also some form of low-level data fusion and analysis, e.g., recognizing patterns over time. However, a still open research challenge is to build systems that support higher-level information fusion, viz., to integrate domain-specific knowledge and automatically draw conclusions that would otherwise remain hidden or would have to be drawn by a human operator. To address this challenge, we have developed a novel system architecture that emphasizes the role of formal logic and automated theorem provers in its main components. Additionally, it features controlled natural language for operator I/O. It offers three logical languages to adequately model different aspects of the domain. This makes it possible to build SA systems in a more declarative way than with current approaches. From an automated reasoning perspective, the main challenges lie in combining (existing) automated reasoning techniques, from low-level data fusion of time-stamped data to semantic analysis and alert generation based on linear temporal logic. The system has been implemented and interfaces with Google Earth to visualize the dynamics of situations and system output. It has been successfully tested on realistic data, but in this paper we focus on the system architecture and in particular on the interplay of the different reasoning components.
@inproceedings{ SAILpaper, author = {Franz {Baader} and Andreas {Bauer} and Peter {Baumgartner} and Anne {Cregan} and Alfredo {Gabaldon} and Krystian {Ji} and Kevin {Lee} and David {Rajaratnam} and Rolf {Schwitter}}, booktitle = {Proceedings of the 18th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (Tableaux 2009)}, editor = {Martin {Giese} and Arild {Waaler}}, pages = {77--92}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {A Novel Architecture for Situation Awareness Systems}, volume = {5607}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any time point yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix into a trace satisfy (violate) the formula. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use ALC-LTL, which can use axioms of the description logic ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we consider the case where states may be described in an incomplete way by ALC ABoxes.
@inproceedings{ BaBaLi-FroCoS09, author = {Franz {Baader} and Andreas {Bauer} and Marcel {Lippmann}}, booktitle = {Proceedings of the 7th International Symposium on Frontiers of Combining Systems (FroCoS 2009)}, editor = {Silvio {Ghilardi} and Roberto {Sebastiani}}, pages = {149--164}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Runtime Verification Using a Temporal Description Logic}, volume = {5749}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
We consider policies that are described by regular expressions, finite automata, or formulae of linear temporal logic (LTL). Such policies are assumed to describe situations that are problematic, and thus should be avoided. Given a trace pattern u, i.e., a sequence of action symbols and variables, where the variables stand for unknown (i.e., not observed) sequences of actions, we ask whether u potentially violates a given policy L, i.e., whether the variables in u can be replaced by sequences of actions such that the resulting trace belongs to L. We also consider the dual case where the regular policy L is supposed to describe all the admissible situations. Here, we want to know whether u always adheres to the given policy L, i.e., whether all instances of u belong to L. We determine the complexity of the violation and the adherence problem, depending on whether trace patterns are linear or not, and on whether the policy is assumed to be fixed or not.
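A small example of the violation problem (ours, not taken from the paper): over the action alphabet {ok, fail}, consider the policy
\[ L \;=\; \Sigma^*\,\mathit{fail}\,\mathit{fail}\,\Sigma^* \]
(two consecutive failures are problematic) and the trace pattern u = fail x fail. Then u potentially violates L, since instantiating the variable x with the empty sequence yields the trace fail fail, which belongs to L. Under the dual reading, where L describes the admissible traces, u does not always adhere to L, since instantiating x with ok yields fail ok fail, which does not belong to L.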
@inproceedings{ BaBaTiu09, author = {Franz {Baader} and Andreas {Bauer} and Alwen {Tiu}}, booktitle = {Proceedings of the Third International Conference on Language, and Automata Theory, and Applications {(LATA 2009)}}, editor = {A.H. {Dediu} and A.M. {Ionescu} and C. {Martin-Vide}}, pages = {105--116}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Matching Trace Patterns with Regular Policies}, volume = {5457}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
In a previous ICFCA paper we have shown that, in the Description Logics EL and ELgfp, the set of general concept inclusions holding in a finite model always has a finite basis. In this paper, we address the problem of how to compute this basis efficiently, by adapting methods from formal concept analysis.
@inproceedings{ BaDi09, author = {Franz {Baader} and Felix {Distel}}, booktitle = {Proceedings of the 7th International Conference on {F}ormal {C}oncept {A}nalysis, {(ICFCA 2009)}}, editor = {S\'ebastien {Ferr\'e} and Sebastian {Rudolph}}, pages = {146--161}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Exploring Finite Models in the Description Logic {ELgfp}}, volume = {5548}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
The framework developed in this paper can deal with scenarios where selected sub-ontologies of a large ontology are offered as views to users, based on criteria like the user's access rights, the trust level required by the application, or the level of detail requested by the user. Instead of materializing a large number of different sub-ontologies, we propose to keep just one ontology, but equip each axiom with a label from an appropriate labeling lattice. The access rights, required trust level, etc. are then also represented by a label (called the user label) from this lattice, and the corresponding sub-ontology is determined by comparing this label with the axiom labels. For large-scale ontologies, certain consequences (like the concept hierarchy) are often precomputed. Instead of precomputing these consequences for every possible sub-ontology, our approach computes just one label for each consequence such that a comparison of the user label with the consequence label determines whether the consequence follows from the corresponding sub-ontology or not. In this paper we determine under which restrictions on the user and axiom labels such consequence labels (called boundaries) always exist, describe different black-box approaches for computing boundaries, and present first experimental results that compare the efficiency of these approaches on large real-world ontologies. Black-box means that, rather than requiring modifications of existing reasoning procedures, these approaches can use such procedures directly as sub-procedures, which allows us to employ existing highly-optimized reasoners.
@inproceedings{ BaKP-ISWC-09, author = {Franz {Baader} and Martin {Knechtel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 8th International Semantic Web Conference (ISWC 2009)}, editor = {Abraham Bernstein et {al.}}, pages = {49--64}, series = {Lecture Notes in Computer Science}, title = {A Generic Approach for Large-Scale Ontological Reasoning in the Presence of Access Restrictions to the Ontology's Axioms}, volume = {5823}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
@inproceedings{ BaMo09, author = {Franz {Baader} and Barbara {Morawska}}, booktitle = {Proceedings of the 20th International Conference on Rewriting Techniques and Applications (RTA 2009)}, editor = {Ralf {Treinen}}, pages = {350--364}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification in the Description Logic {$\mathcal{EL}$}}, volume = {5595}, year = {2009}, }
Abstract BibTeX Entry PDF File
We recall the re-engineering of SNOMED CT's SEP encoding as proposed in a previous paper, and then show that a backward compatible version, which also contains definitions for the auxiliary S- and P-concepts, requires an additional complex role inclusion that destroys the acyclicity property of the set of complex role inclusions. For this reason, the backward compatible re-engineered version of SNOMED CT is not expressible in OWL 2, but it is expressible in EL++ and an appropriate extension of SROIQ.
@inproceedings{ OBML09, address = {Leipzig, Germany}, author = {Franz {Baader} and Stefan {Schulz} and Kent {Spackman} and Boontawee {Suntisrivaraporn}}, booktitle = {Proceedings of 1. Workshop des GI-Arbeitskreises Ontologien in Biomedizin und Lebenswissenschaften (OBML 2009)}, title = {How Should Parthood Relations be Expressed in {SNOMED CT}?}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
In a previous paper, we have introduced an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the assertional part and by a domain expert. This approach, called knowledge base completion, was based on an extension of attribute exploration to the case of partial contexts. The present paper recalls this approach, and then addresses usability issues that came up during first experiments with a preliminary implementation of the completion algorithm. It turns out that these issues can be addressed by extending the exploration algorithm for partial contexts such that it can deal with implicational background knowledge.
@inproceedings{ BaSe09, author = {Franz {Baader} and Bar\i{}\c{s} {Sertkaya}}, booktitle = {Proceedings of the 7th International Conference on {F}ormal {C}oncept {A}nalysis, {(ICFCA 2009)}}, editor = {S\'ebastien {Ferr\'e} and Sebastian {Rudolph}}, pages = {1--21}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Usability Issues in Description Logic Knowledge Base Completion}, volume = {5548}, year = {2009}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes, which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
@inproceedings{ FroCos-09, author = {Conrad {Drescher} and Hongkai {Liu} and Franz {Baader} and Steffen {Guhlemann} and Uwe {Petersohn} and Peter {Steinke} and Michael {Thielscher}}, booktitle = {The Seventh International Symposium on Frontiers of Combining Systems (FroCoS-2009)}, editor = {Silvio {Ghilardi} and Roberto {Sebastiani}}, pages = {149--164}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Putting ABox Updates into Action}, volume = {5749}, year = {2009}, }
Abstract BibTeX Entry PDF File
When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes, which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
@inproceedings{ nrac-09, author = {Conrad {Drescher} and Hongkai {Liu} and Franz {Baader} and Peter {Steinke} and Michael {Thielscher}}, booktitle = {Proceedings of the 8th IJCAI International Workshop on Nonmontonic Reasoning, Action and Change (NRAC-09)}, title = {Putting ABox Updates into Action}, year = {2009}, }
Abstract BibTeX Entry PDF File
After a critical review of the present architecture of SNOMED CT, addressing both logical and ontological issues, we present a roadmap toward an overall improvement and recommend the following actions: SNOMED CT's ontology, dictionary, and information model components should be kept separate. SNOMED CT's upper level should be re-arranged according to a standard upper level ontology. SNOMED CT concepts should be assigned to the four disjoint groups: classes, instances, relations, and meta-classes. SNOMED CT's binary relations should be reduced to a set of canonical ones, following existing recommendations. Taxonomies should be cleansed and split into disjoint partitions. The number of full definitions should be increased. Finally, new approaches are proposed for modeling part-whole hierarchies, as well as the integration of qualifier relations into a unified framework. All proposed modifications can be expressed by the computationally tractable description logic EL++.
@article{ SchEtAl-JMI-09, author = {Stefan {Schulz} and Boontawee {Suntisrivaraporn} and Franz {Baader} and Martin {Boeker}}, journal = {International Journal of Medical Informatics}, number = {Supplement 1}, pages = {S86--S94}, publisher = {Elsevier}, title = {{SNOMED} reaching its adolescence: Ontologists' and logicians' health check}, volume = {78}, year = {2009}, }
2008
Abstract BibTeX Entry PDF File
We extend the description logic EL++ with reflexive roles and range restrictions, and show that subsumption remains tractable if a certain syntactic restriction is adopted. We also show that subsumption becomes PSpace-hard (resp. undecidable) if this restriction is weakened (resp. dropped). Additionally, we prove that tractability is lost when symmetric roles are added: in this case, subsumption becomes ExpTime-hard.
@inproceedings{ BaaderEtAl-OWLED08DC, author = {Franz {Baader} and Sebastian {Brandt} and Carsten {Lutz}}, booktitle = {Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions}, editor = {Kendall {Clark} and Peter F. {Patel-Schneider}}, title = {Pushing the EL Envelope Further}, year = {2008}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Formal Concept Analysis (FCA) can be used to analyze data given in the form of a formal context. In particular, FCA provides efficient algorithms for computing a minimal basis of the implications holding in the context. In this paper, we extend classical FCA by considering data that are represented by relational structures rather than formal contexts, and by replacing atomic attributes by complex formulae defined in some logic. After generalizing some of the FCA theory to this more general form of contexts, we instantiate the general framework with attributes defined in the Description Logic (DL) EL, and with relational structures over a signature of unary and binary predicates, i.e., models for EL. In this setting, an implication corresponds to a so-called general concept inclusion axiom (GCI) in EL. The main technical result of this paper is that, in EL, for any finite model there is a finite set of implications (GCIs) holding in this model from which all implications (GCIs) holding in the model follow.
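A toy example of such an implication (our own illustration, not from the paper): in the finite model with domain {d, e}, where A is interpreted as {d}, B as {e}, and the binary predicate r as {(d, e)}, the GCI
\[ A \sqsubseteq \exists r.B \]
holds, since the only element of A, namely d, has an r-successor in B. The finite-basis result guarantees that, for any such finite model, finitely many GCIs already entail all GCIs holding in it.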
@inproceedings{ BaaderDistel08, author = {Franz {Baader} and Felix {Distel}}, booktitle = {Proceedings of the 6th International Conference on Formal Concept Analysis, (ICFCA 2008)}, editor = {Raoul {Medina} and Sergei {Obiedkov}}, pages = {46--61}, publisher = {Springer}, series = {Lecture Notes in Artificial Intelligence}, title = {A Finite Basis for the Set of {EL}-Implications Holding in a Finite Model}, volume = {4933}, year = {2008}, }
Abstract BibTeX Entry PDF File
Most of the research on temporalized Description Logics (DLs) has concentrated on the case where temporal operators can occur within DL concept descriptions. In this setting, reasoning usually becomes quite hard if rigid roles, i.e., roles whose interpretation does not change over time, are available. In this paper, we consider the case where temporal operators are allowed to occur only in front of DL axioms (i.e., ABox assertions and general concept inclusion axioms), but not inside concept descriptions. As the temporal component, we use linear temporal logic (LTL), and in the DL component we consider the basic DL ALC. We show that reasoning in the presence of rigid roles becomes considerably simpler in this setting.
@inproceedings{ BaaGhiLu-KR08, author = {Franz {Baader} and Silvio {Ghilardi} and Carsten {Lutz}}, booktitle = {Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning ({KR2008})}, title = {LTL over Description Logic Axioms}, year = {2008}, }
Abstract BibTeX Entry PDF File
Most of the research on temporalized Description Logics (DLs) has concentrated on the most general case where temporal operators can occur both within DL concepts and in front of DL axioms. In this setting, reasoning usually becomes quite hard. If rigid roles (i.e., roles whose interpretation does not vary over time) are allowed, then the interesting inference problems (such as satisfiability of concepts) become undecidable. Even if all symbols are interpreted as flexible (i.e., their interpretations can change arbitrarily from one time-point to the next), the complexity of reasoning is doubly exponential, i.e., one exponential higher than the complexity of reasoning in pure DLs such as ALC. In this paper, we consider the case where temporal operators are allowed to occur only in front of axioms (i.e., ABox assertions and general concept inclusion axioms (GCIs)), but not inside concepts. As the temporal component, we use linear temporal logic (LTL) and in the DL component we consider ALC. We show that reasoning becomes simpler in this setting.
@inproceedings{ BaGhiLu-DL08, author = {Franz {Baader} and Silvio {Ghilardi} and Carsten {Lutz}}, booktitle = {Proceedings of the 21st International Workshop on Description Logics ({DL2008})}, series = {CEUR-WS}, title = {LTL over Description Logic Axioms}, volume = {353}, year = {2008}, }
Abstract BibTeX Entry PDF File
In the area of Description Logic (DL), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts with respect to acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
@article{ BaaHlaPen-IC-08, author = {Franz {Baader} and Jan {Hladik} and Rafael {Pe{\~n}aloza}}, journal = {Information and Computation, Special Issue: First International Conference on Language and Automata Theory and Applications ({LATA'07})}, number = {9--10}, pages = {1045--1056}, title = {Automata Can Show {PSPACE} Results for Description Logics}, volume = {206}, year = {2008}, }
Abstract BibTeX Entry PDF File
Hybrid EL-TBoxes combine general concept inclusions (GCIs), which are interpreted with descriptive semantics, with cyclic concept definitions, which are interpreted with greatest fixpoint (gfp) semantics. We introduce a proof-theoretic approach that yields a polynomial-time decision procedure for subsumption in EL w.r.t. hybrid TBoxes, and present preliminary experimental results regarding the performance of the reasoner Hyb that implements this decision procedure.
@inproceedings{ BaaNovSun-DL-08, author = {Franz {Baader} and Novak {Novakovic} and Boontawee {Suntisrivaraporn}}, booktitle = {Proceedings of the 2008 International Workshop on Description Logics ({DL2008})}, series = {CEUR-WS}, title = {A Proof-Theoretic Subsumption Reasoner for Hybrid $\mathcal{EL}$-{TBoxes}}, volume = {353}, year = {2008}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Axiom pinpointing has been introduced in description logics (DL) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question (MinA). Most of the pinpointing algorithms described in the DL literature are obtained as extensions of tableau-based reasoning algorithms for computing consequences from DL knowledge bases. In this paper, we show that automata-based algorithms for reasoning in DLs can also be extended to pinpointing algorithms. The idea is that the tree automaton constructed by the automata-based approach can be transformed into a weighted tree automaton whose so-called behaviour yields a pinpointing formula, i.e., a monotone Boolean formula whose minimal valuations correspond to the MinAs. We also develop an approach for computing the behaviour of a given weighted tree automaton.
@inproceedings{ BaPe-IJCAR08, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 4th International Joint Conference on Automated Reasoning ({IJCAR 2008})}, editor = {Alessandro {Armando} and Peter {Baumgartner} and Gilles {Dowek}}, pages = {226--241}, publisher = {Springer}, series = {Lecture Notes in Artificial Intelligence}, title = {Automata-Based Axiom Pinpointing}, volume = {5195}, year = {2008}, }
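To illustrate the notions of MinA and pinpointing formula used above, consider the following toy TBox (assumed here only for exposition, not taken from the paper):
\[
% hypothetical toy TBox, for illustration only
ax_1: A \sqsubseteq B, \qquad ax_2: B \sqsubseteq C, \qquad ax_3: A \sqsubseteq C, \qquad ax_4: C \sqsubseteq D.
\]
The consequence $A \sqsubseteq D$ has exactly the two MinAs $\{ax_1, ax_2, ax_4\}$ and $\{ax_3, ax_4\}$, and a corresponding pinpointing formula is $(ax_1 \wedge ax_2 \wedge ax_4) \vee (ax_3 \wedge ax_4)$: its minimal satisfying valuations are precisely these two MinAs.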
Abstract BibTeX Entry PDF File
SNOMED CT is a large-scale medical ontology, which is developed using a variant of the inexpressive Description Logic EL. Description Logic reasoning can not only be used to compute subsumption relationships between SNOMED concepts, but also to pinpoint the reason why a certain subsumption relationship holds by computing the axioms responsible for this relationship. This helps developers and users of SNOMED CT to understand why a given subsumption relationship follows from the ontology, which can be seen as a first step toward removing unwanted subsumption relationships. In this paper, we describe a new method for axiom pinpointing in the Description Logic EL+, which is based on the computation of so-called reachability-based modules. Our experiments on SNOMED CT show that the sets of axioms explaining subsumption are usually quite small, and that our method is fast enough to compute such sets on demand.
@inproceedings{ BaaSun-KRMED-08, author = {Franz {Baader} and Boontawee {Suntisrivaraporn}}, booktitle = {Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED}, series = {CEUR-WS}, title = {Debugging {SNOMED CT} Using Axiom Pinpointing in the Description Logic $\mathcal{EL}^+$}, volume = {410}, year = {2008}, }
2007
Abstract BibTeX Entry
This volume contains the papers presented at the 18th International Conference on Rewriting Techniques and Applications (RTA'07), which was held on June 26–28, 2007, on the Paris campus of the Conservatoire National des Arts et Metiers (CNAM) in Paris, France.
@book{ BaaderRTA07, editor = {F. {Baader}}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {18th International Conference on Rewriting Techniques and Applications {(RTA 2007)}}, volume = {4533}, year = {2007}, }
Abstract BibTeX Entry PDF File
Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we consider variants of the basic connection scheme. Our results can be seen as a generalization of the so-called E-connection approach for combining modal logics to an algebraic setting.
@article{ BaaGi-JSL-07, author = {F. {Baader} and S. {Ghilardi}}, journal = {The Journal of Symbolic Logic}, number = {2}, pages = {535--583}, title = {Connecting Many-Sorted Theories}, volume = {72}, year = {2007}, }
Abstract BibTeX Entry PDF File
In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as concept satisfiability. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI.
@inproceedings{ BaaHlaPen-DL-07, author = {F. {Baader} and J. {Hladik} and R. {Pe{\~n}aloza}}, booktitle = {Proceedings of the 2007 International Workshop on Description Logics}, editor = {D. {Calvanese} and E. {Franconi} and S. {Tessaris}}, series = {CEUR-WS}, title = {Blocking Automata for {PSPACE} {DLs}}, year = {2007}, }
Abstract BibTeX Entry PDF File PS File
In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
@inproceedings{ BaaHlaPen-LATA-07, author = {F. {Baader} and J. {Hladik} and R. {Pe{\~n}aloza}}, booktitle = {Proceedings of the First International Conference on Language and Automata Theory and Applications ({LATA'07})}, editor = {C. {Martin-Vide}}, title = {{SI}! Automata Can Show {PSPACE} Results for Description Logics}, year = {2007}, }
Abstract BibTeX Entry
In this chapter we will introduce description logics, a family of logic-based knowledge representation languages that can be used to represent the terminological knowledge of an application domain in a structured way. We will first review their provenance and history, and show how the field has developed. We will then introduce the basic description logic ALC in some detail, including definitions of syntax, semantics and basic reasoning services, and describe important extensions such as inverse roles, number restrictions, and concrete domains. Next, we will discuss the relationship between description logics and other formalisms, in particular first-order and modal logics; the most commonly used reasoning techniques, in particular tableaux, resolution, and automata-based techniques; and the computational complexity of basic reasoning problems. After reviewing some of the most prominent applications of description logics, in particular ontology language applications, we will conclude with an overview of other aspects of description logic research, and with pointers to the relevant literature.
@incollection{ BaHS07, author = {F. {Baader} and I. {Horrocks} and U. {Sattler}}, booktitle = {Handbook of Knowledge Representation}, editor = {Frank van {Harmelen} and Vladimir {Lifschitz} and Bruce {Porter}}, pages = {135--179}, publisher = {Elsevier}, title = {Description Logics}, year = {2007}, }
Abstract BibTeX Entry PDF File
We propose an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the knowledge base and by a domain expert. The use of techniques from Formal Concept Analysis ensures that, on the one hand, the interaction with the expert is kept to a minimum, and, on the other hand, we can show that the extended knowledge base is complete in a certain, well-defined sense.
@inproceedings{ BGSS07b, author = {Franz {Baader} and Bernhard {Ganter} and Ulrike {Sattler} and Baris {Sertkaya}}, booktitle = {Proceedings of the Third International Workshop OWL: Experiences and Directions {(OWLED 2007)}}, editor = {Christine {Golbreich} and Aditya {Kalyanpur} and Bijan {Parsia}}, publisher = {CEUR-WS}, title = {Completing Description Logic Knowledge Bases using Formal Concept Analysis}, volume = {258}, year = {2007}, }
Abstract BibTeX Entry PDF File
We propose an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the knowledge base and by a domain expert. The use of techniques from Formal Concept Analysis ensures that, on the one hand, the interaction with the expert is kept to a minimum, and, on the other hand, we can show that the extended knowledge base is complete in a certain, well-defined sense.
@inproceedings{ BGSS07, author = {Franz {Baader} and Bernhard {Ganter} and Ulrike {Sattler} and Baris {Sertkaya}}, booktitle = {Proceedings of the Twentieth International Joint Conference on Artificial Intelligence {(IJCAI-07)}}, publisher = {AAAI Press}, title = {Completing Description Logic Knowledge Bases using Formal Concept Analysis}, year = {2007}, }
Abstract BibTeX Entry PDF File PS File
Extensions of the description logic EL have recently been proposed as lightweight ontology languages. The most important feature of these extensions is that, despite including powerful expressive means such as general concept inclusion axioms, reasoning can be carried out in polynomial time. In this paper, we consider one of these extensions, EL+, and introduce a refinement of the known polynomial-time classification algorithm for this logic. This refined algorithm was implemented in our CEL reasoner. We describe the results of several experiments with CEL on large ontologies from practice, which show that even a relatively straightforward implementation of the described algorithm outperforms highly optimized, state-of-the-art tableau reasoners for expressive description logics.
@article{ BaaLutSun-JoLLI-07, author = {Franz {Baader} and Carsten {Lutz} and Boontawee {Suntisrivaraporn}}, journal = {Journal of Logic, Language and Information, Special Issue on Method for Modality (M4M)}, note = {To appear}, title = {Is Tractable Reasoning in Extensions of the Description Logic $\mathcal{EL}$ Useful in Practice?}, year = {2007}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Axiom pinpointing has been introduced in description logics to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
@inproceedings{ BaaderPenaloza-Tableaux-07, address = {Aix-en-Provence, France}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 16th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods {TABLEAUX 2007}}, editor = {N. {Olivetti}}, pages = {11--27}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Axiom Pinpointing in General Tableaux}, volume = {4548}, year = {2007}, }
Abstract BibTeX Entry PDF File
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Until now, the pinpointing approach has only been applied to the DL ALC and some of its extensions. This paper considers axiom pinpointing in the DL EL, for which subsumption can be decided in polynomial time. We describe an extension of the subsumption algorithm for EL that can be used to compute all minimal subsets of a given TBox that imply a certain subsumption relationship. We also show that an EL TBox may have exponentially many such minimal subsets and that even finding out whether there is such a minimal subset within a given cardinality bound is an NP-complete problem. In contrast to these negative results, we also show that one such minimal set can be computed in polynomial time. Finally, we provide some encouraging experimental results regarding the performance of a practical algorithm that computes one (not necessarily minimal) set that has a given subsumption relation as consequence.
@inproceedings{ BaaPenSun-DL-07, author = {Franz {Baader} and Rafael {Pe{\~n}aloza} and Boontawee {Suntisrivaraporn}}, booktitle = {Proceedings of the 2007 International Workshop on Description Logics ({DL2007})}, series = {CEUR-WS}, title = {Pinpointing in the Description Logic $\mathcal{EL}$}, year = {2007}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Until now, the pinpointing approach has only been applied to the DL ALC and some of its extensions. This paper considers axiom pinpointing in the less expressive DL EL+, for which subsumption can be decided in polynomial time. More precisely, we consider an extension of the pinpointing problem where the knowledge base is divided into a static part, which is always present, and a refutable part, of which subsets are taken. We describe an extension of the subsumption algorithm for EL+ that can be used to compute all minimal subsets of (the refutable part of) a given TBox that imply a certain subsumption relationship. The worst-case complexity of this algorithm turns out to be exponential. This is not surprising since we can show that a given TBox may have exponentially many such minimal subsets. However, we can also show that the problem is not even output polynomial, i.e., unless P=NP, there cannot be an algorithm computing all such minimal sets that is polynomial in the size of its input and output. In addition, we show that finding out whether there is such a minimal subset within a given cardinality bound is an NP-complete problem. In contrast to these negative results, we also show that one such minimal subset can be computed in polynomial time. Finally, we provide some encouraging experimental results regarding the performance of a practical algorithm that computes one (small, but not necessarily minimal) subset that has a given subsumption relation as consequence.
@inproceedings{ BaaPenSun-KI-07, address = {Osnabr\"uck, Germany}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza} and Boontawee {Suntisrivaraporn}}, booktitle = {Proceedings of the 30th German Conference on Artificial Intelligence ({KI2007})}, pages = {52--67}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Pinpointing in the Description Logic $\mathcal{EL}^+$}, volume = {4667}, year = {2007}, }
Abstract BibTeX Entry PDF File PS File
Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive Description Logics (DLs) whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology. We will show both theoretical results on the existence of the least common subsumer in this setting, and describe a practical approach—based on a method from formal concept analysis—for computing good common subsumers, which may, however, not be the least ones. We will also describe results obtained in a first evaluation of this practical approach.
@article{ BaaderSertkayaTurhan-JAL-07, author = {Franz {Baader} and Bar\i{}\c{s} {Sertkaya} and Anni-Yasmin {Turhan}}, journal = {Journal of Applied Logic}, number = {3}, pages = {392--420}, publisher = {Elsevier Science Publishers (North-Holland), Amsterdam}, title = {Computing the Least Common Subsumer w.r.t.~a Background Terminology}, volume = {5}, year = {2007}, }
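A minimal example of the effect described above (the concept and role names are invented for illustration and do not come from the paper): in EL, the lcs of
\[
% hypothetical concepts, for illustration only
C_1 = \mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Doctor}
\qquad\text{and}\qquad
C_2 = \mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Lawyer}
\]
computed without background knowledge is the rather uninformative $\mathit{Person} \sqcap \exists \mathit{hasChild}.\top$. If the background terminology contains $\mathit{Doctor} \sqsubseteq \mathit{Professional}$ and $\mathit{Lawyer} \sqsubseteq \mathit{Professional}$, then $\mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Professional}$ is a strictly more specific common subsumer of $C_1$ and $C_2$ w.r.t. this terminology, which is the kind of good common subsumer the approach is designed to find.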
Abstract BibTeX Entry PDF File
After a critical review of the present architecture of SNOMED CT, addressing both logical and ontological issues, we present a roadmap towards an overall improvement of this terminology. In particular, we recommend the following actions: Upper level categories should be re-arranged according to a standard upper level ontology. Meta-class-like concepts should be identified and removed from the taxonomy. SNOMED concepts denoting (non-instantiable) individual entities (e.g. geographical regions) should be kept separate from those concepts that denote (instantiable) types. SNOMED binary relations should be reduced to a set of canonical ones, following existing recommendations. Taxonomies should be cleansed and split into disjoint partitions. The number of full definitions should be increased. Finally, we propose a new approach to modeling part-whole hierarchies, as well as the integration of qualifier relations into the description logic framework.
@inproceedings{ SchSunBaa-Medinfo-07, author = {Stefan {Schulz} and Boontawee {Suntisrivaraporn} and Franz {Baader}}, booktitle = {Proceedings of the Medinfo 2007 Congress}, publisher = {IOS Press}, series = {Studies in Health Technology and Informatics (SHTI-series)}, title = {{SNOMED CT}'s Problem List: Ontologists' and Logicians' Therapy Suggestions}, year = {2007}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Reification of parthood relations according to the SEP-triplet encoding pattern has been employed in the clinical terminology SNOMED CT to simulate transitivity of the part-of relation via transitivity of the is-a relation and to inherit properties along part-of links. In this paper we argue that using a more expressive representation language, which allows for a direct representation of the relevant properties of the part-of relation, makes modelling less error prone while having no adverse effect on the efficiency of reasoning.
@inproceedings{ SunBaaSchSpa-AIME-07, author = {Boontawee {Suntisrivaraporn} and Franz {Baader} and Stefan {Schulz} and Kent {Spackman}}, booktitle = {Proceedings of the 11th Conference on Artificial Intelligence in Medicine ({AIME'07})}, editor = {Riccardo {Bellazzi} and Ameen {Abu-Hanna} and Jim {Hunter}}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Replacing SEP-Triplets in SNOMED CT using Tractable Description Logic Operators}, year = {2007}, }
2006
Abstract BibTeX Entry PDF File
Description logics (DLs) are a successful family of logic-based knowledge representation formalisms that can be used to represent the terminological knowledge of an application domain in a structured and formally well-founded way. DL systems provide their users with inference procedures that allow them to reason about the represented knowledge. Standard inference problems (such as the subsumption and the instance problem) are now well-understood. Their computational properties (such as decidability and complexity) have been investigated in detail, and modern DL systems are equipped with highly optimized implementations of these inference procedures, which, in spite of their high worst-case complexity, perform quite well in practice. In applications of DL systems it has turned out that building and maintaining large DL knowledge bases can be further facilitated by procedures for other, non-standard inference problems, such as computing the least common subsumer and the most specific concept, and rewriting and matching of concepts. While the research concerning these non-standard inferences is not as mature as that for the standard inferences, it has now reached a point where it makes sense to motivate these inferences within a uniform application framework, give an overview of the results obtained so far, describe the remaining open problems, and give perspectives for future research in this direction.
@incollection{ BaaderKuesters-IMAT-06, author = {F. {Baader} and R. {K{\"u}sters}}, booktitle = {Mathematical Problems from Applied Logic {I}}, editor = {D.M. {Gabbay} and S.S. {Goncharov} and M. {Zakharyaschev}}, pages = {1--75}, publisher = {Springer-Verlag}, series = {International Mathematical Series}, title = {Nonstandard Inferences in Description Logics: The Story So Far}, volume = {4}, year = {2006}, }
Abstract BibTeX Entry PDF File PS File
Description logics are a family of knowledge representation languages that were developed independently of modal logics, but later turned out to be closely related to them. This chapter introduces description logics and briefly recalls the connections between description and modal logics, but then concentrates on means of expressivity and reasoning problems that are important for description logics, but not in the focus of research in modal logics.
@incollection{ BaaderLutz-MLHandbook-06, author = {F. {Baader} and C. {Lutz}}, booktitle = {The Handbook of Modal Logic}, editor = {Patrick {Blackburn} and Johan van {Benthem} and Frank {Wolter}}, pages = {757--820}, publisher = {Elsevier}, title = {Description Logic}, year = {2006}, }
Abstract BibTeX Entry PDF File PS File
@inproceedings{ BaaLutSun-DL-06, author = {F. {Baader} and C. {Lutz} and B. {Suntisrivaraporn}}, booktitle = {Proceedings of the 2006 International Workshop on Description Logics ({DL2006})}, series = {CEUR-WS}, title = {Efficient Reasoning in $\mathcal{EL}^+$}, year = {2006}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
CEL (Classifier for EL) is a reasoner for the small description logic EL+ which can be used to compute the subsumption hierarchy induced by EL+ ontologies. The most distinguishing feature of CEL is that, unlike other modern DL reasoners, it is based on a polynomial-time subsumption algorithm, which allows it to process very large ontologies in reasonable time. In spite of its restricted expressive power, EL+ is well-suited for formulating life science ontologies.
@inproceedings{ BaaLutSun-IJCAR-06, author = {F. {Baader} and C. {Lutz} and B. {Suntisrivaraporn}}, booktitle = {Proceedings of the 3rd International Joint Conference on Automated Reasoning ({IJCAR'06})}, editor = {U. {Furbach} and N. {Shankar}}, pages = {287--291}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {{\textsf{CEL}}---A Polynomial-time Reasoner for Life Science Ontologies}, volume = {4130}, year = {2006}, }
Abstract BibTeX Entry PDF File
Language equations are equations where both the constants occurring in the equations and the solutions are formal languages. They were first introduced in formal language theory, but are now also considered in other areas of computer science. In particular, they can be seen as unification problems in the algebra of languages whose operations are the Boolean operations and concatenation. They are also closely related to monadic set constraints. In the present paper, we restrict our attention to language equations with one-sided concatenation, but in contrast to previous work on these equations, we allow not just union but all Boolean operations to be used when formulating them. In addition, we are not just interested in deciding solvability of such equations, but also in deciding other properties of the set of solutions, like its cardinality (finite, infinite, uncountable) and whether it contains least/greatest solutions. We show that all these decision problems are ExpTime-complete.
@inproceedings{ Baader-Okhotin-UNIF-06, author = {F. {Baader} and A. {Okhotin}}, booktitle = {Proceedings of the 20th International Workshop on Unification, {UNIF'06}}, editor = {Jordi {Levy}}, pages = {59--73}, title = {Complexity of Language Equations With One-Sided Concatenation and All {B}oolean Operations}, year = {2006}, }
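A minimal illustration of such an equation (chosen for exposition; the paper's exact syntactic restriction is the one defined there): over the alphabet $\{a,b\}$, consider
\[
% illustrative equation, not taken from the paper
X \;=\; (X \cdot \{a\}) \,\cup\, \{b\},
\]
in which the unknown $X$ is concatenated only with a constant language. Its unique solution is the regular language $\{b\}\{a\}^* = \{b, ba, baa, \dots\}$, so the solution set here is a singleton; the decision problems studied in the paper ask, for instance, whether such a solution set is non-empty, what its cardinality is, and whether it contains least or greatest solutions.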
Abstract BibTeX Entry PDF File
Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics—which are not disjoint since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other types of equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.
@article{ BaaGT-IC-06, author = {Franz {Baader} and Silvio {Ghilardi} and Cesare {Tinelli}}, journal = {Information and Computation}, number = {10}, pages = {1413--1452}, title = {A new combination procedure for the word problem that generalizes fusion decidability results in modal logics}, volume = {204}, year = {2006}, }
Abstract BibTeX Entry PDF File
The design of comprehensive ontologies is a serious challenge. Therefore, it is necessary to support the ontology designer by providing him with design methodologies, ontology editors, and automated reasoning tools that explicate the consequences of his design decisions. Currently, reasoning tools are largely limited to the reasoning services (i) computing the subsumption hierarchy of the classes in an ontology and (ii) determining the consistency of these classes. In this paper, we survey the most important tasks that arise in ontology design and discuss how they can be supported by automated reasoning tools. In particular, we show that it is beneficial to go beyond the usual reasoning services (i) and (ii).
@inproceedings{ LutzEtAl-OWLED06, author = {Carsten {Lutz} and Franz {Baader} and Enrico {Franconi} and Domenico {Lembo} and Ralf {M{\"o}ller} and Riccardo {Rosati} and Ulrike {Sattler} and Boontawee {Suntisrivaraporn} and Sergio {Tessaris}}, booktitle = {Proceedings of the Second International Workshop OWL: Experiences and Directions}, editor = {Bernardo Cuenca {Grau} and Pascal {Hitzler} and Connor {Shankey} and Evan {Wallace}}, month = {November}, title = {Reasoning Support for Ontology Design}, year = {2006}, }
2005
Abstract BibTeX Entry PDF File
Recently, it has been shown that the small description logic (DL) EL, which allows for conjunction and existential restrictions, has better algorithmic properties than its counterpart FL0, which allows for conjunction and value restrictions. Whereas the subsumption problem in FL0 already becomes intractable in the presence of acyclic TBoxes, it remains tractable in EL even with general concept inclusion axioms (GCIs). On the one hand, we extend the positive result for EL by identifying a set of expressive means that can be added to EL without sacrificing tractability. On the other hand, we show that basically all other additions of typical DL constructors to EL with GCIs make subsumption intractable, and in most cases even ExpTime-complete. In addition, we show that subsumption in FL0 with GCIs is ExpTime-complete.
@inproceedings{ BaaderBrandtLutz-IJCAI-05, address = {Edinburgh, UK}, author = {F. {Baader} and S. {Brandt} and C. {Lutz}}, booktitle = {Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence {IJCAI-05}}, publisher = {Morgan-Kaufmann Publishers}, title = {Pushing the $\mathcal{EL}$ Envelope}, year = {2005}, }
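To make the tractable fragment concrete, here is a small illustrative EL TBox with GCIs (the axioms are invented and not taken from the paper):
\[
% hypothetical axioms, for illustration only
\mathit{Pneumonia} \sqsubseteq \mathit{Infection} \sqcap \exists \mathit{locatedIn}.\mathit{Lung},
\qquad
\exists \mathit{hasDisease}.\mathit{Infection} \sqsubseteq \mathit{Patient}.
\]
Only conjunction and existential restrictions occur, and subsumption w.r.t. such GCIs remains polynomial; adding value restrictions, for instance, makes subsumption ExpTime-hard, since FL0 with GCIs (conjunction and value restrictions) is already ExpTime-complete, as stated above.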
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
In a previous paper, we have introduced a general approach for connecting two many-sorted theories through connection functions that behave like homomorphisms on the shared signature, and have shown that, under appropriate algebraic conditions, decidability of the validity of universal formulae in the component theories transfers to their connection. This work generalizes decidability transfer results for so-called E-connections of modal logics. However, in this general algebraic setting, only the most basic type of E-connections could be handled. In the present paper, we overcome this restriction by looking at pairs of connection functions that are adjoint pairs for partial orders defined in the component theories.
@inproceedings{ BaaderGhilardiFroCoS05, address = {Vienna (Austria)}, author = {F. {Baader} and S. {Ghilardi}}, booktitle = {Proceedings of the 5th International Workshop on Frontiers of Combining Systems (FroCoS'05)}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Connecting Many-Sorted Structures and Theories through Adjoint Functions}, volume = {3717}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we consider variants of the basic connection scheme.
@inproceedings{ BaaderGhilardiCADE05, address = {Tallinn (Estonia)}, author = {F. {Baader} and S. {Ghilardi}}, booktitle = {Proceedings of the 20th International Conference on Automated Deduction (CADE-05)}, pages = {278--294}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Connecting Many-Sorted Theories}, volume = {3632}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality. In this paper, we describe what description logics are and what they can do for the Semantic Web. Description logics are very useful for defining, integrating, and maintaining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. We also argue that, without the last decade of basic research in this area, description logics could not play such an important role in this domain.
@incollection{ BaSaJS60, author = {F. {Baader} and I. {Horrocks} and U. {Sattler}}, booktitle = {Mechanizing Mathematical Reasoning: Essays in Honor of J{\"o}rg H. Siekmann on the Occasion of His 60th Birthday}, editor = {D. {Hutter} and W. {Stephan}}, pages = {228--248}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Description Logics as Ontology Languages for the Semantic Web}, volume = {2605}, year = {2005}, }
Abstract BibTeX Entry PDF File
Motivated by the need for semantically well-founded and algorithmically manageable formalisms for describing the functionality of Web services, we introduce an action formalism that is based on description logics (DLs), but is also firmly grounded on research in the reasoning about action community. Our main contribution is an analysis of how the choice of the DL influences the complexity of standard reasoning tasks such as projection and executability, which are important for Web service discovery and composition.
@inproceedings{ BaLuMiSaWo-WSS-2005, address = {Chiba City, Japan}, author = {F. {Baader} and C. {Lutz} and M. {Milicic} and U. {Sattler} and F. {Wolter}}, booktitle = {Proceedings of the WWW 2005 Workshop on Web Service Semantics ({WSS2005})}, title = {A Description Logic Based Approach to Reasoning about Web Services}, year = {2005}, }
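A toy instance of the reasoning tasks mentioned above (the action and all names are hypothetical, introduced only for illustration): an atomic action with DL pre- and post-conditions could look like
\[
% hypothetical action, for illustration only
\mathit{hire}(\mathrm{bob}):\quad
\text{pre} = \{\mathit{Applicant}(\mathrm{bob})\},
\qquad
\text{post} = \{\mathit{Employee}(\mathrm{bob}),\ \lnot\mathit{Applicant}(\mathrm{bob})\}.
\]
Executability then asks whether the preconditions are guaranteed to hold in the current state, and projection asks whether a given assertion, say $\mathit{Employee}(\mathrm{bob})$, is guaranteed to hold after executing the action; the paper analyses how the complexity of these two tasks varies with the chosen DL.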
Abstract BibTeX Entry PDF File
We propose an action formalism that is based on description logics (DLs) and may be viewed as an instance of the Situation Calculus (SitCalc). In particular, description logic concepts can be used for describing the state of the world, and the pre- and post-conditions of actions. The main advantage of such a combination is that, on the one hand, the expressive power for describing world states and conditions is higher than in other decidable fragments of the SitCalc, which are usually propositional. On the other hand, in contrast to the full SitCalc, effective reasoning is still possible. In this paper, we perform a detailed investigation of how the choice of the DL influences the complexity of the standard reasoning tasks executability and projection in the corresponding action formalism. We also discuss semantic and computational problems in natural extensions of our framework.
@inproceedings{ BaaderEtAlDL05b, author = {F. {Baader} and C. {Lutz} and M. {Milicic} and U. {Sattler} and F. {Wolter}}, booktitle = {Proceedings of the 2005 International Workshop on Description Logics ({DL2005})}, number = {147}, series = {CEUR-WS}, title = {Integrating Description Logics and Action Formalisms: First Results}, year = {2005}, }
Abstract BibTeX Entry PDF File
We propose an action formalism that is based on description logics (DLs) and may be viewed as an instance of the Situation Calculus (SitCalc). In particular, description logic concepts can be used for describing the state of the world, and the pre- and post-conditions of actions. The main advantage of such a combination is that, on the one hand, the expressive power for describing world states and conditions is higher than in other decidable fragments of the SitCalc, which are usually propositional. On the other hand, in contrast to the full SitCalc, effective reasoning is still possible. In this paper, we perform a detailed investigation of how the choice of the DL influences the complexity of the standard reasoning tasks executability and projection in the corresponding action formalism. We also discuss semantic and computational problems in natural extensions of our framework.
@inproceedings{ BaLuMiSaWo-AAAI-2005, address = {Pittsburgh, PA, USA}, author = {F. {Baader} and C. {Lutz} and M. {Milicic} and U. {Sattler} and F. {Wolter}}, booktitle = {Proceedings of the Twentieth National Conference on Artificial Intelligence ({AAAI-05})}, title = {Integrating Description Logics and Action Formalisms: First Results}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File
Extensions of the description logic EL have recently been proposed as lightweight ontology languages. The most important feature of these extensions is that, despite including powerful expressive means such as general concept inclusion axioms, reasoning can be carried out in polynomial time. In this paper, we consider one of these extensions, EL+, and introduce a refinement of the known polynomial-time classification algorithm for this logic, which was implemented in our CEL reasoner. We describe the results of several experiments with CEL on large ontologies from practice, which show that even a relatively straightforward implementation of the described algorithm outperforms highly optimized, state-of-the-art tableau reasoners for expressive description logics.
@inproceedings{ BaaLutSun-M4M-05, address = {Berlin, Germany}, author = {F. {Baader} and C. {Lutz} and B. {Suntisrivaraporn}}, booktitle = {Proceedings of the Methods for Modalities Workshop (M4M-05)}, title = {Is Tractable Reasoning in Extensions of the Description Logic $\mathcal{EL}$ Useful in Practice?}, year = {2005}, }
Abstract BibTeX Entry
This volume contains the papers presented at the 11th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR), held from March 14 to 18, 2005, in Montevideo, Uruguay, together with the 5th International Workshop on the Implementation of Logics (organised by Stephan Schulz and Boris Konev) and the Workshop on Analytic Proof Systems (organised by Matthias Baaz).
@book{ BaaderLPAR2004, address = {Montevideo, Uruguay}, editor = {F. {Baader} and A. {Voronkov}}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {11th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning {LPAR 2004}}, volume = {3452}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Motivated by a chemical process engineering application, we introduce a new concept constructor in Description Logics (DLs), an n-ary variant of the existential restriction constructor, which generalizes both the usual existential restrictions and so-called qualified number restrictions. We show that the new constructor can be expressed in ALCQ, the extension of the basic DL ALC by qualified number restrictions. However, this representation results in an exponential blow-up. By giving direct algorithms for ALC extended with the new constructor, we can show that the complexity of reasoning in this new DL is actually not harder than the one of reasoning in ALCQ. Moreover, in our chemical process engineering application, a restricted DL that provides only the new constructor together with conjunction, and satisfies an additional restriction on the occurrence of role names, is sufficient. For this DL, the subsumption problem is polynomial.
@inproceedings{ BaaderEtAlKI05, author = {Franz {Baader} and Carsten {Lutz} and Eldar {Karabaev} and Manfred {Thei{\ss}en}}, booktitle = {Proceedings of the 28th Annual German Conference on Artificial Intelligence, {KI 2005}}, pages = {18--33}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {A New $n$-ary Existential Quantifier in Description Logics}, volume = {3698}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File
Motivated by a chemical process engineering application, we introduce a new concept constructor in Description Logics (DLs), an n-ary variant of the existential restriction constructor, which generalizes both the usual existential restrictions and so-called qualified number restrictions. We show that the new constructor can be expressed in ALCQ, the extension of the basic DL ALC by qualified number restrictions. However, this representation results in an exponential blow-up. By giving direct algorithms for ALC extended with the new constructor, we can show that the complexity of reasoning in this new DL is actually not harder than the one of reasoning in ALCQ. Moreover, in our chemical process engineering application, a restricted DL that provides only the new constructor together with conjunction, and satisfies an additional restriction on the occurrence of role names, is sufficient. For this DL, the subsumption problem is polynomial.
@inproceedings{ BaaderEtAlDL05, author = {Franz {Baader} and Carsten {Lutz} and Eldar {Karabaev} and Manfred {Thei{\ss}en}}, booktitle = {Proceedings of the 2005 International Workshop on Description Logics ({DL2005})}, number = {147}, series = {CEUR-WS}, title = {A New $n$-ary Existential Quantifier in Description Logics}, year = {2005}, }
2004
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
In two previous papers we have investigated the problem of computing the least common subsumer (lcs) and the most specific concept (msc) for the description logic EL in the presence of terminological cycles that are interpreted with descriptive semantics, which is the usual first-order semantics for description logics. In this setting, neither the lcs nor the msc needs to exist. We were able to characterize the cases in which the lcs/msc exists, but it was not clear whether this characterization yields decidability of the existence problem. In the present paper, we develop a common graph-theoretic generalization of these characterizations, and show that the resulting property is indeed decidable, thus yielding decidability of the existence of the lcs and the msc. This is achieved by expressing the property in monadic second-order logic on infinite trees. We also show that, if it exists, then the lcs/msc can be computed in polynomial time.
@inproceedings{ BaaderWG04, address = {Bad Honnef, Germany}, author = {F. {Baader}}, booktitle = {Proceedings of the 30th International Workshop on Graph-Theoretic Concepts in Computer Science {(WG 2004)}}, editor = {J. {Hromkovic} and M. {Nagl}}, pages = {177--188}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {A Graph-Theoretic Generalization of the Least Common Subsumer and the Most Specific Concept in the Description Logic $\mathcal{EL}$}, volume = {3353}, year = {2004}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics—whose combination is not disjoint since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.
@inproceedings{ BaaderEtAlIJCAR04, author = {F. {Baader} and S. {Ghilardi} and C. {Tinelli}}, booktitle = {Proceedings of the 2nd International Joint Conference on Automated Reasoning ({IJCAR'04})}, editor = {D. {Basin} and M. {Rusinowitch}}, pages = {183--197}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {A New Combination Procedure for the Word Problem that Generalizes Fusion Decidability Results in Modal Logics}, volume = {3097}, year = {2004}, }
Abstract BibTeX Entry
In this chapter, we explain what description logics are and why they make good ontology languages. In particular, we introduce the description logic SHIQ, which has formed the basis of several well-known ontology languages, including OWL. We argue that, without the last decade of basic research in description logics, this family of knowledge representation languages could not have played such an important role in this context. Description logic reasoning can be used both during the design phase, in order to improve the quality of ontologies, and in the deployment phase, in order to exploit the rich structure of ontologies and ontology based information. We discuss the extensions to SHIQ that are required for languages such as OWL and, finally, we sketch how novel reasoning services can support building DL knowledge bases.
@incollection{ BaHoSaOntologyHB, address = {Berlin, Germany}, author = {F. {Baader} and I. {Horrocks} and U. {Sattler}}, booktitle = {Handbook on Ontologies}, editor = {S. {Staab} and R. {Studer}}, pages = {3--28}, publisher = {Springer--Verlag}, series = {International Handbooks in Information Systems}, title = {Description Logics}, year = {2004}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Given a finite set S := {C1, ..., Cn} of description logic concepts, we are interested in computing the subsumption hierarchy of all least common subsumers of subsets of S as well as the hierarchy of all conjunctions of subsets of S. These hierarchies can be used to support the bottom-up construction of description logic knowledge bases. The point is to compute the first hierarchy without having to compute the least common subsumer for all subsets of S, and the second hierarchy without having to check all possible pairs of such conjunctions explicitly for subsumption. We will show that methods from formal concept analysis developed for computing concept lattices can be employed for this purpose.
@inproceedings{ BaaderSertkayaICFCA04, author = {F. {Baader} and B. {Sertkaya}}, booktitle = {Proceedings of the 2nd International Conference on Formal Concept Analysis ({ICFCA 2004})}, editor = {P. {Eklund}}, pages = {261--286}, publisher = {Springer}, series = {Lecture Notes in Artificial Intelligence}, title = {Applying Formal Concept Analysis to Description Logics}, volume = {2961}, year = {2004}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive Description Logics (DLs) whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology. We will both show a theoretical result on the existence of the least common subsumer in this setting, and describe a practical approach (based on a method from formal concept analysis) for computing good common subsumers, which may, however, not be the least ones.
@inproceedings{ BaaderSertkayaTurhan04, address = {Lisbon, Portugal}, author = {F. {Baader} and B. {Sertkaya} and A.-Y. {Turhan}}, booktitle = {Proceedings of the 9th European Conference on Logics in Artificial Intelligence {(JELIA 2004)}}, editor = {Jos{\'e} J{\'u}lio {Alferes} and Jo{\~a}o Alexandre {Leite}}, pages = {400--412}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Computing the Least Common Subsumer w.r.t.\ a Background Terminology}, volume = {3229}, year = {2004}, }
Abstract BibTeX Entry PDF File PS File
Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive DLs whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology.
@inproceedings{ BaaderSertkayaTurhanDL2004, author = {Franz {Baader} and Baris {Sertkaya} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 2004 International Workshop on Description Logics ({DL2004})}, series = {CEUR-WS}, title = {Computing the Least Common Subsumer w.r.t. a Background Terminology}, year = {2004}, }
2003
Abstract BibTeX Entry
The purpose of this appendix is to introduce (in a compact manner) the syntax and semantics of the most prominent DLs occurring in this handbook. More information and explanations as well as some less familiar DLs can be found in the respective chapters. For DL constructors whose semantics cannot be described in a compact manner, we will only introduce the syntax and refer the reader to the respective chapter for the semantics. Following Chapter 2 on Basic Description Logics, we will first introduce the basic DL AL, and then describe several of its extensions. Thereby, we will also fix the notation employed in this handbook.
@incollection{ DLhandbookAppendix, author = {F. {Baader}}, booktitle = {The Description Logic Handbook: Theory, Implementation, and Applications}, editor = {Franz {Baader} and Diego {Calvanese} and Deborah {McGuinness} and Daniele {Nardi} and Peter F. {Patel-Schneider}}, pages = {485--495}, publisher = {Cambridge University Press}, title = {Description Logic Terminology}, year = {2003}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
This paper investigates the relationship between automata- and tableau-based inference procedures for Description Logics. To be more precise, we develop an abstract notion of what a tableau-based algorithm is, and then show, on this abstract level, how tableau-based algorithms can be converted into automata-based algorithms. In particular, this allows us to characterize a large class of tableau-based algorithms that imply an ExpTime upper-bound for reasoning in the description logics for which such an algorithm exists.
@inproceedings{ BaaHlaLutWol-LPAR03, author = {F. {Baader} and J. {Hladik} and C. {Lutz} and F. {Wolter}}, booktitle = {Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning ({LPAR 2003})}, editor = {Moshe {Vardi} and Andrei {Voronkov}}, pages = {1--32}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {From Tableaux to Automata for Description Logics}, volume = {2850}, year = {2003}, }
Abstract BibTeX Entry
This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; modal, epistemic, and temporal operators; probabilities and fuzzy logic; and defaults. On the other hand, it considers non-standard inference problems for Description Logics, i.e., inference problems that—unlike subsumption or instance checking—are not available in all systems, but have turned out to be useful in applications. In particular, it addresses the non-standard inference problems: least common subsumer and most specific concept; unification and matching of concepts; and rewriting.
@incollection{ DLhandbookCh6, author = {F. {Baader} and R. {K{\"u}sters} and F. {Wolter}}, booktitle = {The Description Logic Handbook: Theory, Implementation, and Applications}, editor = {Franz {Baader} and Diego {Calvanese} and Deborah {McGuinness} and Daniele {Nardi} and Peter F. {Patel-Schneider}}, pages = {219--261}, publisher = {Cambridge University Press}, title = {Extensions to Description Logics}, year = {2003}, }
Abstract BibTeX Entry
This chapter provides an introduction to Description Logics as a formal language for representing knowledge and reasoning about it. It first gives a short overview of the ideas underlying Description Logics. Then it introduces syntax and semantics, covering the basic constructors that are used in systems or have been introduced in the literature, and the way these constructors can be used to build knowledge bases. Finally, it defines the typical inference problems, shows how they are interrelated, and describes different approaches for effectively solving these problems. Some of the topics that are only briefly mentioned in this chapter will be treated in more detail in subsequent chapters.
@incollection{ DLhandbookCh2, author = {F. {Baader} and W. {Nutt}}, booktitle = {The Description Logic Handbook: Theory, Implementation, and Applications}, editor = {Franz {Baader} and Diego {Calvanese} and Deborah {McGuinness} and Daniele {Nardi} and Peter F. {Patel-Schneider}}, pages = {43--95}, publisher = {Cambridge University Press}, title = {Basic Description Logics}, year = {2003}, }
Abstract BibTeX Entry PDF File PS File Free reprint
Description Logics are a family of knowledge representation formalisms well-suited for intensional reasoning about conceptual models of databases/data warehouses. We extend Description Logics with concrete domains (such as integers and rational numbers) that include aggregation functions over these domains (such as min, max, count, and sum) which are usually available in database systems. We show that the presence of aggregation functions may easily lead to undecidability of (intensional) inference problems such as satisfiability and subsumption. However, there are also extensions for which satisfiability and subsumption are decidable, and we present decision procedures for the relevant inference problems.
@article{ BaaderSattlerIS-02, author = {F. {Baader} and U. {Sattler}}, journal = {Information Systems}, number = {8}, pages = {979--1004}, title = {Description Logics with Aggregates and Concrete Domains}, volume = {28}, year = {2003}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Computing the least common subsumer (lcs) is one of the most prominent non-standard inferences in description logics. Baader, Kuesters, and Molitor have shown that the lcs of concept descriptions in the description logic EL always exists and can be computed in polynomial time. In the present paper, we try to extend this result from concept descriptions to concepts defined in a (possibly cyclic) EL-terminology interpreted with descriptive semantics, which is the usual first-order semantics for description logics. In this setting, the lcs need not exist. However, we are able to define possible candidates Pk (k ≥ 0) for the lcs, and can show that the lcs exists iff one of these candidates is the lcs. Since each of these candidates is a common subsumer, they can also be used to approximate the lcs even if it does not exist. In addition, we give a sufficient condition for the lcs to exist, and show that, under this condition, it can be computed in polynomial time.
@inproceedings{ BaaderICCS03, author = {Franz {Baader}}, booktitle = {Proceedings of the 11th International Conference on Conceptual Structures, {ICCS 2003}}, pages = {117--130}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Computing the least common subsumer in the description logic {$\cal EL$} w.r.t.\ terminological cycles with descriptive semantics}, volume = {2746}, year = {2003}, }
Abstract BibTeX Entry PDF File
Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can support the bottom-up construction of knowledge bases in description logics. In description logics with existential restrictions, the most specific concept need not exist if one restricts the attention to concept descriptions or acyclic TBoxes. In this paper, we extend the notions lcs and msc to cyclic TBoxes. For the description logic EL (which allows for conjunctions, existential restrictions, and the top-concept), we show that the lcs and msc always exist and can be computed in polynomial time if we interpret cyclic definitions with greatest fixpoint semantics.
@inproceedings{ BaadderIJCAI03b, author = {Franz {Baader}}, booktitle = {Proceedings of the 18th International Joint Conference on Artificial Intelligence}, editor = {Georg {Gottlob} and Toby {Walsh}}, pages = {319--324}, publisher = {Morgan Kaufmann}, title = {Least Common Subsumers and Most Specific Concepts in a Description Logic with Existential Restrictions and Terminological Cycles}, year = {2003}, }
BibTeX Entry
@book{ BaaderCADE2003, address = {Miami Beach, FL, USA}, editor = {Franz {Baader}}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Proceedings of the 19th International Conference on Automated Deduction {CADE-19}}, volume = {2741}, year = {2003}, }
Abstract BibTeX Entry PDF File
In a previous paper we have investigated subsumption in the presence of terminological cycles for the description logic EL, which allows conjunctions, existential restrictions, and the top concept, and have shown that the subsumption problem remains polynomial for all three types of semantics usually considered for cyclic definitions in description logics. In this paper we show that subsumption in EL (with or without cyclic definitions) remains polynomial even if one adds a certain restricted form of global role-value-maps to EL. In particular, this kind of role-value-maps can express transitivity of roles.
@inproceedings{ BaaderDL2003, author = {Franz {Baader}}, booktitle = {Proceedings of the 2003 International Workshop on Description Logics ({DL2003})}, series = {CEUR-WS}, title = {Restricted Role-value-maps in a Description Logic with Existential Restrictions and Terminological Cycles}, year = {2003}, }
Abstract BibTeX Entry PDF File
Cyclic definitions in description logics have until now been investigated only for description logics allowing for value restrictions. Even for the most basic language FL0, which allows for conjunction and value restrictions only, deciding subsumption in the presence of terminological cycles is a PSPACE-complete problem. This paper investigates subsumption in the presence of terminological cycles for the language EL, which allows for conjunction, existential restrictions, and the top-concept. In contrast to the results for FL0, subsumption in EL remains polynomial, independent of whether we use least fixpoint semantics, greatest fixpoint semantics, or descriptive semantics.
@inproceedings{ BaaderIJCAI03a, author = {Franz {Baader}}, booktitle = {Proceedings of the 18th International Joint Conference on Artificial Intelligence}, editor = {Georg {Gottlob} and Toby {Walsh}}, pages = {325--330}, publisher = {Morgan Kaufmann}, title = {Terminological Cycles in a Description Logic with Existential Restrictions}, year = {2003}, }
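An illustrative example of such a terminological cycle (not drawn from the paper): the EL definition \(\mathit{Human} \equiv \mathit{Mammal} \sqcap \exists \mathit{parent}.\mathit{Human}\) refers to the defined name \(\mathit{Human}\) on its own right-hand side; least fixpoint, greatest fixpoint, and descriptive semantics differ in which interpretations they admit for such a definition, and the result above states that subsumption stays polynomial in each case.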
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Previously, we have investigated both standard and non-standard inferences in the presence of terminological cycles for the description logic EL, which allows for conjunctions, existential restrictions, and the top concept. The present paper is concerned with two problems left open by this previous work, namely the instance problem and the problem of computing most specific concepts w.r.t. descriptive semantics, which is the usual first-order semantics for description logics. We will show that—like subsumption—the instance problem is polynomial in this context. Similar to the case of the least common subsumer, the most specific concept w.r.t. descriptive semantics need not exist, but we are able to characterize the cases in which it exists and give a decidable sufficient condition for the existence of the most specific concept. Under this condition, it can be computed in polynomial time.
@inproceedings{ BaaderKI03, address = {Hamburg, Germany}, author = {Franz {Baader}}, booktitle = {Proceedings of the 26th Annual German Conference on Artificial Intelligence, {KI 2003}}, pages = {64--78}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {The instance problem and the most specific concept in the description logic {$\cal EL$} w.r.t.\ terminological cycles with descriptive semantics}, volume = {2821}, year = {2003}, }
Abstract BibTeX Entry
Description Logics are a family of knowledge representation languages that have been studied extensively in Artificial Intelligence over the last two decades. They are embodied in several knowledge-based systems and are used to develop various real-life applications. The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically-oriented readers, to those with more practically-oriented interests who need a sound and modern understanding of knowledge representation systems based on Description Logics. The chapters are written by some of the most prominent researchers in the field, introducing the basic technical material before taking the reader to the current state of the subject, and including comprehensive guides to the literature. In sum, the book will serve as a unique reference for the subject, and can also be used for self-study or in conjunction with Knowledge Representation and Artificial Intelligence courses.
@book{ DLhandbook, editor = {Franz {Baader} and Diego {Calvanese} and Deborah {McGuinness} and Daniele {Nardi} and Peter F. {Patel-Schneider}}, publisher = {Cambridge University Press}, title = {The Description Logic Handbook: Theory, Implementation, and Applications}, year = {2003}, }
Abstract BibTeX Entry PDF File PS File
This paper investigates the relationship between automata- and tableau-based inference procedures for description logics. To be more precise, we develop an abstract notion of what a tableau-based algorithm is, and then show, on this abstract level, how tableau-based algorithms can be converted into automata-based algorithms. In particular, this allows us to characterize a large class of tableau-based algorithms that imply an ExpTime upper-bound for reasoning in the description logics for which such an algorithm exists.
@article{ BaaHlaLutWol-FI-03, author = {Franz {Baader} and Jan {Hladik} and Carsten {Lutz} and Frank {Wolter}}, journal = {Fundamenta Informaticae}, pages = {1--33}, title = {From Tableaux to Automata for Description Logics}, volume = {57}, year = {2003}, }
2002
Abstract BibTeX Entry
The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description Logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality. In this paper, we try to sketch what Description Logics are and what they can do for the Semantic Web. It turns out that Description Logics are very useful for defining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. We also argue that, without the last decade of basic research in this area, Description Logics could not play such an important role in this domain.
@article{ BaaderSattlerHorrocks-KIJ-3-02, author = {F. {Baader} and I. {Horrocks} and U. {Sattler}}, journal = {KI -- K{\"u}nstliche Intelligenz}, title = {Description Logics for the Semantic Web}, volume = {4}, year = {2002}, }
Abstract BibTeX Entry PDF File PS File
Unification considers concept patterns, i.e., concept descriptions with variables, and tries to make these descriptions equivalent by replacing the variables by appropriate concept descriptions. In a previous paper, we have shown that unification in FLreg, a description logic that allows for the concept constructors top concept, concept conjunction, and value restrictions as well as the role constructors union, composition, and transitive closure, is an ExpTime-complete problem and that solvable FLreg-unification problems always have least unifiers. In the present paper, we generalize these results to a DL which extends FLreg by the bottom concept. The proof strongly depends on the existence of least unifiers in FLreg.
@inproceedings{ BaaderKuestersDL02, address = {Toulouse, France}, author = {F. {Baader} and R. {K\"usters}}, booktitle = {Proceedings of the 2002 International Workshop on Description Logics}, editor = {I. {Horrocks} and S. {Tessaris}}, note = {See http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-53/}, title = {Unification in a Description Logic with Inconsistency and Transitive Closure of Roles}, year = {2002}, }
Abstract BibTeX Entry PDF File PS File
Fusions are a simple way of combining logics. For normal modal logics, fusions have been investigated in detail. In particular, it is known that, under certain conditions, decidability transfers from the component logics to their fusion. Though description logics are closely related to modal logics, they are not necessarily normal. In addition, ABox reasoning in description logics is not covered by the results from modal logics. In this paper, we extend the decidability transfer results from normal modal logics to a large class of description logics. To cover different description logics in a uniform way, we introduce abstract description systems, which can be seen as a common generalization of description and modal logics, and show the transfer results in this general setting.
@article{ BaLuStuWo-JAIR-02, author = {F. {Baader} and C. {Lutz} and H. {Sturm} and F. {Wolter}}, journal = {Journal of Artificial Intelligence Research (JAIR)}, pages = {1--58}, title = {Fusions of Description Logics and Abstract Description Systems}, volume = {16}, year = {2002}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
This paper addresses the following combination problem: given two equational theories E1 and E2 whose positive theories are decidable, how can one obtain a decision procedure for the positive theory of their union. For theories over disjoint signatures, this problem was solved by Baader and Schulz in 1995. This paper is a first step towards extending this result to the case of theories sharing constructors. Since there is a close connection between positive theories and unification problems, this also extends to the non-disjoint case the work on combining decision procedures for unification modulo equational theories.
@inproceedings{ BaaderTinelliRTA02, address = {Copenhagen, Denmark}, author = {F. {Baader} and C. {Tinelli}}, booktitle = {Proceedings of the 13th International Conference on Rewriting Techniques and Applications (RTA-02)}, editor = {S. {Tison}}, pages = {338--352}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Combining Decision Procedures for Positive Theories Sharing Constructors}, volume = {2378}, year = {2002}, }
Abstract BibTeX Entry Free reprint
The main contribution of this article is a new method for combining decision procedures for the word problem in equational theories. In contrast to previous methods, it is based on transformation rules, and also applies to theories sharing "constructors."
@article{ BaaderTinelliIC02, author = {F. {Baader} and C. {Tinelli}}, journal = {Information and Computation}, number = {2}, pages = {346--390}, title = {Deciding the Word Problem in the Union of Equational Theories}, volume = {178}, year = {2002}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
For Description Logics with existential restrictions, the size of the least common subsumer (lcs) of concept descriptions may grow exponentially in the size of the input descriptions. The first (negative) result presented in this paper is that it is in general not possible to express the exponentially large concept description representing the lcs in a more compact way by using an appropriate (acyclic) terminology. In practice, a second and often more severe cause of complexity was the fact that concept descriptions containing concepts defined in a terminology must first be unfolded (by replacing defined names by their definition) before the known lcs algorithms could be applied. To overcome this problem, we present a modified lcs algorithm that performs lazy unfolding, and show that this algorithm works well in practice.
@inproceedings{ BaaderTurhan-KI02, address = {Aachen, Germany}, author = {F. {Baader} and A.-Y. {Turhan}}, booktitle = {Proceedings of the German Conference on Artificial Intelligence, 25th German Conference on Artificial Intelligence (KI 2002)}, publisher = {Springer--Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {On the problem of computing small representations of least common subsumers}, year = {2002}, }
2001
Abstract BibTeX Entry
Whereas matching in Description Logics is now relatively well-investigated, there are only very few formal results on matching under additional side conditions, though these side conditions were already present in the original paper by Borgida and McGuinness introducing matching in DLs. The present paper closes this gap for sublanguages of the DL ALN.
@inproceedings{ BaaderBrandtKuesters-IJCAI, address = {Seattle, Washington}, author = {F. {Baader} and S. {Brandt} and R. {K{\"u}sters}}, booktitle = {Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, {IJCAI'01}}, editor = {B. {Nebel}}, pages = {213--218}, publisher = {Morgan Kaufmann}, title = {Matching under Side Conditions in Description Logics}, year = {2001}, }
BibTeX Entry
@book{ BaaderBrewkaEiter-01, address = {Vienna, Austria}, editor = {F. {Baader} and G. {Brewka} and Th. {Eiter}}, publisher = {Springer--Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {KI 2001: Advances in Artificial Intelligence, Proceedings of the Joint German/Austrian Conference on AI (KI 2001)}, volume = {2174}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Unification of concept descriptions was introduced by Baader and Narendran as a tool for detecting redundancies in knowledge bases. It was shown that unification in the small description logic FL0, which allows for conjunction, value restriction, and the top concept only, is already ExpTime-complete. The present paper shows that the complexity does not increase if one additionally allows for composition, union, and transitive closure of roles. It also shows that matching (which is polynomial in FL0) is PSpace-complete in the extended description logic. These results are proved via a reduction to linear equations over regular languages, which are then solved using automata. The obtained results are also of interest in formal language theory.
@inproceedings{ BaaderKuesters-LPAR, address = {Havana, Cuba}, author = {F. {Baader} and R. {K{\"u}sters}}, booktitle = {Proceedings of the 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2001)}, editor = {R. {Nieuwenhuis} and A. {Voronkov}}, pages = {217--232}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification in a Description Logic with Transitive Closure of Roles}, volume = {2250}, year = {2001}, }
Abstract BibTeX Entry Free reprint
Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing certain concept names to be replaced by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
@article{ Baader-Narendran-JSC-00, author = {F. {Baader} and P. {Narendran}}, journal = {J. Symbolic Computation}, number = {3}, pages = {277--305}, title = {Unification of Concept Terms in Description Logics}, volume = {31}, year = {2001}, }
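A small illustrative example (not taken from the article itself): in \({\cal FL}_0\), the concept pattern \(\forall r.X \sqcap \forall s.A\) and the concept term \(\forall r.(A \sqcap B) \sqcap \forall s.A\) are unifiable, since the substitution \(\{X \mapsto A \sqcap B\}\) makes the two expressions equivalent; deciding whether such a substitution exists is the unification problem whose decidability and complexity are studied here.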
Abstract BibTeX Entry PDF File PS File
Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful. Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important role in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
@article{ BaaderSattler-StudiaLogica, author = {F. {Baader} and U. {Sattler}}, journal = {Studia Logica}, pages = {5--40}, title = {An Overview of Tableau Algorithms for Description Logics}, volume = {69}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
In many areas of Logic, Computer Science, and Artificial Intelligence, there is a need for specialized formalisms and inference mechanisms to solve domain-specific tasks. For this reason, various methods and systems have been developed that allow for an efficient and adequate treatment of such restricted problems. In most realistic applications, however, one is faced with a complex combination of different problems, which means that a system tailored to solving a single problem can only be applied if it is possible to combine it both with other specialized systems and with general purpose systems.
@incollection{ BaaderSchulzCCL00, author = {F. {Baader} and K. {Schulz}}, booktitle = {Constraints in Computational Logics}, editor = {H. {Comon} and C. {March{\'e}} and R. {Treinen}}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Combining Constraint Solving}, volume = {2002}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File Free reprint
This is the final version of a chapter on unification theory to appear in the Handbook of Automated Reasoning. The chapter is not intended to give a complete coverage of all the results. Instead we try to cover a number of significant topics in more detail. This should give a feeling for unification research and its methodology, provide the most important references, and enable the reader to study recent research papers on the topic.
@incollection{ BaaderSnyderHandbook00, author = {F. {Baader} and W. {Snyder}}, booktitle = {Handbook of Automated Reasoning}, editor = {J.A. {Robinson} and A. {Voronkov}}, note = {See the handbook Web pages of Andrei Voronkov (http://www.cs.man.ac.uk/~voronkov/handbook-ar/index.html) and Elsevier (http://www.elsevier.nl/locate/isbn/0444829490).}, pages = {447--533}, publisher = {Elsevier Science Publishers}, title = {Unification Theory}, volume = {I}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
This paper ties together two distinct strands in automated reasoning: the tableau- and the automata-based approach. It shows that the inverse tableau method can be viewed as an implementation of the automata approach. This is of interest to automated deduction because Voronkov recently showed that the inverse method yields a viable decision procedure for the modal logic K.
@inproceedings{ BaaderTobies-IJCAR, author = {F. {Baader} and S. {Tobies}}, booktitle = {Proceedings of the International Joint Conference on Automated Reasoning {IJCAR'01}}, pages = {92--106}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {The Inverse Method Implements the Automata Approach for Modal Satisfiability}, volume = {2083}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File
For Description Logics with existential restrictions, the size of the least common subsumer (lcs) of concept descriptions may grow exponentially in the size of the input descriptions. This paper investigates whether the possibly exponentially large concept description representing the lcs can always be represented in a more compact way when using an appropriate (acyclic) TBox for defining this description. This conjecture was supported by our experience in a chemical process engineering application. Nevertheless, it turns out that, in general, TBoxes cannot always be used to obtain a polynomial size representation of the lcs.
@inproceedings{ BaaderTurhan-DL-2001, address = {Stanford, USA}, author = {F. {Baader} and A.-Y. {Turhan}}, booktitle = {Proceedings of the International Workshop in Description Logics 2001 {(DL2001)}}, month = {August}, title = {TBoxes do not yield a compact representation of least common subsumers}, year = {2001}, }
2000
Abstract BibTeX Entry PDF File PS File
Matching of concepts against patterns is a new inference task in Description Logics, which was originally motivated by applications of the CLASSIC system. Consequently, the work on this problem was until now mostly concerned with sublanguages of the CLASSIC language, which does not allow for existential restrictions. This paper extends the existing work on matching in two directions. On the one hand, the question of which solutions of matching problems are the most ``interesting'' ones is explored in more detail. On the other hand, for languages with existential restrictions, both the complexity of deciding the solvability of matching problems and the complexity of actually computing sets of ``interesting'' matchers are determined. The results show that existential restrictions make these computational tasks more complex. Whereas for sublanguages of CLASSIC both problems could be solved in polynomial time, this is no longer possible for languages with existential restrictions.
@inproceedings{ BaaderKuesters-KR-2000, address = {San Francisco, CA}, author = {F. {Baader} and R. {K{\"u}sters}}, booktitle = {Proceedings of the Seventh International Conference on Knowledge Representation and Reasoning (KR2000)}, editor = {A.G. {Cohn} and F. {Giunchiglia} and B. {Selman}}, pages = {261--272}, publisher = {Morgan Kaufmann Publishers}, title = {Matching in Description Logics with Existential Restrictions}, year = {2000}, }
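For illustration (an editorial example, not one from the paper): matching the pattern \(\mathit{Person} \sqcap \exists \mathit{child}.X\) against the description \(\mathit{Person} \sqcap \exists \mathit{child}.(\mathit{Doctor} \sqcap \mathit{Rich})\) succeeds with the matcher \(\{X \mapsto \mathit{Doctor} \sqcap \mathit{Rich}\}\); in languages with existential restrictions a matching problem may have several different matchers, which is why the paper asks which of the solutions are the ``interesting'' ones.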
Abstract BibTeX Entry PDF File PS File
The problem of rewriting a concept given a terminology can informally be stated as follows: given a terminology T (i.e., a set of concept definitions) and a concept description C that does not contain concept names defined in T, can this description be rewritten into a related "better" description E by using (some of) the names defined in T? In this paper, we first introduce a general framework for the rewriting problem in description logics, and then concentrate on one specific instance of the framework, namely the minimal rewriting problem (where "better" means shorter, and "related" means equivalent). We investigate the complexity of the decision problem induced by the minimal rewriting problem for the languages FL0, ALN, ALE, and ALC, and then introduce an algorithm for computing (minimal) rewritings for the language ALE. (In the full paper, a similar algorithm is also developed for ALN.) Finally, we sketch other interesting instances of the framework. Our interest for the minimal rewriting problem stems from the fact that algorithms for non-standard inferences, such as computing least common subsumers and matchers, usually produce concept descriptions not containing defined names. Consequently, these descriptions are rather large and hard to read and comprehend. First experiments in a chemical process engineering application show that rewriting can reduce the size of concept descriptions obtained as least common subsumers by almost two orders of magnitude.
@inproceedings{ BaaderKuestersMolitor-KR-2000, address = {San Francisco, CA}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, booktitle = {Proceedings of the Seventh International Conference on Knowledge Representation and Reasoning (KR2000)}, editor = {A.G. {Cohn} and F. {Giunchiglia} and B. {Selman}}, pages = {297--308}, publisher = {Morgan Kaufmann Publishers}, title = {Rewriting Concepts Using Terminologies}, year = {2000}, }
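A toy illustration of rewriting (not taken from the paper): given the terminology \(\mathcal{T} = \{\mathit{Parent} \equiv \mathit{Person} \sqcap \exists\mathit{child}.\mathit{Person}\}\), the description \(\mathit{Person} \sqcap \exists\mathit{child}.\mathit{Person} \sqcap \exists\mathit{child}.(\mathit{Person} \sqcap \exists\mathit{child}.\mathit{Person})\) can be rewritten into the equivalent but considerably shorter description \(\mathit{Parent} \sqcap \exists\mathit{child}.\mathit{Parent}\) using the name defined in \(\mathcal{T}\); the minimal rewriting problem asks for a shortest such reformulation.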
Abstract BibTeX Entry PDF File PS File
One of the major topics in Description Logic (DL) research is investigating the trade-off between the expressivity of a DL and the complexity of its inference problems. The expressiveness of a DL is usually determined by the constructors available for building concepts and roles. Given two DLs, their union is the DL that allows the unrestricted use of the constructors of both DLs. There are well-known examples that show that decidability of DLs usually does not transfer to their union. In this paper, we consider the fusion of two DLs, which is more restrictive than the union. Intuitively, in the fusion the role names are partitioned into two sets, and the constructors of the first DL can only use role names of one set, whereas the constructors of the second DL can only use role names of the other set. We show that under certain (rather weak) conditions decidability transfers from given DLs to their fusion. More precisely, the inference problems that we consider are satisfiability/subsumption of concept descriptions as well as satisfiability/subsumption w.r.t. general inclusion axioms. These results adapt and generalize known transfer results from modal logic to DL. In order to capture the notion of a DL formally, we introduce the notion of an abstract description system and prove our results within this new formal framework.
@inproceedings{ BaaLutStuWol-DL-2000, address = {Aachen, Germany}, author = {F. {Baader} and C. {Lutz} and H. {Sturm} and F. {Wolter}}, booktitle = {Proceedings of the International Workshop in Description Logics 2000 {(DL2000)}}, editor = {F. {Baader} and U. {Sattler}}, month = {August}, note = {Proceedings online available from {http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/}}, number = {33}, pages = {21--30}, publisher = {RWTH Aachen}, series = {CEUR-WS}, title = {Fusions of Description Logics}, year = {2000}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Given a finite set \(C := \{c_1, \ldots, c_n\}\) of description logic concepts, we are interested in computing the subsumption hierarchy of all least common subsumers of subsets of \(C\). This hierarchy can be used to support the bottom-up construction and the structuring of description logic knowledge bases. The point is to compute this hierarchy without having to compute the least common subsumer for all subsets of \(C\). In this paper, we show that methods from formal concept analysis developed for computing concept lattices can be employed for this purpose.
@inproceedings{ BaaderMolitor-ICCS-2000, author = {F. {Baader} and R. {Molitor}}, booktitle = {Conceptual Structures: Logical, Linguistic, and Computational Issues -- Proceedings of the 8th International Conference on Conceptual Structures (ICCS2000)}, editor = {B. {Ganter} and G. {Mineau}}, pages = {290--303}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis}, volume = {1867}, year = {2000}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful. Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important role in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
@inproceedings{ BaaderSattler-Tablaux-2000, address = {St Andrews, Scotland, UK}, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2000)}, editor = {R. {Dyckhoff}}, pages = {1--18}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Tableau Algorithms for Description Logics}, volume = {1847}, year = {2000}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
In this paper we extend the applicability of our combination method for decision procedures for the word problem to theories sharing non-collapse-free constructors. This extension broadens the scope of the combination procedure considerably, for example in the direction of equational theories axiomatizing the equivalence of modal formulae.
@inproceedings{ Baader-Tinelli-FroCoS-00, address = {Nancy, France}, author = {F. {Baader} and C. {Tinelli}}, booktitle = {Proceedings of the 3rd International Workshop on Frontiers of Combining Systems (FroCoS 2000)}, editor = {H. {Kirchner} and Ch. {Ringeissen}}, pages = {257--271}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Combining Equational Theories Sharing Non-Collapse-Free Constructors}, volume = {1794}, year = {2000}, }
BibTeX Entry
@incollection{ FrancBaadSattvass-DWQ-Buch, author = {E. {Franconi} and F. {Baader} and U. {Sattler} and P. {Vassiliadis}}, booktitle = {Fundamentals of Data Warehousing}, editor = {M. {Jarke} and M. {Lenzerini} and Y. {Vassilious} and P. {Vassiliadis}}, pages = {87--106}, publisher = {Springer-Verlag}, title = {Multidimensional Data Models and Aggregation}, year = {2000}, }
1999
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
After a short analysis of the requirements that a knowledge representation language must satisfy, we introduce Description Logics, Modal Logics, and Nonmonotonic Logics as formalisms for representing terminological knowledge, time-dependent or subjective knowledge, and incomplete knowledge respectively. At the end of each section, we briefly comment on the connection to Logic Programming.
@inproceedings{ Baader-LNAI-1999, author = {F. {Baader}}, booktitle = {Artificial Intelligence Today, Recent Trends and Developments}, editor = {M.J. {Wooldridge} and M. {Veloso}}, number = {1600}, pages = {13--41}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science}, title = {Logic-Based Knowledge Representation}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File
Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of description logics, originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. Previous work on this problem has produced polynomial-time matching algorithms for sublanguages of the DL used in CLASSIC. Consequently, these algorithms cannot handle existential restrictions. In this paper, we consider matching in DLs allowing for existential restrictions. We describe decision procedures that test solvability of matching problems as well as algorithms for computing complete sets of matchers. Unfortunately, these algorithms are no longer polynomial-time, even for the small language EL, which allows for the top concept, conjunction and existential restrictions.
@inproceedings{ BaaderKuesters-DL-1999, address = {Sweden}, author = {F. {Baader} and R. {K{\"u}sters}}, booktitle = {Proceedings of the International Workshop on Description Logics 1999 (DL'99)}, editor = {P. {Lambrix} and A. {Borgida} and M. {Lenzerini} and R. {M{\"o}ller} and P. {Patel-Schneider}}, note = {Proceedings online available from {http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/}}, number = {22}, publisher = {Link{\"o}ping University}, series = {CEUR-WS}, title = {Matching in Description Logics with Existential Restrictions}, year = {1999}, }
Abstract BibTeX Entry Free reprint
Matching concepts against patterns (concepts with variables) is a relatively new operation that has been introduced in the context of concept description languages (description logics). The original goal was to help filter out unimportant aspects of complicated concepts appearing in large industrial knowledge bases. We propose a new approach to performing matching, based on a ``concept-centered'' normal form, rather than the more standard ``structural subsumption'' normal form for concepts. As a result, matching can be performed (in polynomial time) using arbitrary concept patterns of the description language ALN, thus removing restrictions from previous work. The paper also addresses the question of matching problems with additional ``side conditions'', which were motivated by practical needs.
@article{ BaadKuestBorgMcGuinn-JLC-99, author = {F. {Baader} and R. {K{\"u}sters} and A. {Borgida} and D. {McGuinness}}, journal = {Journal of Logic and Computation}, number = {3}, pages = {411--447}, title = {Matching in Description Logics}, volume = {9}, year = {1999}, }
Abstract BibTeX Entry
Computing the least common subsumer (lcs) is an inference task that can be used to support the "bottom-up" construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for universal value restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. Our approach for computing the lcs is based on an appropriate representation of concept descriptions by certain trees, and a characterization of subsumption by homomorphisms between these trees. The lcs operation then corresponds to the product operation on trees.
@inproceedings{ BaaderKuesters+-IJCAI-1999, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, booktitle = {Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99)}, editor = {T. {Dean}}, pages = {96--101}, publisher = {Morgan Kaufmann}, title = {Computing Least Common Subsumers in Description Logics with Existential Restrictions}, year = {1999}, }
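The tree-based characterization mentioned above can be sketched in a few lines of Python (an editorial illustration under simplifying assumptions, not code from the paper; the names ELTree, product, and to_concept are invented for this sketch): an EL concept description is viewed as a description tree whose nodes carry sets of concept names and whose edges carry role names, and a least common subsumer can be read off the product of two such trees.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ELTree:
    # concept names at this node
    names: frozenset
    # role-labelled subtrees: list of (role name, subtree)
    children: List[Tuple[str, "ELTree"]] = field(default_factory=list)

def product(s: "ELTree", t: "ELTree") -> "ELTree":
    # Product of two description trees: intersect the node labels and
    # pair up subtrees reachable via the same role name.
    kids = [(r1, product(c1, c2))
            for (r1, c1) in s.children
            for (r2, c2) in t.children
            if r1 == r2]
    return ELTree(s.names & t.names, kids)

def to_concept(t: "ELTree") -> str:
    # Read a description tree back as an EL concept description (ASCII syntax:
    # AND for conjunction, EXISTS for existential restriction, TOP for the top concept).
    parts = sorted(t.names) + ["EXISTS %s.(%s)" % (r, to_concept(c)) for r, c in t.children]
    return " AND ".join(parts) if parts else "TOP"

# Illustration: C = P AND EXISTS r.(P AND Q),  D = Q AND EXISTS r.P AND EXISTS r.Q
C = ELTree(frozenset({"P"}), [("r", ELTree(frozenset({"P", "Q"})))])
D = ELTree(frozenset({"Q"}), [("r", ELTree(frozenset({"P"}))), ("r", ELTree(frozenset({"Q"})))])
print(to_concept(product(C, D)))  # prints: EXISTS r.(P) AND EXISTS r.(Q)
In this small example the product yields \(\exists r.P \sqcap \exists r.Q\), which indeed subsumes both input descriptions.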
Abstract BibTeX Entry PDF File PS File
In this work we consider the inference problem of computing (minimal) rewritings of concept descriptions using defined concepts from a terminology. We introduce a general framework for this problem and instantiate it with the small description logic FL0, which provides us with conjunction and value restrictions. We show that the decision problem induced by the minimal rewriting problem is NP-complete for FL0.
@inproceedings{ BaaderMolitor-DL-1999, address = {Sweden}, author = {F. {Baader} and R. {Molitor}}, booktitle = {Proceedings of the International Workshop on Description Logics 1999 (DL'99)}, editor = {P. {Lambrix} and A. {Borgida} and M. {Lenzerini} and R. {M{\"o}ller} and P. {Patel-Schneider}}, note = {Proceedings online available from {http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/}}, number = {22}, publisher = {Link{\"o}ping University}, series = {CEUR-WS}, title = {Rewriting Concepts Using Terminologies}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
This paper is concerned with decidability and tractability of reasoning in conceptual graphs (CGs). It is well-known that problems like validity and subsumption of general CGs are undecidable, whereas subsumption is NP-complete for simple conceptual graphs (SGs) and tractable for the fragment of SGs that are trees. On the one hand, we will employ results on decidable fragments of first-order logic to identify a natural and expressive fragment of CGs for which validity and subsumption is decidable in deterministic exponential time. On the other hand, we will extend existing work on the connection between SGs and description logics (DLs) by identifying a DL that corresponds to the class of SGs that are trees. This yields a previously unknown tractability result for the DL in question. As a by-product, we will extend the tractability results for trees to SGs that can be transformed into trees by ``cutting cycles.''
@inproceedings{ BaaderMolitor+-ICCS-1999, author = {F. {Baader} and R. {Molitor} and S. {Tobies}}, booktitle = {Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99)}, editor = {W. {Cyre} and W. {Tepfenhart}}, number = {1640}, pages = {480--493}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science}, title = {Tractable and Decidable Fragments of Conceptual Graphs}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File Free reprint
Number restrictions are concept constructors that are available in almost all implemented Description Logic systems. However, they are mostly available only in a rather weak form, which considerably restricts their expressive power. On the one hand, the roles that may occur in number restrictions are usually of a very restricted type, namely atomic roles or complex roles built using either intersection or inversion. In the present paper, we increase the expressive power of Description Logics by allowing for more complex roles in number restrictions. As role constructors, we consider composition of roles (which will be present in all our logics) and intersection, union, and inversion of roles in different combinations. We will present two decidability results (for the basic logic that extends ALC by number restrictions on roles with composition, and for one extension of this logic), and three undecidability results for three other extensions of the basic logic. On the other hand, with the rather weak form of number restrictions available in implemented systems, the number of role successors of an individual can only be restricted by a fixed non-negative integer. To overcome this lack of expressiveness, we allow for variables ranging over the non-negative integers in place of the fixed numbers in number restrictions. The expressive power of this constructor is increased even further by introducing explicit quantifiers for the numerical variables. The Description Logic obtained this way turns out to have an undecidable satisfiability problem. For a restricted logic we show that concept satisfiability is decidable.
@article{ BaaderSattler-JLC-99, author = {F. {Baader} and U. {Sattler}}, journal = {Journal of Logic and Computation}, number = {3}, pages = {319--350}, title = {Expressive Number Restrictions in Description Logics}, volume = {9}, year = {1999}, }
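For illustration (the syntax is only approximated here and is not necessarily the article's exact notation): a number restriction over a role composition such as \((\leq 1\;\mathit{child} \circ \mathit{child})\) would state that an individual has at most one grandchild, and a symbolic number restriction with an explicitly quantified numerical variable, e.g. \(\exists n.\,((\geq n\; r) \sqcap (\leq n\; s))\), would state that an individual has at least as many \(r\)-successors as \(s\)-successors; the article investigates for which combinations of such constructors reasoning remains decidable.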
Abstract BibTeX Entry ©Springer-Verlag
The main contribution of this paper is a new method for combining decision procedures for the word problem in equational theories sharing ``constructors.'' The notion of constructor adopted in this paper has a nice algebraic definition and is more general than a related notion introduced in previous work on the combination problem.
@inproceedings{ Baader-Tinelli-RTA-99, address = {Trento, Italy}, author = {F. {Baader} and C. {Tinelli}}, booktitle = {Proceedings of the 10th International Conference on Rewriting Techniques and Applications (RTA-99)}, editor = {P. {Narendran} and M. {Rusinowitch}}, pages = {175--189}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Deciding the Word Problem in the Union of Equational Theories Sharing Constructors}, volume = {1631}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File DOI
Matching concepts against patterns (concepts with variables) is a relatively new operation that has been introduced in the context of concept description languages (description logics). The original goal was to help filter out unimportant aspects of complicated concepts appearing in large industrial knowledge bases. We propose a new approach to performing matching, based on a "concept-centered" normal form, rather than the more standard "structural subsumption" normal form for concepts. As a result, matching can be performed (in polynomial time) using arbitrary concept patterns of the description language \(\mathcal{ALN}\), thus removing restrictions from previous work. The paper also addresses the question of matching problems with additional "side conditions", which were motivated by practical needs.
@article{ BaaderBorgida+-JLC-1999, author = {Franz {Baader} and Ralf {K{\"{u}}sters} and Alexander {Borgida} and Deborah L. {McGuinness}}, doi = {https://doi.org/10.1093/logcom/9.3.411}, journal = {J. Log. Comput.}, number = {3}, pages = {411--447}, title = {Matching in Description Logics}, volume = {9}, year = {1999}, }
1998
Abstract BibTeX Entry
Unification modulo the theory of Boolean algebras has been investigated by several authors. Nevertheless, the exact complexity of the decision problem for unification with constants and general unification was not known. In this research note, we show that the decision problem is \(\Pi^p_2\)-complete for unification with constants and PSPACE-complete for general unification. In contrast, the decision problem for elementary unification (where the terms to be unified contain only symbols of the signature of Boolean algebras) is ``only'' NP-complete.
@article{ Baader-IPL-98, author = {F. {Baader}}, journal = {Information Processing Letters}, number = {4}, pages = {215--220}, title = {On the Complexity of {B}oolean Unification}, volume = {67}, year = {1998}, }
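As a small illustration of unification with constants in Boolean algebras (an editorial example, not one from the note): the equation \(x \wedge a \approx 0\), with \(a\) a free constant, is solvable, and \(\{x \mapsto y \wedge \neg a\}\) is a most general unifier, since every unifier must map \(x\) to a value below \(\neg a\) and every such value is an instance of \(y \wedge \neg a\); the complexity results of the note concern deciding the solvability of such problems.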
Abstract BibTeX Entry
Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of concept description languages (description logics), originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. This paper proposes a new approach to performing matching, based on a ``concept-centered'' normal form, rather than the more standard ``structural subsumption'' normal form for concepts. As a result, matching can be performed (in polynomial time) using arbitrary concept patterns of a description language allowing for conjunction, value restriction, and atomic negation, thus removing restrictions on the form of the patterns from previous work. The paper also addresses the question of matching problems with additional ``side conditions'', which were motivated by practical experience.
@inproceedings{ Baader-Borgida-McGuinness-ICCS-98, address = {Montpelier (France)}, author = {F. {Baader} and A. {Borgida} and D.L. {McGuinness}}, booktitle = {Proceedings of the Sixth International Conference on Conceptual Structures (ICCS-98)}, editor = {M.-L. {Mugnier} and M. {Chein}}, pages = {15--34}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Matching in Description Logics: {P}reliminary Results}, volume = {1453}, year = {1998}, }
Abstract BibTeX Entry
Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can be used to support the ``bottom up'' construction of knowledge bases for KR systems based on description logic. For the description logic ALN, the msc need not always exist if one restricts the attention to acyclic concept descriptions. In this paper, we extend the notions lcs and msc to cyclic descriptions, and show how they can be computed. Our approach is based on the automata-theoretic characterizations of fixed-point semantics for cyclic terminologies developed in previous papers.
@inproceedings{ Baader:Kuesters:KI98, address = {Bremen, Germany}, author = {F. {Baader} and R. {K\"usters}}, booktitle = {Proceedings of the 22nd Annual German Conference on Artificial Intelligence, {KI-98}}, editor = {O. {Herzog} and A. {G\"unter}}, pages = {129--140}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Computing the least common subsumer and the most specific concept in the presence of cyclic $\mathcal{ALN}$-concept descriptions}, volume = {1504}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File
Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can be used to support the ``bottom up'' construction of knowledge bases for KR systems based on description logic. For the description logic \(\mathcal{ALN}\), the msc need not always exist if one restricts the attention to acyclic concept descriptions. In this paper, we extend the notions lcs and msc to cyclic descriptions, and show how they can be computed. Our approach is based on the automata-theoretic characterizations of fixed-point semantics for cyclic terminologies developed in previous papers.
@inproceedings{ BaaderKuesters-DL-1998, address = {Trento, Italy}, author = {F. {Baader} and R. {K{\"u}sters}}, booktitle = {Proceedings of the 1998 International Workshop on Description Logics (DL'98)}, title = {Least common subsumer computation w.r.t. cyclic $\mathcal{ALN}$-terminologies}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File
This paper compares two approaches for deriving subsumption algorithms for the description logic ALN: structural subsumption and an automata-theoretic characterization of subsumption. It turns out that structural subsumption algorithms can be seen as special implementations of the automata-theoretic characterization.
@inproceedings{ BaaderKuesters+-DL-98, address = {Trento, Italy}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, booktitle = {Proceedings of the 1998 International Workshop on Description Logics DL'98}, title = {Structural Subsumption Considered from an Automata Theoretic Point of View}, year = {1998}, }
Abstract BibTeX Entry
Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing certain concept names to be replaced by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
@inproceedings{ Baader-Narendran-ECAI-98, author = {F. {Baader} and P. {Narendran}}, booktitle = {Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)}, editor = {H. {Prade}}, pages = {331--335}, publisher = {John Wiley \& Sons Ltd}, title = {Unification of Concept Terms in Description Logics}, year = {1998}, }
BibTeX Entry PDF File PS File
@inproceedings{ BaSat98b, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)}, editor = {H. {Prade}}, pages = {336--340}, publisher = {John Wiley \& Sons Ltd}, title = {Description Logics with Concrete Domains and Aggregation}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File Free reprint
When combining languages for symbolic constraints, one is typically faced with the problem of how to treat ``mixed'' constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a ``free amalgamated product'' as a possible solution to the first problem. We define so-called quasi-free structures (called ``strong simply-combinable structures'' in a previous publication) as a generalization of free structures. For quasi-free structures over disjoint signatures, we describe a canonical amalgamation construction that yields the free amalgamated product. The combination techniques known from unification theory can be used to combine constraint solvers for quasi-free structures over disjoint signatures into a solver for their free amalgamated product. In addition to term algebras modulo equational theories (i.e., free algebras), the class of quasi-free structures contains many solution structures that are of interest in constraint logic programming, such as the algebra of rational trees, feature structures, and domains consisting of hereditarily finite (wellfounded or non-wellfounded) nested sets and lists.
@article{ BaaderSchulz-TCS-98, author = {F. {Baader} and K. {Schulz}}, journal = {Theoretical Computer Science}, pages = {107--161}, title = {Combination of Constraint Solvers for Free and Quasi-Free Structures}, volume = {192}, year = {1998}, }
Abstract BibTeX Entry
In this chapter, we first motivate equational unification by its applications in theorem proving and term rewriting. In addition to applications that require the computation of unifiers, we will also mention constraint-based approaches, in which only solvability of unification problems (i.e., the existence of unifiers) must be tested. Then we extend the definitions known from syntactic unification (such as most general unifier) to the case of equational unification. It turns out that, for equational unification, one must be more careful when introducing these notions. In the third section, we will mention some unification results for specific equational theories. In the fourth, and central, section of this chapter, we treat the important problem of how to combine unification algorithms. This problem occurs, for example, if we have a unification algorithm that can treat the commutative symbol ``+'' and another algorithm that can treat the associative symbol ``x'', and we want to unify terms that contain both symbols. Finally, we conclude with a short section in which other interesting topics in the field of equational unification are mentioned, which could not be treated in more detail in this chapter.
@incollection{ Baader-Schulz-ADHandbook98, address = {Dordrecht, NL}, author = {F. {Baader} and K.U. {Schulz}}, booktitle = {Automated Deduction -- A Basis for Applications, Vol.~I: Foundations -- Calculi and Methods}, editor = {W. {Bibel} and P.H. {Schmidt}}, pages = {225--263}, publisher = {Kluwer Academic Publishers}, series = {Applied Logic Series}, title = {Unification Theory}, volume = {8}, year = {1998}, }
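A small motivating example (an editorial illustration, not taken from the chapter): unifying \(x + a\) and \(a + b\) fails syntactically but succeeds modulo commutativity of \(+\) with the unifier \(\{x \mapsto b\}\), since \(b + a =_C a + b\); a mixed term such as \(x + (a \times y)\), which contains both the commutative symbol \(+\) and the associative symbol \(\times\), can only be handled by combining the two specialized unification algorithms, which is exactly the combination problem treated in the central section of this chapter.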
Abstract BibTeX Entry
This is the first English language textbook offering a unified and self-contained introduction to the field of term rewriting. It covers all the basic material (abstract reduction systems, termination, confluence, completion, and combination problems), but also some important and closely connected subjects: universal algebra, unification theory and Gröbner bases. The main algorithms are presented both informally and as programs in the functional language Standard ML (an appendix contains a quick and easy introduction to ML). Certain crucial algorithms like unification and congruence closure are covered in more depth and efficient Pascal programs are developed. The book contains many examples and over 170 exercises. This text is also an ideal reference book for professional researchers: results that have been spread over many conference and journal articles are collected together in a unified notation, detailed proofs of almost all theorems are provided, and each chapter closes with a guide to the literature. More Information (Table of contents, sample programs, errata): http://www4.informatik.tu-muenchen.de/~nipkow/TRaAT/index.html
@book{ BaaderNipkow-98, address = {United Kingdom}, author = {Franz {Baader} and Tobias {Nipkow}}, publisher = {Cambridge University Press}, title = {Term Rewriting and All That}, year = {1998}, }
1997
Abstract BibTeX Entry
Reduction orderings that are compatible with an equational theory E and total on (the E-equivalence classes of) ground terms play an important role in automated deduction. This paper presents a general approach for combining such orderings: it shows how E1-compatible reduction orderings total on S1-ground terms and E2-compatible reduction orderings total on S2-ground terms can be used to construct an E-compatible reduction ordering total on S-ground terms (where E is the union of the theories E1 and E2, and S is the union of the signatures S1 and S2), provided that S1 and S2 are disjoint and some other (rather weak) restrictions are satisfied. This work was motivated by the observation that it is often easier to construct such orderings for "small" signatures and theories separately, rather than directly for their union.
@inproceedings{ Baader-LICS-97, address = {Warsaw, Poland}, author = {F. {Baader}}, booktitle = {Proceedings of the Twelfth Annual {IEEE} Symposium on Logic in Computer Science (LICS-97)}, editor = {G. {Winskel}}, pages = {2--13}, publisher = {IEEE Computer Society Press}, title = {Combination of Compatible Reduction Orderings that are Total on Ground Terms}, year = {1997}, }
Abstract BibTeX Entry PDF File PS File
Unification of concept terms in Description Logics can be used to determine whether a newly introduced concept may have already been defined before, possibly using other atomic names or modelling concepts on a different level of granularity. We show that unification of concept terms in the small concept description language \({\cal FL}_0\) can be reduced to unification modulo an appropriate equational theory. Using results from unification theory, we can further reduce this unification problem to a formal language problem, which can be solved (in Exptime) with the help of tree automata. It can also be shown that the problem is PSPACE hard.
@inproceedings{ Baader-Narendran-UNIF-97, author = {F. {Baader} and P. {Narendran}}, booktitle = {Proceedings of the 11th International Workshop on Unification, {UNIF-97}, {LIFO} Technical Report 97-8}, publisher = {LIFO, Universit\'e d'Orl\'eans}, title = {Unification of Concept Terms}, year = {1997}, }
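A brief sketch of the underlying normal form (an editorial paraphrase, stated only at the level suggested by the abstract): an \({\cal FL}_0\)-concept term can be written as \(\forall U_1.A_1 \sqcap \dots \sqcap \forall U_k.A_k\), where each \(U_i\) is a finite set of words over the role names and \(\forall W.A\) abbreviates the conjunction of all \(\forall w.A\) with \(w \in W\); two terms are equivalent exactly if these word sets coincide for every concept name, so a unification problem translates, for each concept name, into a linear language equation of the form \(S_0 \cup S_1{\cdot}X_1 \cup \dots \cup S_n{\cdot}X_n = T_0 \cup T_1{\cdot}X_1 \cup \dots \cup T_n{\cdot}X_n\) in language variables \(X_1, \dots, X_n\), whose solvability is then decided with the help of tree automata.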
Abstract BibTeX Entry PDF File PS File
Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing certain concept names to be substituted by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
@inproceedings{ Baader-Narendran-DL-97, author = {F. {Baader} and P. {Narendran}}, booktitle = {Proceedings of the International Workshop on Description Logics, {DL'97}}, pages = {34--38}, publisher = {LRI, Universit\'e Paris-Sud, Centre d'Orsay}, title = {Unification of Concept Terms in Description Logics}, year = {1997}, }
BibTeX Entry PDF File PS File
@inproceedings{ BaSatDL97, address = {Gif sur Yvette, France}, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the International Workshop on Description Logics}, title = {Description Logics with Aggregates and Concrete Domains}, year = {1997}, }
Abstract BibTeX Entry
The Nelson-Oppen combination method can be used to combine decision procedures for the validity of quantifier-free formulae in first-order theories with disjoint signatures, provided that the theories to be combined are stably infinite. We show that, even though equational theories need not satisfy this property, Nelson and Oppen's method can be applied, after some minor modifications, to combine decision procedures for the validity of quantifier-free formulae in equational theories. Unfortunately, and contrary to a common belief, the method cannot be used to combine decision procedures for the word problem. We present a method that solves this kind of combination problem. Our method is based on transformation rules and also applies to equational theories that share a finite number of constant symbols.
@inproceedings{ Baader-Tinelli-CADE-97, author = {F. {Baader} and C. {Tinelli}}, booktitle = {Proceedings of the 14th International Conference on Automated Deduction (CADE-97)}, editor = {W. {McCune}}, pages = {19--33}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {A New Approach for Combining Decision Procedures for the Word Problem, and Its Connection to the {Nelson-Oppen} Combination Method}, volume = {1249}, year = {1997}, }
1996
Abstract BibTeX Entry Free reprint
The notions `expressive power' or `expressiveness' of knowledge representation languages (KR languages) can be found in most papers on knowledge representation; but these terms are usually just employed in an intuitive sense. The papers contain only informal descriptions of what is meant by expressiveness. There are several reasons that speak in favour of a formal definition of expressiveness: for example, if we want to show that certain expressions in one language cannot be expressed in another language, we need a strict formalism that can be used in mathematical proofs. Even though we shall only consider terminological KR languages—i.e. KR languages descending from the original system KL-ONE—in our motivation and in the examples, the definition of expressive power that will be given in this paper can be used for all KR languages with Tarski-style model-theoretic semantics. This definition will shed a new light on the tradeoff between expressiveness of a representation language and its computational tractability. There are KR languages with identical expressive power, but different complexity results for reasoning, which comes from the fact that sometimes the tradeoff lies between convenience and computational tractability. The definition of expressive power will be applied to compare various terminological KR languages known from the literature with respect to their expressiveness. This will yield examples for how to utilize the definition both in positive proofs—that is, proofs where it is shown that one language can be expressed by another language—and, more interestingly, in negative proofs—which show that a given language cannot be expressed by the other language.
@article{ Baader-JLC-96, author = {F. {Baader}}, journal = {J. of Logic and Computation}, number = {1}, pages = {33--54}, title = {A Formal Definition for the Expressive Power of Terminological Knowledge Representation Languages}, volume = {6}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File
Reduction orderings that are compatible with an equational theory \(E\) and total on the \(E\)-equivalence classes of ground terms play an important role in automated deduction. It has turned out to be rather hard to define such orderings. This paper supports the process of designing compatible total reduction orderings. It describes how total reduction orderings \(>_1\) and \(>_2\) that are respectively compatible with \(E_1\) and \(E_2\) can be combined to a total reduction ordering \(>\) that is compatible with \(E_1 \cup E_2\), provided that the theories are over disjoint signatures and some other properties are satisfied.
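For readers not familiar with the terminology, the standard notions are recalled here in a hedged sketch (not quoted from the paper): a reduction ordering \(>\) is compatible with \(E\) if \(s' =_E s > t =_E t'\) implies \(s' > t'\), and it is total on the \(E\)-equivalence classes of ground terms if for all ground terms \(s, t\) either \(s =_E t\), \(s > t\), or \(t > s\). The combination problem is then to construct an ordering with both properties for \(E_1 \cup E_2\) from orderings \(>_1\) and \(>_2\) that have them for the individual theories.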
@inproceedings{ Baader-UNIF-96, author = {F. {Baader}}, booktitle = {Proceedings of the 10th International Workshop on Unification, {UNIF-96}, CIS-Report 96-91}, pages = {97--106}, publisher = {CIS, Universit{\"a}t M{\"u}nchen}, title = {Combination of Compatible Reduction Orderings that are Total on Ground Terms}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File Free reprint
In most of the implemented terminological knowledge representation systems it is not possible to state recursive concept definitions, so-called terminological cycles. One reason is that it is not clear what kind of semantics to use for such cycles. In addition, the inference algorithms used in such systems may go astray in the presence of terminological cycles. In this paper we consider terminological cycles in a very small terminological representation language. For this language, the effect of the three types of semantics introduced by B. Nebel can be completely described with the help of finite automata. These descriptions provide for a rather intuitive understanding of terminologies with recursive definitions, and they give an insight into the essential features of the respective semantics. In addition, one obtains algorithms and complexity results for the subsumption problem and for related inference tasks. The results of this paper may help to decide what kind of semantics is most appropriate for cyclic definitions, depending on the representation task.
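As a small illustrative example (hypothetical, not from the paper): in a minimal language such as \(\mathcal{FL}_0\), a cyclic definition like \(\mathsf{Human} \equiv \mathsf{Mammal} \sqcap \forall\mathsf{has\text{-}parent}.\mathsf{Human}\) can be read as a finite automaton whose states are the concept names and whose transitions are labelled with role names (the value restriction above yields a \(\mathsf{has\text{-}parent}\) loop on \(\mathsf{Human}\)). Roughly, subsumption between defined concepts then corresponds to an inclusion between the regular languages of role words read off from this automaton, with the exact correspondence depending on which of the three semantics (least fixpoint, greatest fixpoint, or descriptive) is chosen.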
@article{ Baader-AMAI-96, author = {F. {Baader}}, journal = {Annals of Mathematics and Artificial Intelligence}, number = {2--4}, pages = {175--219}, title = {Using Automata Theory for Characterizing the Semantics of Terminological Cycles}, volume = {18}, year = {1996}, }
Abstract BibTeX Entry
After a brief look at the requirements a knowledge representation language should satisfy, we discuss description logics, modal logics, and non-monotonic logics as formalisms for representing terminological knowledge, time-dependent and subjective knowledge, and incomplete knowledge. Each section closes with a short remark on the connection to logic programming.
@article{ Baader-KI-96, author = {F. {Baader}}, journal = {{KI}}, pages = {8--16}, title = {{L}ogik-basierte {W}issensrepr{\"a}sentation}, volume = {{3/96}}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File Free reprint
The concept description formalisms of existing description logics systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. This article argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of description logics systems that is currently gaining in interest. It shows that including such restrictions in the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualified number restrictions and of general terminological axioms.
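To illustrate the difference with a hypothetical example (the names and the exact syntax are made up here): a local number restriction such as \((\le 4\ \mathsf{has\text{-}part})\) constrains, for each single individual, the number of its role fillers, whereas a cardinality restriction on the concept \(\mathsf{Pump}\), say \((\le 4\ \mathsf{Pump})\), constrains the whole interpretation \(\mathcal{I}\) by requiring \(|\mathsf{Pump}^{\mathcal{I}}| \le 4\), i.e. that at most four pumps exist overall. Constraints of the latter kind, such as a limited stock of a component, are what arises in configuration applications.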
@article{ BaaderBuchheit+-AIJ-1996, author = {F. {Baader} and M. {Buchheit} and B. {Hollunder}}, journal = {Artificial Intelligence}, number = {1--2}, pages = {195--213}, title = {Cardinality Restrictions on Concepts}, volume = {88}, year = {1996}, }
Abstract BibTeX Entry
We study the class of theories for which solving unification problems is equivalent to solving systems of linear equations over a semiring. It encompasses important examples like the theories of Abelian monoids, idempotent Abelian monoids, and Abelian groups. This class has been introduced by the authors independently of each other as "commutative theories" (Baader) and "monoidal theories" (Nutt). We show that commutative theories and monoidal theories indeed define the same class (modulo a translation of the signature), and we prove that it is undecidable whether a given theory belongs to it. In the remainder of the paper we investigate combinations of commutative/monoidal theories with other theories. We show that finitary commutative/monoidal theories always satisfy the requirements for applying general methods developed for the combination of unification algorithms for disjoint equational theories. Then we study the adjunction of monoids of homomorphisms to commutative/monoidal theories. This is a special case of a non-disjoint combination, which has an algebraic counterpart in the corresponding semiring. By studying equations over this semiring, we identify a large subclass of commutative/monoidal theories that are of unification type zero. We also show with methods from linear algebra that unitary and finitary commutative/monoidal theories do not change their unification type when they are augmented by a finite monoid of homomorphisms, and how algorithms for the extended theory can be obtained from algorithms for the basic theory.
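A minimal sketch of the correspondence mentioned in the first sentence (illustrative, not taken from the paper): in the theory of Abelian monoids with binary operation \(+\), the unifiers of the problem \(x + x + y \stackrel{?}{=} z + z + z\) correspond to the solutions of the linear equation \(2x + y = 3z\) over the semiring of natural numbers, where a solution records how often a fresh atom occurs in the instantiation of each variable; a most general unifier can thus be read off from a generating set of the solution space. For idempotent Abelian monoids the corresponding semiring is the two-element Boolean semiring, and for Abelian groups it is the ring of integers.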
@article{ BaaderNutt-AAECC-96, author = {F. {Baader} and W. {Nutt}}, journal = {J. Applicable Algebra in Engineering, Communication and Computing}, number = {4}, pages = {309--337}, title = {Combination Problems for Commutative/Monoidal Theories: {H}ow Algebra Can Help in Equational Reasoning}, volume = {7}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File
Motivated by a chemical engineering application, we introduce an extension of the concept description language ALCN by symbolic number restrictions. This first extension turns out to have an undecidable concept satisfiability problem. For a restricted language, whose expressive power is sufficient for our application, we show that concept satisfiability is decidable.
@inproceedings{ BaSat95, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the Twelfth European Conference on Artificial Intelligence (ECAI-96)}, editor = {W. {Wahlster}}, note = {An extended version has appeared as Technical Report LTCS-96-03}, pages = {283--287}, publisher = {John Wiley \& Sons Ltd}, title = {Description Logics with Symbolic Number Restrictions}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File
In process engineering, as in many other application domains, the domain specific knowledge is far too complex to be described entirely using description logics. Hence this knowledge is often stored using an object-oriented system, which, because of its high expressiveness, provides only weak inference services. In particular, the process engineers at RWTH Aachen have developed a frame-like language for describing process models. In this paper, we investigate how the powerful inference services provided by a DL system can support the users of this frame-based system. In addition, we consider extensions of description languages that are necessary to represent the relevant process engineering knowledge.
@inproceedings{ BaaderSattler-DL-96, address = {Cambridge (Boston), MA, U.S.A.}, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the International Workshop on Description Logics}, publisher = {{AAAI} Press/The {MIT} Press}, title = {Knowledge Representation in Process Engineering}, year = {1996}, }
Abstract BibTeX Entry
Number restrictions are concept constructors that are available in almost all implemented description logic systems. However, even though there has lately been considerable effort on integrating expressive role constructors into description logics, the roles that may occur in number restrictions are usually of a very restricted type. Until now, only languages with number restrictions on atomic roles and inversion of atomic roles, or with number restrictions on intersection of atomic roles have been investigated in detail. In the present paper, we increase the expressive power of description languages by allowing for more complex roles in number restrictions. As role constructors, we consider composition of roles (which will be present in all our languages), and intersection, union and inversion of roles in different combinations. We will present one decidability result (for the basic language that extends ALC by number restrictions on roles with composition), and three undecidability results for three different extensions of the basic language.
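As a hypothetical example of the added expressiveness (the concrete syntax may differ from the one used in the paper): with composition inside number restrictions one can write a concept such as \(\mathsf{Device} \sqcap (\le 1\ \mathsf{has\text{-}part} \circ \mathsf{made\text{-}in})\), describing devices whose parts are all made in at most one country, i.e. at most one individual is reachable by first following \(\mathsf{has\text{-}part}\) and then \(\mathsf{made\text{-}in}\); such a constraint cannot be expressed with number restrictions on atomic roles alone.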
@inproceedings{ BaSat96b, author = {F. {Baader} and U. {Sattler}}, booktitle = {Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR-96)}, note = {An extended version has appeared as Technical Report LTCS-96-02}, publisher = {Morgan Kaufmann, Los Altos}, title = {Number Restrictions on Complex Roles in Description Logics}, year = {1996}, }
Abstract BibTeX Entry Free reprint
Most of the work on the combination of unification algorithms for the union of disjoint equational theories has been restricted to algorithms that compute finite complete sets of unifiers. Thus the developed combination methods usually cannot be used to combine decision procedures, i.e., algorithms that just decide solvability of unification problems without computing unifiers. In this paper we describe a combination algorithm for decision procedures that works for arbitrary equational theories, provided that solvability of so-called unification problems with constant restrictions—a slight generalization of unification problems with constants—is decidable for these theories. As a consequence of this new method, we can, for example, show that general A-unifiability, i.e., solvability of A-unification problems with free function symbols, is decidable. Here A stands for the equational theory of one associative function symbol. Our method can also be used to combine algorithms that compute finite complete sets of unifiers. Manfred Schmidt-Schauß' combination result, until now the most general result in this direction, can be obtained as a consequence of this fact. We also obtain the new result that unification in the union of disjoint equational theories is finitary, if general unification—i.e., unification of terms with additional free function symbols—is finitary in the single theories.
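A small illustrative instance of the kind of problem covered here (a made-up example): general \(A\)-unifiability asks, for a single associative symbol \(\cdot\), an additional free function symbol \(f\), and free constants \(a, b, c\), whether a mixed problem such as \(f(x \cdot y) \stackrel{?}{=} f(a \cdot (b \cdot c))\) is solvable; here \(x \mapsto a,\ y \mapsto b \cdot c\) is one solution. The combination method decomposes such mixed problems into pure problems over the single theories, and it is for these pure subproblems that solvability under additional constant restrictions has to be decided.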
@article{ BaaderSchulz-JSC-96, author = {F. {Baader} and K. U. {Schulz}}, journal = {J. Symbolic Computation}, pages = {211--243}, title = {Unification in the Union of Disjoint Equational Theories: {C}ombining Decision Procedures}, volume = {21}, year = {1996}, }
Abstract BibTeX Entry
The combination of formal systems and algorithms, the logical and algebraic background, as well as the general architecture of complex and interacting systems has recently become a very active research area. The first international workshop Frontiers of Combining Systems created a common forum for the different research activities on this topic in the fields of logic, computer science, and artificial intelligence. Its main intention was to stimulate an interdisciplinary discussion that focuses on different aspects of the combination problem. The volume contains research papers that cover the combination of logics, the combination of constraint-solving techniques and decision procedures, the combination of deductive systems, the integration of data structures into Constraint Logic Programming formalisms, and logic modelling of multi-agent systems. These problems are addressed on different conceptual levels: from the investigation of formal properties of combined systems using methods of logic and mathematics to the consideration of physical connections and communication languages relevant for the combination of software tools. Table of Contents: http://www-lti.informatik.rwth-aachen.de/Forschung/misc/frocos-toc.html
@book{ BaaderSchulz-FroCoS-96, editor = {Franz {Baader} and Klaus U. {Schulz}}, publisher = {Kluwer Academic Publishers}, series = {Proceedings of First International Workshop, Applied Logic Series 3}, title = {Frontiers of Combining Systems}, year = {1996}, }
1995
Abstract BibTeX Entry PDF File PS File
For a given TBox of a terminological KR system, the classification algorithm computes (a representation of) the subsumption hierarchy of all concepts introduced in the TBox. In general, this hierarchy does not contain sufficient information to derive all subsumption relationships between conjunctions of these concepts. We show how a method developed in the area of "formal concept analysis" for computing minimal implication bases can be used to determine a minimal representation of the subsumption hierarchy between conjunctions of concepts introduced in a TBox. For this purpose, the subsumption algorithm must be extended such that it yields (sufficient information about) a counterexample in cases where there is no subsumption relationship. For the concept language ALC, this additional requirement does not change the worst-case complexity of the subsumption algorithm. One advantage of the extended hierarchy is that it is a lattice, and not just a partial ordering.
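A hypothetical example of why the classified hierarchy of named concepts alone is not enough: a TBox may entail \(A \sqcap B \sqsubseteq C\), for instance because \(A \sqsubseteq \forall r.D\), \(B \sqsubseteq \exists r.\top\), and \(C \equiv \exists r.D\), even though no subsumption relationship holds between the named concepts \(A\), \(B\), and \(C\) themselves. Implications between conjunctions of this kind are exactly what the computed minimal implication base makes explicit.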
@inproceedings{ Baader-KRUSE-95, address = {Santa Cruz, USA}, author = {F. {Baader}}, booktitle = {Proceedings of the International Symposium on Knowledge Retrieval, Use, and Storage for Efficiency, KRUSE 95}, pages = {168--178}, title = {Computing a Minimal Representation of the Subsumption Lattice of all Conjunctions of Concepts Defined in a Terminology}, year = {1995}, }
BibTeX Entry
@article{ BaaderBuchheit+-KER-95, author = {F. {Baader} and M. {Buchheit} and M.A. {Jeusfeld} and W. {Nutt}}, journal = {The Knowledge Engineering Review}, number = {1}, pages = {73--76}, title = {Reasoning About Structured Objects: Knowledge Representation Meets Databases}, volume = {10}, year = {1995}, }
Abstract BibTeX Entry Free reprint
We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are only applied to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable. We describe an algorithm for computing extensions, and show how the inference procedures of terminological systems can be modified to give optimal support to this algorithm.
@article{ BaaderHollunderA-JAR-95, author = {F. {Baader} and B. {Hollunder}}, journal = {J. Automated Reasoning}, pages = {149--180}, title = {Embedding Defaults into Terminological Representation Systems}, volume = {14}, year = {1995}, }
Abstract BibTeX Entry Free reprint
In a recent paper we have proposed terminological default logic as a formalism which combines both means for structured representation of classes and objects, and for default inheritance of properties. The major drawback that terminological default logic inherits from general default logic is that it does not take precedence of more specific defaults over more general ones into account. This behaviour has already been criticized in the general context of default logic, but it is all the more problematic in the terminological case where the emphasis lies on the hierarchical organization of concepts. The present paper addresses the problem of modifying terminological default logic such that more specific defaults are preferred. We assume that the specificity ordering is induced by the hierarchical organization of concepts, which means that default information is not taken into account when computing priorities. It turns out that the existing approaches for expressing priorities between defaults do not seem to be appropriate for defaults with prerequisites. Therefore we shall consider an alternative approach for dealing with prioritization in the framework of Reiter's default logic. The formalism is presented in the general setting of default logic where priorities are given by an arbitrary partial ordering on the defaults. We shall exhibit some interesting properties of the new formalism, compare it with existing approaches, and describe an algorithm for computing extensions. In the terminological case, we thus obtain an automated default reasoning procedure that takes specificity into account.
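The standard motivating example, recalled here only for illustration: given the Reiter defaults \(\mathsf{Bird}(x) : \mathsf{Flies}(x) / \mathsf{Flies}(x)\) and \(\mathsf{Penguin}(x) : \neg\mathsf{Flies}(x) / \neg\mathsf{Flies}(x)\) together with the terminological axiom \(\mathsf{Penguin} \sqsubseteq \mathsf{Bird}\), plain default logic admits an extension in which a known penguin flies; a specificity-respecting formalism should prefer the more specific penguin default and only sanction the conclusion that the penguin does not fly.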
@article{ BaaderHollunderB-JAR-95, author = {F. {Baader} and B. {Hollunder}}, journal = {J. Automated Reasoning}, pages = {41--68}, title = {Priorities on Defaults with Prerequisites, and their Application in Treating Specificity in Terminological Default Logic}, volume = {15}, year = {1995}, }
Abstract BibTeX Entry
Terminological knowledge representation formalisms can be used to represent objective, time-independent facts about an application domain. Notions like belief, intentions, and time, which are essential for the representation of multi-agent environments, can only be expressed in a very limited way. For such notions, modal logics with possible worlds semantics provide a formally well-founded and well-investigated basis. This paper presents a framework for integrating modal operators into terminological knowledge representation languages. These operators can be used both inside concept expressions and in front of terminological and assertional axioms. We introduce syntax and semantics of the extended language, and show that satisfiability of finite sets of formulas is decidable, provided that all modal operators are interpreted in the basic logic K, and that the increasing domain assumption is used.
@inproceedings{ BaaderLaux-IJCAI-95, address = {Montr{\'e}al, Canada}, author = {F. {Baader} and A. {Laux}}, booktitle = {Proceedings of the 14th International Joint Conference on Artificial Intelligence}, editor = {C. {Mellish}}, pages = {808--814}, publisher = {Morgan Kaufmann}, title = {Terminological Logics with Modal Operators}, year = {1995}, }
BibTeX Entry
@article{ BaaderOhlbach-JANCL-95, author = {F. {Baader} and H.-J. {Ohlbach}}, journal = {J. Applied Non-Classical Logics}, pages = {153--197}, title = {A Multi-Dimensional Terminological Knowledge Representation Language}, volume = {5}, year = {1995}, }
Abstract BibTeX Entry Free reprint
Previous work on combination techniques considered the question of how to combine unification algorithms for disjoint equational theories \(E_1,\ldots,E_n\) in order to obtain a unification algorithm for the union \(E_1 \cup \ldots\cup E_n\) of the theories. Here we want to show that variants of this method may be used to decide solvability and ground solvability of disunification problems in \(E_1 \cup \ldots\cup E_n\). Our first result says that solvability of disunification problems in the free algebra of the combined theory \(E_1 \cup \ldots\cup E_n\) is decidable if solvability of disunification problems with linear constant restrictions in the free algebras of the theories \(E_i\) (\(i = 1,\ldots,n\)) is decidable. In order to decide ground solvability (i.e., solvability in the initial algebra) of disunification problems in \(E_1 \cup \ldots\cup E_n\) we have to consider a new kind of subproblem for the particular theories \(E_i\), namely solvability (in the free algebra) of disunification problems with linear constant restriction under the additional constraint that values of variables are not \(E_i\)-equivalent to variables. The correspondence between ground solvability and this new kind of solvability holds, (1) if one theory \(E_i\) is the free theory with at least one function symbol and one constant, or (2) if the initial algebras of all theories \(E_i\) are infinite. Our results can be used to show that the existential fragment of the theory of the (ground) term algebra modulo associativity of a finite number of function symbols is decidable; the same result follows for function symbols which are associative and commutative, or associative, commutative and idempotent.
@article{ BaaderSchulz-TCS-95, author = {F. {Baader} and K.U. {Schulz}}, journal = {Theoretical Computer Science B}, pages = {229--255}, title = {Combination Techniques and Decision Problems for Disunification}, volume = {142}, year = {1995}, }
Abstract BibTeX Entry
In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Lately, it has turned out that the prerequisite for this method to apply—namely that unification with so-called linear constant restrictions is decidable in the single theories—is equivalent to requiring decidability of the positive fragment of the first order theory of the equational theories. Thus, the combination method can also be seen as a tool for combining decision procedures for positive theories of free algebras defined by equational theories. The present paper uses this observation as the starting point of a more abstract, algebraic approach to formulating and solving the combination problem. Its contributions are twofold. As a new result, we describe an optimization and an extension of our combination method to the case of constraint solvers that also take relational constraints (such as ordering constraints) into account. The second contribution is a new proof method, which depends on abstract notions and results from universal algebra, as opposed to technical manipulations of terms (such as ordered rewriting, abstraction functions, etc.)
@inproceedings{ BaaderSchulz-RTA-95, address = {Kaiserslautern, Germany}, author = {F. {Baader} and K.U. {Schulz}}, booktitle = {Proceedings of the 6th International Conference on Rewriting Techniques and Applications}, pages = {352--366}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Combination of Constraint Solving Techniques: {A}n Algebraic Point of View}, volume = {914}, year = {1995}, }
Abstract BibTeX Entry
When combining languages for symbolic constraints, one is typically faced with the problem of how to treat "mixed" constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a "free amalgamated product" as a possible solution to the first problem. Subsequently, we define so-called simply-combinable structures (SC-structures). For SC-structures over disjoint signatures, a canonical amalgamation construction exists, which for the subclass of strong SC-structures yields the free amalgamated product. The combination technique of Baader and Schulz (CADE-92; RTA-95) can be used to combine constraint solvers for (strong) SC-structures over disjoint signatures into a solver for their (free) amalgamated product. In addition to term algebras modulo equational theories, the class of SC-structures contains many solution structures that have been used in constraint logic programming, such as the algebra of rational trees, feature structures, and domains consisting of hereditarily finite (well-founded or non-well-founded) nested sets and lists.
@inproceedings{ BaaderSchulz-CP-95, address = {Cassis, France}, author = {F. {Baader} and K.U. {Schulz}}, booktitle = {Proceedings of the International Conference on Principles and Practice of Constraint Programming, CP95}, pages = {380--397}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers}, volume = {976}, year = {1995}, }
Abstract BibTeX Entry
Term rewriting systems are an important tool for the automated treatment of equational axioms, since they make it possible to compute in equationally defined algebras. They are therefore used, for example, in algebraic specification, functional programming, and automated theorem proving. These lecture notes give a detailed introduction to the central notions, methods, and results in the area of term rewriting. Topics covered include properties of abstract reduction systems, string rewriting systems (semi-Thue systems), basic notions from universal algebra, confluence and termination of term rewriting systems, unification, Knuth-Bendix completion, unfailing completion, and rewriting modulo equational theories.
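As a minimal example of the kind of system treated in the notes (added here for illustration): the two rules \(0 + y \rightarrow y\) and \(s(x) + y \rightarrow s(x + y)\) form a terminating and confluent term rewriting system for addition on Peano numerals, so every ground term has a unique normal form; for instance \(s(0) + s(0) \rightarrow s(0 + s(0)) \rightarrow s(s(0))\).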
@book{ BaaderAlbayrak95, address = {Pontstr.~96, D-52062 Aachen}, author = {Franz {Baader} and Can Adam {Albayrak}}, note = {ISBN 3-86073-148-3}, publisher = {Verlag der Augustinus Buchhandlung}, series = {{A}achener {B}eitr{\"a}ge zur {I}nformatik}, title = {Termersetzungssysteme, Skript zur Vorlesung}, volume = {12}, year = {1995}, }
1994
Abstract BibTeX Entry
The concept description formalisms of existing terminological systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. This paper argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of terminological systems that is currently gaining in interest. It shows that including such restrictions into the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualifying number restrictions and of general terminological axioms.
@inproceedings{ BaaderBuchheit+-KI-94, address = {Saarbr\"ucken (Germany)}, author = {F. {Baader} and M. {Buchheit} and B. {Hollunder}}, booktitle = {Proceedings of the German AI Conference, {KI'94}}, pages = {51--62}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Cardinality Restrictions on Concepts}, volume = {861}, year = {1994}, }
Abstract BibTeX Entry PDF File PS File
We consider different methods of optimizing the classification process of terminological representation systems and evaluate their effect on three different types of test data. Though these techniques can probably be found in many existing systems, until now there has been no coherent description of these techniques and their impact on the performance of a system. One goal of this article is to make such a description available for future implementors of terminological systems. Building the optimizations that came off best into the KRIS system greatly enhanced its efficiency.
@article{ BaaderFranconi+-OptJournal-94, author = {F. {Baader} and E. {Franconi} and B. {Hollunder} and B. {Nebel} and H.J. {Profitlich}}, journal = {Applied Intelligence}, number = {2}, pages = {109--132}, title = {An Empirical Analysis of Optimization Techniques for Terminological Representation Systems}, volume = {4}, year = {1994}, }
BibTeX Entry
@incollection{ BaaderHollunder-LNAI-93, author = {F. {Baader} and B. {Hollunder}}, booktitle = {Foundations of Knowledge Representation and Reasoning}, editor = {G. {Lakemeyer}}, publisher = {Springer--Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Computing extensions of terminological default theories}, volume = {810}, year = {1994}, }
BibTeX Entry
@incollection{ BaaderSiekmann-Handbook-94, address = {Oxford, UK}, author = {F. {Baader} and J.H. {Siekmann}}, booktitle = {Handbook of Logic in Artificial Intelligence and Logic Programming}, editor = {D.M. {Gabbay} and C.J. {Hogger} and J.A. {Robinson}}, pages = {41--125}, publisher = {Oxford University Press}, title = {Unification Theory}, year = {1994}, }
1993
BibTeX Entry
@article{ Baader-JACM-93, author = {F. {Baader}}, journal = {J. ACM}, number = {3}, pages = {477--503}, title = {Unification in Commutative Theories, {H}ilbert's Basis Theorem and {G}r\"obner Bases}, volume = {40}, year = {1993}, }
BibTeX Entry PDF File PS File
@article{ BaaderBuerckert+-JLLI-93, author = {F. {Baader} and H.-J. {B\"urckert} and B. {Nebel} and W. {Nutt} and G. {Smolka}}, journal = {Journal of Logic, Language and Information}, pages = {1--18}, title = {On the Expressivity of Feature Logics with Negation, Functional Uncertainty, and Sort Equations}, volume = {2}, year = {1993}, }
BibTeX Entry
@inproceedings{ BaaderHanschke-GWAI-92, address = {Bonn (Germany)}, author = {F. {Baader} and P. {Hanschke}}, booktitle = {Proceedings of the 16th German AI-Conference, {GWAI-92}}, pages = {132--143}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Extensions of Concept Languages for a Mechanical Engineering Application}, volume = {671}, year = {1993}, }
BibTeX Entry
@inproceedings{ BaaderHollunder-IJCAI-93, address = {Chambery (France)}, author = {F. {Baader} and B. {Hollunder}}, booktitle = {Proceedings of the 13th International Joint Conference on Artificial Intelligence, {IJCAI}-93}, pages = {669--674}, title = {How to Prefer More Specific Defaults in Terminological Default Logic}, year = {1993}, }
BibTeX Entry
@inproceedings{ BaaderSchlechta-ECSQARU-93, address = {Granada (Spain)}, author = {F. {Baader} and K. {Schlechta}}, booktitle = {Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning under Uncertainty, {ECSQARU 93}}, pages = {9--16}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {A Semantics for Open Normal Defaults via a Modified Preferential Approach}, volume = {747}, year = {1993}, }
BibTeX Entry
@inproceedings{ BaaderSchulz-RTA-93, address = {Montreal (Canada)}, author = {F. {Baader} and K. {Schulz}}, booktitle = {Proceedings of the International Conference on Rewriting Techniques and Applications, {RTA 93}}, pages = {301--315}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Combination Techniques and Decision Problems for Disunification}, volume = {690}, year = {1993}, }
BibTeX Entry
@book{ BaaderSiekmannSnyderBU93:004, address = {Boston University, Boston, USA}, booktitle = {Proceedings of the {S}ixth {I}nternational {W}orkshop on {U}nification, {S}chlo{\ss} {D}agstuhl, July 29--31, 1992}, editor = {F. {Baader} and J. {Siekmann} and W. {Snyder}}, series = {{BU} Technical Report 93-004}, title = {Proceedings of the {S}ixth {I}nternational {W}orkshop on {U}nification, {S}chlo{\ss} {D}agstuhl, July 29--31, 1992}, year = {1993}, }
BibTeX Entry
@inproceedings{ OhlbachBaader-IJCAI-93, address = {Chambery (France)}, author = {H.-J. {Ohlbach} and F. {Baader}}, booktitle = {Proceedings of the 13th International Joint Conference on Artificial Intelligence, {IJCAI}-93}, pages = {690--695}, title = {A Multi-Dimensional Terminological Knowledge Representation Language}, year = {1993}, }
1992
BibTeX Entry
@article{ BaaderBuerckert+-KI-92, author = {F. {Baader} and H.-J. {B\"urckert} and B. {Hollunder} and A. {Laux} and W. {Nutt}}, journal = {{KI}}, pages = {23--33}, title = {{T}erminologische {L}ogiken}, volume = {{3/92}}, year = {1992}, }
BibTeX Entry
@inproceedings{ BaaderHollunder-KR-92, address = {Boston (USA)}, author = {F. {Baader} and B. {Hollunder}}, booktitle = {Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, {KR}-92}, pages = {306--317}, title = {Embedding Defaults into Terminological Representation Systems}, year = {1992}, }
BibTeX Entry
@inproceedings{ BaaderHollunder+-KR-92, address = {Boston (USA)}, author = {F. {Baader} and B. {Hollunder} and B. {Nebel} and H.J. {Profitlich} and E. {Franconi}}, booktitle = {Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, {KR}-92}, pages = {270--281}, title = {An Empirical Analysis of Optimization Techniques for Terminological Representation Systems, or: {M}aking {KRIS} get a move on}, year = {1992}, }
BibTeX Entry
@inproceedings{ BaaderSchulz-CADE-92, address = {Saratoga Springs (USA)}, author = {F. {Baader} and K. {Schulz}}, booktitle = {Proceedings of the 11th International Conference on Automated Deduction, {CADE-92}}, pages = {50--65}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification in the Union of Disjoint Equational Theories: {C}ombining Decision Procedures}, volume = {607}, year = {1992}, }
BibTeX Entry
@inproceedings{ BaaderSchulz-IWWERT-91, address = {Rouen (France)}, author = {F. {Baader} and K.U. {Schulz}}, booktitle = {Proceedings of the Second International Workshop on Word Equations and Related Topics, {IWWERT-91}}, pages = {23--42}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {General {A}- and {AX}-Unification via Optimized Combination Procedures}, volume = {677}, year = {1992}, }
BibTeX Entry
@book{ BaaderSiekmannSnyderUnif, address = {{IBFI} {GmbH}, {S}chlo{\ss} {D}agstuhl, Germany}, booktitle = {6th {W}orkshop on {U}nification}, editor = {F. {Baader} and J. {Siekmann} and W. {Snyder}}, series = {Dagstuhl-Seminar-Report 42}, title = {6th {W}orkshop on {U}nification}, year = {1992}, }
1991
BibTeX Entry
@inproceedings{ Baader-IJCAI-91, address = {Sydney (Australia)}, author = {F. {Baader}}, booktitle = {Proceedings of the 12th International Joint Conference on Artificial Intelligence, {IJCAI-91}}, pages = {446--451}, title = {Augmenting Concept Languages by Transitive Closure of Roles: An Alternative to Terminological Cycles}, year = {1991}, }
BibTeX Entry
@inproceedings{ BaaderA-IWWERT-91, address = {T\"ubingen (Germany)}, author = {F. {Baader}}, booktitle = {Proceedings of the First International Workshop on Word Equations and Related Topics, {IWWERT-90}}, pages = {151--170}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification Theory}, volume = {572}, year = {1991}, }
BibTeX Entry
@inproceedings{ BaaderB-IWWERT-91, address = {T\"ubingen (Germany)}, author = {F. {Baader}}, booktitle = {Proceedings of the First International Workshop on Word Equations and Related Topics, {IWWERT-90}}, pages = {210--230}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification in Varieties of Completely Regular Semigroups}, volume = {572}, year = {1991}, }
BibTeX Entry
@inproceedings{ Baader-RTA-91, address = {Como (Italy)}, author = {F. {Baader}}, booktitle = {Proceedings of the 4th International Conference on Rewriting Techniques and Applications, {RTA} 91}, pages = {86--97}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification, Weak Unification, Upper Bound, Lower Bound and Generalization Problems}, volume = {488}, year = {1991}, }
BibTeX Entry
@inproceedings{ BaaderHanschke-IJCAI-91, address = {Sydney (Australia)}, author = {F. {Baader} and P. {Hanschke}}, booktitle = {Proceedings of the 12th International Joint Conference on Artificial Intelligence, {IJCAI-91}}, pages = {452--457}, title = {A Scheme for Integrating Concrete Domains into Concept Languages}, year = {1991}, }
Abstract BibTeX Entry PDF File PS File ©Springer-Verlag
The knowledge representation system KL-ONE first appeared in 1977. Since then many systems based on the idea of KL-ONE have been built. The formal model-theoretic semantics which has been introduced for KL-ONE languages provides means for investigating soundness and completeness of inference algorithms. It turned out that almost all implemented KL-ONE systems such as BACK, KL-TWO, LOOM, NIKL, and SBONE use sound but incomplete algorithms. Until recently, sound and complete algorithms for the basic reasoning facilities in these systems such as consistency checking, subsumption checking (classification) and realization were only known for rather trivial languages. However, in the last two years concept languages (term subsumption languages) have been thoroughly investigated. As a result of these investigations it is now possible to provide sound and complete algorithms for relatively large concept languages. In this paper we describe KRIS, which is an implemented prototype of a KL-ONE system where all reasoning facilities are realized by sound and complete algorithms. This system can be used to investigate the behaviour of sound and complete algorithms in practical applications. Hopefully, this may shed new light on the usefulness of complete algorithms for practical applications, even if their worst case complexity is NP or worse. KRIS provides a very expressive concept language, an assertional language, and sound and complete algorithms for reasoning. We have chosen the concept language such that it contains most of the constructs used in KL-ONE systems, with the obvious restriction that the interesting inferences such as consistency checking, subsumption checking, and realization are decidable. The assertional language is similar to languages normally used in such systems. The reasoning component of KRIS depends on sound and complete algorithms for reasoning facilities such as consistency checking, subsumption checking, retrieval, and querying.
@inproceedings{ BaaderHollunder-PDK-91, address = {Kaiserslautern (Germany)}, author = {F. {Baader} and B. {Hollunder}}, booktitle = {Proceedings of the First International Workshop on Processing Declarative Knowledge}, pages = {67--85}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {A Terminological Knowledge Representation System with Complete Inference Algorithms}, volume = {572}, year = {1991}, }
BibTeX Entry
@article{ BaaderHollunder-SIGART-91, author = {F. {Baader} and B. {Hollunder}}, journal = {{ACM} {SIGART} Bulletin}, pages = {8--14}, title = {{KRIS}: Knowledge Representation and Inference System, System Description}, volume = {2}, year = {1991}, }
BibTeX Entry
@inproceedings{ BaaderNutt-RTA-91, address = {Como (Italy)}, author = {F. {Baader} and W. {Nutt}}, booktitle = {Proceedings of the 4th International Conference on Rewriting Techniques and Applications, {RTA} 91}, pages = {124--135}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Adding Homomorphisms to Commutative/Monoidal Theories, or: {H}ow Algebra Can Help in Equational Unification}, volume = {488}, year = {1991}, }
BibTeX Entry
@inproceedings{ HollunderBaader-KR-91, address = {Boston (USA)}, author = {B. {Hollunder} and F. {Baader}}, booktitle = {Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, {KR}-91}, pages = {335--346}, title = {Qualifying Number Restrictions in Concept Languages}, year = {1991}, }
1990
BibTeX Entry
@inproceedings{ Baader-ECAI-90, address = {Stockholm (Schweden)}, author = {F. {Baader}}, booktitle = {Proceedings of the 9th European Conference on Artificial Intelligence, {ECAI-90}}, pages = {53--58}, title = {A Formal Definition for Expressive Power of Knowledge Representation Languages}, year = {1990}, }
BibTeX Entry
@inproceedings{ Baader-CADE-90, address = {Kaiserslautern (Germany)}, author = {F. {Baader}}, booktitle = {Proceedings of the 10th International Conference on Automated Deduction, {CADE}-90}, pages = {396--410}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Rewrite Systems for Varieties of Semigroups}, volume = {488}, year = {1990}, }
BibTeX Entry
@inproceedings{ Baader-AAAI-90, address = {Boston (USA)}, author = {F. {Baader}}, booktitle = {Proceedings of the Eighth National Conference on Artificial Intelligence, {AAAI-90}}, pages = {621--626}, title = {Terminological Cycles in {KL}-{ONE}-based Knowledge Representation Languages}, year = {1990}, }
BibTeX Entry
@inproceedings{ BaaderBuerckert+-SCL-90, address = {Brussels (Belgien)}, author = {F. {Baader} and H.-J. {B\"urckert} and B. {Hollunder} and W. {Nutt} and J. {Siekmann}}, booktitle = {Proceedings of the Symposium on Computational Logic}, pages = {177--201}, title = {Concept Logic}, year = {1990}, }
1989
BibTeX Entry
@inproceedings{ Baader-RTA-89, address = {Chapel Hill (USA)}, author = {F. {Baader}}, booktitle = {Proceedings of the 3rd International Conference on Rewriting Techniques and Applications, {RTA} 89}, pages = {2--14}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Characterizations of Unification Type Zero}, volume = {355}, year = {1989}, }
BibTeX Entry
@inproceedings{ Baader-CTCS-89, address = {Manchester (UK)}, author = {F. {Baader}}, booktitle = {Proceedings of the Conference on Category Theory and Computer Science}, pages = {273--299}, publisher = {Springer--Verlag}, series = {Lecture Notes in Computer Science}, title = {Unification Properties of Commutative Theories: A Categorical Treatment}, volume = {389}, year = {1989}, }
BibTeX Entry Free reprint
@article{ Baader-JSC-89, author = {F. {Baader}}, journal = {J. Symbolic Computation}, pages = {479--497}, title = {Unification in Commutative Theories}, volume = {8}, year = {1989}, }
1988
BibTeX Entry Free reprint
@article{ Baader-IPL-88, author = {F. {Baader}}, journal = {Information Processing Letters}, pages = {91--93}, title = {A Note on Unification Type Zero}, volume = {27}, year = {1988}, }
BibTeX Entry Free reprint
@article{ BaaderBuettner-TCS-88, author = {F. {Baader} and W. {B{\"u}ttner}}, journal = {J. Theoretical Computer Science}, pages = {345--352}, title = {Unification in Commutative Idempotent Monoids}, volume = {56}, year = {1988}, }
1987
BibTeX Entry Free reprint
@article{ Baader-SemigroupForum-87, author = {F. {Baader}}, journal = {Semigroup Forum}, pages = {127--145}, title = {Unification in Varieties of Idempotent Semigroups}, volume = {36}, year = {1987}, }
1986
BibTeX Entry Free reprint
@article{ Baader-JAR-86, author = {F. {Baader}}, journal = {J. Automated Reasoning}, pages = {283--286}, title = {The Theory of Idempotent Semigroups is of Unification Type Zero}, volume = {2}, year = {1986}, }
Technische Berichte
2024
Abstract BibTeX Entry PDF File DOI
Unification has been introduced in Description Logic (DL) as a means to detect redundancies in ontologies. In particular, it was shown that testing unifiability in the DL EL is an NP-complete problem, and this result has been extended in several directions. Surprisingly, it turned out that the complexity increases to PSpace if one disallows the use of the top concept in concept descriptions. Motivated by features of the medical ontology SNOMED CT, we extend this result to a setting where the top concept is disallowed, but there is a background ontology consisting of restricted forms of concept and role inclusion axioms. We are able to show that the presence of such axioms does not increase the complexity of unification without top, i.e., testing for unifiability remains a PSpace-complete problem.
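A small hypothetical example of the kind of unification problem meant here (the concept and role names are invented): the \(\mathcal{EL}\) concept patterns \(\exists\mathsf{finding}.(X \sqcap \exists\mathsf{severity}.\mathsf{Severe})\) and \(\exists\mathsf{finding}.(\mathsf{Injury} \sqcap Y)\) are unifiable, e.g. by the substitution \(X \mapsto \mathsf{Injury}\), \(Y \mapsto \exists\mathsf{severity}.\mathsf{Severe}\), under which both patterns become equivalent; detecting such unifiability is what exposes that two differently written concepts may describe the same notion, and the results above concern the complexity of this test when \(\top\) is disallowed and a restricted background ontology is present.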
@techreport{ BaFe-LTCS-24-01, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, doi = {https://doi.org/10.25368/2024.34}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {24-01}, title = {Unification in the Description Logic {$\mathcal{ELH}_{\mathcal{R}^+}$} without the Top Concept modulo Cycle-Restricted Ontologies (Extended Version)}, type = {LTCS-Report}, year = {2024}, }
Abstract BibTeX Entry PDF File DOI Conference Article
Errors in knowledge bases (KBs) written in a Description Logic (DL) are usually detected when reasoning derives an inconsistency or a consequence that does not hold in the application domain modelled by the KB. Whereas classical repair approaches produce maximal subsets of the KB not implying the inconsistency or unwanted consequence, optimal repairs maximize the consequence sets. In this paper, we extend previous results on how to compute optimal repairs from the DL \(\mathcal{EL}\) to its extension \(\mathcal{EL}^{\bot}\), which in contrast to \(\mathcal{EL}\) can express inconsistency. The problem of how to deal with inconsistency in the context of optimal repairs was addressed previously, but in a setting where the (fixed) terminological part of the KB must satisfy a restriction on cyclic dependencies. Here, we consider a setting where this restriction is not required. We also show how the notion of optimal repairs obtained this way can be used in inconsistency- and error-tolerant reasoning.
@techreport{ BaKrNu-LTCS-24-02, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2024.6}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {24-02}, title = {Inconsistency- and Error-Tolerant Reasoning w.r.t.\ Optimal Repairs of $\mathcal{EL}^{\bot}$ Ontologies (Extended Version)}, type = {LTCS-Report}, year = {2024}, }
Abstract BibTeX Entry PDF File DOI
Removing unwanted consequences from a knowledge base has been investigated in belief change under the name contraction and is called repair in ontology engineering. Simple repair and contraction approaches based on removing statements from the knowledge base (respectively called belief base contractions and classical repairs) have the disadvantage that they are syntax-dependent and may remove more consequences than necessary. Belief set contractions do not have these problems, but may result in belief sets that have no finite representation if one works with logics that are not fragments of propositional logic. Similarly, optimal repairs, which are syntax-independent and maximize the retained consequences, may not exist. In this paper, we want to leverage advances in characterizing and computing optimal repairs of ontologies based on the description logic EL to obtain contraction operations that combine the advantages of belief set and belief base contractions. The basic idea is to employ, in the partial meet contraction approach, optimal repairs instead of optimal classical repairs as remainders. We introduce this new approach in a very general setting, and prove a characterization theorem that relates the obtained contractions with well-known postulates. Then, we consider several interesting instances, not only in the standard repair/contraction setting where one wants to get rid of a consequence, but also in other settings such as variants of forgetting in propositional and description logic. We also show that classical belief set contraction is an instance of our approach.
@techreport{ BaWa-LTCS-24-03, address = {Dresden, Germany}, author = {Franz {Baader} and Renata {Wassermann}}, doi = {https://doi.org/10.25368/2024.122}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {24-03}, title = {Contractions Based on Optimal Repairs (Extended Version)}, type = {LTCS-Report}, year = {2024}, }
2023
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in Description Logic (DL) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. The primary research goal in this context was to find restrictions on the concrete domain such that its integration into certain DLs preserves decidability or tractability. In this paper, we investigate the abstract expressive power of logics extended with concrete domains, namely which classes of first-order interpretations can be expressed using these logics. In the first part of the paper, we show that, under natural conditions on the concrete domain \(\mathfrak{D}\) (which also play a role for decidability), extensions of first-order logic (\(\texttt{FOL}\)) or \(\mathcal{ALC}\) with \(\mathfrak{D}\) share important formal properties with \(\texttt{FOL}\), such as the compactness and the Löwenheim-Skolem property. Nevertheless, their abstract expressive power need not be contained in that of \(\texttt{FOL}\). In the second part of the paper, we investigate whether finitely bounded homogeneous structures, which preserve decidability if employed as concrete domains, can be used to express certain universal first-order sentences, which then could be added to DL knowledge bases without destroying decidability. We show that this requires rather strong conditions on said sentences or an extended scheme for integrating the concrete domain that leads to undecidability.
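For orientation, a typical (hypothetical) use of a concrete domain: with \(\mathfrak{D}\) based on the rational numbers with comparison predicates, one can write an \(\mathcal{ALC}(\mathfrak{D})\) concept such as \(\mathsf{Person} \sqcap \exists\mathsf{age}.{<_{18}}\), where \(\mathsf{age}\) is a feature mapping abstract individuals to numbers and \({<_{18}}\) is a unary predicate of \(\mathfrak{D}\); the questions studied above concern which classes of first-order interpretations such mixed concepts and sentences can define, and whether adding them preserves properties like compactness.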
@techreport{ BaBo-LTCS-23-02, address = {Dresden, Germany}, author = {Franz {Baader} and Filippo {De Bortoli}}, doi = {https://doi.org/10.25368/2024.240}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaBo-LTCS-23-02}}, number = {23-02}, title = {{On the Abstract Expressive Power of Description Logics with Concrete Domains (Extended Version)}}, type = {LTCS-Report}, year = {2023}, }
Abstract BibTeX Entry PDF File DOI Conference Article Erratum
Ontologies based on Description Logics may contain errors, which are usually detected when reasoning produces consequences that follow from the ontology, but do not hold in the modelled application domain. In previous work, we have introduced repair approaches for \(\mathcal{EL}\) ontologies that are optimal in the sense that they preserve a maximal amount of consequences. In this paper, we will, on the one hand, review these approaches, but with an emphasis on motivation rather than on technical details. On the other hand, we will describe new results that address the problems that optimal repairs may become very large or need not even exist unless strong restrictions on the terminological part of the ontology apply. We will show how one can deal with these problems by introducing concise representations of optimal repairs.
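A small example of the difference between classical and optimal repairs (hypothetical, for illustration only): if an ABox contains the assertion \(\exists r.(A \sqcap B)(a)\) and the consequence \(\exists r.A(a)\) is unwanted, a classical repair can only delete the whole assertion and thereby also loses \(\exists r.B(a)\), whereas an optimal repair may replace the assertion by the weaker \(\exists r.B(a)\), removing only the unwanted consequence and what entails it.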
@techreport{ BaKoKr-LTCS-23-03, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2023.121}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {23-03}, title = {Optimal Repairs in the Description Logic $\mathcal{EL}$ Revisited (Extended Version)}, type = {LTCS-Report}, year = {2023}, }
2022
Abstract BibTeX Entry PDF File DOI Conference Article
Errors in Description Logic (DL) ontologies are often detected when a reasoner computes unwanted consequences. The question is then how to repair the ontology such that the unwanted consequences no longer follow, but as many of the other consequences as possible are preserved. The problem of computing such optimal repairs was addressed in our previous work in the setting where the data (expressed by an ABox) may contain errors, but the schema (expressed by an \(\mathcal{EL}\) TBox) is assumed to be correct. Actually, we consider a generalization of ABoxes called quantified ABoxes (qABoxes) both as input for and as result of the repair process. Using qABoxes for repair allows us to retain more information, but the disadvantage is that standard DL systems do not accept qABoxes as input. This raises the question, investigated in the present paper, whether and how one can obtain optimal repairs if one restricts the output of the repair process to being ABoxes. In general, such optimal ABox repairs need not exist. Our main contribution is that we show how to decide the existence of optimal ABox repairs in exponential time, and how to compute all such repairs in case they exist.
@techreport{ BaKoKrNu-LTCS-22-01, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2022.65}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {22-01}, title = {Optimal ABox Repair w.r.t.\ Static {$\mathcal{EL}$} TBoxes: from Quantified ABoxes back to ABoxes (Extended Version)}, type = {LTCS-Report}, year = {2022}, }
Abstract BibTeX Entry PDF File DOI Conference Article Addendum
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an \(\mathcal{EL}\) instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static \(\mathcal{EL}\) TBox.
Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to \(\mathcal{EL}\), which yields a fragment of the well-known DL Horn-\(\mathcal{SROIQ}\). The ideas underlying our repair approach still apply to this DL, though several non-trivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-\(\mathcal{ALCOI}\) TBoxes with regular role inclusions.
@techreport{ BaKr-LTCS-22-02, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.131}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {22-02}, title = {Pushing Optimal ABox Repair from {$\mathcal{EL}$} Towards More Expressive Horn-DLs (Extended Version)}, type = {LTCS-Report}, year = {2022}, }
2021
Abstract BibTeX Entry PDF File DOI Conference Article
The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the light-weight DL \(\mathcal{EL}\). In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an \(\mathcal{EL}\) TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
@techreport{ BaKoKrNu-LTCS-21-01, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2022.64}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {21-01}, title = {Computing Optimal Repairs of Quantified ABoxes w.r.t. Static {$\mathcal{EL}$} TBoxes (Extended Version)}, type = {LTCS-Report}, year = {2021}, }
Abstract BibTeX Entry PDF File DOI
Unification in the Description Logic (DL) FL0 is known to be ExpTime-complete, and of unification type zero. We investigate in this paper whether a lower complexity of the unification problem can be achieved by either syntactically restricting the role depth of concepts or semantically restricting the length of role paths in interpretations. We show that the answer to this question depends on whether the number formulating such a restriction is encoded in unary or binary: for unary coding, the complexity drops from ExpTime to PSpace. As an auxiliary result, which is however also of interest in its own right, we prove a PSpace-completeness result for a depth-restricted version of the intersection emptiness problem for deterministic root-to-frontier tree automata. Finally, we show that the unification type of FL0 improves from type zero to unitary (finitary) for unification without (with) constants in the restricted setting.
@techreport{ BaGiRo21, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Maryam {Rostamigiv}}, doi = {https://doi.org/10.25368/2022.266}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {21-02}, title = {Restricted Unification in the {DL} {$\mathcal{FL}_0$} (Extended Version)}, type = {LTCS-Report}, year = {2021}, }
Abstract BibTeX Entry PDF File DOI Conference Article
We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information.
@techreport{ BaKoKrNuPe-LTCS-21-04, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, doi = {https://doi.org/10.25368/2022.268}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {21-04}, title = {{Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t.\ a Static Cycle-Restricted $\mathcal{EL}$ TBox (Extended Version)}}, type = {LTCS-Report}, year = {2021}, }
2020
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@techreport{ BaRy-LTCS-20-01, address = {Dresden, Germany}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.25368/2022.259}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-01}, title = {Using Model-Theory to Find $\omega$-Admissible Concrete Domains}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI
The word problem for a finite set of ground identities is known to be decidable in polynomial time using congruence closure, and this is also the case if some of the function symbols are assumed to be commutative. We show that decidability in P is preserved if we add the assumption that certain function symbols \(f\) are extensional in the sense that \(f(s_1,\ldots,s_n) = f(t_1,\ldots,t_n)\) implies \(s_1 = t_1, \ldots, s_n = t_n\). In addition, we investigate a variant of extensionality that is more appropriate for commutative function symbols, but which raises the complexity of the word problem to coNP.
@techreport{ BaKa-LTCS-20-02, address = {Dresden, Germany}, author = {Franz {Baader} and Deepak {Kapur}}, doi = {https://doi.org/10.25368/2022.260}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-02}, title = {Deciding the Word Problem for Ground Identities with Commutative and Extensional Symbols}, type = {LTCS-Report}, year = {2020}, }
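The following is a naive, purely illustrative sketch of the kind of reasoning discussed in the report above, not the authors' procedure: congruence closure over ground terms, extended with a downward propagation rule for function symbols declared extensional. The term encoding and function names are invented for the example.

    # Naive sketch (illustration only): congruence closure over ground terms,
    # extended with a downward rule for extensional symbols, i.e.
    # f(s1,...,sn) = f(t1,...,tn) implies s1 = t1, ..., sn = tn.
    # Terms are nested tuples: ('f', ('a',)) stands for f(a).
    from itertools import combinations

    def subterms(term, acc):
        acc.add(term)
        for arg in term[1:]:
            subterms(arg, acc)

    def decide_word_problem(identities, goal, extensional=frozenset()):
        terms = set()
        for s, t in identities + [goal]:
            subterms(s, terms)
            subterms(t, terms)
        parent = {u: u for u in terms}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(x, y):
            rx, ry = find(x), find(y)
            if rx == ry:
                return False
            parent[rx] = ry
            return True

        for s, t in identities:
            union(s, t)
        changed = True
        while changed:
            changed = False
            for s, t in combinations(terms, 2):
                if s[0] != t[0] or len(s) != len(t):
                    continue
                if find(s) == find(t):
                    if s[0] in extensional:  # extensionality: equal terms force equal arguments
                        for a, b in zip(s[1:], t[1:]):
                            changed |= union(a, b)
                elif all(find(a) == find(b) for a, b in zip(s[1:], t[1:])):
                    union(s, t)              # congruence: equal arguments force equal terms
                    changed = True
        return find(goal[0]) == find(goal[1])

    # From f(a) = f(b) with f extensional, a = b follows:
    a, b = ('a',), ('b',)
    print(decide_word_problem([(('f', a), ('f', b))], (a, b), extensional={'f'}))  # True

Without marking f as extensional, the same call returns False, since a = b does not follow by congruence closure alone.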
Abstract BibTeX Entry PDF File DOI Conference Article
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterization of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
@techreport{ BaKrNuPe-LTCS-20-08, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, doi = {https://doi.org/10.25368/2022.263}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-08}, title = {Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI Conference Article
In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.
@techreport{ BaKrNuPe-LTCS-20-09, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, doi = {https://doi.org/10.25368/2022.264}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-09}, title = {Computing Safe Anonymisations of Quantified ABoxes w.r.t.\ $\mathcal{E\!L}$ Policies (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in Description Logics (DLs) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. To retain decidability when integrating a concrete domain into a decidable DL, the domain must satisfy quite strong restrictions. In previous work, we have analyzed the most prominent such condition, called omega-admissibility, from an algebraic point of view. This provided us with useful algebraic tools for proving omega-admissibility, which allowed us to find new examples for concrete domains whose integration leaves the prototypical expressive DL ALC decidable. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. In the present paper, we investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs.
@techreport{ BaRy-LTCS-20-10, address = {Dresden, Germany}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.25368/2022.265}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-10}, title = {An Algebraic View on p-Admissible Concrete Domains for Lightweight Description Logics (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File DOI Conference Article
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@techreport{ BaKrNu-LTCS-19-01, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2022.250}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNu-LTCS-19-01}}, number = {19-01}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
We investigate the impact that general concept inclusions and role-value maps have on the complexity and decidability of reasoning in the Description Logic FL0. On the one hand, we give a more direct proof for ExpTime-hardness of subsumption w.r.t. general concept inclusions in FL0. On the other hand, we determine restrictions on role-value maps that ensure decidability of subsumption, but we also show undecidability for the cases where these restrictions are not satisfied.
@techreport{ BaTh-LTCS-19-08, address = {Dresden, Germany}, author = {Franz {Baader} and Cl\'ement {Th\'eron}}, doi = {https://doi.org/10.25368/2022.257}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaTh-LTCS-19-08}}, number = {19-08}, title = {Role-Value Maps and General Concept Inclusions in the Description Logic {$\mathcal{FL}_0$}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing one to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0–1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@techreport{ BaBo-LTCS-19-09, address = {Dresden, Germany}, author = {Franz {Baader} and Filippo {De Bortoli}}, doi = {https://doi.org/10.25368/2022.258}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaBo-LTCS-19-09}}, number = {19-09}, title = {{On the Complexity and Expressiveness of Description Logics with Counting}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.
@techreport{ XFBBZ-LTCS-19-13, address = {Dresden, Germany}, author = {Chao {Xu} and Walter {Forkel} and Stefan {Borgwardt} and Franz {Baader} and Beihai {Zhou}}, doi = {https://doi.org/10.25368/2023.224}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {19-13}, title = {Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
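Purely as a hypothetical illustration of the final mapping step described above, the sketch below turns an already-annotated eligibility criterion into a conjunctive-query string; the annotation format, predicate names, and query syntax are invented here and are not those of the prototype system.

    # Hypothetical sketch of the final mapping step only: the annotation format,
    # predicate names, and query syntax below are invented for illustration and
    # are not those of the prototype system described above.
    def criterion_to_query(annotation):
        atoms = [f"{concept}(x)" for concept in annotation["concepts"]]
        for attribute, (op, value) in annotation.get("comparisons", {}).items():
            atoms.append(f"{attribute}(x, y_{attribute}), y_{attribute} {op} {value}")
        return "q(x) <- " + ", ".join(atoms)

    criterion = {"concepts": ["Patient", "Type2Diabetes"],
                 "comparisons": {"age": (">=", 18)}}
    print(criterion_to_query(criterion))
    # q(x) <- Patient(x), Type2Diabetes(x), age(x, y_age), y_age >= 18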
2018
Abstract BibTeX Entry PDF File DOI Conference Article
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern-1.618mu L}\).
@techreport{ BaKrNuPe-LTCS-18-01, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, doi = {https://doi.org/10.25368/2022.238}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNuPe-LTCS-18-01}}, number = {18-01}, title = {{Repairing Description Logic Ontologies by Weakening Axioms}}, type = {LTCS-Report}, year = {2018}, }
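To make the difference between deletion and weakening concrete, consider a hypothetical example (the concrete axiom is invented for illustration and is not taken from the report): if the only axiom is \(A \sqsubseteq \exists r.(B \sqcap C)\) and the unwanted consequence is \(A \sqsubseteq \exists r.C\), then deleting the axiom also destroys the presumably wanted consequence \(A \sqsubseteq \exists r.B\), whereas weakening the axiom to \(A \sqsubseteq \exists r.B\) removes the unwanted consequence while keeping this wanted one.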
Abstract BibTeX Entry PDF File DOI
Although being quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.
@techreport{ BaFePe-LTCS-18-04, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Maximilian {Pensel}}, doi = {https://doi.org/10.25368/2022.240}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {18-04}, title = {Standard and Non-Standard Inferences in the Description Logic $\mathcal{FL}_0$ Using Tree Automata}, type = {LTCS-Report}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File DOI
We introduce a new description logic that extends the well-known logic ALCQ by allowing the statement of constraints on role successors that are more general than the qualified number restrictions of ALCQ. To formulate these constraints, we use the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA), in which one can express Boolean combinations of set constraints and numerical constraints on the cardinalities of sets. Though our new logic is considerably more expressive than ALCQ, we are able to show that the complexity of reasoning in it is the same as in ALCQ, both without and with TBoxes.
@techreport{ Baad-LTCS-17-02, address = {Dresden, Germany}, author = {Franz {Baader}}, doi = {https://doi.org/10.25368/2022.232}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {17-02}, title = {Concept Descriptions with Set Constraints and Cardinality Constraints}, type = {LTCS-Report}, year = {2017}, }
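As a rough illustration of the kind of constraint on role successors mentioned above (a hand-evaluated toy example with invented data, not the reasoning procedure of the report), one can evaluate a Boolean combination of a cardinality constraint and a set constraint over the role successors of a fixed individual:

    # Hand-evaluated toy example of a successor constraint (invented data; not
    # the decision procedure of the report): a Boolean combination of a
    # cardinality constraint and a set constraint on the r-successors of one
    # individual.
    r_successors = {"x1", "x2", "x3"}      # successors of the individual via role r
    a_instances  = {"x1", "x2"}            # those elements belonging to concept A
    b_instances  = {"x3", "x4"}            # those elements belonging to concept B

    # "at least two r-successors are in A, and every r-successor is in A or B"
    satisfied = (len(r_successors & a_instances) >= 2
                 and r_successors <= (a_instances | b_instances))
    print(satisfied)   # True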
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@techreport{ BaBoKoOzTh-LTCS-17-03, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.233}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-03}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@techreport{ BaBL-LTCS-17-04, address = {Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.234}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-04}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@techreport{ BaKoTu-LTCS-17-05, address = {Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.235}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {See \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-05}, title = {Using Ontologies to Query Probabilistic Numerical Data (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
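A minimal illustration of the kind of probabilistic numerical fact discussed above (the sensor model and threshold are invented; this is not the query answering procedure of the report): a value given by a continuous distribution, and the probability that it satisfies a numerical comparison.

    # Invented sensor model (not the report's query answering procedure): the
    # probability that a continuously distributed value satisfies a comparison.
    from statistics import NormalDist

    temperature = NormalDist(mu=37.5, sigma=0.4)   # uncertain sensor reading
    prob_fever = 1 - temperature.cdf(38.0)         # P(temperature > 38)
    print(round(prob_fever, 3))                    # about 0.106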
Abstract BibTeX Entry PDF File DOI
We investigate the use of disunification in \(\mathcal{EL}\) for ontology generation. In particular, we study how disunification can construct new SNOMED CT concepts when given information about the position of the new concept in the concept hierarchy. To evaluate our approach, we randomly select concept names from SNOMED CT, remove their definitions, and formulate disunification problems that recover these definitions from information about the parents and siblings of the removed concept names. Our evaluation shows that this approach works well for some subhierarchies of SNOMED CT. Overall, one can reconstruct about 6% of all SNOMED CT concept names in this way.
@techreport{ BaBM-LTCS-17-07, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.237}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See https://tu-dresden.de/ing/informatik/thi/lat/forschung/technische-berichte.}, number = {17-07}, title = {Constructing {SNOMED CT} Concepts via Disunification}, type = {LTCS-Report}, year = {2017}, }
2016
Abstract BibTeX Entry PDF File DOI
In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<, \le, >, \ge\}\) then collect all the individuals that belong to \(C\) with degree \(\bowtie t\). We have then investigated the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to \(\tau\mathcal{EL}(deg)\) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
@techreport{ BaFe-LTCS-16-02, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, doi = {https://doi.org/10.25368/2022.226}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-02}, title = {Extending the Description Logic $\tau\mathcal{EL}(deg)$ with Acyclic TBoxes}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Unification with constants modulo the theory of an associative (A), commutative (C) and idempotent (I) binary function symbol with a unit (U) corresponds to solving a very simple type of set equations. It is well-known that solvability of systems of such equations can be decided in polynomial time by reducing it to satisfiability of propositional Horn formulae. Here we introduce a modified version of this problem by no longer requiring all equations to be completely solved, but allowing for a certain number of violations of the equations. We introduce three different ways of counting the number of violations, and investigate the complexity of the respective decision problem, i.e., the problem of deciding whether there is an assignment that solves the system with at most \(\ell\) violations for a given threshold value \(\ell\).
@techreport{ BaMaOk-LTCS-16-03, address = {Dresden, Germany}, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.227}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-03}, title = {Approximately Solving Set Equations}, type = {LTCS-Report}, year = {2016}, }
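The brute-force sketch below is only meant to illustrate the notion of solving a system of set equations with at most a given number of violations; it enumerates all assignments and is exponential, so it has nothing to do with the polynomial reductions studied in the report. The encoding of equations is invented for the example.

    # Exponential brute-force sketch, for illustration only; the encoding of
    # equations is invented. Each side of an equation is a pair
    # (constants, variables) and denotes the union of its members.
    from itertools import combinations, product

    def powerset(elements):
        elements = list(elements)
        return [frozenset(c) for r in range(len(elements) + 1)
                for c in combinations(elements, r)]

    def evaluate(side, assignment):
        constants, variables = side
        value = set(constants)
        for x in variables:
            value |= assignment[x]
        return value

    def solve_with_violations(equations, constants, variables, max_violations):
        for choice in product(powerset(constants), repeat=len(variables)):
            assignment = dict(zip(variables, choice))
            violated = sum(1 for lhs, rhs in equations
                           if evaluate(lhs, assignment) != evaluate(rhs, assignment))
            if violated <= max_violations:
                return assignment, violated
        return None

    # X u {a} = Y   and   X = {b}   is solved by   X = {b}, Y = {a, b}:
    equations = [((["a"], ["X"]), ([], ["Y"])), (([], ["X"]), (["b"], []))]
    print(solve_with_violations(equations, ["a", "b"], ["X", "Y"], max_violations=0))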
Abstract BibTeX Entry PDF File DOI
Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL \(\mathcal{FL}_0\), where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that ``almost'' unify. The meaning of ``almost'' is formalized using distance measures between concepts. We show that approximate unification in \(\mathcal{FL}_0\) can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
@techreport{ BaMaOk-LTCS-16-04, address = {Dresden, Germany}, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.228}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-04}, title = {Approximate Unification in the Description Logic {$\mathcal{FL}_0$}}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
In a recent research paper, we have proposed an extension of the light-weight Description Logic (DL) EL in which concepts can be defined in an approximate way. To this purpose, the notion of a graded membership function \(m\), which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept \(C\) with degree at least 0.8. Reasoning in the threshold DL \(\tau\mathcal{EL}(m)\) obtained this way of course depends on the employed graded membership function \(m\). The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in \(\tau\mathcal{EL}(deg)\). In addition, it shows how concept similarity measures (CSMs) \(\sim\) satisfying certain properties can be used to define graded membership functions \(m_\sim\), but it does not investigate the complexity of reasoning in the induced threshold DLs \(\tau\mathcal{EL}(m_\sim)\). In the present paper, we start filling this gap. In particular, we show that computability of \(\sim\) implies decidability of \(\tau\mathcal{EL}(m_\sim)\), and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in \(\tau\mathcal{EL}(deg)\).
@techreport{ BaFe-LTCS-16-07, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, doi = {https://doi.org/10.25368/2022.229}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-07}, title = {Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Recently introduced approaches for relaxed query answering, approximately defining concepts, and approximately solving unification problems in Description Logics have in common that they are based on the use of concept comparison measures together with a threshold construction. In this paper, we will briefly review these approaches, and then show how weighted automata working on infinite trees can be used to construct computable concept comparison measures for FL0 that are equivalence invariant w.r.t. general TBoxes. This is a first step towards employing such measures in the mentioned approximation approaches.
@techreport{ BaFM-LTCS-16-08, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Pavlos {Marantidis}}, doi = {https://doi.org/10.25368/2022.230}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-08}, title = {Approximation in Description Logics: How Weighted Tree Automata Can Help to Define the Required Concept Comparison Measures in $\mathcal{FL}_0$}, type = {LTCS-Report}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints on unifiers can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we restrict the attention to solutions that are built from so-called atoms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of (general) disunification problems.
@techreport{ BaBM-LTCS-15-03, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.210}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-03}, title = {Dismatching and Local Disunification in {$\mathcal{EL}$}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<, \le, >, \ge\}\) then collect all the individuals that belong to \(C\) with degree \(\bowtie t\). We generalize a well-known characterization of membership in EL concepts to construct a specific graded membership function deg, and investigate the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which extends EL by threshold concepts defined using deg. We also compare the instance problem for threshold concepts of the form \(C_{> t}\) in \(\tau\mathcal{EL}(deg)\) with the relaxed instance queries of Ecke et al.
@techreport{ BaBrF-LTCS-15-09, address = {Dresden, Germany}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver Fern{\'a}ndez {Gil}}, doi = {https://doi.org/10.25368/2022.215}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-09}, title = {Adding Threshold Concepts to the Description Logic $\mathcal{E\!L}$}, type = {LTCS-Report}, year = {2015}, }
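Spelled out in the notation suggested by the abstract above (this is only a reading of the construction, with deg treated as a black box), the threshold constructor can be understood as

    (C_{\bowtie t})^{\mathcal{I}} \;=\; \{\, d \in \Delta^{\mathcal{I}} \mid deg^{\mathcal{I}}(d, C) \bowtie t \,\}, \qquad \bowtie \in \{<, \le, >, \ge\},

so that, for instance, an individual \(d\) with \(deg^{\mathcal{I}}(d, C) = 0.9\) belongs to \(C_{\ge 0.8}\) but not to \(C_{< 0.5}\).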
Abstract BibTeX Entry PDF File DOI
In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.
@techreport{ BaBL-LTCS-15-17, address = {Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.222}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, number = {15-17}, title = {Temporal Conjunctive Queries in Expressive {DLs} with Non-simple Roles}, type = {LTCS-Report}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
@techreport{ BaLi-LTCS-14-01, address = {Dresden, Germany}, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.203}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {14-01}, title = {Runtime Verification Using a Temporal Description Logic Revisited}, type = {LTCS-Report}, year = {2014}, }
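As a toy illustration of the monitor concept described above (hand-written for the propositional property "eventually p"; it is not the monitor construction of the report), a monitor can be seen as a deterministic automaton that consumes a finite prefix of observations and reports, per state reached, whether every continuation satisfies the property, every continuation violates it, or the prefix is still inconclusive.

    # Hand-written toy monitor for the property "eventually p" (not the
    # construction of the report): a deterministic automaton that reads a finite
    # prefix of observations and reports a verdict based on the state reached.
    class Monitor:
        def __init__(self, initial, transitions, verdicts):
            self.state = initial
            self.transitions = transitions   # (state, observation) -> state
            self.verdicts = verdicts         # state -> "true" | "false" | "unknown"

        def observe(self, observation):
            self.state = self.transitions[(self.state, observation)]
            return self.verdicts[self.state]

    monitor = Monitor(
        initial="waiting",
        transitions={("waiting", "p"): "satisfied",
                     ("waiting", "not_p"): "waiting",
                     ("satisfied", "p"): "satisfied",
                     ("satisfied", "not_p"): "satisfied"},
        verdicts={"waiting": "unknown", "satisfied": "true"},
    )
    for observation in ["not_p", "not_p", "p"]:
        print(observation, "->", monitor.observe(observation))   # unknown, unknown, true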
Abstract BibTeX Entry PDF File DOI
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we solve the general case in this paper: we show that matching in EL w.r.t. general TBoxes is NP-complete by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem.
@techreport{ BaMo-LTCS-14-3, address = {Dresden, Germany}, author = {Franz {Baader} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.205}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {14-03}, title = {Matching with respect to general concept inclusions in the Description Logic $\mathcal{EL}$}, type = {LTCS-Report}, year = {2014}, }
2013
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
@techreport{ BaBoLi-LTCS-13-01, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.191}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {13-01}, title = {On the Complexity of Temporal Query Answering}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and declarative semantics. We show that hybrid unification in EL is NP-complete and introduce a goal-oriented algorithm for computing hybrid unifiers.
@techreport{ BaFM-LTCS-13-07, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern\'{a}ndez Gil} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.197}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {13-07}, title = {Hybrid Unification in the Description Logic EL}, type = {LTCS-Report}, year = {2013}, }
BibTeX Entry PDF File DOI
@techreport{ BaZa-LTCS-13-08, address = {Dresden, Germany}, author = {Franz {Baader} and Benjamin {Zarrie{\"s}}}, doi = {https://doi.org/10.25368/2022.198}, institution = {Chair of Automata Theory, TU Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-08}, title = {Verification of Golog Programs over Description Logic Actions}, type = {LTCS-Report}, year = {2013}, }
2012
Abstract BibTeX Entry PDF File DOI
We extend previous results on the complexity of solving language equations with one-sided concatenation and all Boolean operations to the case where also disequations (i.e., negated equations) may occur. To show that solvability of systems of equations and disequations is still in ExpTime, we introduce a new type of automata working on infinite trees, which we call looping automata with colors. As applications of these results, we show new complexity results for disunification in the description logic FL0 and for monadic set constraints with negation. We believe that looping automata with colors may also turn out to be useful in other applications.
@techreport{ BaOk-LTCS-12-01, address = {Dresden, Germany}, author = {Franz {Baader} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.185}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {12-01}, title = {Solving Language Equations and Disequations Using Looping Tree Automata with Colors}, type = {LTCS-Report}, year = {2012}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the Description Logic EL, which is used to define several large biomedical ontologies, unification is NP-complete. An NP unification algorithm for EL based on a translation into propositional satisfiability (SAT) has recently been presented. In this report, we extend this SAT encoding in two directions: on the one hand, we add general concept inclusion axioms, and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the translation to be complete, however, the ontology needs to satisfy a certain cycle restriction. The SAT translation depends on a new rewriting-based characterization of subsumption w.r.t. \(\mathcal{ELH}_{R^+}\) ontologies.
@techreport{ BaBM-LTCS-12-02, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.186}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {12-02}, title = {{SAT} Encoding of Unification in {$\mathcal{ELH}_{R^+}$} w.r.t. Cycle-Restricted Ontologies}, type = {LTCS-Report}, year = {2012}, }
Abstract BibTeX Entry PDF File DOI
Unification has been investigated both in modal logics and in description logics, albeit with different motivations. In description logics, unification can be used to detect redundancies in ontologies. In this context, it is not sufficient to decide unifiability; one must also compute appropriate unifiers and present them to the user. For the description logic EL, which is used to define several large biomedical ontologies, deciding unifiability is an NP-complete problem. It is known that every solvable EL-unification problem has a minimal unifier, and that every minimal unifier is a local unifier. Existing unification algorithms for EL compute all minimal unifiers, but additionally (all or some) non-minimal local unifiers. Computing only the minimal unifiers would be better since there are considerably fewer minimal unifiers than local ones, and their size is usually also quite small. In this paper we investigate the question whether the known algorithms for EL-unification can be modified such that they compute exactly the minimal unifiers without changing the complexity and the basic nature of the algorithms. Basically, the answer we give to this question is negative.
@techreport{ BaBM-LTCS-12-03, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.187}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {12-03}, title = {Computing Minimal {$\mathcal{EL}$}-Unifiers is Hard}, type = {LTCS-Report}, year = {2012}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. A goal-oriented NP unification algorithm for EL that uses nondeterministic rules to transform a given unification problem into solved form has recently been presented. In this report, we extend this goal-oriented algorithm in two directions: on the one hand, we add general concept inclusion axioms (GCIs), and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the algorithm to be complete, however, the ontology consisting of the GCIs and role axioms needs to satisfy a certain cycle restriction.
@techreport{ BaBM-LTCS-12-05, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.189}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {12-05}, title = {A Goal-Oriented Algorithm for Unification in {$\mathcal{ELH}_{R^+}$} w.r.t. Cycle-Restricted Ontologies}, type = {LTCS-Report}, year = {2012}, }
2011
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show in this paper that unification in EL without the top concept is PSpace-complete.
@techreport{ BBBM-LTCS-11-01, address = {Dresden, Germany}, author = {Franz {Baader} and Nguyen Thanh {Binh} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.179}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {11-01}, title = {Unification in the Description Logic {$\mathcal{EL}$} Without the Top Concept}, type = {LTCS-Report}, year = {2011}, }
Abstract BibTeX Entry PDF File DOI
Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of significantly lower complexity than unification in other DLs of similarly restricted expressive power. However, the unification algorithms for EL developed so far cannot deal with general concept inclusion axioms (GCIs). This paper makes a considerable step towards addressing this problem, but the GCIs our new unification algorithm can deal with still need to satisfy a certain cycle restriction.
@techreport{ BaBM-LTCS-11-05, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.183}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {11-05}, title = {Unification in the Description Logic {$\mathcal{EL}$} w.r.t.\ Cycle-Restricted {TB}oxes}, type = {LTCS-Report}, year = {2011}, }
2010
Abstract BibTeX Entry PDF File DOI
In the reasoning about actions community, causal relationships have been proposed as a possible approach for solving the ramification problem, i.e., the problem of how to deal with indirect effects of actions. In this paper, we show that causal relationships can be added to action formalisms based on Description Logics without destroying the decidability of the consistency and the projection problem.
@techreport{ BaLiLi-LTCS-10-01, address = {Dresden, Germany}, author = {Franz {Baader} and Marcel {Lippmann} and Hongkai {Liu}}, doi = {https://doi.org/10.25368/2022.174}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {10-01}, title = {Adding Causal Relationships to {DL}-based Action Formalisms}, type = {LTCS-Report}, year = {2010}, }
Abstract BibTeX Entry PDF File DOI
The Description Logic EL is an inexpressive knowledge representation language, which nevertheless has recently drawn considerable attention in the knowledge representation and the ontology community since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
@techreport{ BaMo-LTCS-10-04, address = {Dresden, Germany}, author = {Franz {Baader} and Barbara {Morawska}}, doi = {https://doi.org/10.25368/2022.177}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {10-04}, title = {SAT Encoding of Unification in $\mathcal{EL}$}, type = {LTCS-Report}, year = {2010}, }
2009
Abstract BibTeX Entry PDF File DOI
When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes, which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
@techreport{ LTCS-Report09-01, address = {Germany}, author = {Conrad {Drescher} and Hongkai {Liu} and Franz {Baader} and Steffen {Guhlemann} and Uwe {Petersohn} and Peter {Steinke} and Michael {Thielscher}}, doi = {https://doi.org/10.25368/2022.170}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {09-01}, title = {Putting ABox Updates into Action}, type = {LTCS-Report}, year = {2009}, }
Abstract BibTeX Entry PDF File PS File DOI
Consider an ontology \(O\) where every axiom is labeled with an element of a lattice \(L\). Then every element \(l\) of \(L\) determines a sub-ontology \(O_l\), which consists of the axioms of \(O\) whose labels are greater than or equal to \(l\). These labels may be interpreted as required access rights, in which case \(O_l\) is the sub-ontology that a user with access right \(l\) is allowed to see, or as trust levels, in which case \(O_l\) consists of those axioms that we trust with level at least \(l\). Given a consequence \(C\) (such as a subsumption relationship between concepts) that follows from the whole ontology \(O\), we want to know from which of the sub-ontologies \(O_l\), determined by lattice elements \(l\), \(C\) still follows. However, instead of reasoning with \(O_l\) in the deployment phase of the ontology, we want to pre-compute this information during the development phase. More precisely, we want to compute what we call a boundary for \(C\), i.e., an element \(m_C\) of \(L\) such that \(C\) follows from \(O_l\) iff \(l\) is smaller than or equal to \(m_C\). In this paper we show that, under certain restrictions on the elements \(l\) used to define the sub-ontologies, such a boundary always exists, and we describe black-box approaches for computing it that are generalizations of approaches for axiom pinpointing in description logics. We also present first experimental results that compare the efficiency of these approaches on real-world ontologies.
@techreport{ BaKP-LTCS-09, address = {Germany}, author = {Franz {Baader} and Martin {Knechtel} and Rafael {Pe{\~n}aloza}}, doi = {https://doi.org/10.25368/2022.171}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {09-02}, title = {Computing Boundaries for Reasoning in Sub-Ontologies}, type = {LTCS-Report}, year = {2009}, }
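The boundary notion above can be illustrated for the special case where the labels form a total order; a black-box `entails` oracle stands in for a DL reasoner. Both the data representation and the oracle are illustrative assumptions, and the sketch is not one of the report's black-box algorithms.

```python
# Sketch: compute a boundary for a consequence when the labels form a total order.
# O_l consists of the axioms whose label is >= l, so O_l shrinks as l grows,
# and the set of labels l with "the consequence follows from O_l" is downward closed.

def boundary(labeled_axioms, entails, levels):
    """labeled_axioms: list of (axiom, label); levels: labels sorted ascending.
    Returns the greatest level l such that the consequence follows from O_l,
    or None if it does not even follow from the largest sub-ontology
    (the one for the smallest level)."""
    best = None
    for l in levels:
        sub_ontology = [ax for ax, lab in labeled_axioms if lab >= l]
        if entails(sub_ontology):
            best = l      # by monotonicity, all smaller levels succeed as well
        else:
            break         # larger levels only remove more axioms
    return best

# Toy usage: axioms are strings; "the consequence follows" is simulated by
# checking that axiom "b" is still present in the sub-ontology.
axioms = [("a", 1), ("b", 2), ("c", 3)]
print(boundary(axioms, lambda onto: "b" in onto, levels=[1, 2, 3]))   # -> 2
```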
Abstract BibTeX Entry PDF File DOI
The verification problem for action logic programs with non-terminating behaviour is in general undecidable. In this paper, we consider a restricted setting in which the problem becomes decidable. On the one hand, we abstract from the actual execution sequences of a non-terminating program by considering infinite sequences of actions defined by a Büchi automaton. On the other hand, we assume that the logic underlying our action formalism is a decidable description logic rather than full first-order predicate logic.
@techreport{ BMLSW-LTCS-09-03, address = {Germany}, author = {Franz {Baader} and Hongkai {Liu} and Anees ul {Mehdi}}, doi = {https://doi.org/10.25368/2022.172}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-09-03}, title = {{I}ntegrate {A}ction {F}ormalisms into {L}inear {T}emporal {D}escription {L}ogics}, type = {LTCS-Report}, year = {2009}, }
2008
Abstract BibTeX Entry PDF File PS File DOI
Most of the research on temporalized Description Logics (DLs) has concentrated on the case where temporal operators can occur within DL concept descriptions. In this setting, reasoning usually becomes quite hard if rigid roles, i.e., roles whose interpretation does not change over time, are available. In this paper, we consider the case where temporal operators are allowed to occur only in front of DL axioms (i.e., ABox assertions and general concept inclusion axioms), but not inside concept descriptions. As the temporal component, we use linear temporal logic (LTL), and in the DL component we consider the basic DL ALC. We show that reasoning in the presence of rigid roles becomes considerably simpler in this setting.
@techreport{ LTCS-Report08-01, address = {Germany}, author = {Franz {Baader} and Silvio {Ghilardi} and Carsten {Lutz}}, doi = {https://doi.org/10.25368/2022.164}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {08-01}, title = {{LTL} over Description Logic Axioms}, type = {LTCS-Report}, year = {2008}, }
Abstract BibTeX Entry PDF File PS File DOI
In a previous ICFCA paper we have shown that, in the Description Logics EL and ELgfp, the set of general concept inclusions holding in a finite model always has a finite basis. In this paper, we address the problem of how to compute this basis efficiently, by adapting methods from formal concept analysis.
@techreport{ BaaDi08, address = {Dresden}, author = {Franz {Baader} and Felix {Distel}}, doi = {https://doi.org/10.25368/2022.168}, institution = {Institute for Theoretical Computer Science, TU Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {08-05}, title = {Exploring finite models in the Description Logic {$\mathcal{EL}_\mathrm{gfp}$}}, type = {LTCS-Report}, year = {2008}, }
Abstract BibTeX Entry PDF File PS File DOI
Axiom pinpointing has been introduced in Description Logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence under consideration. Several pinpointing algorithms have been described as extensions of the standard tableau-based reasoning algorithms for deciding consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL, exploiting specific traits of the algorithm and logic at hand. In the past, we have developed a general approach for extending tableau-based algorithms into pinpointing algorithms. In this paper we explore some issues of termination of general tableaux and their pinpointing extensions. We also define a subclass of tableaux that allows the use of so-called blocking conditions, which stop the execution of the algorithm once a pattern is found, and adapt the pinpointing extensions accordingly, guaranteeing their correctness and termination.
@techreport{ BaPe-LTCS-07-01, address = {Germany}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, doi = {https://doi.org/10.25368/2022.165}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-08-02}, title = {Blocking and Pinpointing in Forest Tableaux}, type = {LTCS-Report}, year = {2008}, }
Abstract BibTeX Entry PDF File PS File DOI
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of ``tableau algorithms,'' which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
@techreport{ BaPe-LTCS-08-03, address = {Germany}, author = {Franz {Baader} and Rafael {Pe{\~n}aloza}}, doi = {https://doi.org/10.25368/2022.166}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-08-03}, title = {Pinpointing in Terminating Forest Tableaux}, type = {LTCS-Report}, year = {2008}, }
2007
Abstract BibTeX Entry PDF File DOI
Formal Concept Analysis (FCA) can be used to analyze data given in the form of a formal context. In particular, FCA provides efficient algorithms for computing a minimal basis of the implications holding in the context. In this paper, we extend classical FCA by considering data that are represented by relational structures rather than formal contexts, and by replacing atomic attributes by complex formulae defined in some logic. After generalizing some of the FCA theory to this more general form of contexts, we instantiate the general framework with attributes defined in the Description Logic (DL) EL, and with relational structures over a signature of unary and binary predicates, i.e., models for EL. In this setting, an implication corresponds to a so-called general concept inclusion axiom (GCI) in EL. The main technical result of this report is that, in EL, for any finite model there is a finite set of implications (GCIs) holding in this model from which all implications (GCIs) holding in the model follow.
@techreport{ Dist07, address = {Dresden, Germany}, author = {Franz {Baader} and Felix {Distel}}, doi = {https://doi.org/10.25368/2022.160}, institution = {Inst.\ f{\"u}r Theoretische Informatik, TU Dresden}, number = {07-02}, title = {A finite basis for the set of {EL}-implications holding in a finite model}, year = {2007}, }
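To make the notion of a GCI holding in a finite model concrete, the following sketch evaluates EL concepts over a finite interpretation and checks a GCI by inclusion of extensions. The concept encoding and the toy interpretation are illustrative assumptions and are not taken from the report.

```python
# Sketch: check whether an EL GCI  C ⊑ D  holds in a finite interpretation.
# Concepts: "TOP", a concept name (str), ("and", C, D), or ("exists", r, C).

def extension(concept, domain, conc_ext, role_ext):
    """Return the set of domain elements that satisfy 'concept'."""
    if concept == "TOP":
        return set(domain)
    if isinstance(concept, str):
        return set(conc_ext.get(concept, set()))
    if concept[0] == "and":
        return (extension(concept[1], domain, conc_ext, role_ext)
                & extension(concept[2], domain, conc_ext, role_ext))
    if concept[0] == "exists":
        _, r, c = concept
        good_successors = extension(c, domain, conc_ext, role_ext)
        return {d for d in domain
                if any(e in good_successors
                       for (x, e) in role_ext.get(r, set()) if x == d)}
    raise ValueError(f"unknown concept: {concept!r}")

def gci_holds(c, d, domain, conc_ext, role_ext):
    return (extension(c, domain, conc_ext, role_ext)
            <= extension(d, domain, conc_ext, role_ext))

# Toy model in which  ∃child.Person ⊑ Person  holds.
domain = {1, 2}
conc_ext = {"Person": {1, 2}}
role_ext = {"child": {(1, 2)}}
print(gci_holds(("exists", "child", "Person"), "Person", domain, conc_ext, role_ext))  # True
```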
2006
Abstract BibTeX Entry PDF File PS File DOI
Language equations are equations where both the constants occurring in the equations and the solutions are formal languages. They have first been introduced in formal language theory, but are now also considered in other areas of computer science. In the present paper, we restrict the attention to language equations with one-sided concatenation, but in contrast to previous work on these equations, we allow not just union but all Boolean operations to be used when formulating them. In addition, we are not just interested in deciding solvability of such equations, but also in deciding other properties of the set of solutions, like its cardinality (finite, infinite, uncountable) and whether it contains least/greatest solutions. We show that all these decision problems are ExpTime-complete.
@techreport{ BaaderOkhotin-LTCS-06-01, address = {Germany}, author = {Franz {Baader} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.154}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-06-01}, title = {On Language Equations with One-sided Concatenation}, type = {LTCS-Report}, year = {2006}, }
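A textbook-style illustration of a language equation with one-sided concatenation (the equation is a standard example and is not taken from the report):

```latex
% One-sided: the unknown X is concatenated with constant languages from one side only.
X \;=\; \{a\}\cdot X \,\cup\, \{b\}
% Its unique solution over finite words is the regular language  X = a^{*}b.
% The report additionally allows all Boolean operations (not just union) on the right-hand sides.
```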
Abstract BibTeX Entry PDF File PS File DOI
We propose an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the assertional part and by a domain expert. The use of techniques from Formal Concept Analysis ensures that, on the one hand, the interaction with the expert is kept to a minimum, and, on the other hand, we can show that the extended knowledge base is complete in a certain sense.
@techreport{ BGSS-LTCS-06-02, address = {Germany}, author = {Franz {Baader} and Bernhard {Ganter} and Ulrike {Sattler} and Baris {Sertkaya}}, doi = {https://doi.org/10.25368/2022.155}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-06-02}, title = {Completing Description Logic Knowledge Bases using Formal Concept Analysis}, type = {LTCS-Report}, year = {2006}, }
Abstract BibTeX Entry PDF File DOI
In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
@techreport{ BaaHlaPen-LTCS-06, address = {Germany}, author = {F. {Baader} and J. {Hladik} and R. {Penaloza}}, doi = {https://doi.org/10.25368/2022.157}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-06-04}, title = {{PSPACE} Automata with Blocking for Description Logics}, type = {LTCS-Report}, year = {2006}, }
Abstract BibTeX Entry PDF File PS File DOI
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of ``tableaux algorithms,'' which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
@techreport{ BaaderPenaloza-LTCS-07-01, address = {Germany}, author = {Franz {Baader} and Rafael {Penaloza}}, doi = {https://doi.org/10.25368/2022.159}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-07-01}, title = {Axiom Pinpointing in General Tableaux}, type = {LTCS-Report}, year = {2006}, }
2005
Abstract BibTeX Entry PDF File PS File DOI
Recently, it has been shown that the small DL EL, which allows for conjunction and existential restrictions, has better algorithmic properties than its counterpart FL0, which allows for conjunction and value restrictions. Whereas the subsumption problem in FL0 already becomes intractable in the presence of acyclic TBoxes, it remains tractable in EL even w.r.t. general concept inclusion axioms (GCIs). On the one hand, we will extend the positive result for EL by identifying a set of expressive means that can be added to EL without sacrificing tractability. On the other hand, we will show that basically all other additions of typical DL constructors to EL with GCIs make subsumption intractable, and in most cases even ExpTime-complete. In addition, we will show that subsumption in FL0 with GCIs is ExpTime-complete.
@techreport{ BaaderBrandtLutz-LTCS-05-01, address = {Germany}, author = {F. {Baader} and S. {Brandt} and C. {Lutz}}, doi = {https://doi.org/10.25368/2022.144}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-05-01}, title = {Pushing the EL Envelope}, type = {LTCS-Report}, year = {2005}, }
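To illustrate why subsumption in EL with GCIs stays polynomial, the sketch below saturates subsumer sets with completion rules over GCIs in a commonly used normal form (A ⊑ B, A1 ⊓ A2 ⊑ B, A ⊑ ∃r.B, ∃r.A ⊑ B). The normal form, the rule formulation, and the example TBox are assumptions made for illustration and are not necessarily the report's exact presentation.

```python
# Sketch: polynomial-time classification of an EL TBox given in normal form.
# Normalized axioms:
#   ("sub", A, B)            for  A ⊑ B
#   ("conj", A1, A2, B)      for  A1 ⊓ A2 ⊑ B
#   ("exists_rhs", A, r, B)  for  A ⊑ ∃r.B
#   ("exists_lhs", r, A, B)  for  ∃r.A ⊑ B

def classify(concept_names, axioms):
    S = {A: {A, "TOP"} for A in concept_names}   # S[X]: known subsumers of X
    R = {}                                       # R[r]: derived pairs (X, B) with X ⊑ ∃r.B
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax[0] == "sub":
                _, A, B = ax
                for X in S:
                    if A in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "conj":
                _, A1, A2, B = ax
                for X in S:
                    if A1 in S[X] and A2 in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "exists_rhs":
                _, A, r, B = ax
                for X in S:
                    if A in S[X] and (X, B) not in R.setdefault(r, set()):
                        R[r].add((X, B)); changed = True
            elif ax[0] == "exists_lhs":
                _, r, A, B = ax
                for (X, Y) in R.get(r, set()):
                    if A in S[Y] and B not in S[X]:
                        S[X].add(B); changed = True
    return S

# Toy TBox:  Father ⊑ ∃child.Person,  ∃child.Person ⊑ Parent   ⇒   Father ⊑ Parent.
names = ["Father", "Person", "Parent"]
tbox = [("exists_rhs", "Father", "child", "Person"),
        ("exists_lhs", "child", "Person", "Parent")]
print("Parent" in classify(names, tbox)["Father"])   # True
```

The saturation only ever adds elements to finitely many polynomial-size sets, which is the intuition behind the tractability result.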
Abstract BibTeX Entry PDF File PS File DOI
Motivated by the need for semantically well-founded and algorithmically manageable formalisms for describing the functionality of Web services, we introduce an action formalism that is based on description logics (DLs), but is also firmly grounded on research in the reasoning about action community. Our main contribution is an analysis of how the choice of the DL influences the complexity of standard reasoning tasks such as projection and executability, which are important for Web service discovery and composition.
@techreport{ BMLSW-LTCS-05-02, address = {Germany}, author = {F. {Baader} and M. {Milicic} and C. {Lutz} and U. {Sattler} and F. {Wolter}}, doi = {https://doi.org/10.25368/2010.145}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-05-02}, title = {Integrating Description Logics and Action Formalisms for Reasoning about Web Services}, type = {LTCS-Report}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File DOI
Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we consider variants of the basic connection scheme.
@techreport{ BaaderGhilardiLTCS-05-04, address = {Germany}, author = {Franz {Baader} and Silvio {Ghilardi}}, doi = {https://doi.org/10.25368/2022.147}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-05-04}, title = {Connecting Many-Sorted Theories}, type = {LTCS-Report}, year = {2005}, }
Abstract BibTeX Entry PDF File PS File DOI
Motivated by a chemical process engineering application, we introduce a new concept constructor in Description Logics (DLs), an n-ary variant of the existential restriction constructor, which generalizes both the usual existential restrictions and so-called qualified number restrictions. We show that the new constructor can be expressed in ALCQ, the extension of the basic DL ALC by qualified number restrictions. However, this representation results in an exponential blow-up. By giving direct algorithms for ALC extended with the new constructor, we can show that the complexity of reasoning in this new DL is actually not harder than the one of reasoning in ALCQ. Moreover, in our chemical process engineering application, a restricted DL that provides only the new constructor together with conjunction, and satisfies an additional restriction on the occurrence of role names, is sufficient. For this DL, the subsumption problem is polynomial.
@techreport{ BaaderEtAll-LTCS-05-08, address = {Germany}, author = {Franz {Baader} and Carsten {Lutz} and Eldar {Karabaev} and Manfred {Thei{\ss}en}}, doi = {https://doi.org/10.25368/2022.151}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-05-08}, title = {A New $n$-ary Existential Quantifier in Description Logics}, type = {LTCS-Report}, year = {2005}, }
2004
Abstract BibTeX Entry PDF File PS File DOI
In two previous papers we have investigated the problem of computing the least common subsumer (lcs) and the most specific concept (msc) for the description logic EL in the presence of terminological cycles that are interpreted with descriptive semantics, which is the usual first-order semantics for description logics. In this setting, neither the lcs nor the msc needs to exist. We were able to characterize the cases in which the lcs/msc exists, but it was not clear whether this characterization yields decidability of the existence problem. In the present paper, we develop a common graph-theoretic generalization of these characterizations, and show that the resulting property is indeed decidable, thus yielding decidability of the existence of the lcs and the msc. This is achieved by expressing the property in monadic second-order logic on infinite trees. We also show that, if it exists, then the lcs/msc can be computed in polynomial time.
@techreport{ Baader-LTCS-04-02, address = {Germany}, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.139}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-04-02}, title = {A Graph-Theoretic Generalization of the Least Common Subsumer and the Most Specific Concept in the Description Logic EL}, type = {LTCS-Report}, year = {2004}, }
2003
Abstract BibTeX Entry PDF File PS File DOI
In previous reports, we have investigated standard and non-standard inferences in the presence of terminological cycles for the description logic EL, which allows for conjunctions, existential restrictions, and the top concept. Regarding standard inference problems, it was shown there that the subsumption problem remains polynomial for all three types of semantics usually considered for cyclic definitions in description logics, and that the instance problem remains polynomial for greatest fixpoint semantics. Regarding non-standard inference problems, it was shown that, w.r.t. greatest fixpoint semantics, the least common subsumer and the most specific concept always exist and can be computed in polynomial time, and that, w.r.t. descriptive semantics, the least common subsumer need not exist. The present report is concerned with two problems left open by this previous work, namely the instance problem and the problem of computing most specific concepts w.r.t. descriptive semantics, which is the usual first-order semantics for description logics. We will show that the instance problem is polynomial also in this context. Similar to the case of the least common subsumer, the most specific concept w.r.t. descriptive semantics need not exist, but we are able to characterize the cases in which it exists and give a decidable sufficient condition for the existence of the most specific concept. Under this condition, it can be computed in polynomial time.
@techreport{ Baader-LTCS-03-01, address = {Germany}, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.126}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-03-01}, title = {The Instance Problem and the Most Specific Concept in the Description Logic {$\mathcal{EL}$} w.r.t.\ Terminological Cycles with Descriptive Semantics}, type = {LTCS-Report}, year = {2003}, }
Abstract BibTeX Entry PDF File PS File DOI
Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics—whose combination is not disjoint since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.
@techreport{ BaGiTiLTCS-03-05, address = {Germany}, author = {F. {Baader} and Silvio {Ghilardi} and Cesare {Tinelli}}, doi = {https://doi.org/10.25368/2022.130}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-03-05}, title = {A New Combination Procedure for the Word Problem that Generalizes Fusion Decidability Results in Modal Logics}, type = {LTCS-Report}, year = {2003}, }
2002
Abstract BibTeX Entry PDF File PS File DOI
Cyclic definitions in description logics have until now been investigated only for description logics allowing for value restrictions. Even for the most basic language FL0, which allows for conjunction and value restrictions only, deciding subsumption in the presence of terminological cycles is a PSPACE-complete problem. This report investigates subsumption in the presence of terminological cycles for the language EL, which allows for conjunction and existential restrictions. In contrast to the results for FL0, subsumption in EL remains polynomial, independent of whether we use least fixpoint semantics, greatest fixpoint semantics, or descriptive semantics. These results are shown via a characterization of subsumption through the existence of certain simulation relations between nodes of the description graph associated with a given cyclic terminology.
@techreport{ Baader-LTCS-02-02, address = {Germany}, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.120}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-02-02}, title = {Terminological Cycles in a Description Logic with Existential Restrictions}, type = {LTCS-Report}, year = {2002}, }
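The simulation-based characterization can be made concrete with the standard fixpoint computation of the greatest simulation between two labeled graphs. The sketch below is a generic simulation check over an assumed graph representation; it is not the report's exact characterization of subsumption.

```python
# Sketch: greatest simulation between two node- and edge-labeled graphs.
# A pair (v, w) survives iff labels1[v] ⊆ labels2[w] and every r-edge leaving v
# can be matched by an r-edge leaving w whose target pair also survives.

def greatest_simulation(nodes1, labels1, edges1, nodes2, labels2, edges2):
    """edgesX maps a node to a set of (role, successor) pairs.
    Returns the set of pairs (v, w) such that w simulates v."""
    sim = {(v, w) for v in nodes1 for w in nodes2
           if labels1.get(v, set()) <= labels2.get(w, set())}
    changed = True
    while changed:
        changed = False
        for (v, w) in list(sim):
            for (r, v2) in edges1.get(v, set()):
                if not any(r == r2 and (v2, w2) in sim
                           for (r2, w2) in edges2.get(w, set())):
                    sim.discard((v, w))
                    changed = True
                    break
    return sim

# Toy graphs: node labels are sets of concept names, edges carry role names.
n1, l1, e1 = {"a"}, {"a": {"Person"}}, {"a": set()}
n2, l2, e2 = {"x", "y"}, {"x": {"Person", "Rich"}, "y": set()}, {"x": {("child", "y")}}
print(("a", "x") in greatest_simulation(n1, l1, e1, n2, l2, e2))   # True
```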
Abstract BibTeX Entry PDF File PS File DOI
In a previous report we have investigated subsumption in the presence of terminological cycles for the description logic EL, which allows conjunctions, existential restrictions, and the top concept, and have shown that the subsumption problem remains polynomial for all three types of semantics usually considered for cyclic definitions in description logics. This result depends on a characterization of subsumption through the existence of certain simulation relations on the graph associated with a terminology. In the present report we will use this characterization to show how the most specific concept and the least common subsumer can be computed in EL with cyclic definitions. In addition, we show that subsumption in EL (with or without cyclic definitions) remains polynomial even if one adds a certain restricted form of global role-value-maps to EL. In particular, this kind of role-value-maps can express transitivity of roles.
@techreport{ Baader-LTCS-02-07, address = {Germany}, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.125}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {LTCS-02-07}, title = {Least Common Subsumers, Most Specific Concepts, and Role-Value-Maps in a Description Logic with Existential Restrictions and Terminological Cycles}, type = {LTCS-Report}, year = {2002}, }
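As an illustration of the last remark, a role-value-map of the following form makes the role r transitive (an illustrative rendering, not necessarily the report's exact syntax):

```latex
r \circ r \;\sqsubseteq\; r
% If x has an r-filler y and y has an r-filler z, then z must also be an
% r-filler of x, i.e. r is interpreted as a transitive relation.
```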
2001
Abstract BibTeX Entry PDF File PS File DOI
Whereas matching in Description Logics is now relatively well-investigated, there are only very few formal results on matching under additional side conditions, though these side conditions were already present in the original paper by Borgida and McGuinness introducing matching in DLs. The present paper closes this gap for the DL ALN and its sublanguages.
@techreport{ BaaderBrandt+-LTCS-01-02, address = {Germany}, author = {F. {Baader} and S. {Brandt} and R. {K{\"u}sters}}, doi = {https://doi.org/10.25368/2022.112}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {01-02}, title = {Matching under Side Conditions in Description Logics}, type = {LTCS-Report}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File DOI
Tableaux-based decision procedures for satisfiability of modal and description logics behave quite well in practice, but it is sometimes hard to obtain exact worst-case complexity results using these approaches, especially for EXPTIME-complete logics. In contrast, automata-based approaches often yield algorithms for which optimal worst-case complexity can easily be proved. However, the algorithms obtained this way are usually not only worst-case, but also best-case exponential: they first construct an automaton that is always exponential in the size of the input, and then apply the (polynomial) emptiness test to this large automaton. To overcome this problem, one must try to construct the automaton ``on-the-fly'' while performing the emptiness test. In this paper we will show that Voronkov's inverse method for the modal logic K can be seen as an on-the-fly realization of the emptiness test done by the automata approach for K. The benefits of this result are two-fold. First, it shows that Voronkov's implementation of the inverse method, which behaves quite well in practice, is an optimized on-the-fly implementation of the automata-based satisfiability procedure for K. Second, it can be used to give a simpler proof of the fact that Voronkov's optimizations do not destroy completeness of the procedure. We will also show that the inverse method can easily be extended to handle global axioms, and that the correspondence to the automata approach still holds in this setting. In particular, the inverse method yields an EXPTIME-algorithm for satisfiability in K w.r.t. global axioms.
@techreport{ BaaderTobies-LTCS-01-03, address = {Germany}, author = {F. {Baader} and S. {Tobies}}, doi = {https://doi.org/10.25368/2022.113}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {01-03}, title = {The Inverse Method Implements the Automata Approach for Modal Satisfiability}, type = {LTCS-Report}, year = {2001}, }
Abstract BibTeX Entry PDF File PS File DOI
Unification of concept descriptions was introduced by Baader and Narendran as a tool for detecting redundancies in knowledge bases. It was shown that unification in the small description logic FL0, which allows for conjunction, value restriction, and the top concept only, is already ExpTime-complete. The present paper shows that the complexity does not increase if one additionally allows for composition, union, and transitive closure of roles. It also shows that matching (which is polynomial in FL0) is PSpace-complete in the extended description logic. These results are proved via a reduction to linear equations over regular languages, which are then solved using automata. The obtained results are also of interest in formal language theory.
@techreport{ BaaderKuesters-LTCS-01-05, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters}}, doi = {https://doi.org/10.25368/2022.115}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {01-05}, title = {Unification in a Description Logic with Transitive Closure of Roles}, type = {LTCS-Report}, year = {2001}, }
2000
Abstract BibTeX Entry PDF File PS File DOI
The problem of rewriting a concept given a terminology can informally be stated as follows: given a terminology T (i.e., a set of concept definitions) and a concept description C that does not contain concept names defined in T, can this description be rewritten into a "related better" description E by using (some of) the names defined in T? In this paper, we first introduce a general framework for the rewriting problem in description logics, and then concentrate on one specific instance of the framework, namely the minimal rewriting problem (where "better" means shorter, and "related" means equivalent). We investigate the complexity of the decision problem induced by the minimal rewriting problem for the languages FL0, ALN, ALE, and ALC, and then introduce an algorithm for computing (minimal) rewritings for the languages ALE and ALN. Finally, we sketch other interesting instances of the framework. Our interest in the minimal rewriting problem stems from the fact that algorithms for non-standard inferences, such as computing least common subsumers and matchers, usually produce concept descriptions not containing defined names. Consequently, these descriptions are rather large and hard to read and comprehend. First experiments in a chemical process engineering application show that rewriting can reduce the size of concept descriptions obtained as least common subsumers by almost two orders of magnitude.
@techreport{ BaaderKuestersMolitor-LTCS-00-04, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, doi = {https://doi.org/10.25368/2022.107}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {00-04}, title = {Rewriting Concepts Using Terminologies -- Revisited}, type = {LTCS-Report}, year = {2000}, }
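A small hand-made example of a minimal rewriting (the terminology and the concept names are illustrative, not taken from the report): unfolding the defined name in E reproduces C, so E is an equivalent but shorter rewriting of C.

```latex
% Illustrative terminology:
\mathcal{T} \;=\; \{\, \mathsf{PersonWithSon} \equiv \mathsf{Person} \sqcap \exists \mathsf{child}.\mathsf{Male} \,\}
% Input description C and a minimal rewriting E w.r.t. \mathcal{T}:
C \;=\; \mathsf{Person} \sqcap \exists \mathsf{child}.\mathsf{Male} \sqcap \exists \mathsf{child}.\mathsf{Female}
\qquad
E \;=\; \mathsf{PersonWithSon} \sqcap \exists \mathsf{child}.\mathsf{Female}
% Replacing PersonWithSon in E by its definition yields exactly C, hence C \equiv_{\mathcal{T}} E.
```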
1999
Abstract BibTeX Entry PDF File PS File DOI
The problem of rewriting a concept given a terminology can informally be stated as follows: given a terminology T (i.e., a set of concept definitions) and a concept description C that does not contain concept names defined in T, can this description be rewritten into a "related better" description E by using (some of) the names defined in T? In this paper, we first introduce a general framework for the rewriting problem in description logics, and then concentrate on one specific instance of the framework, namely the minimal rewriting problem (where "better" means shorter, and "related" means equivalent). We investigate the complexity of the decision problem induced by the minimal rewriting problem for the languages FL0, ALN, ALE, and ALC, and then introduce an algorithm for computing (minimal) rewritings for the languages ALE and ALN. Finally, we sketch other interesting instances of the framework. Our interest in the minimal rewriting problem stems from the fact that algorithms for non-standard inferences, such as computing least common subsumers and matchers, usually produce concept descriptions not containing defined names. Consequently, these descriptions are rather large and hard to read and comprehend. First experiments in a chemical process engineering application show that rewriting can reduce the size of concept descriptions obtained as least common subsumers by almost two orders of magnitude.
@techreport{ BaaderKuestersMolitor-LTCS-99-12, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, doi = {https://doi.org/10.25368/2022.97}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {Please refer to the revised version LTCS-Report 00-04.}, number = {99-12}, title = {Rewriting Concepts Using Terminologies -- Revisited}, type = {LTCS-Report}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File
In a previous work, we describe a method to combine decision procedures for the word problem for theories sharing constructors. One of the requirements of our combination method is that the constructors be collapse-free. This paper removes that requirement by modifying the method so that it applies to non-collapse-free constructors as well. This broadens the scope of our combination results considerably, for example in the direction of equational theories corresponding to modal logics.
@techreport{ Baader-Tinelli-Report-99-13, author = {Franz {Baader} and Cesare {Tinelli}}, institution = {Department of Computer Science, University of Iowa}, month = {October}, number = {99-13}, title = {Combining Equational Theories Sharing Non-Collapse-Free Constructors}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File DOI
In this work we consider the inference problem of computing (minimal) rewritings of concept descriptions using defined concepts from a terminology. We introduce a general framework for this problem. For the small description logic FL0, which provides us with conjunction and value restrictions, we show that the decision problem induced by the minimal rewriting problem is NP-complete.
@techreport{ BaaderMolitor-LTCS-99-06, address = {Germany}, author = {F. {Baader} and R. {Molitor}}, doi = {https://doi.org/10.25368/2022.92}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html}, number = {LTCS-99-06}, title = {Rewriting Concepts using Terminologies}, type = {LTCS-Report}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File DOI
Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of description logics, originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. Previous work has concentrated on (sub-)languages of CLASSIC, which in particular do not allow for existential restrictions. In this work, we present sound and complete decision algorithms for the solvability of matching problems and for computing sets of matchers for matching problems in description logics with existential restrictions.
@techreport{ BaaderKuesters-LTCS-99-07-1999, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters}}, doi = {https://doi.org/10.25368/2022.93}, institution = {LuFg Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html}, number = {LTCS-99-07}, title = {Matching in Description Logics with Existential Restrictions}, type = {LTCS-Report}, year = {1999}, }
Abstract BibTeX Entry PDF File PS File DOI
Matching of concepts against patterns is a new inference task in Description Logics, which was originally motivated by applications of the Classic system. Consequently, the work on this problem was until now mostly concerned with sublanguages of the Classic language, which does not allow for existential restrictions. Motivated by an application in chemical process engineering, which requires a description language with existential restrictions, this paper investigates the matching problem in Description Logics with existential restrictions. It turns out that existential restrictions make matching more complex in two respects. First, whereas matching in sublanguages of Classic is polynomial, deciding the existence of matchers is an NP-complete problem in the presence of existential restrictions. Second, whereas in sublanguages of Classic solvable matching problems have a unique least matcher, this is not the case for languages with existential restrictions. Thus, it is not a priori clear which of the (possibly infinitely many) matchers should be returned by a matching algorithm. After determining the complexity of the decision problem, the present paper first investigates the question of what are "interesting" sets of matchers, and then describes algorithms for computing these sets for the languages EL (which allows for conjunction and existential restrictions) and ALE.
@techreport{ BaaderKuesters-LTCS-99-13-1999, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters}}, doi = {https://doi.org/10.25368/2022.98}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {LTCS-99-13}, title = {Matching Concept Descriptions with Existential Restrictions Revisited}, type = {LTCS-Report}, year = {1999}, }
1998
BibTeX Entry PDF File PS File DOI
@techreport{ BaSat98, author = {F. {Baader} and U. {Sattler}}, doi = {https://doi.org/10.25368/2022.78}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-98-02}, title = {Description Logics with Aggregates and Concrete Domains, Part II (extended)}, type = {LTCS-Report}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File DOI
This paper compares two approaches for deriving subsumption algorithms for the description logic ALN: structural subsumption and an automata-theoretic characterization of subsumption. It turns out that structural subsumption algorithms can be seen as special implementations of the automata-theoretic characterization.
@techreport{ BaaderKuestersMolitor-LTCS-98-04, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, doi = {https://doi.org/10.25368/2022.80}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {LTCS-98-04}, title = {Structural Subsumption Considered from an Automata Theoretic Point of View}, type = {LTCS-Report}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File DOI
Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can be used to support the ``bottom up'' construction of knowledge bases for KR systems based on description logic. For the description logic ALN, the msc need not always exist if one restricts the attention to acyclic concept descriptions. In this paper, we extend the notions lcs and msc to cyclic descriptions, and show how they can be computed. Our approach is based on the automata-theoretic characterizations of fixed-point semantics for cyclic terminologies developed in previous papers.
@techreport{ BaaderKuesters-LTCS-98-06-1998, author = {F. {Baader} and R. {K{\"u}sters}}, doi = {https://doi.org/10.25368/2022.82}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-98-06}, title = {Computing the least common subsumer and the most specific concept in the presence of cyclic ALN-concept descriptions}, type = {LTCS-Report}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File DOI
Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing one to substitute certain concept names by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language. This is a revised version of LTCS-Report 97-02: it provides a stronger complexity result in Section 6.
@techreport{ Baader-Narendran-LTCS-98, author = {F. {Baader} and P. {Narendran}}, doi = {https://doi.org/10.25368/2022.83}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-98-07}, title = {Unification of Concept Terms in Description Logics: Revised Version}, type = {LTCS-Report}, year = {1998}, }
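A tiny illustrative unification problem of the kind meant here (the concept names and the variable X are made up, not taken from the report):

```latex
% Unify the concept pattern on the left with the concept term on the right:
A \sqcap \forall r.X \;\stackrel{?}{\equiv}\; A \sqcap \forall r.(B \sqcap C)
% The substitution  \sigma(X) = B \sqcap C  makes both sides syntactically equal,
% hence equivalent, so  \sigma  is a unifier.
```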
Abstract BibTeX Entry PDF File PS File DOI
Computing least common subsumers (lcs) is an inference task that can be used to support the "bottom-up" construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for universal value restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. More precisely, we show that, for the description logic ALE (which allows for conjunction, universal value restrictions, existential restrictions, negation of atomic concepts, as well as the top and the bottom concept), the lcs always exists and can effectively be computed. Our approach for computing the lcs is based on an appropriate representation of concept descriptions by certain trees, and a characterization of subsumption by homomorphisms between these trees. The lcs operation then corresponds to the product operation on trees.
@techreport{ BaaderKuesters+-LTCS-98-09, address = {Germany}, author = {F. {Baader} and R. {K{\"u}sters} and R. {Molitor}}, doi = {https://doi.org/10.25368/2022.85}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Papers.html.}, number = {LTCS-98-09}, title = {Computing Least Common Subsumers in Description Logics with Existential Restrictions}, type = {LTCS-Report}, year = {1998}, }
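The product-of-trees idea can be illustrated on the EL fragment (conjunction and existential restrictions only). The sketch below computes an lcs candidate as the product of two description trees; it is a simplified illustration under that restriction, not the report's ALE construction, which additionally handles value restrictions, atomic negation, and the bottom concept.

```python
# Sketch: least common subsumer of two EL descriptions as the product of their
# description trees. An EL description is a pair (names, succ), where names is
# a set of concept names and succ maps each role to a list of EL descriptions
# (one per existential restriction).

def lcs(c, d):
    names = c[0] & d[0]                 # common concept names at this node
    succ = {}
    for r in set(c[1]) & set(d[1]):     # roles occurring in both descriptions
        succ[r] = [lcs(c1, d1) for c1 in c[1][r] for d1 in d[1][r]]
    return (names, succ)

# Toy example:  C = A ⊓ ∃r.(B ⊓ E),   D = A ⊓ F ⊓ ∃r.B
C = ({"A"}, {"r": [({"B", "E"}, {})]})
D = ({"A", "F"}, {"r": [({"B"}, {})]})
print(lcs(C, D))   # ({'A'}, {'r': [({'B'}, {})]})  i.e.  A ⊓ ∃r.B
```

Note that the product pairs up all r-successors of the two inputs, so iterating the construction over many descriptions can make the result grow considerably.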
Abstract BibTeX Entry PDF File PS File DOI
Conceptual graphs (CGs) are an expressive and intuitive formalism, which plays an important role in the area of knowledge representation. Due to their expressiveness, most interesting problems for CGs are inherently undecidable. We identify the syntactically defined guarded fragment of CGs, for which both subsumption and validity are decidable in deterministic exponential time.
@techreport{ Baader-Molitor-Tobies-LTCS-98a, address = {Germany}, author = {F. {Baader} and R. {Molitor} and S. {Tobies}}, doi = {https://doi.org/10.25368/2022.86}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Papers.html}, number = {LTCS-98-10}, title = {The Guarded Fragment of Conceptual Graphs}, type = {LTCS-Report}, year = {1998}, }
BibTeX Entry PDF File PS File DOI
@techreport{ BaaderMolitorTobies-LTCS-98-11, address = {Germany}, author = {F. {Baader} and R. {Molitor} and S. {Tobies}}, doi = {https://doi.org/10.25368/2022.87}, institution = {LuFG Theoretical Computer Science, RWTH Aachen}, note = {See http://www-lti.informatik.rwth-aachen.de/Forschung/Reports.html.}, number = {LTCS-98-11}, title = {On the Relation between Description Logics and Conceptual Graphs}, type = {LTCS-Report}, year = {1998}, }
Abstract BibTeX Entry PDF File PS File
The main contribution of this report is a new method for combining decision procedures for the word problem in equational theories. In contrast to previous methods, it is based on transformation rules, and also applies to theories sharing ``constructors.'' In addition, we show that—contrary to a common belief—the Nelson-Oppen combination method cannot be used to combine decision procedures for the word problem, even in the case of equational theories with disjoint signatures.
@techreport{ Baader-Tinelli-UIUCDCS-R-98-2073, author = {F. {Baader} and C. {Tinelli}}, institution = {Department of Computer Science, University of Illinois at Urbana-Champaign}, number = {UIUCDCS-R-98-2073}, title = {Deciding the Word Problem in the Union of Equational Theories}, type = {{UIUCDCS}-Report}, year = {1998}, }
1997
Abstract BibTeX Entry PDF File PS File DOI
This work is a preliminary version of the chapter on unification theory in a volume on automated deduction produced by the participants of the nationwide German research programme on automated deduction (SPP ``Deduktion'').
@techreport{ BaaderSchulz-CIS-97-103, address = {Oettingenstra{\ss}e 67, D-80538 Munich, Germany}, author = {Franz {Baader} and Klaus U. {Schulz}}, doi = {https://doi.org/10.25368/2022.135}, institution = {Center for Language and Information Processing (CIS)}, month = {January}, number = {CIS-Rep-97-103}, title = {Unification Theory -- {A}n Introduction}, type = {Research Report}, year = {1997}, }
Abstract BibTeX Entry PDF File PS File DOI
We show that extending description logics by simple aggregation functions as available in database systems may lead to undecidability of inference problems such as satisfiability and subsumption.
@techreport{ BaSat97, author = {F. {Baader} and U. {Sattler}}, doi = {https://doi.org/10.25368/2022.73}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, note = {An abridged version has appeared in the Proceedings of the International Workshop on Description Logics 97.}, number = {LTCS-97-01}, title = {Description Logics with Aggregates and Concrete Domains}, type = {LTCS-Report}, year = {1997}, }
Abstract BibTeX Entry PDF File PS File DOI
Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing one to substitute certain concept names by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
@techreport{ Baader-Narendran-LTCS-97, author = {F. {Baader} and P. {Narendran}}, doi = {https://doi.org/10.25368/2022.134}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-97-02}, title = {Unification of Concept Terms in Description Logics}, type = {LTCS-Report}, year = {1997}, }
Abstract BibTeX Entry PDF File PS File DOI
Unification modulo the theory of Boolean algebras has been investigated by several authors. Nevertheless, the exact complexity of the decision problem for unification with constants and general unification was not known. In this research note, we show that the decision problem is \(\Pi^p_2\)-complete for unification with constants and PSPACE-complete for general unification. In contrast, the decision problem for elementary unification (where the terms to be unified contain only symbols of the signature of Boolean algebras) is ``only'' NP-complete.
@techreport{ Baader-LTCS-97-03, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.74}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-97-03}, title = {On the Complexity of Boolean Unification}, type = {LTCS-Report}, year = {1997}, }
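A small illustrative instance of unification with constants in Boolean algebras (made up, not taken from the research note):

```latex
% a is a free constant, x a variable; \approx_{\mathrm{BA}} denotes equality
% modulo the axioms of Boolean algebras.
x \wedge a \;\stackrel{?}{\approx}_{\mathrm{BA}}\; a
% \sigma(x) = 1 is a unifier, since 1 \wedge a = a; so is \sigma(x) = x \vee a,
% since (x \vee a) \wedge a = a by absorption.
```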
1996
Abstract BibTeX Entry PDF File PS File DOI
The Nelson-Oppen combination method can be used to combine decision procedures for the validity of quantifier-free formulae in first-order theories with disjoint signatures, provided that the theories to be combined are stably infinite. We show that, even though equational theories need not satisfy this property, Nelson and Oppen's method can be applied, after some minor modifications, to combine decision procedures for the validity of quantifier-free formulae in equational theories. Unfortunately, and contrary to a common belief, the method cannot be used to combine decision procedures for the word problem. We present a method that solves this kind of combination problem. Our method is based on transformation rules and also applies to equational theories that share a finite number of constant symbols.
@techreport{ Baader-Tinelli-LTCS-96, author = {F. {Baader} and C. {Tinelli}}, doi = {https://doi.org/10.25368/2022.69}, institution = {LuFg Theoretical Computer Science, RWTH Aachen}, note = {An abridged version has appeared in Proc.\ CADE'97, Springer LNAI 1249.}, number = {LTCS-96-01}, title = {A New Approach for Combining Decision Procedures for the Word Problem, and Its Connection to the {Nelson-Oppen} Combination Method}, type = {LTCS-Report}, year = {1996}, }
BibTeX Entry PDF File PS File DOI
@techreport{ BaaderSattler-LTCS-96-02, author = {F. {Baader} and U. {Sattler}}, doi = {https://doi.org/10.25368/2022.70}, institution = {LuFg Theoretical Computer Science, RWTH Aachen}, note = {An abridged version has appeared in the Proceedings of the Fifth International Conference on Knowledge Representation and Reasoning, 1996, Cambridge, Massachusetts.}, number = {LTCS-96-02}, title = {Number Restrictions on Complex Roles in Description Logics}, year = {1996}, }
BibTeX Entry PDF File PS File DOI
@techreport{ BaaderSattler-LTCS-96-03, author = {F. {Baader} and U. {Sattler}}, doi = {https://doi.org/10.25368/2022.71}, institution = {LuFg Theoretical Computer Science, RWTH Aachen}, note = {An abridged version has appeared in the Proceedings of the 12th European Conference on Artificial Intelligence, 1996, Budapest, Hungary.}, number = {LTCS-96-03}, title = {Description Logics with Symbolic Number Restrictions}, year = {1996}, }
Abstract BibTeX Entry PDF File PS File DOI
Reduction orderings that are compatible with an equational theory \(E\) and total on (the \(E\)-equivalence classes of) ground terms play an important role in automated deduction. We present a general approach for combining such orderings. To be more precise, we show how \(E_1\)-compatible reduction orderings total on \(\Sigma_1\)-ground terms and \(E_2\)-compatible reduction orderings total on \(\Sigma_2\)-ground terms can be used to construct an \((E_1\cup E_2)\)-compatible reduction ordering total on \((\Sigma_1\cup \Sigma_2)\)-ground terms, provided that the signatures are disjoint and some other (rather weak) restrictions are satisfied. This work was motivated by the observation that it is often easier to construct such orderings for ``small'' signatures and theories separately, rather than directly for their union.
@techreport{ Baader-LTCS-96, author = {F. {Baader}}, doi = {https://doi.org/10.25368/2022.68}, institution = {LuFg Theoretical Computer Science, RWTH Aachen, Germany}, number = {LTCS-96-05}, title = {Combination of Compatible Reduction Orderings that are Total on Ground Terms}, type = {LTCS-Report}, year = {1996}, }
1995
Abstract BibTeX Entry PDF File PS File
An extension of the concept description language ALC used in KL-ONE-like terminological reasoning is presented. The extension includes multi-modal operators that can either stand for the usual role quantifications or for modalities such as belief, time, etc. The modal operators can be used at all levels of the concept terms, and they can be used to modify both concepts and roles. This is an instance of a new kind of combination of modal logics where the modal operators of one logic may operate directly on the operators of the other logic. Different versions of this logic are investigated and various results about decidability and undecidability are presented. The main problem, however, namely decidability of the basic version of the logic, remains open.
@techreport{ MPI-I-95-2-005, address = {Saarbr{\"u}cken}, author = {Franz {Baader} and Hans J{\"u}rgen {Ohlbach}}, institution = {Max-Planck-Institut f{\"u}r Informatik}, number = {MPI-I-95-2-005}, pages = {32}, title = {A Multi-Dimensional Terminological Knowledge Representation Language}, type = {Technical Report}, year = {1995}, }
1994
Abstract BibTeX Entry PDF File PS File
When combining languages for symbolic constraints, one is typically faced with the problem of how to treat ``mixed'' constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a ``free amalgamated product'' as a possible solution to the first problem. Subsequently, we define so-called simply-combinable structures (SC-structures). For SC-structures over disjoint signatures, a canonical amalgamation construction exists, which for the subclass of strong SC-structures yields the free amalgamated product. The combination technique of [Baader&Schulz92,Baader&Schulz95] can be used to combine constraint solvers for (strong) SC-structures over disjoint signatures into a solver for their (free) amalgamated product. In addition to term algebras modulo equational theories, the class of SC-structures contains many solution structures that have been used in constraint logic programming, such as the algebra of rational trees, feature structures, domains consisting of hereditarily finite (wellfounded or non-wellfounded) nested sets and lists.
@techreport{ CIS-94-82, author = {F. {Baader} and K. {Schulz}}, institution = {Universit\"at M\"unchen}, number = {94-82}, title = {On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers}, type = {{CIS}-Report}, year = {1994}, }
Abstract BibTeX Entry PDF File PS File
In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Lately, it has turned out that the prerequisite for this method to apply—namely that unification with so-called linear constant restrictions is decidable in the single theories—is equivalent to requiring decidability of the positive fragment of the first order theory of the equational theories. Thus, the combination method can also be seen as a tool for combining decision procedures for positive theories of free algebras defined by equational theories. The present paper uses this observation as the starting point of a more abstract, algebraic approach to formulating and solving the combination problem. Its contributions are twofold. As a new result, we describe an (optimized) extension of our combination method to the case of constraint solvers that also take relational constraints (such as ordering constraints) into account. The second contribution is a new proof method, which depends on abstract notions and results from universal algebra, as opposed to technical manipulations of terms (such as ordered rewriting, abstraction functions, etc.)
@techreport{ BaaderSchulz-CIS-94-75, address = {Wagm\"ullerstra{\ss}e 23, D-80538 Munich, Germany}, author = {Franz {Baader} and Klaus U. {Schulz}}, institution = {Center for Language and Information Processing (CIS)}, month = {July}, number = {CIS-Rep-94-75}, title = {Combination of Constraint Solving Techniques: {A}n Algebraic Point of View}, type = {Research Report}, year = {1994}, }
Abstract BibTeX Entry
Terminological knowledge representation formalisms can be used to represent objective, time-independent facts about an application domain. Notions like belief, intentions, and time, which are essential for the representation of multi-agent environments, can only be expressed in a very limited way. For such notions, modal logics with possible worlds semantics provide a formally well-founded and well-investigated basis. This paper presents a framework for integrating modal operators into terminological knowledge representation languages. These operators can be used both inside of concept expressions and in front of terminological and assertional axioms. The main restrictions are that all modal operators are interpreted in the basic logic K, and that we consider increasing domains instead of constant domains. We introduce syntax and semantics of the extended language, and show that satisfiability of finite sets of formulas is decidable.
@techreport{ DFKI-RR-94-33, author = {F. {Baader} and A. {Laux}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-94-33}, title = {Terminological Logics with Modal Operators}, type = {{DFKI} Research Report}, year = {1994}, }
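As a purely illustrative LaTeX example of the kind of formulas covered by the report above (the concept, role, and individual names are invented, and the amssymb package is assumed for \Box and \Diamond), a modal operator may occur inside a concept expression, in front of a terminological axiom, or in front of an assertional axiom:

\[
  \Diamond(\exists \mathit{hasGoal}.\Box\,\mathit{Achieved}),
  \qquad
  \Box\,(\mathit{Student} \sqsubseteq \mathit{Person}),
  \qquad
  \Diamond\,\mathit{Rich}(\mathrm{bob}).
\]

Under the semantics sketched in the abstract, the modal operators are read as in the basic modal logic K, over models with increasing domains.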
1993
BibTeX Entry
@techreport{ MPII-I-93-212, author = {F. {Baader} and H.-J. {Ohlbach}}, institution = {Max-Planck-Institut f\"ur Informatik, Saarbr\"ucken}, number = {MPI-I-93-212}, title = {A Multi-Dimensional Terminological Knowledge Representation Language}, type = {{MPII} Report}, year = {1993}, }
BibTeX Entry
@techreport{ DFKI-RR-93-03, author = {F. {Baader} and B. {Hollunder} and B. {Nebel} and H.J. {Profitlich} and E. {Franconi}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-93-03}, title = {An Empirical Analysis of Optimization Techniques for Terminological Representation Systems, or: {M}aking {KRIS} get a move on}, type = {{DFKI} Research Report}, year = {1993}, }
BibTeX Entry
@techreport{ DFKI-RR-93-05, author = {F. {Baader} and K. {Schulz}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-93-05}, title = {Combination Techniques and Decision Problems for Disunification}, type = {{DFKI} Research Report}, year = {1993}, }
BibTeX Entry
@techreport{ DFKI-RR-93-13, author = {F. {Baader} and K. {Schlechta}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-93-13}, title = {A Semantics for Open Normal Defaults via a Modified Preferential Approach}, type = {{DFKI} Research Report}, year = {1993}, }
Abstract BibTeX Entry PDF File PS File
We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are only applied to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable.
@techreport{ DFKI-RR-93-20, author = {F. {Baader} and B. {Hollunder}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-93-20}, title = {Embedding Defaults into Terminological Representation Systems}, type = {{DFKI} Research Report}, year = {1993}, }
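A purely illustrative example of the kind of open default considered in the report above (the classic bird/flies default, written here in LaTeX; not necessarily an example taken from the report itself):

\[
  \frac{\mathit{Bird}(x) \;:\; \mathit{Flies}(x)}{\mathit{Flies}(x)}
\]
% Read: if Bird(x) is believed and Flies(x) is consistent with what is believed,
% then conclude Flies(x).

Under the restricted semantics described in the abstract, the variable x ranges only over the individual names that occur explicitly in the knowledge base, which is what makes the computation of all extensions possible.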
Abstract BibTeX Entry
The concept description formalisms of existing terminological systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. The paper argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of terminological systems that is currently gaining in interest. It shows that including such restrictions into the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualifying number restrictions and of general terminological axioms.
@techreport{ DFKI-RR-93-48, author = {F. {Baader} and M. {Buchheit} and B. {Hollunder}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, note = {A short version has appeared in Proceedings of the {KI}'94, Springer LNCS 861}, number = {{RR}-93-48}, title = {Cardinality Restrictions on Concepts}, type = {{DFKI} Research Report}, year = {1993}, }
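To illustrate the distinction drawn in the abstract above (with invented concept and role names; the concrete syntax is only a sketch), a cardinality restriction on a concept constrains the number of instances of that concept in the whole model, whereas a number restriction only constrains the role fillers of a single individual:

\[
  (\leq\, 1\;\; \mathit{ControlUnit})
  \qquad\text{versus}\qquad
  \mathit{Car} \sqsubseteq (\leq\, 4\;\; \mathit{hasWheel})
\]
% Left: every model contains at most one instance of ControlUnit (a global,
% configuration-style constraint). Right: each car has at most four hasWheel fillers.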
1992
BibTeX Entry
@techreport{ CIS-92-58, author = {F. {Baader} and K. {Schulz}}, institution = {Universit\"at M\"unchen}, number = {92-58}, title = {General {A}- and {AX}-Unification via Optimized Combination Procedures}, type = {{CIS}-Report}, year = {1992}, }
BibTeX Entry
@techreport{ DFKI-RR-92-33, author = {F. {Baader}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-92-33}, title = {Unification Theory}, type = {{DFKI} Research Report}, year = {1992}, }
BibTeX Entry
@techreport{ DFKI-RR-92-36, author = {F. {Baader} and P. {Hanschke}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-92-36}, title = {Extensions of Concept Languages for a Mechanical Engineering Application}, type = {{DFKI} Research Report}, year = {1992}, }
BibTeX Entry
@techreport{ DFKI-RR-92-58, author = {F. {Baader} and B. {Hollunder}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-92-58}, title = {How to Prefer More Specific Defaults in Terminological Default Logic}, type = {{DFKI} Research Report}, year = {1992}, }
1991
BibTeX Entry
@techreport{ DFKI-RR-91-01, author = {F. {Baader} and H.-J. {B\"urckert} and B. {Nebel} and W. {Nutt} and G. {Smolka}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-91-01}, title = {On the Expressivity of Feature Logics with Negation, Functional Uncertainty, and Sort Equations}, type = {{DFKI} Research Report}, year = {1991}, }
BibTeX Entry
@techreport{ DFKI-RR-91-03, author = {B. {Hollunder} and F. {Baader}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-91-03}, title = {Qualifying Number Restrictions in Concept Languages}, type = {{DFKI} Research Report}, year = {1991}, }
Abstract BibTeX Entry PDF File PS File
A drawback of concept languages based on KL-ONE is that all the terminological knowledge has to be defined on an abstract logical level. In many applications, one would like to be able to refer to concrete domains and predicates on these domains when defining concepts. Examples of such concrete domains are the integers, the real numbers, or also non-arithmetic domains, and predicates could be equality, inequality, or more complex predicates. In the present paper we propose a scheme for integrating such concrete domains into concept languages, rather than describing a particular extension by some specific concrete domain. We define a terminological and an assertional language, and consider the important inference problems such as subsumption, instantiation, and consistency. The formal semantics as well as the reasoning algorithms are given on the scheme level. In contrast to existing KL-ONE based systems, these algorithms are not only sound but also complete. They generate subtasks which have to be solved by a special-purpose reasoner for the concrete domain.
@techreport{ DFKI-RR-91-10, author = {F. {Baader} and P. {Hanschke}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-91-10}, title = {A Scheme for Integrating Concrete Domains into Concept Languages}, type = {{DFKI} Research Report}, year = {1991}, }
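A purely illustrative LaTeX example of the kind of concept the scheme in the report above allows (the names Human and age and the concrete predicate are invented, and the syntax is simplified):

\[
  \mathit{Human} \sqcap \exists \mathit{age}.\,{\geq_{18}}
\]
% Human is an abstract concept; age is a feature mapping individuals into a
% concrete domain such as the natural numbers; the subscripted predicate is a
% unary predicate of that concrete domain.

Checking satisfiability of such a concept generates, as described in the abstract, subtasks (here: satisfiability of conjunctions of concrete predicates) that are handed to a special-purpose reasoner for the concrete domain.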
Abstract BibTeX Entry PDF File PS File
Most of the work on the combination of unification algorithms for the union of disjoint equational theories has been restricted to algorithms which compute finite complete sets of unifiers. Thus the developed combination methods usually cannot be used to combine decision procedures, i.e., algorithms which just decide solvability of unification problems without computing unifiers. In this paper we describe a combination algorithm for decision procedures which works for arbitrary equational theories, provided that solvability of so-called unification problems with constant restrictions (a slight generalization of unification problems with constants) is decidable for these theories. As a consequence of this new method, we can, for example, show that general A-unifiability, i.e., solvability of A-unification problems with free function symbols, is decidable. Here A stands for the equational theory of one associative function symbol. Our method can also be used to combine algorithms which compute finite complete sets of unifiers. Manfred Schmidt-Schauss' combination result, until now the most general result in this direction, can be obtained as a consequence of this fact. We also obtain the new result that unification in the union of disjoint equational theories is finitary if general unification, i.e., unification of terms with additional free function symbols, is finitary in the single theories.
@techreport{ DFKI-RR-91-33, author = {F. {Baader} and K. {Schulz}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-91-33}, title = {Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures}, type = {{DFKI} Research Report}, year = {1991}, }
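As a small illustrative LaTeX example of a general A-unification problem in the sense of the abstract above (the symbols are invented): let the binary symbol written as \cdot be associative and let f be a free function symbol; then

\[
  x \cdot f(y) \;\approx^{?}_{A}\; f(a) \cdot z
\]
% A constant restriction could additionally require, e.g., that the constant a
% must not occur in the term substituted for y.

is an A-unification problem with an additional free symbol, and, roughly, the combination method of the report reduces its decidability to A-unification with constant restrictions plus syntactic unification with constant restrictions.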
1990
BibTeX Entry
@techreport{ DFKI-RR-90-01, author = {F. {Baader}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-90-01}, title = {Terminological Cycles in {KL}-{ONE}-based Knowledge Representation Languages}, type = {{DFKI} Research Report}, year = {1990}, }
BibTeX Entry
@techreport{ DFKI-RR-90-05, author = {F. {Baader}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-90-05}, title = {A Formal Definition for Expressive Power of Knowledge Representation Languages}, type = {{DFKI} Research Report}, year = {1990}, }
BibTeX Entry PDF File PS File
@techreport{ DFKI-RR-90-10, author = {F. {Baader} and H.-J. {B\"urckert} and B. {Hollunder} and W. {Nutt} and J. {Siekmann}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-90-10}, title = {Concept Logic}, type = {{DFKI} Research Report}, year = {1990}, }
BibTeX Entry PDF File PS File
@techreport{ DFKI-RR-90-13, author = {F. {Baader}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-90-13}, title = {Augmenting Concept Languages by Transitive Closure of Roles: An Alternative to Terminological Cycles}, type = {{DFKI} Research Report}, year = {1990}, }
BibTeX Entry
@techreport{ DFKI-RR-90-16, author = {F. {Baader} and W. {Nutt}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{RR}-90-16}, title = {Adding Homomorphisms to Commutative/Monoidal Theories, or: {H}ow Algebra Can Help in Equational Unification}, type = {{DFKI} Research Report}, year = {1990}, }
BibTeX Entry
@techreport{ SEKI-SR-90-1, author = {F. {Baader}}, institution = {Universit\"at Kaiserslautern}, number = {{SR}-90-1}, title = {Unification in Commutative Theories, {H}ilbert's Basis Theorem and {G}r\"obner Bases}, type = {{SEKI} Report}, year = {1990}, }
BibTeX Entry
@techreport{ SEKI-SR-90-2, author = {F. {Baader}}, institution = {Universit\"at Kaiserslautern}, number = {{SR}-90-2}, title = {Unification, Weak Unification, Upper Bound, Lower Bound and Generalization Problems}, type = {{SEKI} Report}, year = {1990}, }
BibTeX Entry
@techreport{ DFKI-TM-90-03, author = {F. {Baader} and B. {Hollunder}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, number = {{TM}-90-03}, title = {{KRIS}: Knowledge Representation and Inference System, System Description}, type = {{DFKI} Technical Memo}, year = {1990}, }
Abstract BibTeX Entry PDF File PS File
This paper contains a proposal for a terminological logic. The formalisms for representing knowledge as well as the needed inferences are described.
@techreport{ DFKI-TM-90-04, author = {F. {Baader} and H.-J. {B\"urckert} and J. {Heinsohn} and J. {M\"uller} and B. {Hollunder} and B. {Nebel} and W. {Nutt} and H.-J. {Profitlich}}, institution = {{D}eutsches {F}orschungszentrum f\"ur {K}\"unstliche {I}ntelligenz, Kaiserslautern}, note = {Updated version, taking into account the results of a discussion at the ``International Workshop on Terminological Logics,'' Dagstuhl, May 1991.}, number = {{TM}-90-04}, title = {Terminological Knowledge Representation: A Proposal for a Terminological Logic}, type = {{DFKI} Technical Memo}, year = {1990}, }
1989
BibTeX Entry
@techreport{ IMMD-89-22-8, author = {F. {Baader}}, institution = {Institut f{\"u}r Mathematische Maschinen und Datenverarbeitung, Universit{\"a}t Erlangen}, note = {Dissertation}, number = {8}, title = {Unifikation und {R}eduktionssysteme f{\"u}r {H}albgruppenvariet{\"a}ten}, type = {Arbeitsbericht}, volume = {22}, year = {1989}, }
1985
BibTeX Entry
@techreport{ IMMD-85-18-8, author = {F. {Baader}}, institution = {Institut f{\"u}r Mathematische Maschinen und Datenverarbeitung, Universit{\"a}t Erlangen}, number = {8}, title = {Die {S}-{V}ariet{\"a}t {DS} und einige {U}ntervariet{\"a}ten}, type = {Arbeitsbericht}, volume = {18}, year = {1985}, }
Generated 19 December 2024, 10:12:27.