# Research projects

## Table of contents

Details of research projects assigned to a cluster of excellence, a research training group, or a research unit can be found on the corresponding linked websites below. Independent research projects are described below in the two lists of current and completed projects.

## Clusters of Excellence, Collaborative Research Centres, Research Training Groups, and Research Units

## Current Research Projects

Description Logics (DLs) are a family of logic-based knowledge representation languages, which are used to formalize ontologies for application domains such as biology and medicine. As the size of DL-based ontologies grows, tools for improving their quality become more important. DL reasoners can detect inconsistencies and infer other implicit consequences. However, for the developer of an ontology, it is often hard to understand why a consequence computed by the reasoner follows, and how to repair the ontology if this consequence is not intended.

The classical algorithmic approach for repairing DL-based ontologies is to compute (one or more) maximal subsets of the ontology that no longer have the unintended consequence. In previous work we have introduced a more "gentle" approach for repairing ontologies that allows us to retain more consequences than the classical approach: instead of completely removing axioms, our gentle repair replaces them by "weaker" axioms, i.e., axioms that have fewer consequences. In addition to introducing the general framework, we have defined useful properties of weakening relations on axioms and investigated two particular such relations for the DL EL.

The primary purpose of this project is to investigate this gentle repair approach in more detail. On the one hand, we will consider variants of the general framework, e.g., w.r.t. which and how many axioms are chosen to be weakened. On the other hand, we will design different weakening relations, both for light-weight DLs such as EL and for expressive DLs such as ALC, and investigate their properties. In addition, we will clarify the relationship to similar approaches in other areas such as belief revision and inconsistency-tolerant reasoning, and extend our approach to the setting of privacy-preserving ontology publishing.

Regarding belief revision, it should be noted that our gentle repair approach is related to what is called pseudo-contraction in that area. However, the emphasis in belief revision is more on abstract approaches for generating contractions that satisfy certain postulates than on defining and investigating concrete repair approaches. We will investigate which of these postulates are satisfied by our approaches. Inconsistency-tolerant reasoning is of interest since it is often based on considering what follows from some, all, or the intersection of all repairs. We will investigate how our new notion of repair influences these inconsistency-tolerant reasoning approaches.

In privacy-preserving ontology publishing, removing from the ontology the consequences that are to be hidden is not sufficient, since an attacker might have additional background information. In previous work, we have considered this problem in a very restricted setting, where both the information to be published and the background knowledge are represented using EL concepts. We intend to generalize this to more general forms of knowledge bases and more expressive DLs.
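To illustrate the classical approach described above, the following deliberately simplified Python sketch replaces DL axioms by plain propositional implications (a toy analogue for illustration only, not the project's actual algorithms) and computes the maximal subsets of an "ontology" that no longer entail an unwanted consequence:

```python
from itertools import combinations

def entails(axioms, goal):
    """Check whether a set of implications (a, b), read as "a implies b",
    entails the implication `goal`, by computing the transitive closure."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return goal in closure

def classical_repairs(axioms, unwanted):
    """All maximal subsets of `axioms` that no longer entail `unwanted`."""
    axioms = sorted(axioms)
    repairs = []
    for k in range(len(axioms), -1, -1):  # try the largest subsets first
        for subset in combinations(axioms, k):
            s = set(subset)
            if not entails(s, unwanted) and not any(s < r for r in repairs):
                repairs.append(s)
    return repairs
```

For the axioms {A→B, B→C} and the unwanted consequence A→C, this yields the two classical repairs {A→B} and {B→C}, each of which discards an entire axiom. The gentle approach of the project would instead weaken one of the axioms, so that fewer harmless consequences are lost.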

### Publications & Technical Reports

## 2021

**Computing Optimal Repairs of Quantified ABoxes w.r.t. Static \(\mathcal{EL}\) TBoxes (Extended Version)**. LTCS-Report 21-01, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2021.


The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the light-weight DL \(\mathcal{EL}\). In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an \(\mathcal{EL}\) TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.

    @techreport{BaKoKrNu-LTCS-21-01,
      address     = {Dresden, Germany},
      author      = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}},
      institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      number      = {21-01},
      title       = {Computing Optimal Repairs of Quantified ABoxes w.r.t. Static {$\mathcal{EL}$} TBoxes (Extended Version)},
      type        = {LTCS-Report},
      year        = {2021},
    }

**Safety of Quantified ABoxes w.r.t. Singleton \(\mathcal{E\!L}\) Policies**. In *Proceedings of the 36th ACM/SIGAPP Symposium on Applied Computing (SAC 2021)*, 2021. To appear.


In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.

    @inproceedings{BaKrNuPe-SAC2021,
      author    = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}},
      booktitle = {Proceedings of the 36th ACM/SIGAPP Symposium on Applied Computing (SAC 2021)},
      doi       = {https://doi.org/10.1145/3412841.3441961},
      note      = {To appear.},
      title     = {{Safety of Quantified ABoxes w.r.t.\ Singleton $\mathcal{E\!L}$ Policies}},
      year      = {2021},
    }

## 2020

**Computing Compliant Anonymisations of Quantified ABoxes w.r.t. \(\mathcal{E\!L}\) Policies (Extended Version)**. LTCS-Report 20-08, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2020.


We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterization of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.

    @techreport{BaKrNuPe-LTCS-20-08,
      address     = {Dresden, Germany},
      author      = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}},
      institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      number      = {20-08},
      title       = {Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies (Extended Version)},
      type        = {LTCS-Report},
      year        = {2020},
    }

**Computing Safe Anonymisations of Quantified ABoxes w.r.t. \(\mathcal{E\!L}\) Policies (Extended Version)**. LTCS-Report 20-09, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2020.


In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.

    @techreport{BaKrNuPe-LTCS-20-09,
      address     = {Dresden, Germany},
      author      = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}},
      institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      number      = {20-09},
      title       = {Computing Safe Anonymisations of Quantified ABoxes w.r.t.\ $\mathcal{E\!L}$ Policies (Extended Version)},
      type        = {LTCS-Report},
      year        = {2020},
    }

**Computing Compliant Anonymisations of Quantified ABoxes w.r.t. \(\mathcal{E\!L}\) Policies**. In Jeff Z. Pan, Valentina A. M. Tamma, Claudia d'Amato, Krzysztof Janowicz, Bo Fu, Axel Polleres, Oshani Seneviratne, and Lalana Kagal, editors, *Proceedings of the 19th International Semantic Web Conference (ISWC 2020), Part I, Athens, Greece, November 2-6, 2020*, volume 12506 of *Lecture Notes in Computer Science*, pages 3–20. Springer, 2020.


We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterization of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.

    @inproceedings{BaKrNuPe-ISWC2020,
      author    = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}},
      booktitle = {Proceedings of the 19th International Semantic Web Conference (ISWC 2020), Part {I}, Athens, Greece, November 2-6, 2020},
      doi       = {https://doi.org/10.1007/978-3-030-62419-4_1},
      editor    = {Jeff Z. {Pan} and Valentina A. M. {Tamma} and Claudia {d'Amato} and Krzysztof {Janowicz} and Bo {Fu} and Axel {Polleres} and Oshani {Seneviratne} and Lalana {Kagal}},
      pages     = {3--20},
      publisher = {Springer},
      series    = {Lecture Notes in Computer Science},
      title     = {{Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies}},
      volume    = {12506},
      year      = {2020},
    }

## 2019

**Privacy-Preserving Ontology Publishing for \(\mathcal{EL}\) Instance Stores (Extended Version)**. LTCS-Report 19-01, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2019.


We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.

    @techreport{BaKrNu-LTCS-19-01,
      address     = {Dresden, Germany},
      author      = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}},
      institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      note        = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNu-LTCS-19-01}},
      number      = {19-01},
      title       = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores (Extended Version)}},
      type        = {LTCS-Report},
      year        = {2019},
    }
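In the instance-store setting of the report above, concepts that are plain conjunctions of concept names can be modelled as sets of names, and subsumption becomes set containment. The following Python sketch (a toy restriction for illustration only; the papers handle full \(\mathcal{EL}\) concepts with existential restrictions) shows the resulting notions of compliance and optimal compliant generalization:

```python
from itertools import combinations

def subsumed_by(c, d):
    """For concepts that are conjunctions of concept names (sets of
    names), C is subsumed by D iff every conjunct of D occurs in C."""
    return d <= c

def compliant(c, policy):
    """C complies with the policy iff no policy concept subsumes C."""
    return not any(subsumed_by(c, p) for p in policy)

def optimal_compliant_generalizations(c, policy):
    """The largest subsets of C's conjuncts that comply with the policy:
    generalize C as little as possible while hiding every policy concept."""
    for k in range(len(c), -1, -1):  # try the most specific candidates first
        found = [set(s) for s in combinations(sorted(c), k)
                 if compliant(set(s), policy)]
        if found:
            return found
    return []  # no compliant generalization (the policy hides everything)
```

For example, for the (hypothetical) concept {Patient, Male, HasCancer} and the policy {HasCancer}, the unique optimal compliant generalization is {Patient, Male}: dropping only the forbidden conjunct.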

**Privacy-Preserving Ontology Publishing for \(\mathcal{EL}\) Instance Stores**. In Francesco Calimeri, Nicola Leone, and Marco Manna, editors, *16th European Conference on Logics in Artificial Intelligence, JELIA 2019, Rende, Italy, May 7-11, 2019, Proceedings*, volume 11468 of *Lecture Notes in Computer Science*, pages 323–338. Springer, 2019.


We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.

    @inproceedings{BaKrNu-JELIA19,
      author    = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}},
      booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings},
      doi       = {https://dx.doi.org/10.1007/978-3-030-19570-0_21},
      editor    = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}},
      pages     = {323--338},
      publisher = {Springer},
      series    = {Lecture Notes in Computer Science},
      title     = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores}},
      volume    = {11468},
      year      = {2019},
    }

**Mixing Description Logics in Privacy-Preserving Ontology Publishing**. In Christoph Benzmüller and Heiner Stuckenschmidt, editors, *KI 2019: Advances in Artificial Intelligence - 42nd German Conference on AI, Kassel, Germany, September 23 - 26, 2019, Proceedings*, volume 11793 of *Lecture Notes in Computer Science*, pages 87–100. Springer, 2019.


In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an \(\mathcal{EL}\) instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL \(\mathcal{EL}\). We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an \(\mathcal{FL}_0\) or an \(\mathcal{FLE}\) concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \(\mathcal{FL}_0\), it turns out to be actually lower (polynomial) for the more expressive DL \(\mathcal{FLE}\).

    @inproceedings{BaNu-KI19,
      author    = {Franz {Baader} and Adrian {Nuradiansyah}},
      booktitle = {KI 2019: Advances in Artificial Intelligence - 42nd German Conference on AI, Kassel, Germany, September 23 - 26, 2019, Proceedings},
      doi       = {https://dx.doi.org/10.1007/978-3-030-30179-8_7},
      editor    = {Christoph {Benzm{\"u}ller} and Heiner {Stuckenschmidt}},
      pages     = {87--100},
      publisher = {Springer},
      series    = {Lecture Notes in Computer Science},
      title     = {Mixing Description Logics in Privacy-Preserving Ontology Publishing},
      volume    = {11793},
      year      = {2019},
    }

## 2018

**Making Repairs in Description Logics More Gentle (Extended Abstract)**. In Magdalena Ortiz and Thomas Schneider, editors, *Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018*, volume 2211 of *CEUR Workshop Proceedings*. CEUR-WS.org, 2018.


    @inproceedings{BaKrNuPe-DL2018,
      author    = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}},
      booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018},
      editor    = {Magdalena {Ortiz} and Thomas {Schneider}},
      publisher = {CEUR-WS.org},
      series    = {{CEUR} Workshop Proceedings},
      title     = {{Making Repairs in Description Logics More Gentle (Extended Abstract)}},
      volume    = {2211},
      year      = {2018},
    }

**Making Repairs in Description Logics More Gentle**. In Frank Wolter, Michael Thielscher, and Francesca Toni, editors, *Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, KR 2018, Tempe, Arizona, 30 October - 2 November 2018*, pages 319–328. AAAI Press, USA, 2018.


The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle notion of repair in which axioms are not deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern-1.618mu L}\).

    @inproceedings{BaKrNuPe-KR2018,
      address   = {USA},
      author    = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}},
      booktitle = {Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, {KR} 2018, Tempe, Arizona, 30 October - 2 November 2018},
      editor    = {Frank {Wolter} and Michael {Thielscher} and Francesca {Toni}},
      pages     = {319--328},
      publisher = {{AAAI} Press},
      title     = {{Making Repairs in Description Logics More Gentle}},
      year      = {2018},
    }

**Repairing Description Logic Ontologies by Weakening Axioms**. LTCS-Report 18-01, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2018.


The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern-1.618mu L}\).

    @techreport{BaKrNuPe-LTCS-18-01,
      address     = {Dresden, Germany},
      author      = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}},
      institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      note        = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNuPe-LTCS-18-01}},
      number      = {18-01},
      title       = {{Repairing Description Logic Ontologies by Weakening Axioms}},
      type        = {LTCS-Report},
      year        = {2018},
    }

**Towards Privacy-Preserving Ontology Publishing**. In Magdalena Ortiz and Thomas Schneider, editors, *Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018*, *CEUR Workshop Proceedings*. CEUR-WS.org, 2018.


We make a first step towards adapting the approach of Cuenca Grau and Kostylev for privacy-preserving publishing of linked data to Description Logic ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using EL concepts. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given EL concept can be computed.

    @inproceedings{BaNu-DL2018,
      author    = {Franz {Baader} and Adrian {Nuradiansyah}},
      booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018},
      editor    = {Magdalena {Ortiz} and Thomas {Schneider}},
      publisher = {CEUR-WS.org},
      series    = {{CEUR} Workshop Proceedings},
      title     = {{Towards Privacy-Preserving Ontology Publishing}},
      year      = {2018},
    }

Generated 17 April 2021, 22:01:00.

Description Logics (DLs) are a well-investigated family of logic-based knowledge representation languages, which are, e.g., frequently used to formalize ontologies for application domains such as biology and medicine. To define the important notions of such an application domain as formal concepts, DLs state necessary and sufficient conditions for an individual to belong to a concept. Once the relevant concepts of an application domain are formalized this way, they can be used in queries in order to retrieve new information from data.

Since traditional DLs are based on classical first-order logic, their semantics is strict in the sense that all the stated properties need to be satisfied for an individual to belong to a concept, and the same is true for answers to queries. In applications where exact definitions are hard to come by, it would be useful to relax this strict requirement and allow for approximate definitions of concepts, where most, but not all, of the stated properties are required to hold. Similarly, if a query has no exact answer, approximate answers that satisfy most of the features the query is looking for could be useful.

In order to allow for approximate definitions of concepts, we have introduced the notion of a graded membership function, which, instead of a Boolean membership value 0 or 1, yields a membership degree from the interval [0,1]. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. A different approach, which is based on the use of similarity measures on concepts, was used to relax instance queries (i.e., queries that consist of a single concept): given a query concept C, we are looking for answers to query concepts D whose similarity to C is higher than a certain threshold. While these two approaches were developed independently by the two applicants together with doctoral students, it has turned out that there are close connections: similarity measures can be used to define graded membership functions, and threshold concepts w.r.t. these functions provide a more natural semantics for relaxed instance queries.

The goals of this project are to:

1. explore the connection between the two approaches in detail, and develop and investigate a combined approach;
2. extend the applicability of this combined approach by considering (a) more expressive DLs than the DL EL, for which the approaches were initially developed; (b) more general terminological formalisms; (c) more general queries such as conjunctive queries; and (d) more general inference problems such as determining appropriate threshold values rather than requiring the user to provide them;
3. conduct an empirical study to (a) evaluate the approach on a prototypical application, and (b) generate appropriate graded membership functions or similarity measures using automated parameter tuning approaches.
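To give the flavour of graded membership and threshold concepts, here is a deliberately simplified Python sketch (an illustration under strong assumptions, not the functions studied in the project): it grades membership by the fraction of a concept's required atomic properties that an individual satisfies, and then evaluates a threshold concept against that degree.

```python
def membership_degree(individual_props, concept_props):
    """Toy graded membership function: the fraction of the concept's
    required properties that the individual satisfies, a value in [0, 1]."""
    if not concept_props:
        return 1.0  # the empty concept imposes no requirements
    return len(individual_props & concept_props) / len(concept_props)

def threshold_instance(individual_props, concept_props, threshold):
    """Threshold concept C >= t: the individual belongs iff its degree
    of membership in C is at least t."""
    return membership_degree(individual_props, concept_props) >= threshold

# A (hypothetical) patient satisfying 3 of a concept's 4 properties:
patient = {"Fever", "Cough", "Fatigue"}
flu_like = {"Fever", "Cough", "Headache", "Fatigue"}
degree = membership_degree(patient, flu_like)          # 0.75
strict = threshold_instance(patient, flu_like, 1.0)    # False: classical, strict semantics
relaxed = threshold_instance(patient, flu_like, 0.7)   # True: approximate membership
```

Under the strict first-order semantics the patient is not an instance of the concept at all; with the threshold 0.7 it is, which is exactly the kind of relaxation the project investigates (with much more refined membership functions and similarity measures).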

### Publications & Technical Reports

## 2020

**Answering Regular Path Queries Under Approximate Semantics in Lightweight Description Logics**. LTCS-Report 20-05, Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2020.


Classical regular path queries (RPQs) can be too restrictive for some applications, and answering such queries under an approximate semantics that relaxes the query is then desirable. While algorithms are available for answering regular path queries over graph databases under approximate semantics, such algorithms are scarce for the ontology-mediated setting. In this paper we extend, in two ways, an approach for answering RPQs over graph databases that uses weighted transducers to approximate paths from the query. The first extension is to answering approximate conjunctive 2-way regular path queries (C2RPQs) over graph databases, and the second is to answering such queries over ELH and DL-LiteR ontologies. We provide results on the computational complexity of the underlying reasoning problems and devise approximate query answering algorithms.

    @techreport{FeAt-LTCS-20-05,
      address     = {Dresden, Germany},
      author      = {Oliver {Fern{\'a}ndez Gil} and Anni-Yasmin {Turhan}},
      institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden},
      note        = {See http://lat.inf.tu-dresden.de/research/reports.html},
      number      = {20-05},
      title       = {Answering Regular Path Queries Under Approximate Semantics in Lightweight Description Logics},
      type        = {LTCS-Report},
      year        = {2020},
    }

## 2019

**Matching in the Description Logic \(\mathcal{FL}_0\) with respect to General TBoxes (Extended Abstract)**. In Mantas Simkus and Grant Weddell, editors, *Proceedings of the 32nd International Workshop on Description Logics (DL'19)*, volume 2373 of *CEUR Workshop Proceedings*. CEUR-WS, 2019.


Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem since already subsumption w.r.t. general TBoxes is ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.

    @inproceedings{BaFeMa-DL19,
      author    = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}},
      booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)},
      editor    = {Mantas {Simkus} and Grant {Weddell}},
      publisher = {CEUR-WS},
      series    = {CEUR Workshop Proceedings},
      title     = {Matching in the Description Logic {FL0} with respect to General TBoxes (Extended abstract)},
      volume    = {2373},
      year      = {2019},
    }

**Quantitative Variants of Language Equations and their Applications to Description Logics**. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2019.


Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. Together with the special case of matching, they were first investigated in detail for the DL FL0, where these problems can be reduced to solving certain language equations. In this thesis, we extend this service in two directions. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that “almost” unify, in order to account for potential small modelling errors. The meaning of “almost” is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for particular distance measures. Furthermore, we make a first step towards integrating background knowledge, formulated in so-called TBoxes, by investigating the special case of matching in the presence of TBoxes of different forms. We acquire a tight complexity bound for the general case, while we prove that the problem becomes easier in a restricted setting. To achieve these bounds, we take advantage of an equivalence characterization of FL0 concepts that is based on formal languages. In addition, we incorporate TBoxes in computing concept distances. Even though our results on the approximate setting cannot deal with TBoxes yet, we prepare the framework that future research can build on. Before we journey to the technical details of the above investigations, we showcase our program in the simpler setting of the equational theory ACUI, where we are able to also combine the two extensions. 
In the course of studying the above problems, we make heavy use of automata theory, where we also derive novel results that could be of independent interest.

    @thesis{Marantidis-Diss-2019,
      address = {Dresden, Germany},
      author  = {Pavlos {Marantidis}},
      school  = {Technische Universit\"{a}t Dresden},
      title   = {Quantitative Variants of Language Equations and their Applications to Description Logics},
      type    = {Doctoral Thesis},
      year    = {2019},
    }

## 2018

**Matching in the Description Logic** \(\mathcal{FL}_0\) **with respect to General TBoxes**. In Gilles Barthe, Geoff Sutcliffe, and Margus Veanes, editors, *LPAR-22. 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning*, volume 57 of *EPiC Series in Computing*, pages 76–94. EasyChair, 2018.

Abstract BibTeX Entry PDF File DOI

Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem, since subsumption w.r.t. general TBoxes is already ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.

@inproceedings{ BaFeMa18-LPAR, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}}, booktitle = {LPAR-22. 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning}, doi = {https://dx.doi.org/10.29007/q74p}, editor = {Gilles {Barthe} and Geoff {Sutcliffe} and Margus {Veanes}}, pages = {76--94}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Matching in the Description Logic $\mathcal{FL}_0$ with respect to General TBoxes}, volume = {57}, year = {2018}, }

Generated 17 April 2021, 22:01:02.

## Completed Research Projects

More and more information on individuals (e.g., persons, events, biological objects) is available electronically in a structured or semi-structured form. Selecting individuals satisfying certain constraints based on such data manually is a complex, error-prone, and time- and personnel-consuming effort. For this reason, tools that can automatically or semi-automatically answer questions based on the available data need to be developed. While simple questions can be expressed and answered directly using natural-language keywords, complex questions that refer to type and relational information increase the precision of the retrieved results, and thus reduce the effort of subsequent manual verification. One example for this situation is the setting where electronic patient records are used to find patients satisfying non-trivial combinations of certain properties, such as eligibility criteria for clinical trials. Another example, which will also be considered as a use case in this project, is the setting where a student asks the examination office questions about study and examination regulations. In both cases, the original question is formulated in natural language. In the GoAsq project, we will investigate, compare, and finally combine two different approaches for answering questions formulated in natural language over textual, semi-structured, and structured data. One approach uses the expertise in text-based question answering of the French partners to directly answer natural language questions using natural language processing and information retrieval techniques. The other tries to translate the natural language questions into formal, database-like queries and then answer these formal queries w.r.t. a domain-dependent ontology using database techniques. The automatic translation is required since it would be quite hard for the people asking the questions (medical doctors, students) to formulate them as formal queries.
The ontology makes it possible to overcome the possible semantic mismatch between the person producing the source data (e.g., the GPs writing the clinical notes) and the person formulating the question (e.g., the researcher formulating the trial criteria). GoAsq can thus leverage recent advances obtained in the ontology community on accessing data through ontologies, an approach called ontology-based query answering (OBQA). More precisely, in Task 1 of the project we investigate the two use cases mentioned above (eligibility criteria; study regulations). In Task 2 we will introduce and analyze extensions to existing formal query languages that are required by these use cases. Task 3 will develop techniques for extracting formal queries from textual queries, and Task 4 will evaluate the approach obtained this way, compare it with approaches for text-based question answering, and develop a hybrid approach that combines the advantages of both.

### Publications & Technical Reports

## 2020

**Metric Temporal Description Logics with Interval-Rigid Names**. *ACM Transactions on Computational Logic*, 21(4):30:1–30:46, 2020.

Abstract BibTeX Entry PDF File DOI

In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining description logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there is a growing interest in extending this work to the quantitative case. In this article, we complement existing results on the combination of DLs with metric temporal logics by introducing interval-rigid concept and role names. Elements included in an interval-rigid concept or role name are required to stay in it for some specified amount of time. We investigate several combinations of (metric) temporal logics with ALC by either allowing temporal operators only on the level of axioms or also applying them to concepts. In contrast to most existing work on the topic, we consider a timeline based on the integers and also allow assertional axioms. We show that the worst-case complexity does not increase beyond the previously known bound of 2-ExpSpace and investigate in detail how this complexity can be reduced by restricting the temporal logic and the occurrences of interval-rigid names.

@article{ BaBoKoOzTh-TOCL20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://dx.doi.org/10.1145/3399443}, journal = {ACM Transactions on Computational Logic}, month = {August}, number = {4}, pages = {30:1--30:46}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {21}, year = {2020}, }

**Closed-World Semantics for Query Answering in Temporal Description Logics**. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2020.

Abstract BibTeX Entry PDF File Publication

Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumption, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. Many real-world applications, like processing electronic health records (EHRs), also contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. Our contribution consists of three main parts: Firstly, we introduce a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH⊥, which is based on the minimal universal model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity. Secondly, we introduce a new temporal variant of ELH⊥ that features a convexity operator. We extend this minimal-world semantics for answering metric temporal conjunctive queries with negation over the logic and obtain similar rewritability and complexity results. Thirdly, apart from the theoretical results, we evaluate minimal-world semantics in practice by selecting patients, based on their EHRs, that match given criteria.

@thesis{ Forkel-Diss-20, address = {Dresden, Germany}, author = {Walter {Forkel}}, school = {Technische Universit\"{a}t Dresden}, title = {Closed-World Semantics for Query Answering in Temporal Description Logics}, type = {Doctoral Thesis}, year = {2020}, }

## 2019

**Temporalized Ontology-Mediated Query Answering under Minimal-World Semantics**. In Franz Baader, Brigitte Grau, and Yue Ma, editors, *2nd International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'19)*, 2019.

Abstract BibTeX Entry PDF File

Selecting patients for clinical trials is very labor-intensive. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. In this paper we summarize our recent advances towards such a system: First, we present the challenges involved when representing electronic health records and eligibility criteria for clinical trials in a formal language. Second, we introduce temporal conjunctive queries with negation as a formal language suitable to represent clinical trials. Third, we describe our methodology for automatic translation of clinical trial eligibility criteria from natural language into our query language. The evaluation of our prototypical implementation shows promising results. Finally, we talk about the parts we are currently working on and the challenges involved.

@inproceedings{ BBFKXZHQA19, address = {Marina del Rey, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova} and Chao {Xu} and Beihai {Zhou}}, booktitle = {2nd International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'19)}, editor = {Franz {Baader} and Brigitte {Grau} and Yue {Ma}}, title = {Temporalized Ontology-Mediated Query Answering under Minimal-World Semantics}, year = {2019}, }

**Ontology-Mediated Query Answering over Log-linear Probabilistic Data**. In Pascal Van Hentenryck and Zhi-Hua Zhou, editors, *Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI'19)*, pages 2711–2718. Honolulu, USA, AAAI Press, 2019.

Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI

Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.

@inproceedings{ BoCL-AAAI19, address = {Honolulu, USA}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI'19)}, doi = {https://doi.org/10.1609/aaai.v33i01.33012711}, editor = {Pascal Van {Hentenryck} and Zhi-Hua {Zhou}}, pages = {2711--2718}, publisher = {AAAI Press}, title = {Ontology-Mediated Query Answering over Log-linear Probabilistic Data}, year = {2019}, }

**Ontology-Mediated Query Answering over Log-linear Probabilistic Data (Abstract)**. In Mantas Simkus and Grant Weddell, editors, *Proceedings of the 32nd International Workshop on Description Logics (DL'19)*, volume 2373 of *CEUR Workshop Proceedings*. Oslo, Norway, CEUR-WS, 2019.

Abstract BibTeX Entry PDF File

Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.

@inproceedings{ BoCL-DL19, address = {Oslo, Norway}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Ontology-Mediated Query Answering over Log-linear Probabilistic Data (Abstract)}, volume = {2373}, year = {2019}, }

**Closed-World Semantics for Conjunctive Queries with Negation over** \(\mathcal{ELH}_\bot\) **Ontologies**. In Sarit Kraus, editor, *Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19)*, pages 6131–6135. Macao, China, IJCAI, 2019. Invited contribution to Sister Conference Best Paper Track.

BibTeX Entry PDF File DOI

@inproceedings{ BoFo-IJCAI19, address = {Macao, China}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19)}, doi = {https://doi.org/10.24963/ijcai.2019/849}, editor = {Sarit {Kraus}}, note = {Invited contribution to Sister Conference Best Paper Track.}, pages = {6131--6135}, publisher = {IJCAI}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies}, year = {2019}, }

**Closed-World Semantics for Conjunctive Queries with Negation over** \(\mathcal{ELH}_\bot\) **Ontologies**. In Francesco Calimeri, Nicola Leone, and Marco Manna, editors, *Proc. of the 16th European Conf. on Logics in Artificial Intelligence (JELIA'19)*, volume 11468 of *Lecture Notes in Artificial Intelligence*, pages 371–386. Rende, Italy, Springer, 2019.

Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI ©Springer-Verlag

Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumption, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_\bot\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity.

@inproceedings{ BoFo-JELIA19, address = {Rende, Italy}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 16th European Conf.\ on Logics in Artificial Intelligence (JELIA'19)}, doi = {https://doi.org/10.1007/978-3-030-19570-0_24}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {371--386}, publisher = {Springer}, series = {Lecture Notes in Artificial Intelligence}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies}, volume = {11468}, year = {2019}, }

**Closed-World Semantics for Conjunctive Queries with Negation over** \(\mathcal{ELH}_\bot\) **Ontologies (Extended Abstract)**. In Mantas Simkus and Grant Weddell, editors, *Proceedings of the 32nd International Workshop on Description Logics (DL'19)*, volume 2373 of *CEUR Workshop Proceedings*. Oslo, Norway, CEUR-WS, 2019.

Abstract BibTeX Entry PDF File

Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumption, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_\bot\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity.

@inproceedings{ BoFo-DL19, address = {Oslo, Norway}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies (Extended Abstract)}, volume = {2373}, year = {2019}, }

**Finding New Diamonds: Temporal Minimal-World Query Answering over Sparse ABoxes**. In Paul Fodor, Marco Montali, Diego Calvanese, and Dumitru Roman, editors, *Proc. of the 3rd International Joint Conference on Rules and Reasoning (RuleML+RR'19)*, volume 11784 of *Lecture Notes in Computer Science*, pages 3–18. Bolzano, Italy, Springer, 2019.

Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI ©Springer-Verlag

Lightweight temporal ontology languages have become a very active field of research in recent years. Many real-world applications, like processing electronic health records (EHRs), inherently contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. In this paper, we introduce a temporal extension of the tractable language \(\mathcal{ELH}_\bot\), which features a new class of convex diamond operators that can be used to bridge temporal gaps. We develop a completion algorithm for our logic, which shows that entailment remains tractable. Based on this, we develop a minimal-world semantics for answering metric temporal conjunctive queries with negation. We show that query answering is combined first-order rewritable, and hence in polynomial time in data complexity.

@inproceedings{ BoFoKo-RRML, address = {Bolzano, Italy}, author = {Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova}}, booktitle = {Proc.\ of the 3rd International Joint Conference on Rules and Reasoning (RuleML+RR'19)}, doi = {https://dx.doi.org/10.1007/978-3-030-31095-0_1}, editor = {Paul {Fodor} and Marco {Montali} and Diego {Calvanese} and Dumitru {Roman}}, pages = {3--18}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Finding New Diamonds: {T}emporal Minimal-World Query Answering over Sparse {AB}oxes}, volume = {11784}, year = {2019}, }

**Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries**. In Martin Boeker, Ludger Jansen, Frank Loebe, and Stefan Schulz, editors, *Proc. of the 9th Workshop on Ontologies and Data in Life Sciences (ODLS'19), part of The Joint Ontology Workshops (JOWO'19)*, volume 2518 of *CEUR Workshop Proceedings*, 2019.

Abstract BibTeX Entry PDF File

Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.

@inproceedings{ XFB-ODLS15, address = {Graz, Austria}, author = {Chao {Xu} and Walter {Forkel} and Stefan {Borgwardt} and Franz {Baader} and Beihai {Zhou}}, booktitle = {Proc.\ of the 9th Workshop on Ontologies and Data in Life Sciences (ODLS'19), part of The Joint Ontology Workshops (JOWO'19)}, editor = {Martin {Boeker} and Ludger {Jansen} and Frank {Loebe} and Stefan {Schulz}}, series = {CEUR Workshop Proceedings}, title = {Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries}, volume = {2518}, year = {2019}, }

## 2018

**Patient Selection for Clinical Trials Using Temporalized Ontology-Mediated Query Answering**. In *Proc. of the 1st Int. Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'18), Companion of The Web Conference 2018*, pages 1069–1074. ACM, 2018.

Abstract BibTeX Entry PDF File DOI

Finding suitable candidates for clinical trials is a labor-intensive task that requires expert medical knowledge. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. We investigate the issues involved in designing formal query languages for selecting patients that are eligible for a given clinical trial, leveraging existing ontology-based query answering techniques. In particular, we propose to use a temporal extension of existing approaches for accessing data through ontologies written in Description Logics. We sketch how such a query answering system could work and show that eligibility criteria and patient data can be adequately modeled in our formalism.

@inproceedings{ BaBF-HQA18, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 1st Int.\ Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'18), Companion of The Web Conference 2018}, doi = {https://dx.doi.org/10.1145/3184558.3191538}, pages = {1069--1074}, publisher = {ACM}, title = {Patient Selection for Clinical Trials Using Temporalized Ontology-Mediated Query Answering}, year = {2018}, }

**Recent Advances in Querying Probabilistic Knowledge Bases**. In Jérôme Lang, editor, *Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI'18)*, pages 5420–5426. Stockholm, Sweden, IJCAI, 2018. Survey paper.

Abstract BibTeX Entry PDF File

We give a survey on recent advances at the forefront of research on probabilistic knowledge bases for representing and querying large-scale automatically extracted data. We concentrate especially on increasing the semantic expressivity of formalisms for representing and querying probabilistic knowledge (i) by giving up the closed-world assumption, (ii) by allowing for commonsense knowledge (and in parallel giving up the tuple-independence assumption), and (iii) by giving up the closed-domain assumption, while preserving some computational properties of query answering in such formalisms.

@inproceedings{ BoCL-IJCAI18, address = {Stockholm, Sweden}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI'18)}, editor = {J{\'e}r{\^o}me {Lang}}, note = {Survey paper.}, pages = {5420--5426}, publisher = {IJCAI}, title = {Recent Advances in Querying Probabilistic Knowledge Bases}, year = {2018}, }

## 2017

**Metric Temporal Description Logics with Interval-Rigid Names**. In Clare Dixon and Marcelo Finger, editors, *Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)*, volume 10483 of *Lecture Notes in Computer Science*, pages 60–76, 2017.

Abstract BibTeX Entry PDF File ©Springer-Verlag

In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.

@inproceedings{ BaBoKoOzTh-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {60--76}, series = {Lecture Notes in Computer Science}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {10483}, year = {2017}, }

**Query Rewriting for DL-Lite with** \(n\)**-ary Concrete Domains**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, pages 786–792, 2017.

Abstract BibTeX Entry PDF File ©IJCAI

We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.

@inproceedings{ BaBL-IJCAI17, address = {Melbourne, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {786--792}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains}, year = {2017}, }


*Doctoral Project:* Construction and Extension of Description Logic Knowledge Bases with Methods of Formal Concept Analysis
*Principal Investigator:* Prof. Dr.-Ing. Franz Baader
*Involved Person:* Dipl.-Math. Francesco Kriegel
*Start Date:* October 2013
- Funded by Technische Universität Dresden

### Description

Description Logics (DLs) are a formally well-founded and frequently used family of knowledge representation languages, which provide their users with automated inference services that can derive implicit knowledge from the explicitly represented knowledge. They have been used in various application domains, but their most notable success is the fact that they provide the logical underpinning of the Web Ontology Language (OWL) and its various dialects.

Formal Concept Analysis (FCA) is a successful field of applied algebra, which uses lattice theory to extract knowledge from data represented by tables (called formal contexts). This approach has been utilized in many application domains such as conceptual clustering, data visualization, automatic and semi-automatic knowledge acquisition, etc. In spite of the different notions of concepts used in FCA and in DLs, there has been a very fruitful interaction between these two research areas.
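To make the FCA terminology concrete, the following is a minimal sketch (with a hypothetical toy context, not data from this project) of the two derivation operators that connect objects and attributes in a formal context; a formal concept is exactly a pair of sets that the two operators map onto each other.

```python
# Hypothetical toy formal context: each object is mapped to the set of
# attributes it has (the "crosses" in the context table).
CONTEXT = {
    "duck": {"animal", "flies", "swims"},
    "goose": {"animal", "flies", "swims"},
    "carp": {"animal", "swims"},
}

ALL_ATTRIBUTES = set().union(*CONTEXT.values())

def common_attributes(objects):
    """Derivation operator A -> A': the attributes shared by all objects in A."""
    if not objects:
        return set(ALL_ATTRIBUTES)  # every attribute holds vacuously
    return set.intersection(*(CONTEXT[o] for o in objects))

def common_objects(attributes):
    """Derivation operator B -> B': the objects having all attributes in B."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

# (A, B) is a formal concept iff A' = B and B' = A.
extent = common_objects({"flies"})
intent = common_attributes(extent)
print(sorted(extent))                    # ['duck', 'goose']
print(common_objects(intent) == extent)  # True: the pair is a formal concept
```

Applying the two operators in sequence is a closure operator, and the formal concepts ordered by inclusion of extents form the concept lattice mentioned above.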

In this project, I will describe how methods from FCA can be used to support the (semi-)automatic construction and extension of terminological knowledge represented in lightweight DLs from data.
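As a rough illustration of the data side of this extraction task (toy data and names are hypothetical, not the project's actual algorithms): an implication between attributes is valid in a formal context iff every object satisfying the premise also satisfies the conclusion, and the set of valid implications is what a sound and complete axiomatization must capture.

```python
# Hypothetical toy context: objects and the attributes they satisfy.
CONTEXT = {
    "o1": {"aquatic", "vertebrate"},
    "o2": {"aquatic", "vertebrate", "predator"},
    "o3": {"predator"},
}

def implication_holds(premise, conclusion):
    """P -> C is valid iff every object with all attributes in P also has all in C."""
    return all(conclusion <= attrs
               for attrs in CONTEXT.values()
               if premise <= attrs)

print(implication_holds({"aquatic"}, {"vertebrate"}))  # True: o1 and o2 qualify
print(implication_holds({"predator"}, {"aquatic"}))    # False: o3 is a counterexample
```

Axiomatizing concept inclusions from interpretations, as done in the thesis for lightweight DLs, generalizes this check from plain attribute sets to DL concept descriptions.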

### Publications and Technical Reports

## 2020

**Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis: A Dissertation Summary**. *KI - Künstliche Intelligenz*, 34:399–403, 2020. This is a post-peer-review, pre-copyedit version of an article published in KI - Künstliche Intelligenz. The final authenticated version is available online.

Abstract BibTeX Entry PDF File DOI Doctoral Thesis

My thesis describes how methods from Formal Concept Analysis can be used for constructing and extending description logic ontologies. In particular, it is shown how concept inclusions can be axiomatized from data in the description logics \(\mathcal{E\mkern-1.618mu L}\), \(\mathcal{M}\), \(\textsf{Horn-}\mathcal{M}\), and \(\textsf{Prob-}\mathcal{E\mkern-1.618mu L}\). All proposed methods are not only sound but also complete, i.e., the result not only consists of valid concept inclusions but also entails each valid concept inclusion. Moreover, a lattice-theoretic view on the description logic \(\mathcal{E\mkern-1.618mu L}\) is provided. For instance, it is shown how upper and lower neighbors of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions can be computed and further it is proven that the set of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions forms a graded lattice with a non-elementary rank function.

@article{ Kr-KI20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1007/s13218-020-00673-8}, journal = {KI - K{\"{u}}nstliche Intelligenz}, note = {This is a post-peer-review, pre-copyedit version of an article published in KI - K{\"{u}}nstliche Intelligenz. The final authenticated version is available online.}, pages = {399--403}, title = {{Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis: A Dissertation Summary}}, volume = {34}, year = {2020}, }

**Most Specific Consequences in the Description Logic** \(\mathcal{E\mkern-1.618mu L}\). *Discrete Applied Mathematics*, 273:172–204, 2020.

Abstract BibTeX Entry PDF File DOI ©Elsevier

The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern-1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.

@article{ Kr-DAM20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1016/j.dam.2019.01.029}, journal = {Discrete Applied Mathematics}, pages = {172--204}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern-1.618mu L}$}}, volume = {273}, year = {2020}, }

## 2019

**Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis**. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2019.

Abstract BibTeX Entry PDF File Publication Summary

Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large part: there is always a trade-off between expressibility and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that these constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web.

Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that makes it possible to analyze data-sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data-set in the form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness.

In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.

@thesis{ Kriegel-Diss-2019, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, school = {Technische Universit\"{a}t Dresden}, title = {Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis}, type = {Doctoral Thesis}, year = {2019}, }
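As a toy illustration of the FCA notions used throughout this thesis (the context, object names, and attribute names below are invented for this sketch, not taken from the thesis), the two derivation operators and the validity of an implication can be computed directly from the incidence relation:

```python
# Toy formal context: each object is mapped to its set of attributes.
context = {
    "duck":  {"can_fly", "has_feathers", "lays_eggs"},
    "eagle": {"can_fly", "has_feathers", "lays_eggs", "is_predator"},
    "cat":   {"is_predator"},
}

def extent(attrs):
    """Derivation A': all objects that have every attribute in attrs."""
    return {g for g, row in context.items() if attrs <= row}

def intent(objs):
    """Derivation B': all attributes shared by every object in objs."""
    rows = [context[g] for g in objs]
    return set.intersection(*rows) if rows else set.union(*context.values())

def implication_valid(premise, conclusion):
    """A -> B is valid iff every object having all of A also has all of B."""
    return all(conclusion <= context[g] for g in extent(premise))

print(sorted(extent({"can_fly"})))        # ['duck', 'eagle']
print(sorted(intent({"duck", "eagle"})))  # ['can_fly', 'has_feathers', 'lays_eggs']
print(implication_valid({"can_fly"}, {"lays_eggs"}))  # True
```

A formal concept is then a pair (objects, attributes) that is a fixed point of the two derivations; the canonical base consists of implications computed from the same closure structure.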

**Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic (Extended Version)**. LTCS-Report 19-02, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2019.

Abstract BibTeX Entry PDF File

A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in the form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.

@techreport{ Kr-LTCS-19-02, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-19-02}}, number = {19-02}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }

**Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic**. In Diana Cristea, Florence Le Ber, and Barış Sertkaya, editors,

*15th International Conference on Formal Concept Analysis, ICFCA 2019, Frankfurt, Germany, June 25-28, 2019, Proceedings*, volume 11511 of

*Lecture Notes in Computer Science*, pages 110–129. Springer, 2019.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report

A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in the form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.

@inproceedings{ Kr-ICFCA19, author = {Francesco {Kriegel}}, booktitle = {15th International Conference on Formal Concept Analysis, {ICFCA} 2019, Frankfurt, Germany, June 25-28, 2019, Proceedings}, doi = {https://dx.doi.org/10.1007/978-3-030-21462-3_9}, editor = {Diana {Cristea} and Florence {Le Ber} and Bar\i{}\c{s} {Sertkaya}}, pages = {110--129}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic}}, volume = {11511}, year = {2019}, }

**Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs**. In Francesco Calimeri, Nicola Leone, and Marco Manna, editors,

*16th European Conference on Logics in Artificial Intelligence, JELIA 2019, Rende, Italy, May 7-11, 2019, Proceedings*, volume 11468 of

*Lecture Notes in Computer Science*, pages 399–417. Springer, 2019.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report

Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.

Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern-1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.

@inproceedings{ Kr-JELIA19, author = {Francesco {Kriegel}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://dx.doi.org/10.1007/978-3-030-19570-0_26}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {399--417}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs}}, volume = {11468}, year = {2019}, }

## 2018

**Acquisition of Terminological Knowledge in Probabilistic Description Logic**. In Frank Trollmann and Anni-Yasmin Turhan, editors,

*KI 2018: Advances in Artificial Intelligence - 41st German Conference on AI, Berlin, Germany, September 24-28, 2018, Proceedings*, volume 11117 of

*Lecture Notes in Computer Science*, pages 46–53. Berlin, Germany, Springer, 2018.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report

For a probabilistic extension of the description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness are justified.

@inproceedings{ Kr-KI18, address = {Berlin, Germany}, author = {Francesco {Kriegel}}, booktitle = {{{KI} 2018: Advances in Artificial Intelligence - 41st German Conference on AI, Berlin, Germany, September 24-28, 2018, Proceedings}}, doi = {https://doi.org/10.1007/978-3-030-00111-7_5}, editor = {Frank {Trollmann} and Anni-Yasmin {Turhan}}, pages = {46--53}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Acquisition of Terminological Knowledge in Probabilistic Description Logic}}, volume = {11117}, year = {2018}, }

**Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs (Extended Version)**. LTCS-Report 18-12, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2018.

Abstract BibTeX Entry PDF File

Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.

Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern-1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.

@techreport{ Kr-LTCS-18-12, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-12}}, number = {18-12}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }

**Most Specific Consequences in the Description Logic** \(\mathcal{E\mkern-1.618mu L}\). LTCS-Report 18-11, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2018.

Abstract BibTeX Entry PDF File

The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern-1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.

@techreport{ Kr-LTCS-18-11, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-11}, accepted for publication in {Discrete Applied Mathematics}}, number = {18-11}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern-1.618mu L}$}}, type = {LTCS-Report}, year = {2018}, }

**Terminological Knowledge Acquisition in Probabilistic Description Logic**. LTCS-Report 18-03, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2018.

Abstract BibTeX Entry PDF File

For a probabilistic extension of the description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness are justified.

@techreport{ Kr-LTCS-18-03, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-03}}, number = {18-03}, title = {{Terminological Knowledge Acquisition in Probabilistic Description Logic}}, type = {LTCS-Report}, year = {2018}, }

**The Distributive, Graded Lattice of** \(\mathcal{E\mkern-1.618mu L}\) **Concept Descriptions and its Neighborhood Relation (Extended Version)**. LTCS-Report 18-10, Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2018.

Abstract BibTeX Entry PDF File

For the description logic \(\mathcal{E\mkern-1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.

@techreport{ Kr-LTCS-18-10, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-10}}, number = {18-10}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern-1.618mu L}$ Concept Descriptions and its Neighborhood Relation (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }
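The subsumption order that induces this neighborhood relation can, for TBox-free \(\mathcal{E\mkern-1.618mu L}\) concept descriptions, be decided by the well-known structural criterion; the following sketch (the representation and all names are my own, not taken from the report) checks \(C \sqsubseteq D\) recursively:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """EL concept description: a conjunction of concept names and
    existential restrictions ∃r.C; the empty conjunction is ⊤."""
    names: frozenset = frozenset()
    exists: tuple = ()  # pairs (role_name, Concept)

def subsumed_by(c, d):
    """c ⊑ d w.r.t. the empty TBox: every conjunct of d must be matched
    in c -- concept names directly, and each ∃r.D' by some ∃r.C' of c
    whose filler C' is recursively subsumed by D'."""
    return d.names <= c.names and all(
        any(r == s and subsumed_by(c2, d2) for s, c2 in c.exists)
        for r, d2 in d.exists
    )

top = Concept()
c = Concept(names=frozenset({"A"}),
            exists=(("r", Concept(names=frozenset({"B"}))),))  # A ⊓ ∃r.B
d = Concept(exists=(("r", top),))                              # ∃r.⊤
assert subsumed_by(c, d)       # A ⊓ ∃r.B ⊑ ∃r.⊤
assert not subsumed_by(d, c)
```

Under this order, the report's neighborhood relation connects a concept description to its upper and lower neighbors, and the rank and distance functions are defined on the resulting graded lattice.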

**The Distributive, Graded Lattice of** \(\mathcal{E\mkern-1.618mu L}\) **Concept Descriptions and its Neighborhood Relation**. In Dmitry I. Ignatov and Lhouari Nourine, editors,

*Proceedings of the 14th International Conference on Concept Lattices and Their Applications (CLA 2018)*, volume 2123 of

*CEUR Workshop Proceedings*, pages 267–278. Olomouc, Czech Republic, CEUR-WS.org, 2018.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

For the description logic \(\mathcal{E\mkern-1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.

@inproceedings{ Kr-CLA18, address = {Olomouc, Czech Republic}, author = {Francesco {Kriegel}}, booktitle = {{Proceedings of the 14th International Conference on Concept Lattices and Their Applications ({CLA 2018})}}, editor = {Dmitry I. {Ignatov} and Lhouari {Nourine}}, pages = {267--278}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern-1.618mu L}$ Concept Descriptions and its Neighborhood Relation}}, volume = {2123}, year = {2018}, }

## 2017

**Acquisition of Terminological Knowledge from Social Networks in Description Logic**. In Rokia Missaoui, Sergei O. Kuznetsov, and Sergei Obiedkov, editors,

*Formal Concept Analysis of Social Networks*,

*Lecture Notes in Social Networks (LNSN)*, pages 97–142. Cham, Springer International Publishing, 2017.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag

The Web Ontology Language (OWL) has gained serious traction since its standardization in 2004, and it is heavily used in applications requiring the representation of, as well as reasoning with, knowledge. It is the language of the Semantic Web, and it has a strong logical underpinning by means of so-called Description Logics (DLs). DLs are a family of conceptual languages suitable for knowledge representation and reasoning due to their strong logical foundation, and for which the decidability and complexity of common reasoning problems are widely explored. In particular, the reasoning tasks allow for the deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms have been developed, optimized, and implemented, e.g., tableaux algorithms and completion algorithms. In this document, we present a technique for the acquisition of terminological knowledge from social networks. More specifically, we show how OWL axioms, i.e., concept inclusions and role inclusions in DLs, can be obtained from social graphs in a sound and complete manner. A social graph is simply a directed graph, the vertices of which describe the entities, e.g., persons, events, messages, etc.; and the edges of which describe the relationships between the entities, e.g., friendship between persons, attendance of a person at an event, a person liking a message, etc. Furthermore, the vertices of social graphs are labeled, e.g., to describe properties of the entities, and also the edges are labeled to specify the concrete relationships. As an exemplary social network we consider Facebook, and show that it fits our use case.

@incollection{ Kr-FCAoSN17, address = {Cham}, author = {Francesco {Kriegel}}, booktitle = {Formal Concept Analysis of Social Networks}, doi = {https://dx.doi.org/10.1007/978-3-319-64167-6_5}, editor = {Rokia {Missaoui} and Sergei O. {Kuznetsov} and Sergei {Obiedkov}}, pages = {97--142}, publisher = {Springer International Publishing}, series = {Lecture Notes in Social Networks ({LNSN})}, title = {Acquisition of Terminological Knowledge from Social Networks in Description Logic}, year = {2017}, }

**First Notes on Maximum Entropy Entailment for Quantified Implications**. In Karell Bertet, Daniel Borchmann, Peggy Cellier, and Sébastien Ferré, editors,

*Proceedings of the 14th International Conference on Formal Concept Analysis (ICFCA 2017)*, volume 10308 of

*Lecture Notes in Computer Science (LNCS)*, pages 155–167. Rennes, France, Springer Verlag, 2017.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag

Entropy is a measure for the uninformativeness or randomness of a data set, i.e., the higher the entropy, the lower the amount of information. In the field of propositional logic it has proven to constitute a suitable measure to be maximized when dealing with models of probabilistic propositional theories. More specifically, it was shown that the model of a probabilistic propositional theory with maximal entropy allows for the deduction of other formulae which are somehow expected by humans, i.e., allows for some kind of common-sense reasoning. In order to transfer the technique of maximum entropy entailment to the field of Formal Concept Analysis, we define the notion of entropy of a formal context with respect to the frequency of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. This entailment technique is then utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such an ME-base is provided.

@inproceedings{ Kr-ICFCA17b, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_10}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {155--167}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {First Notes on Maximum Entropy Entailment for Quantified Implications}, volume = {10308}, year = {2017}, }
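One plausible reading of this entropy notion (the precise definition is in the paper; the formula below is a guess based on the abstract, and the example context is invented) is the Shannon entropy of the empirical frequency distribution of object intents:

```python
from collections import Counter
from math import log2

def context_entropy(context):
    """Shannon entropy of the frequency distribution of object intents.
    context maps each object to its attribute set (its object intent)."""
    freq = Counter(frozenset(attrs) for attrs in context.values())
    n = sum(freq.values())
    return -sum((k / n) * log2(k / n) for k in freq.values())

# Two distinct intents, each occurring with frequency 1/2 -> 1 bit.
ctx = {"g1": {"a"}, "g2": {"a"}, "g3": {"b"}, "g4": {"b"}}
print(context_entropy(ctx))  # 1.0
```

A context in which all objects share the same intent would have entropy 0, matching the intuition that such a data set carries no distinguishing information.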

**Implications over Probabilistic Attributes**. In Karell Bertet, Daniel Borchmann, Peggy Cellier, and Sébastien Ferré, editors,

*Proceedings of the 14th International Conference on Formal Concept Analysis (ICFCA 2017)*, volume 10308 of

*Lecture Notes in Computer Science (LNCS)*, pages 168–183. Rennes, France, Springer Verlag, 2017.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag

We consider the task of acquisition of terminological knowledge from given assertional data. However, when evaluating data of real-world applications we often encounter situations where it is impractical to deduce only crisp knowledge, due to the presence of exceptions or errors. It is rather appropriate to allow for degrees of uncertainty within the derived knowledge. Consequently, suitable methods for knowledge acquisition in a probabilistic framework should be developed. In particular, we consider data which is given as a probabilistic formal context, i.e., as a triadic incidence relation between objects, attributes, and worlds, which is furthermore equipped with a probability measure on the set of worlds. We define the notion of a probabilistic attribute as a probabilistically quantified set of attributes, and define the notion of validity of implications over probabilistic attributes in a probabilistic formal context. Finally, a technique for the axiomatization of such implications from probabilistic formal contexts is developed. This is done in a sound and complete manner, i.e., all derived implications are valid, and all valid implications are deducible from the derived implications. In case the input data to be analyzed is finite, the constructed axiomatization is finite, too, and can be computed in finite time.

@inproceedings{ Kr-ICFCA17a, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_11}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {168--183}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {Implications over Probabilistic Attributes}, volume = {10308}, year = {2017}, }

**Probabilistic Implication Bases in FCA and Probabilistic Bases of GCIs in** \(\mathcal{E\mkern-1.618mu L}^{\bot}\).

*International Journal of General Systems*, 46(5):511–546, 2017.

Abstract BibTeX Entry PDF File DOI ©Taylor and Francis

A probabilistic formal context is a triadic context the third dimension of which is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications with respect to probabilistic formal contexts, and provides a construction for a base of implications the probabilities of which exceed a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields the fact that both measures do not coincide. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions the probabilities of which are greater than a pre-defined lower bound is proposed. Additionally, we consider so-called probabilistic attributes over probabilistic formal contexts, and provide a method for the axiomatization of implications over probabilistic attributes.

@article{ Kr-IJGS17, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1080/03081079.2017.1349575}, journal = {International Journal of General Systems}, number = {5}, pages = {511--546}, title = {Probabilistic Implication Bases in {FCA} and Probabilistic Bases of GCIs in $\mathcal{E\mkern-1.618mu L}^{\bot}$}, volume = {46}, year = {2017}, }
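A minimal sketch of the probability notion described above (the data, all names, and the exact world-wise validity condition are my own assumptions for illustration): the probability of an implication is the total measure of the worlds in which it is valid.

```python
def implication_probability(worlds, premise, conclusion):
    """worlds: list of (probability, context) pairs, where each context
    maps objects to attribute sets.  The probability of A -> B is the
    total measure of the worlds in which the implication is valid."""
    def valid(ctx):
        return all(conclusion <= attrs
                   for attrs in ctx.values() if premise <= attrs)
    return sum(p for p, ctx in worlds if valid(ctx))

w1 = {"g1": {"a", "b"}, "g2": {"a", "b"}}  # a -> b holds here
w2 = {"g1": {"a"},      "g2": {"a", "b"}}  # a -> b fails here (g1)
worlds = [(0.7, w1), (0.3, w2)]
print(implication_probability(worlds, {"a"}, {"b"}))  # 0.7
```

The paper's comparison between confidence and probability can be seen in this toy data already: confidence is computed from object counts pooled across worlds, whereas probability weights whole worlds, so the two measures generally differ.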

**NextClosures: Parallel Computation of the Canonical Base with Background Knowledge**.

*International Journal of General Systems*, 46(5):490–510, 2017.

Abstract BibTeX Entry PDF File DOI ©Taylor and Francis

The canonical base of a formal context plays a distinguished role in Formal Concept Analysis, as it is the only minimal implicational base known so far that can be described explicitly. Consequently, several algorithms for the computation of this base have been proposed. However, all those algorithms work sequentially by computing only one pseudo-intent at a time – a fact that heavily impairs the practicability in real-world applications. In this paper, we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner with respect to arbitrary implicational background knowledge. First experimental evaluations show that for sufficiently large data sets the speed-up is proportional to the number of available CPU cores.

@article{ KrBo-IJGS17, author = {Francesco {Kriegel} and Daniel {Borchmann}}, doi = {https://doi.org/10.1080/03081079.2017.1349570}, journal = {International Journal of General Systems}, number = {5}, pages = {490--510}, title = {NextClosures: Parallel Computation of the Canonical Base with Background Knowledge}, volume = {46}, year = {2017}, }

## 2016

**Axiomatisation of General Concept Inclusions from Finite Interpretations**.

*Journal of Applied Non-Classical Logics*, 26(1):1–46, 2016.

Abstract BibTeX Entry PDF File DOI ©Taylor and Francis

Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.

@article{ BoDiKr-JANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://dx.doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }

**Axiomatization of General Concept Inclusions from Streams of Interpretations with Optional Error Tolerance**. In Sergei Kuznetsov, Amedeo Napoli, and Sebastian Rudolph, editors,

*Proceedings of the 5th International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2016) co-located with the European Conference on Artificial Intelligence (ECAI 2016)*, volume 1703 of

*CEUR Workshop Proceedings*, pages 9–16. The Hague, The Netherlands, CEUR-WS.org, 2016.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

We propose applications that utilize the infimum and the supremum of closure operators that are induced by structures occurring in the field of Description Logics. More specifically, we consider the closure operators induced by interpretations as well as closure operators induced by TBoxes, and show how we can learn GCIs from streams of interpretations, and how an error-tolerant axiomatization of GCIs from an interpretation guided by a hand-crafted TBox can be achieved.

@inproceedings{ Kr-FCA4AI16, address = {The Hague, The Netherlands}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 5th International Workshop "What can {FCA} do for Artificial Intelligence?" ({FCA4AI} 2016) co-located with the European Conference on Artificial Intelligence ({ECAI} 2016)}, editor = {Sergei {Kuznetsov} and Amedeo {Napoli} and Sebastian {Rudolph}}, pages = {9--16}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Axiomatization of General Concept Inclusions from Streams of Interpretations with Optional Error Tolerance}, volume = {1703}, year = {2016}, }

**NextClosures with Constraints**. In Marianne Huchard and Sergei Kuznetsov, editors,

*Proceedings of the 13th International Conference on Concept Lattices and Their Applications (CLA 2016)*, volume 1624 of

*CEUR Workshop Proceedings*, pages 231–243. Moscow, Russia, CEUR-WS.org, 2016.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

In a former paper, the algorithm NextClosures for computing the set of all formal concepts as well as the canonical base for a given formal context has been introduced. Here, this algorithm shall be generalized to a setting where the data-set is described by means of a closure operator in a complete lattice, and furthermore it shall be extended with the possibility to handle constraints that are given in form of a second closure operator. As a special case, constraints may be predefined as implicational background knowledge. Additionally, we show how the algorithm can be modified in order to do parallel Attribute Exploration for unconstrained closure operators, as well as give a reason for the impossibility of (parallel) Attribute Exploration for constrained closure operators if the constraint is not compatible with the data-set.

@inproceedings{ Kr-CLA16, address = {Moscow, Russia}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 13th International Conference on Concept Lattices and Their Applications ({CLA 2016})}, editor = {Marianne {Huchard} and Sergei {Kuznetsov}}, pages = {231--243}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {NextClosures with Constraints}, volume = {1624}, year = {2016}, }
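For context, the sequential procedure that the NextClosures line of work generalizes and parallelizes is Ganter's classical Next Closure algorithm, which enumerates all closed sets of a closure operator in lectic order. A compact sketch (the toy context and all names are invented here; the closure operator is the usual double derivation of a formal context):

```python
def next_closure(A, M, closure):
    """Ganter's Next Closure: given a closed set A over the ordered
    attribute list M, return the lectically next closed set, or None."""
    for i in range(len(M) - 1, -1, -1):
        m = M[i]
        if m in A:
            A = A - {m}
        else:
            B = closure(A | {m})
            # accept B iff it adds no attribute lectically smaller than m
            if all(x in A for x in B if M.index(x) < i):
                return B
    return None

def all_closed_sets(M, closure):
    """Enumerate every closed set in lectic order."""
    A = closure(frozenset())
    while A is not None:
        yield A
        A = next_closure(A, M, closure)

# Toy closure operator: intents of a small formal context (A'').
context = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"b"}}
def intent_closure(attrs):
    rows = [row for row in context.values() if attrs <= row]
    full = frozenset().union(*context.values())
    return frozenset(set.intersection(*rows)) if rows else full

M = ["a", "b", "c"]
for closed in all_closed_sets(M, intent_closure):
    print(sorted(closed))
```

The constrained variant of the paper replaces the single closure operator by a pair of them; the unconstrained baseline above is what the parallel versions speed up.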

**Parallel Attribute Exploration**. In Ollivier Haemmerlé, Gem Stapleton, and Catherine Faron-Zucker, editors,

*Proceedings of the 22nd International Conference on Conceptual Structures (ICCS 2016)*, volume 9717 of

*Lecture Notes in Computer Science*, pages 91–106. Annecy, France, Springer-Verlag, 2016.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag

The canonical base of a formal context is a minimal set of implications that is sound and complete. A recent paper has provided a new algorithm for the parallel computation of canonical bases. An important extension is the integration of expert interaction for Attribute Exploration in order to explore implicational bases of inaccessible formal contexts. This paper presents and analyzes an algorithm that allows for Parallel Attribute Exploration.

@inproceedings{ Kr-ICCS16, address = {Annecy, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 22nd International Conference on Conceptual Structures ({ICCS 2016})}, doi = {http://dx.doi.org/10.1007/978-3-319-40985-6_8}, editor = {Ollivier {Haemmerl{\'{e}}} and Gem {Stapleton} and Catherine {Faron{-}Zucker}}, pages = {91--106}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Parallel Attribute Exploration}, volume = {9717}, year = {2016}, }

## 2015

**Axiomatization of General Concept Inclusions from Finite Interpretations**. LTCS-Report 15-13, Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2015.

Abstract BibTeX Entry PDF File

Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the semantic web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach that allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We shall also present first experimental results showing that our approach has the potential to be useful for practical applications.

@techreport{ BoDiKr-LTCS-15-13, address = {Dresden, Germany}, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKr-LTCS-15-13}}, number = {15-13}, title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}}, type = {LTCS-Report}, year = {2015}, }

**Axiomatization of General Concept Inclusions in Probabilistic Description Logics**. In Steffen Hölldobler, Sebastian Rudolph, and Markus Krötzsch, editors,

*Proceedings of the 38th German Conference on Artificial Intelligence (KI 2015)*, volume 9324 of

*Lecture Notes in Artificial Intelligence (LNAI)*, pages 124–136. Dresden, Germany, Springer Verlag, 2015.

Abstract BibTeX Entry PDF File DOI ©Springer-Verlag

Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced, and then utilized to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatization of a base of GCIs is well-known.

@inproceedings{ Kr-KI15, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 38th German Conference on Artificial Intelligence ({KI 2015})}, doi = {http://dx.doi.org/10.1007/978-3-319-24489-1_10}, editor = {Steffen {H{\"o}lldobler} and Sebastian {Rudolph} and Markus {Kr{\"o}tzsch}}, pages = {124--136}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence ({LNAI})}, title = {Axiomatization of General Concept Inclusions in Probabilistic Description Logics}, volume = {9324}, year = {2015}, }

**Extracting**\(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q\mkern-1.618mu R}(\mathsf{Self})\)

**-Knowledge Bases from Graphs**. In Sergei O. Kuznetsov, Rokia Missaoui, and Sergei A. Obiedkov, editors,

*Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis (SNAFCA 2015) in conjunction with the 13th International Conference on Formal Concept Analysis (ICFCA 2015)*, volume 1534 of

*CEUR Workshop Proceedings*. Nerja, Spain, CEUR-WS.org, 2015.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

A description graph is a directed graph that has labeled vertices and edges. This document proposes a method for extracting a knowledge base from a description graph. The technique is presented for the description logic \(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q\mkern-1.618mu R}(\mathsf{Self})\), which allows for conjunctions, primitive negation, existential restrictions, value restrictions, qualified number restrictions, existential self restrictions, and complex role inclusion axioms, but sublogics may also be chosen to express the axioms in the knowledge base. The extracted knowledge base entails all statements that can be expressed in the chosen description logic and are encoded in the input graph.

@inproceedings{ Kr-SNAFCA15, address = {Nerja, Spain}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis ({SNAFCA 2015}) in conjunction with the 13th International Conference on Formal Concept Analysis ({ICFCA} 2015)}, editor = {Sergei O. {Kuznetsov} and Rokia {Missaoui} and Sergei A. {Obiedkov}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Extracting $\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q\mkern-1.618mu R}(\mathsf{Self})$-Knowledge Bases from Graphs}, volume = {1534}, year = {2015}, }

**Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis**. In Diego Calvanese and Boris Konev, editors,

*Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of

*CEUR Workshop Proceedings*, pages 452–464. Athens, Greece, CEUR-WS.org, 2015.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal \(\mathcal{E\mkern-1.618mu L}\)-TBoxes from models, so-called bases of GCIs. However, no technique for adjusting an existing \(\mathcal{E\mkern-1.618mu L}\)-TBox w.r.t. a new model is available, i.e., on a model change the complete TBox has to be recomputed. This document proposes a method for the computation of a minimal extension of a TBox w.r.t. a new model. The method is then utilised to formulate an incremental learning algorithm that requires as input a stream of interpretations and an expert to guide the learning process.

@inproceedings{ Kr-DL15, address = {Athens, Greece}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL 2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {452--464}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis}, volume = {1350}, year = {2015}, }

**Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in**\(\mathcal{E\mkern-1.618mu L}^{\bot}\). In Sadok Ben Yahia and Jan Konecny, editors,

*Proceedings of the 12th International Conference on Concept Lattices and their Applications (CLA 2015)*, volume 1466 of

*CEUR Workshop Proceedings*, pages 193–204. Clermont-Ferrand, France, CEUR-WS.org, 2015.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

A probabilistic formal context is a triadic context whose third dimension is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces the probability of implications, and provides a construction for a base of those implications whose probabilities satisfy a given lower threshold. A comparison between confidence and probability of implications is drawn, showing that the two measures neither coincide nor can be compared. Furthermore, the results are extended towards the light-weight description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions whose probabilities fulfill a certain lower bound is proposed.

@inproceedings{ Kr-CLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, pages = {193--204}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in $\mathcal{E\mkern-1.618mu L}^{\bot}$}, volume = {1466}, year = {2015}, }
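To make the central notion of the abstract above concrete, here is a minimal Python sketch of one plausible reading of the definitions: each world carries its own crisp formal context, and the probability of an implication is the total measure of the worlds in which it holds. The data representation and function names are illustrative assumptions, not taken from the paper.

```python
def holds(context, premise, conclusion):
    """An implication P -> C holds in a (crisp) formal context if every
    object that has all attributes in P also has all attributes in C.
    The context is a dict mapping each object to its set of attributes."""
    return all(conclusion <= attrs
               for attrs in context.values()
               if premise <= attrs)

def implication_probability(worlds, premise, conclusion):
    """worlds: list of (measure, context) pairs whose measures sum to 1.
    The probability of P -> C is the total measure of the worlds where it holds."""
    return sum(p for p, ctx in worlds if holds(ctx, premise, conclusion))

# Toy example: two worlds over objects a, b and attributes 0, 1.
w1 = {"a": {0, 1}, "b": {1}}   # 0 -> 1 holds here
w2 = {"a": {0}, "b": {0, 1}}   # 0 -> 1 fails here (object a has 0 but not 1)
worlds = [(0.7, w1), (0.3, w2)]
print(implication_probability(worlds, {0}, {1}))  # 0.7
```

A base as described in the abstract would then collect implications whose probability, computed in this way, meets the chosen lower threshold.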

**Learning General Concept Inclusions in Probabilistic Description Logics**. LTCS-Report 15-14, Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2015.

Abstract BibTeX Entry PDF File

Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatisation of a base of GCIs is well-known.

@techreport{ Kr-LTCS-15-14, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-14}}, number = {15-14}, title = {{Learning General Concept Inclusions in Probabilistic Description Logics}}, type = {LTCS-Report}, year = {2015}, }

**Model-Based Most Specific Concept Descriptions and Least Common Subsumers in**\(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q}^{\geq}\mkern-1.618mu\mathcal{N}^{\leq}\mkern-1.618mu(\mathsf{Self})\). LTCS-Report 15-02, Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2015.

Abstract BibTeX Entry

Model-based most specific concept descriptions are a useful means to compactly represent all knowledge about a certain individual of an interpretation that is expressible in the underlying description logic. Existing approaches only cover their construction in the case of \(\mathcal{E\mkern-1.618mu L}\) and \(\mathcal{F\mkern-1.618mu L\mkern-1.618mu E}\) w.r.t. greatest fixpoint semantics, and the case of \(\mathcal{E\mkern-1.618mu L}\) w.r.t. a role-depth bound, respectively. This document extends the results towards the more expressive description logic \(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q}^{\geq}\mkern-1.618mu\mathcal{N}^{\leq}\mkern-1.618mu(\mathsf{Self})\) w.r.t. role-depth bounds and also gives a method for the computation of least common subsumers.

@techreport{ Kr-LTCS-15-02, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-02}}, number = {15-02}, title = {{Model-Based Most Specific Concept Descriptions and Least Common Subsumers in $\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q}^{\geq}\mkern-1.618mu\mathcal{N}^{\leq}\mkern-1.618mu(\mathsf{Self})$}}, type = {LTCS-Report}, year = {2015}, }

**NextClosures – Parallel Exploration of Constrained Closure Operators**. LTCS-Report 15-01, Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universität Dresden, Dresden, Germany, 2015.

Abstract BibTeX Entry

It is well-known that the canonical implicational base of all implications valid w.r.t. a closure operator can be obtained from the set of all pseudo-closures. NextClosures is a parallel algorithm to compute all closures and pseudo-closures of a closure operator in a graded lattice, e.g., in a powerset. Furthermore, the closures and pseudo-closures can be constrained, and partially known closure operators can be explored.

@techreport{ Kr-LTCS-15-01, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-01}}, number = {15-01}, title = {{NextClosures -- Parallel Exploration of Constrained Closure Operators}}, type = {LTCS-Report}, year = {2015}, }

**NextClosures: Parallel Computation of the Canonical Base**. In Sadok Ben Yahia and Jan Konecny, editors,

*Proceedings of the 12th International Conference on Concept Lattices and their Applications (CLA 2015)*, volume 1466 of

*CEUR Workshop Proceedings*, pages 182–192. Clermont-Ferrand, France, CEUR-WS.org, 2015. Best Paper Award.

Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)

The canonical base of a formal context plays a distinguished role in formal concept analysis. This is because it is the only minimal base so far that can be described explicitly. For the computation of this base several algorithms have been proposed. However, all those algorithms work sequentially, by computing only one pseudo-intent at a time, a fact which heavily impairs the practicability of using the canonical base in real-world applications. In this paper we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner. First experimental evaluations show that for sufficiently large data-sets the speedup is proportional to the number of available CPUs.

@inproceedings{ KrBo-CLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel} and Daniel {Borchmann}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, note = {Best Paper Award.}, pages = {182--192}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {NextClosures: Parallel Computation of the Canonical Base}, volume = {1466}, year = {2015}, }
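For readers unfamiliar with the sequential procedure that this paper parallelizes, the following Python sketch shows a NextClosure-style computation of the canonical (Duquenne–Guigues) base: the intents and pseudo-intents are enumerated in lectic order, and every pseudo-intent P contributes the implication P → P''. Attributes are assumed to be the integers 0..n-1, and all names are ours; this is a sequential illustration of the underlying idea, not the parallel algorithm of the paper.

```python
def next_closed(A, n, clo):
    """Lectically next set closed under clo after A (Ganter's NextClosure)."""
    for m in range(n - 1, -1, -1):
        if m in A:
            A = A - {m}
        else:
            B = clo(A | {m})
            if min(B - A) == m:  # no attribute smaller than m was added
                return B
    return None  # A was the largest closed set

def canonical_base(n, intent_closure):
    """Duquenne-Guigues base of a closure operator on {0, ..., n-1}:
    one implication P -> P'' for every pseudo-intent P, in lectic order."""
    base = []

    def star(A):  # closure under the implications found so far
        A, changed = set(A), True
        while changed:
            changed = False
            for P, C in base:
                if P <= A and P != A and not C <= A:
                    A |= C
                    changed = True
        return frozenset(A)

    A = star(frozenset())
    while A is not None:
        B = intent_closure(A)
        if A != B:               # A is a pseudo-intent
            base.append((A, B))
        A = next_closed(A, n, star)
    return base

# Toy formal context over attributes {0, 1, 2}; each row is an object's intent.
CTX = [{0, 1, 2}, {0, 1}, {2}]

def double_prime(A):
    """A'' in the context CTX: attributes common to all objects having A."""
    result = set(range(3))
    for row in CTX:
        if A <= row:
            result &= row
    return frozenset(result)

# Yields the two implications {0} -> {0, 1} and {1} -> {0, 1}.
print(canonical_base(3, double_prime))
```

The sequential bottleneck is visible here: each closure is computed only after the previous one. The paper's contribution is to distribute these closure computations over several CPUs.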

Generated 17 April 2021, 22:00:52.

DFG Project: Reasoning in Fuzzy Description Logics with General Concept Inclusion Axioms

Principal Investigators: F. Baader, R. Peñaloza

Involved person: S. Borgwardt

Start date: May 1, 2012

Duration: 3 years (+ 4 months parental leave)

Funded by: DFG

Description logics (DLs) are a family of logic-based knowledge representation languages that are tailored towards representing terminological knowledge, by allowing the knowledge engineer to define the relevant concepts of an application domain within this logic and then reason about these definitions using terminating inference algorithms. In order to deal with applications where the boundaries between members and non-members of concepts (e.g., “tall man,” “high blood pressure,” or “heavy network load”) are blurred, DLs have been combined with fuzzy logics, resulting in fuzzy description logics (fuzzy DLs). Considering the literature on fuzzy description logics of the last 20 years, one could get the impression that, from an algorithmic point of view, fuzzy DLs behave very similarly to their crisp counterparts: for fuzzy DLs based on simple t-norms such as the Gödel t-norm, black-box procedures that call reasoners for the corresponding crisp DLs can be used, whereas fuzzy DLs based on more complicated t-norms (such as product and Łukasiewicz) can be dealt with by appropriately modifying the tableau-based reasoners for the crisp DLs. However, it has recently turned out that, in the presence of so-called general concept inclusion axioms (GCIs), the published extensions of tableau-based reasoners to fuzzy DLs do not work correctly. In fact, we were able to show that GCIs can cause undecidability for certain fuzzy DLs based on the product t-norm. However, for most fuzzy DLs, the decidability status of reasoning w.r.t. GCIs is still open. The purpose of this project is to investigate the border between decidability and undecidability for fuzzy DLs with GCIs. On the one hand, we will try to show more undecidability results for specific fuzzy DLs, and then attempt to derive from these results general criteria that imply undecidability.
On the other hand, we will try to determine decidable special cases, by extending tableau- and automata-based decision procedures for DLs to the fuzzy case, and also looking at other reasoning approaches for inexpressive DLs.
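The three t-norms named in the project description are standard operations for interpreting fuzzy conjunction on truth degrees from [0, 1]. A minimal Python sketch (function names are ours) makes the difference between them concrete:

```python
def goedel(x, y):
    """Gödel t-norm: the minimum of the two truth degrees."""
    return min(x, y)

def product(x, y):
    """Product t-norm: ordinary multiplication of truth degrees."""
    return x * y

def lukasiewicz(x, y):
    """Łukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

# Conjoining two fuzzy memberships, e.g. "tall" = 0.8 and "heavy load" = 0.6:
# Gödel yields 0.6, product yields about 0.48, Łukasiewicz about 0.4.
for t in (goedel, product, lukasiewicz):
    print(t.__name__, t(0.8, 0.6))
```

As the description notes, these innocuous-looking differences have drastic algorithmic consequences once GCIs are present: the choice of t-norm can decide between decidability and undecidability of reasoning.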

Publications on the topics of this project can be found on our publications web page.

PhD Project: Temporalised Description Logics with Good Algorithmic Properties, and Their Application for Monitoring Partially Observable Events

Principal Investigator: F. Baader

Involved person: M. Lippmann

Start date: July 1, 2010

Funded by: TU Dresden

Using the temporalised Description Logic ALC-LTL as a starting point, this work aims to investigate different kinds of extensions of this logic. The main focus will be on decidability results, the complexity of the satisfiability problem, and the usefulness of these logics for runtime monitoring. To this end, it is essential to understand the formal properties of such temporal Description Logics.

Publications on the topics of this project can be found on our publications web page.

DFG Project: Unification in Description Logics for Avoiding Redundancies in Medical Ontologies

Principal Investigator: F. Baader

Involved persons: B. Morawska, S. Borgwardt, J. Mendez

Start date: July 1, 2009

Duration: 3 years

Funded by: DFG

Unification in Description Logics can be used to discover redundancies in ontologies. Until now, a unification procedure has been available only for the description logic FL0, which does not have any applications in medical ontologies. Unification in FL0 has high complexity, and all attempts to extend this procedure to other description logics had failed. We have recently developed an algorithm for unification in the description logic EL. This procedure has better complexity than the one for FL0. Medical ontologies (e.g., SNOMED CT) use EL as their knowledge representation language, and the problem of redundancy occurs in them in practice.

In this project we will optimize and implement the new algorithm for unification in EL. We will show, with examples from SNOMED CT, how redundancies can be discovered and removed. We will also attempt to extend the algorithm to some extensions of EL that are important for practical applications. We will define and analyze equational theories for which the procedure for EL-unification can be extended, in order to discover possible syntactic criteria that enable such extensions.

Publications on the topics of this project can be found on our publications web page.

DFG Project: Completing knowledge bases using Formal Concept Analysis

Principal Investigator: F. Baader

Involved persons: J. Mendez, B. Sertkaya

Start date: November 15, 2007

Duration: 2 years

Funded by: DFG

Description Logics are employed in various application domains, such as natural language processing, configuration, databases, and bio-medical ontologies, but their most notable success so far is due to the fact that DLs provide the logical underpinning of OWL, the standard ontology language for the semantic web. As a consequence of this standardization, ontologies written in OWL are employed in more and more applications. As the size of these ontologies grows, tools that support improving their quality become more important. The tools available until now use DL reasoning to detect inconsistencies and to infer consequences, i.e., implicit knowledge that can be deduced from the explicitly represented knowledge. These approaches address the quality dimension of soundness of an ontology, both within itself (consistency) and w.r.t. the intended application domain. In this project we are concerned with a different quality dimension: completeness. We aim to develop formally well-founded techniques and tools that support the ontology engineer in checking whether an ontology contains all the relevant information about the application domain, and to extend the ontology appropriately if this is not the case.

#### Literature:

Daniel Borchmann and Felix Distel: Mining of *EL*-GCIs. In *The 11th IEEE International Conference on Data Mining Workshops*. Vancouver, Canada, IEEE Computer Society, 11 December 2011.

Felix Distel: Learning Description Logic Knowledge Bases from Data using Methods from Formal Concept Analysis. PhD Thesis, TU Dresden, Germany, April 2011.

Felix Distel and Barış Sertkaya: On the complexity of enumerating pseudo-intents. *Discrete Applied Mathematics*, 159(6):450–466, 2011.

Felix Distel: An Approach to Exploring Description Logic Knowledge Bases. In Barış Sertkaya and Léonard Kwuida, editors, *Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010)*, volume 5986 of *Lecture Notes in Artificial Intelligence*, pages 209–224. Springer, 2010.

Felix Distel: Hardness of Enumerating Pseudo-Intents in the Lectic Order. In Barış Sertkaya and Léonard Kwuida, editors, *Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010)*, volume 5986 of *Lecture Notes in Artificial Intelligence*, pages 124–137. Springer, 2010.

Bernardo Cuenca Grau, Christian Halaschek-Wiener, Yevgeny Kazakov, and Boontawee Suntisrivaraporn: Incremental classification of description logics ontologies. *J. of Automated Reasoning*, 44(4):337–369, 2010.

Rafael Peñaloza and Barış Sertkaya: On the Complexity of Axiom Pinpointing in the EL Family of Description Logics. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, *Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010)*. AAAI Press, 2010.

Franz Baader and Felix Distel: Exploring Finite Models in the Description Logic ELgfp. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 146–161. Springer Verlag, 2009.

Franz Baader and Barış Sertkaya: Usability Issues in Description Logic Knowledge Base Completion. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 1–21. Springer Verlag, 2009.

Barış Sertkaya: OntoComP System Description. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

Barış Sertkaya: OntoComP: A Protégé Plugin for Completing OWL Ontologies. In *Proceedings of the 6th European Semantic Web Conference (ESWC 2009)*, volume 5554 of *Lecture Notes in Computer Science*, pages 898–902. Springer Verlag, 2009.

Barış Sertkaya: Some Computational Problems Related to Pseudo-intents. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 130–145. Springer Verlag, 2009.

Barış Sertkaya: Towards the Complexity of Recognizing Pseudo-intents. In Frithjof Dau and Sebastian Rudolph, editors, *Proceedings of the 17th International Conference on Conceptual Structures, (ICCS 2009)*, pages 284–292, 2009.

Franz Baader and Felix Distel: A Finite Basis for the Set of EL-Implications Holding in a Finite Model. In Raoul Medina and Sergei Obiedkov, editors, *Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008)*, volume 4933 of *Lecture Notes in Artificial Intelligence*, pages 46–61. Springer, 2008.

Franz Baader and Boontawee Suntisrivaraporn: Debugging SNOMED CT Using Axiom Pinpointing in the Description Logic *EL ^{+}*. In

*Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED*, volume 410 of

*CEUR-WS*, 2008.

Miki Hermann and Barış Sertkaya: On the Complexity of Computing Generators of Closed Sets. In Raoul Medina and Sergei A. Obiedkov, editors, *Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008)*, volume 4933 of *Lecture Notes in Computer Science*, pages 158–168. Springer Verlag, 2008.

Boontawee Suntisrivaraporn: Module Extraction and Incremental Classification: A Pragmatic Approach for *EL ^{+}* Ontologies. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors,

*Proceedings of the 5th European Semantic Web Conference (ESWC'08)*, volume 5021 of

*Lecture Notes in Computer Science*, pages 230–244. Springer-Verlag, 2008.

Franz Baader, Bernhard Ganter, Ulrike Sattler, and Barış Sertkaya: Completing Description Logic Knowledge Bases using Formal Concept Analysis. In *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07)*. AAAI Press, 2007.

Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: Pinpointing in the Description Logic *EL*. In *Proceedings of the 30th German Conference on Artificial Intelligence (KI2007)*, volume 4667 of *Lecture Notes in Artificial Intelligence*, pages 52–67. Osnabrück, Germany, Springer-Verlag, 2007.

PhD Project: Knowledge Acquisition in Description Logics by Means of Formal Concept Analysis

Principal Investigator: F. Baader

Involved persons: F. Distel, D. Borchmann

Start date: May 1, 2007

Funded by: Cusanuswerk e.V. (until April 30, 2009) and TU Dresden

The objective of this work is to make the capabilities of Formal Concept Analysis applicable in Description Logics. The main interest will be in supporting ontology engineers in defining new concepts. At first glance, Formal Concept Analysis appears to be a good starting point for this. However, a deeper examination shows that there are grave differences between concepts in FCA and concepts in DL. These differences make it necessary to extend and modify the theory of Formal Concept Analysis. The major discrepancies lie in the expressiveness with respect to intensional concept descriptions and in the contrast between open-world and closed-world semantics. We try to expand Formal Concept Analysis in this direction.

#### Literature:

Felix Distel: Learning Description Logic Knowledge Bases from Data using Methods from Formal Concept Analysis. PhD Thesis, TU Dresden, Germany, April 2011.

Felix Distel: An Approach to Exploring Description Logic Knowledge Bases. In Barış Sertkaya and Léonard Kwuida, editors, *Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010)*, volume 5986 of *Lecture Notes in Artificial Intelligence*, pages 209–224. Springer, 2010.

Franz Baader and Felix Distel: Exploring Finite Models in the Description Logic ELgfp. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 146–161. Springer Verlag, 2009.

Franz Baader and Felix Distel: A Finite Basis for the Set of EL-Implications Holding in a Finite Model. In Raoul Medina and Sergei Obiedkov, editors, *Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008)*, volume 4933 of *Lecture Notes in Artificial Intelligence*, pages 46–61. Springer, 2008.

DFG Project: Description Logics with Existential Quantifiers and Polynomial Subsumption Problem and Their Applications in Bio-Medical Ontologies

Principal Investigator: F. Baader

Involved persons: M. Lippmann, C. Lutz, B. Suntisrivaraporn.

Start date: June 1, 2006

Duration: 2 years + 1 year extension

Funded by: Deutsche Forschungsgemeinschaft (DFG), Project BA 1122/11-1

Description logics (DLs) with value restrictions have so far been well-investigated. In particular, very expressive DLs, together with practical algorithms, have been developed. Despite having high worst-case complexity, their highly optimized implementations behave well in practice. However, it has turned out that, in bio-medical ontology applications, inexpressive DLs with existential restrictions, but without value restrictions, suffice. In the scope of this project, DLs with existential restrictions shall be investigated, both theoretically and practically. This includes identifying the borders of polynomial-time decidability for the subsumption problem, developing optimizations for the subsumption algorithms, and evaluating them against realistic large-scale bio-medical ontologies. Moreover, supplemental reasoning problems (e.g., conjunctive queries) shall be investigated.
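As a small illustration of why subsumption is tractable in DLs with existential restrictions only: in EL without a TBox, C is subsumed by D iff every top-level atom of D (a concept name or an existential restriction) is matched by an atom of C, which yields a simple polynomial structural test. The following Python sketch, with an assumed tuple representation of EL concepts, illustrates this idea; it is not the optimized algorithms developed in the project.

```python
# An EL concept description is represented as a pair (names, roles):
#   names: set of concept names in the top-level conjunction
#   roles: dict mapping a role name to a list of concept descriptions
#          occurring under existential restrictions for that role

def subsumed_by(c, d):
    """True iff concept c is subsumed by concept d (w.r.t. the empty TBox):
    every top-level atom of d must be matched by some atom of c."""
    c_names, c_roles = c
    d_names, d_roles = d
    if not d_names <= c_names:
        return False
    for role, fillers in d_roles.items():
        for filler in fillers:
            if not any(subsumed_by(g, filler) for g in c_roles.get(role, [])):
                return False
    return True

# Male ⊓ ∃child.Person  is subsumed by  Male ⊓ ∃child.⊤ (here: empty concept)
father = ({"Male"}, {"child": [({"Person"}, {})]})
male_with_child = ({"Male"}, {"child": [(set(), {})]})
print(subsumed_by(father, male_with_child))  # True
```

With GCIs, subsumption in EL is decided instead by completion-based classification algorithms, but it remains polynomial, which is the tractability border this project investigates.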

#### Literature:

Julian Mendez: A classification algorithm for *ELHIf _{R+}*. Master Thesis, TU Dresden, Germany, 2011.

Julian Mendez, Andreas Ecke, and Anni-Yasmin Turhan: Implementing completion-based inferences for the *EL*-family. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the international Description Logics workshop*. CEUR, 2011.

Franz Baader, Meghyn Bienvenu, Carsten Lutz, and Frank Wolter: Query and Predicate Emptiness in Description Logics. In Fangzhen Lin and Ulrike Sattler, editors, *Proceedings of the 12th International Conference on Principles of Knowledge Representation and Reasoning (KR2010)*. AAAI Press, 2010.

Franz Baader, Carsten Lutz, and Anni-Yasmin Turhan: Small is again Beautiful in Description Logics. *KI – Künstliche Intelligenz*, 24(1):25–33, 2010.

Boris Konev, Carsten Lutz, Denis Ponomaryov, and Frank Wolter: Decomposing description logic ontologies. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, *Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR2010)*. AAAI Press, 2010.

Carsten Lutz and Frank Wolter: Deciding inseparability and conservative extensions in the description logic *EL*. *Journal of Symbolic Computation*, 45(2):194–228, 2010.

Roman Kontchakov, Carsten Lutz, David Toman, Frank Wolter, and Michael Zakharyaschev: The combined approach to query answering in DL-Lite. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, *Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR2010)*. AAAI Press, 2010.

Franz Baader, Stefan Schulz, Kent Spackman, and Boontawee Suntisrivaraporn: How Should Parthood Relations be Expressed in SNOMED CT? In *Proceedings of 1. Workshop des GI-Arbeitskreises Ontologien in Biomedizin und Lebenswissenschaften (OBML 2009)*, 2009.

Carsten Lutz, David Toman, and Frank Wolter: Conjunctive query answering in the description logic EL using a relational database system. In Craig Boutilier, editor, *Proc. of the 21st Int. Joint Conf. on Artificial Intelligence (IJCAI 2009)*, pages 2070–2075. IJCAI/AAAI, 2009.

Julian Mendez and Boontawee Suntisrivaraporn: Reintroducing CEL as an OWL 2 EL Reasoner. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, Zhe Wu, and Carsten Lutz, editors: OWL 2 Web Ontology Language: Profiles. *W3C Recommendation*, 27 October 2009. Available at http://www.w3.org/TR/owl-profiles/.

Stefan Schulz, Boontawee Suntisrivaraporn, Franz Baader, and Martin Boeker: SNOMED reaching its adolescence: Ontologists' and logicians' health check. *International Journal of Medical Informatics*, 78(Supplement 1):S86–S94, 2009.

Franz Baader, Sebastian Brandt, and Carsten Lutz: Pushing the EL Envelope Further. In Kendall Clark and Peter F. Patel-Schneider, editors, *Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions*, 2008.

Christoph Haase and Carsten Lutz: Complexity of Subsumption in the EL Family of Description Logics: Acyclic and Cyclic TBoxes. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, *Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08)*, volume 178 of *Frontiers in Artificial Intelligence and Applications*, pages 25–29. IOS Press, 2008.

Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: Semantic Modularity and Module Extraction in Description Logics. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, *Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08)*, volume 178 of *Frontiers in Artificial Intelligence and Applications*, pages 55–59. IOS Press, 2008.

Boontawee Suntisrivaraporn: Module Extraction and Incremental Classification: A Pragmatic Approach for *EL ^{+}* Ontologies. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors, *Proceedings of the 5th European Semantic Web Conference (ESWC'08)*, volume 5021 of *Lecture Notes in Computer Science*, pages 230–244. Springer-Verlag, 2008.

Quoc Huy Vu: Subsumption in the description logic *ELHIf _{R+}*. Master Thesis, TU Dresden, Germany, 2008.

A. Artale, R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: Temporalising Tractable Description Logics. In *Proceedings of the Fourteenth International Symposium on Temporal Representation and Reasoning*. IEEE Computer Society Press, 2007.

Adila Krisnadhi and Carsten Lutz: Data Complexity in the *EL* family of Description Logics. In Nachum Dershowitz and Andrei Voronkov, editors, *Proceedings of the 14th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR2007)*, volume 4790 of *Lecture Notes in Artificial Intelligence*, pages 333–347. Springer-Verlag, 2007.

Christoph Haase: Complexity of subsumption in extensions of *EL*. Master Thesis, TU Dresden, Germany, 2007.

Carsten Lutz, Dirk Walther, and Frank Wolter: Conservative Extensions in Expressive Description Logics. In Manuela Veloso, editor, *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07)*, pages 453–458. AAAI Press, 2007.

Carsten Lutz and Frank Wolter: Conservative Extensions in the Lightweight Description Logic *EL*. In Frank Pfenning, editor, *Proceedings of the 21st Conference on Automated Deduction (CADE-21)*, volume 4603 of *Lecture Notes in Artificial Intelligence*, pages 84–99. Springer-Verlag, 2007.

Stefan Schulz, Boontawee Suntisrivaraporn, and Franz Baader: SNOMED CT's Problem List: Ontologists' and Logicians' Therapy Suggestions. In *Proceedings of The Medinfo 2007 Congress*, *Studies in Health Technology and Informatics (SHTI-series)*. IOS Press, 2007.

Boontawee Suntisrivaraporn, Franz Baader, Stefan Schulz, and Kent Spackman: Replacing SEP-Triplets in SNOMED CT using Tractable Description Logic Operators. In Riccardo Bellazzi, Ameen Abu-Hanna, and Jim Hunter, editors, *Proceedings of the 11th Conference on Artificial Intelligence in Medicine (AIME'07)*, *Lecture Notes in Computer Science*. Springer-Verlag, 2007.

F. Baader, C. Lutz, and B. Suntisrivaraporn: Efficient Reasoning in *EL ^{+}*. In *Proceedings of the 2006 International Workshop on Description Logics (DL2006)*, *CEUR-WS*, 2006.

F. Baader, C. Lutz, and B. Suntisrivaraporn: CEL—A Polynomial-time Reasoner for Life Science Ontologies. In U. Furbach and N. Shankar, editors, *Proceedings of the 3rd International Joint Conference on Automated Reasoning (IJCAR'06)*, volume 4130 of *Lecture Notes in Artificial Intelligence*, pages 287–291. Springer-Verlag, 2006.

F. Baader, C. Lutz, and B. Suntisrivaraporn: Is Tractable Reasoning in Extensions of the Description Logic *EL* Useful in Practice? In *Proceedings of the Methods for Modalities Workshop (M4M-05)*, 2005.

DFG Project: Explaining Ontology Consequences

Principal Investigator: F. Baader

Involved person: R. Peñaloza

Start date: April 1, 2006

Duration: 2.5 years

Funded by: Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg GRK 446

The objective of this work is to develop methods for computing small (preferably minimal) sub-ontologies from which a given consequence follows; these sub-ontologies are called explanations. The approach followed is to modify the procedures used to detect the consequence so that they track the ontology axioms responsible for it. The main interest is in supporting knowledge engineers in diagnosing and correcting errors in the ontologies they build.
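As an illustration of the black-box flavor of this idea, a minimal explanation can also be computed by repeatedly removing axioms as long as the consequence is preserved. The sketch below uses a toy "reasoner" (`entails`) that merely chains implication-like axioms, standing in for a real DL reasoner; all names are invented for this example. Glass-box pinpointing, as studied in the project, instead instruments the reasoning procedure itself.

```python
# Toy black-box axiom pinpointing: shrink an ontology (a list of axioms)
# to one minimal subset that still entails a given consequence.

def entails(axioms, start, goal):
    """Toy 'reasoner': does `goal` follow from `start` by chaining axioms?"""
    derived = {start}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in axioms:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def minimal_justification(axioms, start, goal):
    """Deletion-based shrinking: drop every axiom that is not needed."""
    just = list(axioms)
    for ax in list(just):
        candidate = [a for a in just if a != ax]
        if entails(candidate, start, goal):
            just = candidate   # `ax` was redundant for this consequence
    return just

# Two ways to derive E from A; pinpointing keeps only one of them.
ontology = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")]
print(minimal_justification(ontology, "A", "E"))
```

The result is one minimal justification; computing all of them, as needed for full repair support, requires systematically exploring the alternatives.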

#### Literature

Franz Baader and Rafael Peñaloza: Automata-Based Axiom Pinpointing. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, *Proceedings of the 4th International Joint Conference on Automated Reasoning, (IJCAR 2008)*, volume 5195 of *Lecture Notes in Artificial Intelligence*, pages 226–241. Springer, 2008.

Franz Baader and Boontawee Suntisrivaraporn: Debugging SNOMED CT Using Axiom Pinpointing in the Description Logic *EL ^{+}*. In *Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED*, volume 410 of *CEUR-WS*, 2008.

Rafael Peñaloza: Automata-based Pinpointing for DLs. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

Franz Baader and Rafael Peñaloza: Axiom Pinpointing in General Tableaux. In N. Olivetti, editor, *Proceedings of the 16th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods TABLEAUX 2007*, volume 4548 of *Lecture Notes in Computer Science*, pages 11–27. Aix-en-Provence, France, Springer-Verlag, 2007.

Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: Pinpointing in the Description Logic *EL*. In *Proceedings of the 2007 International Workshop on Description Logics (DL2007)*, *CEUR-WS*, 2007.

DFG Project: Action Formalisms with Description Logic

Principal Investigators: F. Baader, M. Thielscher

Involved persons: C. Drescher, H. Liu, C. Lutz, M. Lippmann, M. Milicic

Start date: September 1, 2005

Duration: 2 years + 2 years extension

Project Partners: University of Leipzig (Germany), Aachen University of Technology (Germany), University of Freiburg (Germany)

Funded by: Deutsche Forschungsgemeinschaft (DFG), Project BA 1122/13

The aim of this project is the integration of description logics and action formalisms. The motivation for this integration is twofold. On the one hand, general action calculi like the fluent calculus and the situation calculus are based on full first-order logic, which in general entails undecidability of basic reasoning tasks such as checking state consistency, checking action applicability, or computing updated states. By identifying suitable description logics for describing the current world state, these issues can be addressed. On the other hand, the need to integrate some form of action representation into description logics has arisen. Description logics are a highly successful formalism for static knowledge representation, with applications, e.g., on the Semantic Web or in the life sciences. Clearly, it is desirable to also have the means to model semantic web services or to reason about dynamic domains in the life sciences, such as clinical protocols.
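The two reasoning tasks named above, action applicability and computing updated states, can be illustrated by a deliberately simplified, STRIPS-style sketch. It treats an ABox as a plain set of ground assertions and ignores TBoxes, ramifications, and open-world semantics, so it illustrates the problem rather than the project's actual formalism; all names (`Action`, `update`, the `treat` action) are invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset      # assertions that must hold before execution
    add: frozenset      # positive effects
    delete: frozenset   # negative effects

def applicable(abox, action):
    """An action is applicable if all its preconditions hold in the ABox."""
    return action.pre <= abox

def update(abox, action):
    """Apply the action: remove its negative effects, then add the positive ones."""
    if not applicable(abox, action):
        raise ValueError(f"{action.name} is not applicable")
    return (abox - action.delete) | action.add

# Assertions are (concept, individual) pairs.
abox = frozenset({("Patient", "p1"), ("Untreated", "p1")})
treat = Action("treat",
               pre=frozenset({("Untreated", "p1")}),
               add=frozenset({("Treated", "p1")}),
               delete=frozenset({("Untreated", "p1")}))
print(sorted(update(abox, treat)))
```

In the DL setting studied by the project, the difficulty is precisely that the updated state must again be representable as an ABox and remain consistent with the background terminology, which this closed-world sketch sidesteps.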

Another objective of this project is to develop a version of the logic programming paradigm designed specifically for programming intelligent agents. This may be thought of as adapting the successful constraint logic programming scheme CLP(X) to CLP(Sigma), where Sigma is a domain axiomatization in an action calculus. For this, it is of paramount importance that the special atoms of the logic program can effectively be decided via the underlying domain axiomatization. The resulting scheme, instantiated with the action calculi developed in the aforementioned steps, can then be implemented by combining the mature technologies of both plain Prolog and description logic reasoners. The system will be evaluated by modeling semantic web services or clinical protocols.

#### Literature:

Hongkai Liu, Carsten Lutz, Maja Milicic, and Frank Wolter: Foundations of instance level updates in expressive description logics. *Artificial Intelligence*, 175(18):2170–2197, 2011.

M. Thielscher: A unifying action calculus. *Artificial Intelligence*, 175(1):120–141, 2011.

Franz Baader, Marcel Lippmann, and Hongkai Liu: Using Causal Relationships to Deal with the Ramification Problem in Action Formalisms Based on Description Logics. In Christian G. Fermüller and Andrei Voronkov, editors, *Proceedings of the 17th International Conference on Logic for Programming, Artifical Intelligence, and Reasoning (LPAR-17)*, volume 6397 of *Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science)*, pages 82–96. Yogyakarta, Indonesia, Springer-Verlag, October 2010.

Franz Baader, Hongkai Liu, and Anees ul Mehdi: Verifying Properties of Infinite Sequences of Description Logic Actions. In Helder Coelho, Rudi Studer, and Michael Wooldridge, editors, *Proceedings of the 19th European Conference on Artificial Intelligence (ECAI10)*, volume 215 of *Frontiers in Artificial Intelligence and Applications*, pages 53–58. IOS Press, 2010.

C. Drescher: Action Logic Programs — How to Specify Strategic Behavior in Dynamic Domains Using Logical Rules. PhD thesis, TU Dresden, Germany, 2010.

Hongkai Liu: Updating Description Logic ABoxes. PhD thesis, TU Dresden, Germany, 2010.

Franz Baader, Andreas Bauer, and Marcel Lippmann: Runtime Verification Using a Temporal Description Logic. In Silvio Ghilardi and Roberto Sebastiani, editors, *Proceedings of the 7th International Symposium on Frontiers of Combining Systems (FroCoS 2009)*, volume 5749 of *Lecture Notes in Computer Science*, pages 149–164. Springer-Verlag, 2009.

Conrad Drescher, Hongkai Liu, Franz Baader, Steffen Guhlemann, Uwe Petersohn, Peter Steinke, and Michael Thielscher: Putting ABox Updates into Action. In Silvio Ghilardi and Roberto Sebastiani, editors, *The Seventh International Symposium on Frontiers of Combining Systems (FroCoS-2009)*, volume 5749 of *Lecture Notes in Computer Science*, pages 149–164. Springer-Verlag, 2009.

Conrad Drescher, Hongkai Liu, Franz Baader, Peter Steinke, and Michael Thielscher: Putting ABox Updates into Action. In *Proceedings of the 8th IJCAI International Workshop on Nonmonotonic Reasoning, Action and Change (NRAC-09)*, 2009.

C. Drescher, S. Schiffel, and M. Thielscher: A declarative agent programming language based on action theories. In Ghilardi, S. and Sebastiani, R., editors, *Proceedings of the Seventh International Symposium on Frontiers of Combining Systems (FroCoS 2009)*, volume 5749 of *LNCS*, pages 230–245, Trento, Italy. Springer, 2009.

A. ul Mehdi: Integrate action formalisms into linear temporal logics. Master thesis, TU Dresden, Germany, 2009.

Franz Baader, Silvio Ghilardi, and Carsten Lutz: LTL over Description Logic Axioms. In *Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR2008)*, 2008.

C. Drescher and M. Thielscher: A fluent calculus semantics for ADL with plan constraints. In Hölldobler, S., Lutz, C., and Wansing, H., editors, *Proceedings of the 11th European Conference on Logics in Artificial Intelligence (JELIA08)*, volume 5293 of *LNCS*, pages 140–152, Dresden, Germany. Springer, 2008.

Hongkai Liu, Carsten Lutz, and Maja Milicic: The Projection Problem for *EL* Actions. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

Y. Bong: Description Logic ABox Updates Revisited. Master thesis, TU Dresden, Germany, 2007.

C. Drescher and M. Thielscher: Integrating action calculi and description logics. In Hertzberg, J., Beetz, M., and Englert, R., editors, *Proceedings of the 30th Annual German Conference on Artificial Intelligence (KI 2007)*, volume 4667 of *LNCS*, pages 68–83, Osnabrück, Germany. Springer, 2007.

Conrad Drescher and Michael Thielscher: Reasoning about actions with description logics. In P. Peppas and M.-A. Williams, editors, *Proceedings of the 7th IJCAI International Workshop on Nonmonotonic Reasoning, Action and Change (NRAC 2007)*, Hyderabad, India, January 2007.

H. Liu, C. Lutz, M. Milicic, and F. Wolter: Description Logic Actions with general TBoxes: a Pragmatic Approach. In *Proceedings of the 2006 International Workshop on Description Logics (DL2006)*, 2006.

H. Liu, C. Lutz, M. Milicic, and F. Wolter: Reasoning about Actions using Description Logics with general TBoxes. In Michael Fisher, Wiebe van der Hoek, Boris Konev, and Alexei Lisitsa, editors, *Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA 2006)*, volume 4160 of *Lecture Notes in Artificial Intelligence*, pages 266–279. Springer-Verlag, 2006.

H. Liu, C. Lutz, M. Milicic, and F. Wolter: Updating Description Logic ABoxes. In Patrick Doherty, John Mylopoulos, and Christopher Welty, editors, *Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06)*, pages 46–56. AAAI Press, 2006.

Michael Thielscher and Thomas Witkowski: The Features-and-Fluents semantics for the fluent calculus. In P. Doherty, J. Mylopoulos, and C. Welty, editors, *Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR)*, pages 362–370, Lake District, UK, June 2006.

F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: A Description Logic Based Approach to Reasoning about Web Services. In *Proceedings of the WWW 2005 Workshop on Web Service Semantics (WSS2005)*, 2005.

F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: Integrating Description Logics and Action Formalisms: First Results. In *Proceedings of the 2005 International Workshop on Description Logics (DL2005)*, number 147 in *CEUR-WS*, 2005.

F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: Integrating Description Logics and Action Formalisms: First Results. In *Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05)*, 2005.

EU Project: TONES (Thinking Ontologies)

Principal Investigator: F. Baader

Involved persons: C. Lutz, M. Milicic, B. Sertkaya, B. Suntisrivaraporn, A.-Y. Turhan.

Start date: September 1, 2005

Duration: 3 years

Project Partners: Free University of Bolzano (Italy), Università degli Studi di Roma "La Sapienza" (Italy), The University of Manchester (UK), Technische Universität Hamburg-Harburg (Germany)

Funded by: EU (FP6-7603)

Ontologies are seen as the key technology used to describe the semantics of information at various sites, overcoming the problem of implicit and hidden knowledge and thus enabling exchange of semantic contents. As such, they have found applications in key growth areas, such as e-commerce, bio-informatics, Grid computing, and the Semantic Web.

The aim of the project is to study and develop automated reasoning techniques for both offline and online tasks associated with ontologies, seen either in isolation or as a community of interoperating systems. A further aim is to devise methodologies for deploying such techniques, on the one hand in advanced tools supporting ontology design and management, and on the other hand in applications that support software agents in operating with ontologies.

Literature can be found here.

Reports on the TONES project appeared in the following news:

- the ACM tech news (Dec. 2009)
- L'ATELIER, a French online magazine (Nov. 2009)

A concrete goal of the second project phase is therefore to compare and integrate different approaches to developing decision algorithms for logics. In particular, tableau- and automata-based procedures for description and modal logics as well as GF are to be examined, with the goal of obtaining a uniform algorithmic approach that combines the advantages of both methods. The resulting algorithms are again to be implemented prototypically and evaluated. A further goal is the construction of efficient algorithms for the model checking problem of these logics. Finally, for the logics considered here, the relationship between the structure of formulas and their algorithmic properties is to be analyzed. Such results should serve, on the one hand, to isolate efficiently decidable fragments of these logics; on the other hand, they should provide a basis for constructing probability distributions on formulas under which the average-case behavior of decision procedures can be analyzed.

DFG Project: Logik-Algorithmen in der Wissensrepräsentation

Principal Investigator: F. Baader

Involved persons: S. Tobies, J. Hladik

Funded by: Deutsche Forschungsgemeinschaft (DFG)

The aim of this project is the construction of decision procedures and the study of complexity issues of decision problems which are relevant for applications in the area of knowledge representation. In contrast to well-known explorations in the context of the classical Decision Problem of mathematical logic (prefix signature classes), the relevant classes of formulae are characterized by different criteria: on the one hand, the restriction to formulae with few variables or limited quantification is important, on the other hand, certain constructs (fixed points, transitive closure, number restrictions...) which are not dealt with in the classical framework are of interest.

During the first phase of this project, guarded logics, in particular the "Guarded Fragment" (GF) and its extensions, were identified as a class of logics that are relevant for knowledge representation and very expressive, yet retain stable decidability properties. Moreover, practical tableau-based decision procedures for GF and for expressive description logics were developed and implemented.

The practical aim of the second phase is the comparison and combination of different approaches to developing decision procedures for logics. In particular, tableau- and automata-based procedures for GF, modal logics, and description logics are examined, with the goal of a uniform algorithmic approach that combines the advantages of both kinds of procedures. Another goal is the development of efficient model checking procedures for these logics.
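To give a flavor of one of the two algorithmic approaches compared here, the following is a toy tableau-style satisfiability test for the basic modal logic K, with formulas in negation normal form encoded as nested tuples. It is a sketch for illustration only; the procedures studied in the project target far more expressive logics and employ sophisticated optimizations.

```python
# Formulas in negation normal form:
#   ("lit", "p") / ("nlit", "p")  -- atom / negated atom
#   ("and", a, b), ("or", a, b)   -- conjunction, disjunction
#   ("box", a), ("dia", a)        -- modal operators
def sat(formulas):
    """Tableau test: saturate a branch, check for clashes, recurse into successors."""
    formulas = set(formulas)
    changed = True
    while changed:                       # propositional saturation
        changed = False
        for f in list(formulas):
            if f[0] == "and":
                if not {f[1], f[2]} <= formulas:
                    formulas |= {f[1], f[2]}
                    changed = True
            elif f[0] == "or":
                if f[1] not in formulas and f[2] not in formulas:
                    # branch: try each disjunct on a copy of the branch
                    return any(sat((formulas - {f}) | {d}) for d in (f[1], f[2]))
    for f in formulas:                   # clash check
        if f[0] == "lit" and ("nlit", f[1]) in formulas:
            return False
    # modal rule: each diamond opens a successor world that must contain
    # the diamond's argument plus the arguments of all boxes
    boxes = {f[1] for f in formulas if f[0] == "box"}
    return all(sat({f[1]} | boxes) for f in formulas if f[0] == "dia")

print(sat({("and", ("dia", ("lit", "p")), ("box", ("nlit", "p")))}))  # unsatisfiable
```

The automata-based alternative would instead translate the formula into a tree automaton and test that automaton for emptiness; comparing and combining these two styles is exactly the goal stated above.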

#### Literature:

J. Hladik: A Tableau System for the Description Logic SHIO. In Ulrike Sattler, editor, *Contributions to the Doctoral Programme of IJCAR 2004*. CEUR, 2004. Available from ceur-ws.org

J. Hladik: Spinoza's Ontology. In G. Büchel, B. Klein, and T. Roth-Berghofer, editors, *Proceedings of the 1st Workshop on Philosophy and Informatics (WSPI 2004)*, number RR-04-02 in *DFKI Research Reports*, 2004.

J. Hladik and J. Model: Tableau Systems for SHIO and SHIQ. In V. Haarslev and R. Möller, editors, *Proceedings of the 2004 International Workshop on Description Logics (DL 2004)*. CEUR, 2004. Available from ceur-ws.org

F. Baader, J. Hladik, C. Lutz, and F. Wolter: From Tableaux to Automata for Description Logics. In Moshe Vardi and Andrei Voronkov, editors, *Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003)*, volume 2850 of *Lecture Notes in Computer Science*, pages 1–32. Springer, 2003.

Franz Baader, Jan Hladik, Carsten Lutz, and Frank Wolter: From Tableaux to Automata for Description Logics. *Fundamenta Informaticae*, 57:1–33, 2003.

J. Hladik and U. Sattler: A Translation of Looping Alternating Automata to Description Logics. In *Proc. of the 19th Conference on Automated Deduction (CADE-19)*, volume 2741 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2003.

Jan Hladik: Reasoning about Nominals with FaCT and RACER. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

J. Hladik: Implementation and Optimisation of a Tableau Algorithm for the Guarded Fragment. In U. Egly and C. G. Fermüller, editors, *Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2002)*, volume 2381 of *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2002.

G. Pan, U. Sattler, and M. Y. Vardi: BDD-Based Decision Procedures for K. In *Proceedings of the Conference on Automated Deduction*, volume 2392 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2002.

F. Baader and S. Tobies: The Inverse Method Implements the Automata Approach for Modal Satisfiability. In *Proceedings of the International Joint Conference on Automated Reasoning IJCAR'01*, volume 2083 of *Lecture Notes in Artificial Intelligence*, pages 92–106. Springer-Verlag, 2001.

J. Hladik: Implementierung eines Entscheidungsverfahrens für das Bewachte Fragment der Prädikatenlogik. Diploma thesis, RWTH Aachen, Germany, 2001.

S. Tobies: Complexity Results and Practical Algorithms for Logics in Knowledge Representation. PhD thesis, RWTH Aachen, 2001.

Stephan Tobies: PSPACE Reasoning for Graded Modal Logics. *Journal of Logic and Computation*, 11(1):85–106, 2001.

F. Baader and U. Sattler: Tableau Algorithms for Description Logics. In R. Dyckhoff, editor, *Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2000)*, volume 1847 of *Lecture Notes in Artificial Intelligence*, pages 1–18. St Andrews, Scotland, UK, Springer-Verlag, 2000.

C. Hirsch and S. Tobies: A Tableau Algorithm for the Clique Guarded Fragment. In *Proceedings of the Workshop Advances in Modal Logic AiML 2000*, 2000. Final version appeared in Advances in Modal Logic, Volume 3, 2001.

Jan Hladik: Implementing the n-ary Description Logic GF1-. In *Proceedings of the International Workshop in Description Logics 2000 (DL2000)*, 2000.

I. Horrocks, U. Sattler, and S. Tobies: Practical Reasoning for Very Expressive Description Logics. *Logic Journal of the IGPL*, 8(3):239–264, 2000.

I. Horrocks, U. Sattler, and S. Tobies: Reasoning with Individuals for the Description Logic SHIQ. In David MacAllester, editor, *Proceedings of the 17th International Conference on Automated Deduction (CADE-17)*, number 1831 in *Lecture Notes in Computer Science*. Germany, Springer Verlag, 2000.

I. Horrocks and S. Tobies: Reasoning with Axioms: Theory and Practice. In A. G. Cohn, F. Giunchiglia, and B. Selman, editors, *Principles of Knowledge Representation and Reasoning: Proceedings of the Seventh International Conference (KR2000)*. San Francisco, CA, Morgan Kaufmann Publishers, 2000.

I. Horrocks, U. Sattler, S. Tessaris, and S. Tobies: How to decide Query Containment under Constraints using a Description Logic. In Andrei Voronkov, editor, *Proceedings of the 7th International Conference on Logic for Programming and Automated Reasoning (LPAR'2000)*, number 1955 in *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2000.

U. Sattler: Description Logics for the Representation of Aggregated Objects. In W. Horn, editor, *Proceedings of the 14th European Conference on Artificial Intelligence*. IOS Press, Amsterdam, 2000.

Stephan Tobies: The Complexity of Reasoning with Cardinality Restrictions and Nominals in Expressive Description Logics. *Journal of Artificial Intelligence Research*, 12:199–217, 2000.

F. Baader, R. Molitor, and S. Tobies: Tractable and Decidable Fragments of Conceptual Graphs. In W. Cyre and W. Tepfenhart, editors, *Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99)*, number 1640 in *Lecture Notes in Computer Science*, pages 480–493. Springer Verlag, 1999.

I. Horrocks and U. Sattler: A Description Logic with Transitive and Inverse Roles and Role Hierarchies. *Journal of Logic and Computation*, 9(3):385–410, 1999.

Ian Horrocks, Ulrike Sattler, and Stephan Tobies: Practical Reasoning for Expressive Description Logics. In Harald Ganzinger, David McAllester, and Andrei Voronkov, editors, *Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning (LPAR'99)*, number 1705 in *Lecture Notes in Artificial Intelligence*, pages 161–180. Springer-Verlag, September 1999.

C. Lutz, U. Sattler, and S. Tobies: A Suggestion for an *n*-ary Description Logic. In Patrick Lambrix, Alex Borgida, Maurizio Lenzerini, Ralf Möller, and Peter Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics*, number 22 in *CEUR-WS*, pages 81–85. Linkoeping, Sweden, Linköping University, July 30 – August 1 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

S. Tobies: A NExpTime-complete Description Logic Strictly Contained in *C ^{2}*. In J. Flum and M. Rodríguez-Artalejo, editors, *Proceedings of the Annual Conference of the European Association for Computer Science Logic (CSL-99)*, *LNCS 1683*, pages 292–306. Springer-Verlag, 1999.

S. Tobies: A PSpace Algorithm for Graded Modal Logic. In H. Ganzinger, editor, *Automated Deduction – CADE-16, 16th International Conference on Automated Deduction*, *LNAI 1632*, pages 52–66. Trento, Italy, Springer-Verlag, July 7–10, 1999.

DFG Project: Novel Inference Services in Description Logics

Principal Investigator: F. Baader

Involved persons: S. Brandt, A.-Y. Turhan

Funded by: Deutsche Forschungsgemeinschaft (DFG)

Over the past 15 years, the area of Description Logics has seen extensive research on both theoretical and practical aspects of standard inference problems, such as subsumption and the instance problem. When DL systems were employed for practical KR applications, however, additional inference services facilitating the build-up and maintenance of large knowledge bases proved indispensable. This led to the development of non-standard inferences such as: least common subsumer, most specific concept, approximation, and matching.

In the first project phase, non-standard inference problems have been examined w.r.t. their formal aspects, e.g. computational complexity, and have been evaluated in practice in one specific prototypical application scenario in the domain of chemical process engineering.

Building on the lessons learned during the first project phase, the second phase aims to examine how non-standard inference services can be employed in a more general application area: the build-up, maintenance, and deployment of ontologies, e.g. for the Semantic Web. To this end, existing algorithms have to be extended considerably and new ones have to be found. The more general application area moreover gives rise to additional non-standard inferences not examined during the first project phase.

The ultimate goal of this project is to gain comprehensive knowledge about the formal properties of non-standard inference problems in Description Logics and to demonstrate their utility for build-up and maintenance of knowledge bases. An additional practical goal of the second project phase is to develop a prototypical support system for a specific ontology editor, e.g. OilEd.
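As a small illustration of one such non-standard inference, the least common subsumer (lcs) of two EL concept descriptions without a TBox can be computed as the product of their description trees: the concept names are intersected, and existential restrictions for the same role are combined recursively. The encoding below is invented for this example and omits the simplification step needed to remove redundant conjuncts from the product.

```python
# An EL concept is a pair (set_of_concept_names, [(role, subconcept), ...]),
# read as the conjunction of the names and one existential restriction per edge.
def lcs(c, d):
    """Least common subsumer of two EL concepts via the product of their trees."""
    names_c, edges_c = c
    names_d, edges_d = d
    names = names_c & names_d
    edges = [(r, lcs(sub_c, sub_d))
             for r, sub_c in edges_c
             for s, sub_d in edges_d
             if r == s]
    return (names, edges)

# Father = Man ⊓ Person ⊓ ∃child.Person,  Mother = Woman ⊓ Person ⊓ ∃child.Person
father = ({"Man", "Person"}, [("child", ({"Person"}, []))])
mother = ({"Woman", "Person"}, [("child", ({"Person"}, []))])
print(lcs(father, mother))   # the "parent-like" concept Person ⊓ ∃child.Person
```

With a background terminology, as in the project's own work on the lcs w.r.t. a background TBox, the computation is considerably more involved than this structural product.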

#### Literature:

Franz Baader, Barış Sertkaya, and Anni-Yasmin Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. *Journal of Applied Logic*, 5(3):392–420, 2007.

F. Baader and A. Okhotin: Complexity of Language Equations With One-Sided Concatenation and All Boolean Operations. In Jordi Levy, editor, *Proceedings of the 20th International Workshop on Unification, UNIF'06*, pages 59–73, 2006.

F. Baader and R. Küsters: Nonstandard Inferences in Description Logics: The Story So Far. In D.M. Gabbay, S.S. Goncharov, and M. Zakharyaschev, editors, *Mathematical Problems from Applied Logic I*, volume 4 of *International Mathematical Series*, pages 1–75. Springer-Verlag, 2006.

Carsten Lutz, Franz Baader, Enrico Franconi, Domenico Lembo, Ralf Möller, Riccardo Rosati, Ulrike Sattler, Boontawee Suntisrivaraporn, and Sergio Tessaris: Reasoning Support for Ontology Design. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, *Proceedings of the Second International Workshop OWL: Experiences and Directions*, November 2006.

S. Brandt: Standard and Non-standard reasoning in Description Logics. Ph.D. Dissertation, Institute for Theoretical Computer Science, TU Dresden, Germany, 2006.

Barış Sertkaya: Computing the hierarchy of conjunctions of concept names and their negations in a Description Logic knowledge base using Formal Concept Analysis. In Bernhard Ganter and Leonard Kwuida, editors, *Contributions to ICFCA 2006*, pages 73–86. Dresden, Germany, Verlag Allgemeine Wissenschaft, 2006.

Anni-Yasmin Turhan, Sean Bechhofer, Alissa Kaplunova, Thorsten Liebig, Marko Luther, Ralf Möller, Olaf Noppens, Peter Patel-Schneider, Boontawee Suntisrivaraporn, and Timo Weithöner: DIG 2.0 – Towards a Flexible Interface for Description Logic Reasoners. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, *Proceedings of the Second International Workshop OWL: Experiences and Directions*, November 2006.

F. Baader, S. Brandt, and C. Lutz: Pushing the *EL* Envelope. In *Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence IJCAI-05*. Edinburgh, UK, Morgan-Kaufmann Publishers, 2005.

Sebastian Brandt and Jörg Model: Subsumption in *EL* w.r.t. hybrid TBoxes. In *Proceedings of the 28th Annual German Conference on Artificial Intelligence, KI 2005*, *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2005.

Hongkai Liu: Matching in Description Logics with Existential Restrictions and Terminological Cycles. Master thesis, TU Dresden, Germany, 2005.

J. Model: Subsumtion in EL bezüglich hybrider TBoxen. Diploma thesis, TU Dresden, Germany, 2005.

Anni-Yasmin Turhan: Pushing the SONIC border — SONIC 1.0. In Reinhold Letz, editor, *FTP 2005 — Fifth International Workshop on First-Order Theorem Proving*. Technical Report University of Koblenz, 2005. http://www.uni-koblenz.de/fb4/publikationen/gelbereihe/RR-13-2005.pdf

F. Baader: A Graph-Theoretic Generalization of the Least Common Subsumer and the Most Specific Concept in the Description Logic *EL*. In J. Hromkovic and M. Nagl, editors, *Proceedings of the 30th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2004)*, volume 3353 of *Lecture Notes in Computer Science*, pages 177–188. Bad Honnef, Germany, Springer-Verlag, 2004.

F. Baader and B. Sertkaya: Applying Formal Concept Analysis to Description Logics. In P. Eklund, editor, *Proceedings of the 2nd International Conference on Formal Concept Analysis (ICFCA 2004)*, volume 2961 of *Lecture Notes in Artificial Intelligence*, pages 261–286. Springer, 2004.

F. Baader, B. Sertkaya, and A.-Y. Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. In José Júlio Alferes and João Alexandre Leite, editors, *Proceedings of the 9th European Conference on Logics in Artificial Intelligence (JELIA 2004)*, volume 3229 of *Lecture Notes in Computer Science*, pages 400–412. Lisbon, Portugal, Springer-Verlag, 2004.

Franz Baader, Baris Sertkaya, and Anni-Yasmin Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

Sebastian Brandt: On Subsumption and Instance Problem in *ELH* w.r.t. General TBoxes. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

Sebastian Brandt: Polynomial Time Reasoning in a Description Logic with Existential Restrictions, GCI Axioms, and—What Else? In R. López de Mántaras and L. Saitta, editors, *Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-2004)*, pages 298–302. IOS Press, 2004.

Sebastian Brandt and Hongkai Liu: Implementing Matching in *ALN*. In *Proceedings of the KI-2004 Workshop on Applications of Description Logics (KI-ADL'04)*, in *CEUR-WS*, September 2004.

Anni-Yasmin Turhan and Christian Kissig: Sonic—Non-standard Inferences go OilEd. In D. Basin and M. Rusinowitch, editors, *Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04)*, volume 3097 of *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2004.

Anni-Yasmin Turhan and Christian Kissig: Sonic—System Description. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, in *CEUR-WS*, 2004.

Franz Baader: Computing the least common subsumer in the description logic *EL* w.r.t. terminological cycles with descriptive semantics. In *Proceedings of the 11th International Conference on Conceptual Structures, ICCS 2003*, volume 2746 of *Lecture Notes in Artificial Intelligence*, pages 117–130. Springer-Verlag, 2003.

Franz Baader: Least Common Subsumers and Most Specific Concepts in a Description Logic with Existential Restrictions and Terminological Cycles. In Georg Gottlob and Toby Walsh, editors, *Proceedings of the 18th International Joint Conference on Artificial Intelligence*, pages 319–324. Morgan Kaufmann, 2003.

Franz Baader: Terminological Cycles in a Description Logic with Existential Restrictions. In Georg Gottlob and Toby Walsh, editors, *Proceedings of the 18th International Joint Conference on Artificial Intelligence*, pages 325–330. Morgan Kaufmann, 2003.

Franz Baader: The instance problem and the most specific concept in the description logic *EL* w.r.t. terminological cycles with descriptive semantics. In *Proceedings of the 26th Annual German Conference on Artificial Intelligence, KI 2003*, volume 2821 of *Lecture Notes in Artificial Intelligence*, pages 64–78. Hamburg, Germany, Springer-Verlag, 2003.

Sebastian Brandt: Implementing Matching in *ALE*—First Results. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, in *CEUR-WS*, 2003.

Sebastian Brandt and Anni-Yasmin Turhan: Computing least common subsumers for *FLE^{+}*. In *Proceedings of the 2003 International Workshop on Description Logics*, in *CEUR-WS*, 2003.

Sebastian Brandt, Anni-Yasmin Turhan, and Ralf Küsters: Extensions of Non-standard Inferences to Description Logics with transitive Roles. In Moshe Vardi and Andrei Voronkov, editors, *Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003)*, in *Lecture Notes in Computer Science*. Springer, 2003.

F. Baader and R. Küsters: Unification in a Description Logic with Inconsistency and Transitive Closure of Roles. In I. Horrocks and S. Tessaris, editors, *Proceedings of the 2002 International Workshop on Description Logics*, 2002. See http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-53/

F. Baader and A.-Y. Turhan: On the problem of computing small representations of least common subsumers. In *Proceedings of the 25th German Conference on Artificial Intelligence (KI 2002)*, in *Lecture Notes in Artificial Intelligence*. Aachen, Germany, Springer-Verlag, 2002.

S. Brandt, R. Küsters, and A.-Y. Turhan: Approximating *ALCN*-Concept Descriptions. In *Proceedings of the 2002 International Workshop on Description Logics*, 2002.

S. Brandt, R. Küsters, and A.-Y. Turhan: Approximation and Difference in Description Logics. In D. Fensel, F. Giunchiglia, D. McGuinness, and M.-A. Williams, editors, *Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002)*, pages 203–214. San Francisco, CA, Morgan Kaufmann, 2002.

S. Brandt and A.-Y. Turhan: An Approach for Optimized Approximation. In *Proceedings of the KI-2002 Workshop on Applications of Description Logics (KIDLWS'01)*, in *CEUR-WS*. Aachen, Germany, RWTH Aachen, September 2002. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/

F. Baader, S. Brandt, and R. Küsters: Matching under Side Conditions in Description Logics. In B. Nebel, editor, *Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI'01*, pages 213–218. Seattle, Washington, Morgan Kaufmann, 2001.

F. Baader and P. Narendran: Unification of Concept Terms in Description Logics. *J. Symbolic Computation*, 31(3):277–305, 2001.

F. Baader and R. Küsters: Unification in a Description Logic with Transitive Closure of Roles. In R. Nieuwenhuis and A. Voronkov, editors, *Proceedings of the 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2001)*, volume 2250 of *Lecture Notes in Computer Science*, pages 217–232. Havana, Cuba, Springer-Verlag, 2001.

F. Baader and A.-Y. Turhan: TBoxes do not yield a compact representation of least common subsumers. In *Proceedings of the International Workshop on Description Logics 2001 (DL2001)*, August 2001.

R. Küsters and R. Molitor: Approximating most specific concepts in description logics with existential restrictions. In F. Baader, G. Brewka, and T. Eiter, editors, *KI 2001: Advances in Artificial Intelligence, Proceedings of the Joint German/Austrian Conference on AI (KI 2001)*, volume 2174 of *Lecture Notes in Artificial Intelligence*, pages 217–232. Vienna, Austria, Springer-Verlag, 2001.

R. Küsters and R. Molitor: Computing least common subsumers in *ALEN*. In B. Nebel, editor, *Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI'01*, pages 219–224. Seattle, Washington, Morgan Kaufmann, 2001.

A.-Y. Turhan and R. Molitor: Using lazy unfolding for the computation of least common subsumers. In *Proceedings of the International Workshop on Description Logics 2001 (DL2001)*, August 2001.

S. Brandt: Matching under Side Conditions in Description Logics. Diploma thesis, RWTH Aachen, Germany, 2000.

DFG Project: Combination of Model and Description Logics

Principal Investigator: F. Baader

Involved person: C. Lutz

Funded by: Deutsche Forschungsgemeinschaft (DFG)

The goal of this project is to establish a direct cooperation between researchers in Modal Logic and researchers in Description Logic in order to achieve a mutual exchange of knowledge and advanced techniques between these two areas. In one direction, we aim at adapting the strong techniques and meta-results obtained in the area of Modal Logics to Description Logics. In the other direction, we want to use the algorithmic techniques developed in the area of Description Logics to devise implementable algorithms for Modal Logics. Additionally, we investigate the combination of Modal and Description Logics. From the viewpoint of Description Logics, such combinations allow for the representation of intensional knowledge (e.g., about knowledge and belief of intelligent agents) and of dynamic knowledge (e.g., temporal or spatial knowledge). From the viewpoint of Modal Logics, such combinations are fragments of first (or higher) order modal logics which have attractive computational and model-theoretic properties, since their first-order part is restricted to Description Logics.

This project is a cooperation with Frank Wolter from the University of Leipzig.

#### Literature:

Franz Baader, Silvio Ghilardi, and Cesare Tinelli: A new combination procedure for the word problem that generalizes fusion decidability results in modal logics. *Information and Computation*, 204(10):1413–1452, 2006.

F. Baader and S. Ghilardi: Connecting Many-Sorted Structures and Theories through Adjoint Functions. In *Proceedings of the 5th International Workshop on Frontiers of Combining Systems (FroCoS'05)*, volume 3717 of *Lecture Notes in Artificial Intelligence*. Vienna (Austria), Springer-Verlag, 2005.

F. Baader and S. Ghilardi: Connecting Many-Sorted Theories. In *Proceedings of the 20th International Conference on Automated Deduction (CADE-05)*, volume 3632 of *Lecture Notes in Artificial Intelligence*, pages 278–294. Tallinn (Estonia), Springer-Verlag, 2005.

Franz Baader and Silvio Ghilardi: Connecting Many-Sorted Theories. LTCS-Report LTCS-05-04, Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology, Germany, 2005. See http://lat.inf.tu-dresden.de/research/reports.html.

F. Baader, S. Ghilardi, and C. Tinelli: A New Combination Procedure for the Word Problem that Generalizes Fusion Decidability Results in Modal Logics. In D. Basin and M. Rusinowitch, editors, *Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04)*, volume 3097 of *Lecture Notes in Artificial Intelligence*, pages 183–197. Springer-Verlag, 2004.

R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: Temporal Tableaux. *Studia Logica*, 76(1):91–134, 2004.

O. Kutz, C. Lutz, F. Wolter, and M. Zakharyaschev: E-Connections of Abstract Description Systems. *Artificial Intelligence*, 156(1):1–73, 2004.

F. Baader, R. Küsters, and F. Wolter: Extensions to Description Logics. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, *The Description Logic Handbook: Theory, Implementation, and Applications*, pages 219–261. Cambridge University Press, 2003.

Franz Baader, Jan Hladik, Carsten Lutz, and Frank Wolter: From Tableaux to Automata for Description Logics. *Fundamenta Informaticae*, 57:1–33, 2003.

C. Lutz, F. Wolter, and M. Zakharyaschev: Reasoning about concepts and similarity. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, in *CEUR-WS*, 2003.

F. Baader, C. Lutz, H. Sturm, and F. Wolter: Fusions of Description Logics and Abstract Description Systems. *Journal of Artificial Intelligence Research (JAIR)*, 16:1–58, 2002.

C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: A Tableau Decision Algorithm for Modalized *ALC* with Constant Domains. *Studia Logica*, 72(2):199–232, 2002.

C. Lutz, U. Sattler, and F. Wolter: Description Logics and the Two-Variable Fragment. In D.L. McGuinness, P.F. Patel-Schneider, C. Goble, and R. Möller, editors, *Proceedings of the 2001 International Workshop on Description Logics (DL-2001)*, pages 66–75, 2001. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/

C. Lutz, U. Sattler, and F. Wolter: Modal Logics and the two-variable fragment. In *Annual Conference of the European Association for Computer Science Logic CSL'01*, in *LNCS*. Paris, France, Springer Verlag, 2001.

C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: Tableaux for Temporal Description Logic with Constant Domain. In Rajeev Goré, Alexander Leitsch, and Tobias Nipkow, editors, *Proceedings of the International Joint Conference on Automated Reasoning*, number 2083 in *Lecture Notes in Artificial Intelligence*, pages 121–136. Siena, Italy, Springer Verlag, 2001.

F. Baader, C. Lutz, H. Sturm, and F. Wolter: Fusions of Description Logics. In F. Baader and U. Sattler, editors, *Proceedings of the International Workshop on Description Logics 2000 (DL2000)*, number 33 in *CEUR-WS*, pages 21–30. Aachen, Germany, RWTH Aachen, August 2000. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/

C. Lutz and U. Sattler: The Complexity of Reasoning with Boolean Modal Logic. In *Advances in Modal Logic 2000 (AiML 2000)*, 2000. Final version appeared in *Advances in Modal Logic*, Volume 3, 2001.

PhD Project: Non-standard Inference Problems in Description Logics

Principal Investigator: F. Baader

Involved person: R. Küsters

Funded by: Studienstiftung des deutschen Volkes

#### Literature:

F. Baader, A. Borgida, and D.L. McGuinness: Matching in Description Logics: Preliminary Results. In M.-L. Mugnier and M. Chein, editors, *Proceedings of the Sixth International Conference on Conceptual Structures (ICCS-98)*, volume 1453 of *Lecture Notes in Computer Science*, pages 15–34. Montpellier (France), Springer-Verlag, 1998.

F. Baader and P. Narendran: Unification of Concept Terms in Description Logics. In H. Prade, editor, *Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)*, pages 331–335. John Wiley & Sons Ltd, 1998.

F. Baader and R. Küsters: Computing the least common subsumer and the most specific concept in the presence of cyclic *ALN*-concept descriptions. In O. Herzog and A. Günter, editors, *Proceedings of the 22nd Annual German Conference on Artificial Intelligence, KI-98*, volume 1504 of *Lecture Notes in Computer Science*, pages 129–140. Bremen, Germany, Springer-Verlag, 1998.

F. Baader, R. Küsters, and R. Molitor: Structural Subsumption Considered from an Automata Theoretic Point of View. In *Proceedings of the 1998 International Workshop on Description Logics DL'98*, 1998.

R. Küsters: Characterizing the Semantics of Terminological Cycles in *ALN* using Finite Automata. In *Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98)*, pages 499–510. Morgan Kaufmann, 1998.

PhD Project: Using Novel Inference Services to support the development of models of chemical processes

Principal Investigator: F. Baader

Involved person: R. Molitor

Funded by: DFG Graduiertenkolleg "Informatik und Technik"

In the PhD project "Terminological knowledge representation systems in a process engineering application" it has been shown that the development of models of chemical processes can be supported by terminological knowledge representation systems. These systems are based on description logics, a highly expressive formalism with well-defined semantics, and provide powerful inference services. The main focus of that project was the investigation of traditional inference services like computing the subsumption hierarchy and instance checking. The new project aims at the formal investigation of novel inference services that allow for more comprehensive support of the process engineers.

The development of models of chemical processes as carried out at the Department of Process Engineering is based on the usage of so-called building blocks. Using a DL-system, these building blocks can be stored in a class hierarchy. So far, the integration of new (classes of) building blocks into the existing hierarchy is not sufficiently supported. The given system services only allow for checking the consistency of extended classes, but they cannot be used to define new classes. Hence, the existing classes become larger and larger, the hierarchy degenerates, and the retrieval and re-use of building blocks become harder.

The approach considered in this PhD project can be described as follows: instead of directly defining a new class, the knowledge engineer introduces several typical examples (building blocks) of instances of the new class. These examples (resp. their descriptions) are then translated into individuals (resp. concept descriptions) in the DL knowledge base. For the resulting set of concept descriptions, a concept definition describing the examples as specifically as possible is automatically computed by computing so-called most specific concepts (msc) and least common subsumers (lcs) (see [1], [2] for details). Unfortunately, it turned out that, due to the nature of the algorithms for computing the msc and the lcs and the inherent complexity of these operations, the resulting concept description does not contain defined concepts and is quite large. Thus, in order to obtain a concept description that is easy to read and comprehend, a rewriting of the result is computed [3]. This rewriting is then offered to the knowledge engineer as a possible candidate for the new class definition.
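The lcs computation mentioned above can be illustrated on a small example: for EL-concept descriptions, the lcs corresponds to the product of their description trees (intersecting the concept names at each node and pairing edges with equal role names). The following is a minimal sketch under that representation; the tree encoding and the concept names are invented for illustration and not taken from the project.

```python
from dataclasses import dataclass

# An EL-concept description as a description tree: a set of primitive
# concept names at the root plus role-labelled edges to subtrees, one
# edge per existential restriction (exists r.C).
@dataclass(frozen=True)
class ELTree:
    names: frozenset        # primitive concept names at this node
    edges: tuple = ()       # pairs (role, ELTree)

def lcs(c, d):
    """Least common subsumer of two EL descriptions via the product tree:
    intersect the node labels and, for every pair of edges carrying the
    same role name, recurse on the two subtrees."""
    edges = tuple((r1, lcs(s1, s2))
                  for r1, s1 in c.edges
                  for r2, s2 in d.edges
                  if r1 == r2)
    return ELTree(c.names & d.names, edges)

# lcs(Device ⊓ ∃hasPart.(Pump ⊓ Valve), Device ⊓ Machine ⊓ ∃hasPart.Pump)
#   = Device ⊓ ∃hasPart.Pump
c = ELTree(frozenset({"Device"}),
           (("hasPart", ELTree(frozenset({"Pump", "Valve"}))),))
d = ELTree(frozenset({"Device", "Machine"}),
           (("hasPart", ELTree(frozenset({"Pump"}))),))
```

The product construction also shows why the raw result "is quite large": the product of two trees can grow quadratically, and iterated lcs computations exponentially, which is exactly the blow-up the rewriting step [3] is meant to tame.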

The inference problems underlying the notions most specific concept (msc) and least common subsumer (lcs) were introduced for the DL used in the DL-system Classic developed at AT&T. For DLs relevant in the process engineering application, all novel inference services employed in the approach have been investigated only recently.

#### Literature:

F. Baader, R. Küsters, and R. Molitor: Computing Least Common Subsumers in Description Logics with Existential Restrictions. In T. Dean, editor, *Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99)*, pages 96–101. Morgan Kaufmann, 1999.

F. Baader and R. Molitor: Rewriting Concepts Using Terminologies. In P. Lambrix, A. Borgida, M. Lenzerini, R. Möller, and P. Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics 1999 (DL'99)*, number 22 in *CEUR-WS*. Sweden, Linköping University, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

F. Baader, R. Molitor, and S. Tobies: Tractable and Decidable Fragments of Conceptual Graphs. In W. Cyre and W. Tepfenhart, editors, *Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99)*, number 1640 in *Lecture Notes in Computer Science*, pages 480–493. Springer Verlag, 1999.

F. Baader, R. Küsters, and R. Molitor: Structural Subsumption Considered from an Automata Theoretic Point of View. In *Proceedings of the 1998 International Workshop on Description Logics DL'98*, 1998.

Postgraduate Programme: Specification of discrete processes and systems of processes by operational models and logics

The concept of reactive, technical systems forms the motivation for this research project. Such systems can be considered as configurations of resources on which processes, i.e., sequences of actions, are running; such sequences depend on the inner state of the system and on external events. Examples of reactive systems are operating systems, communication systems, control systems of technical installations, and systems for medical diagnosis.

In our research project we aim at describing the processes of such reactive, technical systems by means of formal methods. From these formal descriptions we try to derive properties of all the processes which run on the system. Because of the complexity of such systems, only formal methods can provide a sufficient basis for the verification and reliability of such properties.

There is a huge diversity of formal models for the description of processes. Roughly, they can be partitioned into operational models and logics. The aim of the research programme is, on the one hand, to continue the investigation of how to describe processes in a formal way and the comparison of different formalizations. On the other hand, we aim at enriching these formal concepts with verification techniques.

EU-LTR Project: Data Warehouse Quality (DWQ)

Principal Investigator: F. Baader

Involved person: U. Sattler

Funded by: EU

Due to the increasing speed and memory capacities of information systems, the amount of data processed by these systems is steadily growing, and the amount of data available in electronic form is growing tremendously as well. As a consequence, enterprises can access a huge amount of data concerning their business. Unfortunately, this data is mostly distributed among different systems, has different formats, and thus cannot be consolidated as a whole. Data warehouses are designed to process these huge amounts of data in such a way that decisions can be based on them. Data warehouses are software tools closely related to database systems which have the following features: they make it possible (1) to extract and integrate data from different sources into a central schema, (2) to combine and aggregate this integrated data, and (3) to ask ad-hoc queries. Furthermore, they are closely related to "on-line analytical processing" systems (OLAP) and decision-support systems (DSS).

Within the DWQ project, we are mainly concerned with the investigation of multidimensional aggregation. This comprises aggregation and part-whole relations per se as well as properties of multiply hierarchically structured domains such as time, space, organizations, etc. The understanding of these properties shall lead to a formalism that supports the aggregation of information along multiple dimensions. The degree of support will be evaluated with respect to the quality criteria developed within this project.
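To make "aggregation along multiple dimensions" concrete, here is a toy sketch (the fact table, its column layout, and the function name are invented for illustration and not part of DWQ): facts are projected onto a chosen subset of dimensions and their measures summed, the basic roll-up operation of OLAP-style analysis.

```python
from collections import defaultdict

# Invented toy fact table: (time, region, measure) rows of the kind a
# data warehouse would integrate from different source systems.
facts = [
    ("Q1", "EU", 100),
    ("Q1", "US", 150),
    ("Q2", "EU", 120),
]

def roll_up(rows, dims):
    """Aggregate the measure along the selected dimensions
    (0 = time, 1 = region): project each row onto those dimensions
    and sum the measures that fall onto the same projected key."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row[2]
    return dict(totals)

# Rolling up along time only merges the two Q1 rows:
# roll_up(facts, (0,)) == {("Q1",): 250, ("Q2",): 120}
```

The project's concern, properties of multiply hierarchically structured dimensions such as time or organizations, is precisely about when such roll-ups along different paths of a dimension hierarchy are well-defined and consistent.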

#### Literature:

I. Horrocks and U. Sattler: A Description Logic with Transitive and Inverse Roles and Role Hierarchies. *Journal of Logic and Computation*, 9(3):385–410, 1999.

F. Baader and U. Sattler: Description Logics with Concrete Domains and Aggregation. In H. Prade, editor, *Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)*, pages 336–340. John Wiley & Sons Ltd, 1998.

I. Horrocks and U. Sattler: A Description Logic with Transitive and Converse Roles and Role Hierarchies. In *Proceedings of the International Workshop on Description Logics*. Povo - Trento, Italy, IRST, 1998.

F. Baader and U. Sattler: Description Logics with Aggregates and Concrete Domains. In *Proceedings of the International Workshop on Description Logics*, 1997.

Project: Design of a Medical Information System with Vague Knowledge

Principal Investigator: F. Baader

Involved person: C. Tresp

Funded by: DFG Graduiertenkolleg "Informatik und Technik"

#### Literature:

R. Molitor and C.B. Tresp: Extending Description Logics to Vague Knowledge in Medicine. In P. Szczepaniak, P.J.G. Lisboa, and S. Tsumoto, editors, *Fuzzy Systems in Medicine*, volume 41 of *Studies in Fuzziness and Soft Computing*, pages 617–635. Springer Verlag, 2000.

C.B. Tresp and R. Molitor: A Description Logic for Vague Knowledge. In *Proceedings of the 13th biennial European Conference on Artificial Intelligence (ECAI'98)*, pages 361–365. Brighton, UK, J. Wiley and Sons, 1998.

C. Tresp and U. Tüben: Medical Terminology Processing for a Tutoring System. In *International Conference on Computational Intelligence and Multimedia Applications (ICCIMA98)*, February 1998.

J. Weidemann, H.-P. Hohn, J. Hiltner, K. Tochtermann, C. Tresp, D. Bozinov, K. Venjakob, A. Freund, B. Reusch, and H.-W. Denker: A Hypermedia Tutorial for Cross-Sectional Anatomy: HyperMed. *Acta Anatomica*, 158, 1997.

Project: On the expressive power of loop constructs in imperative programming languages

Involved persons: K. Indermark, C. A. Albayrak

#### Literature:

Can Adam Albayrak and Thomas Noll: The WHILE Hierarchy of Program Schemes is Infinite. In Maurice Nivat, editor, *Proceedings of Foundations of Software Science and Computation Structures*, pages 35–47. LNCS 1378, Springer, 1998.

C.A. Albayrak: Die WHILE-Hierarchie für Programmschemata. RWTH Aachen, 1998.

Project: Using Novel Inference Services to support the development of models of chemical processes

Principal Investigator: F. Baader

Involved person: U. Sattler

Funded by: DFG Graduiertenkolleg "Informatik und Technik"

In this project, the suitability of different terminological KR languages for representing relevant concepts in different engineering domains will be investigated. In cooperation with Prof. Dr. Marquardt, Aachen, we are trying to design a terminological KR language that is expressive enough to support modeling in process engineering without having inference problems of unacceptably high complexity. The goal of representing this knowledge is to support the modeling of large chemical plants for planning and optimization purposes. The complex structure of such plants demands a highly expressive terminological language, in particular because the support of top-down modeling requires the appropriate treatment of part-whole relations. The formally well-founded and algorithmically manageable integration of such relations is one of our main research goals here.

#### Literature:

U. Sattler: Terminological knowledge representation systems in a process engineering application. LuFG Theoretical Computer Science, RWTH-Aachen, 1998.

F. Baader and U. Sattler: Description Logics with Symbolic Number Restrictions. In W. Wahlster, editor, *Proceedings of the Twelfth European Conference on Artificial Intelligence (ECAI-96)*, pages 283–287. John Wiley & Sons Ltd, 1996. An extended version has appeared as Technical Report LTCS-96-03

F. Baader and U. Sattler: Knowledge Representation in Process Engineering. In *Proceedings of the International Workshop on Description Logics*. Cambridge (Boston), MA, U.S.A., AAAI Press/The MIT Press, 1996.

F. Baader and U. Sattler: Number Restrictions on Complex Roles in Description Logics. In *Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR-96)*. Morgan Kaufmann, Los Altos, 1996. An extended version has appeared as Technical Report LTCS-96-02

U. Sattler: A Concept Language Extended with Different Kinds of Transitive Roles. In G. Görz and S. Hölldobler, editors, *20. Deutsche Jahrestagung für Künstliche Intelligenz*, number 1137 in *Lecture Notes in Artificial Intelligence*. Springer Verlag, 1996.

Project: Integration of modal operators into terminological knowledge representation languages

Involved persons: F. Baader in cooperation with Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) and Max-Planck-Institut für Informatik (MPI)

Traditional terminological knowledge representation systems can be used to represent the objective, time-independent knowledge of an application domain. Representing subjective knowledge (e.g., belief and knowledge of intelligent agents) and time-dependent knowledge is only possible in a very restricted way. In systems modeling aspects of intelligent agents, however, intentions, beliefs, and time-dependent facts play an important role.

Modal logics with Kripke-style possible-worlds semantics yield a formally well-founded and well-investigated framework for the representation of such notions. However, most modal logics have been defined with first-order predicate logic as the underlying formalism, which makes these logics strongly undecidable. By substituting terminological languages for first-order predicate logic as the underlying formalism, one can substantially raise the expressive power of the terminological language while preserving the decidability of the inference problems.

In collaboration with researchers at DFKI and MPII (Saarbrücken), we are investigating different possibilities for integrating modal operators into terminological KR systems. Our main goal is to design a combined formalism for which all the important terminological inference problems (such as the subsumption problem) are still decidable. Otherwise, it would not be possible to employ such a formalism in an implemented system.
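As a small illustration of such a combined formalism (the concept and role names and the agent are invented, not taken from the project), modal operators can be applied directly to concept descriptions:

```latex
% A terminological concept with an epistemic modality: individuals that
% own a car and that agent a believes to want a car.
% \Box_a is read as "agent a believes"; names are invented examples.
\exists \mathit{owns}.\mathit{Car} \;\sqcap\; \Box_a\, \exists \mathit{wants}.\mathit{Car}
```

The terminological part (conjunction, existential restrictions) stays inside a decidable DL, while the modal operator is interpreted via a Kripke accessibility relation for the agent; keeping the inference problems of the combination decidable is exactly the design goal stated above.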

#### Literature:

F. Baader and A. Laux: Terminological Logics with Modal Operators. In C. Mellish, editor, *Proceedings of the 14th International Joint Conference on Artificial Intelligence*, pages 808–814. Montréal, Canada, Morgan Kaufmann, 1995.

F. Baader and H.-J. Ohlbach: A Multi-Dimensional Terminological Knowledge Representation Language. *J. Applied Non-Classical Logics*, 5:153–197, 1995.

H.-J. Ohlbach and F. Baader: A Multi-Dimensional Terminological Knowledge Representation Language. In *Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI-93*, pages 690–695, 1993.

DFG Project: Combination of special deduction procedures

Principal Investigator: F. Baader

Involved person: J. Richts

Funded by: DFG, Schwerpunkt "Deduktion"

Since September 1994, this research project has been funded within the Schwerpunkt "Deduktion" by the Deutsche Forschungsgemeinschaft (DFG) for two years, possibly with a prolongation for two more years. It is joint work with the research group of Prof. Schulz at the CIS, University of Munich.

This research is mainly concerned with combining decision procedures for unification problems. Such a procedure can be used to decide whether a given set of equations is solvable with respect to an equational theory or a combination of equational theories. For unification procedures that return complete sets of unifiers, the combination problem has been investigated in great detail. In contrast, a decision procedure returns only a Boolean value indicating whether a solution exists.

One aim of this research project is to develop optimizations of the known combination method for decision procedures, which apply to restricted classes of equational theories. The general combination algorithm is nondeterministic, which means that in the worst case, exponentially many possibilities must be investigated. If the equational theories under consideration satisfy some simple syntactic restrictions, large parts of the search tree can be pruned. We will investigate to which extent such optimizations are possible.

Another aim is to generalize the existing combination algorithms, for instance to the case of theories with non-disjoint signatures, or to more general problems than unification problems. The long-term objective of this research is to reach a better understanding of the basic algorithmic problems that occur when combining special deduction procedures, and to develop a rather general combination framework.

Since many optimized combination algorithms depend on special procedures that satisfy additional requirements, we will also investigate how well-known special inference procedures can be extended in this direction.

In order to be able to assess the effects of our optimizations and to determine their relevance in practice, we will implement the investigated unification algorithms - the combination algorithm as well as selected special algorithms. For the implementation of special unification algorithms, we have chosen the equational theories A, AC and ACI, which contain a binary function symbol that is associative, commutative, and/or idempotent.
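As a concrete (and heavily simplified) illustration of such a special decision procedure, the following sketch decides solvability in the free (empty) equational theory via Robinson-style syntactic unification with an occurs check. The term encoding is invented for this sketch; deciding unifiability modulo A, AC, or ACI, as in the project, requires genuinely different, theory-specific algorithms.

```python
# Terms: a variable is a string ("x"); an application of a function
# symbol f to arguments is a tuple ("f", arg1, ..., argn); a constant
# is a 0-ary application ("a",). Encoding invented for illustration.

def is_var(t):
    return isinstance(t, str)

def unifiable(equations):
    """Decide whether a set of term equations is solvable in the free
    (empty) equational theory, returning only a Boolean, in the spirit
    of the decision procedures discussed above."""
    subst = {}  # variable -> term bindings built up so far

    def walk(t):
        # Follow variable bindings until an unbound variable or an
        # application is reached.
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):
        # Occurs check: does variable v appear in term t (under subst)?
        t = walk(t)
        if is_var(t):
            return v == t
        return any(occurs(v, arg) for arg in t[1:])

    todo = list(equations)
    while todo:
        s, t = todo.pop()
        s, t = walk(s), walk(t)
        if s == t:
            continue
        if is_var(s):
            if occurs(s, t):
                return False        # x = f(...x...) has no solution
            subst[s] = t
        elif is_var(t):
            todo.append((t, s))     # orient the equation
        elif s[0] == t[0] and len(s) == len(t):
            todo.extend(zip(s[1:], t[1:]))  # decompose f(s1..sn) = f(t1..tn)
        else:
            return False            # clash of distinct function symbols
    return True

# f(x, a) = f(a, y) is solvable (x = a, y = a); x = f(x) is not.
```

In a combination framework of the kind discussed above, several such theory-specific procedures are run on a purified, variable-sharing version of the mixed input, which is where the nondeterministic, worst-case exponential branching arises.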

#### Literature:

Stephan Kepser and Jörn Richts: Optimisation Techniques for Combining Constraint Solvers. In Dov Gabbay and Maarten de Rijke, editors, *Frontiers of Combining Systems 2, Papers presented at FroCoS'98*, pages 193–210. Amsterdam, Research Studies Press/Wiley, 1999.

Stephan Kepser and Jörn Richts: UniMoK: A System for Combining Equational Unification Algorithms. In *Rewriting Techniques and Applications, Proceedings RTA-99*, volume 1631 of *Lecture Notes in Computer Science*, pages 248–251. Springer-Verlag, 1999.

Jörn Richts: Effiziente Entscheidungsverfahren zur E-Unifikation. PhD Thesis, RWTH Aachen. Shaker Verlag, Germany, 1999.

F. Baader and K. Schulz: Combination of Constraint Solvers for Free and Quasi-Free Structures. *Theoretical Computer Science*, 192:107–161, 1998.

F. Baader and C. Tinelli: Deciding the Word Problem in the Union of Equational Theories. UIUCDCS-Report UIUCDCS-R-98-2073, Department of Computer Science, University of Illinois at Urbana-Champaign, 1998.

F. Baader: Combination of Compatible Reduction Orderings that are Total on Ground Terms. In G. Winskel, editor, *Proceedings of the Twelfth Annual IEEE Symposium on Logic in Computer Science (LICS-97)*, pages 2–13. Warsaw, Poland, IEEE Computer Society Press, 1997.

F. Baader and C. Tinelli: A New Approach for Combining Decision Procedures for the Word Problem, and Its Connection to the Nelson-Oppen Combination Method. In W. McCune, editor, *Proceedings of the 14th International Conference on Automated Deduction (CADE-97)*, volume 1249 of *Lecture Notes in Artificial Intelligence*, pages 19–33. Springer-Verlag, 1997.

Franz Baader and Klaus U. Schulz, editors: Frontiers of Combining Systems. Kluwer Academic Publishers, 1996.

F. Baader and K. U. Schulz: Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures. *J. Symbolic Computation*, 21:211–243, 1996.

Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: Die Beweisentwicklungsumgebung Omega-MKRP. *Informatik – Forschung und Entwicklung*, 11(1):20–26, 1996. In German

F. Baader and K.U. Schulz: Combination Techniques and Decision Problems for Disunification. *Theoretical Computer Science B*, 142:229–255, 1995.

F. Baader and K.U. Schulz: Combination of Constraint Solving Techniques: An Algebraic Point of View. In *Proceedings of the 6th International Conference on Rewriting Techniques and Applications*, volume 914 of *Lecture Notes in Artificial Intelligence*, pages 352–366. Kaiserslautern, Germany, Springer Verlag, 1995.

F. Baader and K.U. Schulz: On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers. In *Proceedings of the International Conference on Principles and Practice of Constraint Programming, CP95*, volume 976 of *Lecture Notes in Artificial Intelligence*, pages 380–397. Cassis, France, Springer Verlag, 1995.

Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: KEIM: A Toolkit for Automated Deduction. In Alan Bundy, editor, *Automated Deduction — CADE-12, Proceedings of the 12th International Conference on Automated Deduction*, volume 814 of *Lecture Notes in Artificial Intelligence*, pages 807–810. Nancy, Springer-Verlag, 1994.

F. Baader and K. Schulz: Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures. In *Proceedings of the 11th International Conference on Automated Deduction, CADE-92*, volume 607 of *Lecture Notes in Computer Science*, pages 50–65. Saratoga Springs (USA), Springer-Verlag, 1992.