Research projects
Details of research projects assigned to a cluster of excellence, a research training group, or a research unit can be found on the corresponding linked websites. Independent research projects are described further below.
Active Projects
Clusters of Excellence and Collaborative Research Centres
Center for Perspicuous Computing (CPEC)
The Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” aims at enabling comprehension in a cyber-physical world with the human in the loop.
From autonomous vehicles to Industry 4.0, from smart homes to smart cities – increasingly, computer programs participate in actions and decisions that affect humans. However, our understanding of how these applications interact and of what causes a specific automated decision lags far behind. As cyber-physical technology increasingly impacts our lives, the consequences of this gradual loss of understanding are becoming severe. Systems lack support for making their behaviour plausible to their users. Even for technology experts, it is nowadays virtually impossible to give scientifically well-founded answers to questions about the exact reasons that lead to a particular decision, or about the responsibility for a malfunction. The root cause of the problem is that contemporary systems do not have any built-in concepts to explicate their behaviour. They calculate and propagate outcomes of computations, but are not designed to provide explanations. They are not perspicuous.
A3 – Description Logic Explications
The main goal of the project is to develop novel techniques for the user-adaptive explication of knowledge-based reasoning, and to combine them with the explication generators developed in other projects. Within the first funding period, it will focus on explications for description logic (DL) reasoning since DLs are a very prominent family of knowledge representation formalisms, for which various sophisticated reasoning techniques and implemented reasoning systems are available. The project will build upon existing work on explication of DL and first-order reasoning, but needs to extend it in several directions.
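To illustrate the kind of reasoning whose explication is at stake (a generic example of ours, not taken from the project description): from the axioms \(A \sqsubseteq B\) and \(B \sqsubseteq \exists r.C\), a DL reasoner derives the entailment \(A \sqsubseteq \exists r.C\). A user-adaptive explication could present this to a novice as an explicit two-step proof (first use \(A \sqsubseteq B\), then propagate along the second axiom), while condensing it to a single chaining step for an expert.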
E1 – Interactive Exploration of Visual Models
Besides the common textual representation, graph visualisations present both structural information and multivariate data attributes as a basis for improved understanding and explanation of models. This project aims at enabling the exploration and refinement of models at both design and inspection time through interactive visualisation. As a concrete starting point, we will contribute to the visual analysis of behaviour, causality, and compositionality, as well as visualise and facilitate the explanation of consequences resulting from reasoning in knowledge representations based on description logics. We will investigate interactive model visualisation in future office setups and multi-display environments (MDE) with novel input and output technologies to improve the exploration process for multiple users and tasks. In the first funding period, we identify, design, and evaluate suitable representations of models of varying complexity and size, and develop novel interaction techniques for exploring these models that are appropriate for the current user, task, context, and display setup. Our focus lies on creating responsive graph visualisations that fit these conditions and on developing a framework for automating this adaptation to enable an improved exploration process.
E2 – Safe Handover in Mixed-Initiative Control
In mixed-initiative control, one important issue is handover. The mission of this project is to better understand how machines can safely hand over control to humans, and to develop technological support. We will use formal methods from AI – description logic (DL) and automated planning – to more reliably predict when a handover is necessary, and to increase the advance notice for handovers by planning ahead at run-time. We will combine methods from human-computer interaction and natural language generation to develop solutions for safe and smooth handovers. Ultimately, we strive to tightly integrate HCI and formal methods, making human aspects of the human-machine system more accessible to formal analysis, thereby ensuring operational safety.
Center for Scalable Data Analysis and Artificial Intelligence (ScaDS.AI)
ScaDS.AI Dresden/Leipzig is one of the five AI competence centers in Germany, established within the national strategy for AI. Since July 2022, it has been permanently funded by the Federal Ministry of Education and Research (BMBF) and the Saxon State Ministry for Science, Culture and Tourism (SMWK). To advance the penetration and usability of AI, the understanding of its methods, the trust in its outcomes, and its consequences for society, research at ScaDS.AI Dresden/Leipzig works on bridging the gap between the efficient use of mass data, advanced AI methods, and knowledge representation. This research is structured into four interrelated focus areas: Big Data analytics and engineering; AI algorithms and methods; applied Big Data and AI; and integrative topics on responsible AI and architectures, scalability, and security. Our group contributes its expertise in knowledge representation, ontology engineering, and automated deduction to the focus area AI algorithms and methods, and there mainly to the research topics Knowledge Representation and Engineering and Mathematical Foundations and Statistical Learning.
Center for Advancing Electronics Dresden (cfAED)
The myriad applications of electronics have made the underlying semiconductor technology a key engine of progress. We have grown accustomed to a steady stream of innovations with huge impact on life. However, societies keep facing challenges that require solutions reaching far beyond what the semiconductor industry’s roadmap projects as feasible. The Cluster cfaed addresses breakthroughs in advancing electronics in order to enable hitherto unforeseen innovations. The main research foci result from the Excellence Cluster phase 2012–2018 and the existing, grown excellence in Dresden. The Cluster aims at impacting the future of electronics by initiating revolutionary new applications such as electronics featuring zero boot time, THz imaging, and complex biosignal processing. We will graduate scientists who think across disciplinary borders, stimulate industry, and create impulses for new start-up ideas. cfaed’s research continues to strengthen TU Dresden, enhance its international credibility, and contribute to making Dresden a leading location for fundamental electronics research.
Vision
Advancing electronics to serve unmet societal needs is the vision of cfaed. Building upon the scientific excellence of our researchers and achievements in cfaed’s Excellence Cluster phase 2012-2018, we pursue the following objectives that further strengthen our aspiration to become an internationally recognized leader in fundamental electronics research.
Scientific objectives
Generate breakthroughs in electronics showing a path beyond current roadmaps.
cfaed intends to push the frontier currently set by the industry roadmap for electronics towards unforeseen innovations, such as super-dense memories, attomolar sensors, fully reconfigurable systems, low-cost on-demand manufacturable circuits, and zero boot-time computing.
Bridge engineering and natural sciences to address untouched research topics.
Natural scientists build a discovery-driven, deep understanding of materials and processes, which is picked up by engineering scientists to innovate electronics. Bridging these worlds of thinking is a key ingredient for the success of cfaed: for example, using spin-orbitronics to design memories, using bio-molecules to design and manufacture electronics, and tailoring materials in novel ways to enable THz devices and electronics with non-volatile state memory.
Unleash the full potential of our new devices into new systems design.
We dissolve the functional boundaries between memory, logic, and sensing to build fully reconfigurable systems that converge towards general-purpose electronics, while preserving the capability of exploiting and interfacing custom-tailored components.
Structural objectives
Graduate highly skilled and inspired scientists who think beyond boundaries.
The electronics industry is a pillar of economic success in Dresden and we feel a responsibility to foster visionary scientists with skills and methods to tackle future challenges. Additional support programs at cfaed for careers in academia, industry, and start-ups underpin this effort.
Implement a structured career development program for researchers at all levels.
Academia in general has little experience in structured career development. Therefore, we are committed to implementing a well-structured career development plan to support scientists at all career levels spanning from PhD students to professors, and to attract and promote young high-potential researchers (early career support).
Embrace managerial strategies that promote equal opportunity.
We strongly support equal opportunity and implement concrete measures to ensure nondiscriminative recruiting and a family-friendly environment across all career levels throughout the Cluster’s structure.
Embody a research governance that keeps the center open to new input and ideas.
The high-risk nature of our research necessitates unconventional thinking to overcome inevitable challenges. Building on our experiences from the previous funding phase, ”cfaed-1”, we employ an open and flexible research structure and closely monitor global research progress for new insights. This includes appropriate quality assurance and risk management measures, as well as accessible management.
Implement new research infrastructure (“Enablers”) that drives the standing of cfaed and TU Dresden.
Our planned high-quality enablers, e.g. labs, microscopy, and electronics design facilities, shall help cfaed thrive. Our open-doors policy at our institutions shall help TU Dresden to excel as a whole.
Act as first point of contact for all matters relating to international microelectronics research.
We aim to establish cfaed as a committed partner in a collaborative work and research environment. cfaed shall be the first point of contact for all matters concerning microelectronics research for all stakeholders, e.g., politics, industry, research institutes, the public.
Research Training Groups
Role-based Software Infrastructures for continuous-context-sensitive Systems (RoSI)
Software with long life cycles faces continuously changing contexts. New functionality has to be added, new platforms have to be addressed, and existing business rules have to be adjusted. The concept of role modeling has been introduced in different fields and at different times in order to model context-related information, including, above all, the dynamic change of contexts. However, roles have mostly been used in an isolated way for context modeling: in programming languages, in database modeling, or to specify access control mechanisms. They have never been used consistently on all levels of abstraction in the software development process, i.e., in the modeling of concepts, languages, applications, and software systems.
The central research goal of this program is to deliver proof of the capability of consistent role modeling and its practical applicability. This includes the concept modeling (in meta-languages), the language modeling, and the modeling on the application and software system level. The subsequent scientific elaboration of the role concept, in order to be able to model the change of context on different levels of abstraction, represents another research task in this program. Consistency offers significant advantages in the field of software systems engineering because context changes are interrelated on different levels of abstraction; plus, they can be synchronously developed and maintained. Potential application fields are the future smart grid, natural energy based computing, cyber-physical systems in home, traffic, and factories, enterprise resource planning software, context-sensitive search engines, etc.
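To make the role concept more tangible, the following minimal Python sketch (our illustration only; RoSI investigates role support consistently across modeling languages, programming languages, and databases rather than any particular encoding) shows a core object that plays a context-dependent role and sheds it again when the context changes:

class Person:
    def __init__(self, name):
        self.name = name

class Customer:
    # Role: behaviour a Person exhibits only while bound to a Bank context.
    def __init__(self, player, account_no):
        self.player = player            # the core object playing this role
        self.account_no = account_no
        self.balance = 0

class Bank:
    # Context (compartment) in which the Customer role can be played.
    def __init__(self):
        self.customers = {}

    def enroll(self, person, account_no):
        role = Customer(person, account_no)
        self.customers[person.name] = role
        return role

    def leave(self, person):
        # The role binding disappears with the context; the Person remains.
        self.customers.pop(person.name, None)

alice = Person("Alice")
bank = Bank()
role = bank.enroll(alice, "DE-001")     # Alice plays Customer in this Bank
role.balance += 100
bank.leave(alice)                       # context change: the role is shed

In a role-based software infrastructure as envisioned by this program, such role bindings would be first-class citizens handled consistently across modeling, programming, and persistence layers, instead of being hand-coded as above.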
The research training group puts a strong emphasis on a comprehensive and individual mentoring and qualification approach. To achieve this, quality assurance measures are introduced in the form of advisor tandems and a thesis advisory board on the one hand. On the other hand, motivating and extra-curricular aspects are integrated into the research training group, such as seminars on soft skills and a comprehensive international program for visiting scientists.
Research Projects
Repairing Description Logic Ontologies (ReDLO)
Description Logics (DLs) are a family of logic-based knowledge representation languages, which are used to formalize ontologies for application domains such as biology and medicine. As the size of DL-based ontologies grows, tools for improving their quality become more important. DL reasoners can detect inconsistencies and infer other implicit consequences. However, for the developer of an ontology, it is often hard to understand why a consequence computed by the reasoner follows, and how to repair the ontology if this consequence is not intended. The classical algorithmic approach for repairing DL-based ontologies is to compute (one or more) maximal subsets of the ontology that no longer have the unintended consequence.

In previous work we have introduced a more "gentle" approach for repairing ontologies that allows us to retain more consequences than the classical approach: instead of completely removing axioms, our gentle repair replaces them by "weaker" axioms, i.e., axioms that have fewer consequences. In addition to introducing the general framework, we have defined useful properties of weakening relations on axioms and investigated two particular such relations for the DL EL. The purpose of this project is foremost to investigate this gentle repair approach in more detail. On the one hand, we will consider variants of the general framework, e.g., w.r.t. which and how many axioms are chosen to be weakened. On the other hand, we will design different weakening relations, both for lightweight DLs such as EL and for expressive DLs such as ALC, and investigate their properties.

In addition, we will clarify the relationship to similar approaches in other areas such as belief revision and inconsistency-tolerant reasoning, and extend our approach to the setting of privacy-preserving ontology publishing. Regarding belief revision, it should be noted that our gentle repair approach is related to what is called pseudo-contraction in that area. However, the emphasis in belief revision is more on abstract approaches for generating contractions that satisfy certain postulates than on defining and investigating concrete repair approaches. We will investigate which of these postulates are satisfied by our approaches. Inconsistency-tolerant reasoning is of interest since it is often based on considering what follows from some, all, or the intersection of all repairs. We will investigate how our new notion of repair influences these inconsistency-tolerant reasoning approaches. In privacy-preserving ontology publishing, removing the consequences that are to be hidden from the ontology is not sufficient, since an attacker might have additional background information. In previous work, we have considered this problem in a very restricted setting, where both the information to be published and the background knowledge are represented using EL concepts. We intend to generalize this to more general forms of knowledge bases and more expressive DLs.
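A minimal example of the difference between the two repair approaches (our illustration): suppose a TBox contains the axiom \(A \sqsubseteq B \sqcap C\) and the consequence \(A \sqsubseteq B\) turns out to be unintended. The classical repair removes the axiom entirely and thereby also loses the intended consequence \(A \sqsubseteq C\). A gentle repair instead replaces the axiom by the weaker axiom \(A \sqsubseteq C\), which no longer entails \(A \sqsubseteq B\) but retains \(A \sqsubseteq C\).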
Reasoning and Query Answering Using Concept Similarity Measures and Graded Membership Functions (TIMI)
Description Logics (DLs) are a well-investigated family of logic-based knowledge representation languages, which are, e.g., frequently used to formalize ontologies for application domains such as biology and medicine. To define the important notions of such an application domain as formal concepts, DLs state necessary and sufficient conditions for an individual to belong to a concept. Once the relevant concepts of an application domain are formalized this way, they can be used in queries in order to retrieve new information from data. Since traditional DLs are based on classical first-order logic, their semantics is strict in the sense that all the stated properties need to be satisfied for an individual to belong to a concept, and the same is true for answers to queries. In applications where exact definitions are hard to come by, it would be useful to relax this strict requirement and allow for approximate definitions of concepts, where most, but not all, of the stated properties are required to hold. Similarly, if a query has no exact answer, approximate answers that satisfy most of the features the query is looking for could be useful.

In order to allow for approximate definitions of concepts, we have introduced the notion of a graded membership function, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1]. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. A different approach, based on similarity measures on concepts, was used to relax instance queries (i.e., queries that consist of a single concept): given a query concept C, we look for answers to query concepts D whose similarity to C is above a certain threshold. While these two approaches were developed independently by the two applicants together with doctoral students, it has turned out that there are close connections: similarity measures can be used to define graded membership functions, and threshold concepts w.r.t. these functions provide a more natural semantics for relaxed instance queries.

The goals of this project are to
1. explore the connection between the two approaches in detail, and develop and investigate a combined approach;
2. extend the applicability of this combined approach by considering a) more expressive DLs than the DL EL, for which the approaches were initially developed; b) more general terminological formalisms; c) more general queries such as conjunctive queries; d) more general inference problems such as determining appropriate threshold values rather than requiring the user to provide them;
3. conduct an empirical study to a) evaluate the approach on a prototypical application; b) generate appropriate graded membership functions or similarity measures using automated parameter tuning approaches.
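A small worked example (ours, with a deliberately simple membership function): for the query concept \(C = \exists \mathsf{hasPart}.\mathsf{CPU} \sqcap \exists \mathsf{hasPart}.\mathsf{GPU}\), an individual that has a CPU but no GPU is not a crisp answer. Under a graded membership function that, say, averages over the satisfied top-level conjuncts, it belongs to \(C\) with degree 0.5, and is thus an answer to the threshold query \(C_{\geq 0.5}\) but not to \(C_{\geq 0.8}\). Relaxation via a similarity measure proceeds analogously: the individual is an instance of \(D = \exists \mathsf{hasPart}.\mathsf{CPU}\), and it counts as a relaxed answer to \(C\) whenever the similarity between \(D\) and \(C\) exceeds the chosen threshold.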
Completed Clusters of Excellence, Collaborative Research Centres, Research Training Groups, and Research Units
Quantitative Logics and Automata (QuantLA)
Both automata and logics are employed as modelling approaches in Computer Science, and these approaches often complement each other in a synergetic way. In Theoretical Computer Science, the connection between finite automata and logics has been investigated in detail since the early 1960s. This connection is highly relevant for numerous application domains, for example the design of combinatorial and sequential circuits, verification, controller synthesis, knowledge representation, and natural language processing. Classical logics and automata models support the modelling of qualitative properties. For many Computer Science applications, however, such purely functional models are not sufficient, since quantitative phenomena also need to be modelled. Examples are the vagueness and uncertainty of a statement, the length of time periods, spatial information, and resource consumption. For this reason, different kinds of quantitative logics and automata models have been introduced. However, their connection is not as well investigated as in the classical qualitative case.
The aim of this research training group is to investigate quantitative logics and automata as well as their connection in a thorough and complete manner, using methods from Theoretical Computer Science. As possible applications we consider problems from verification, knowledge representation, and constraint solving.
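As a small example of the quantitative setting (our illustration): a weighted automaton over the max-plus semiring \((\mathbb{N} \cup \{-\infty\}, \max, +)\) can assign to every word over \(\{a,b\}\) the length of its longest block of consecutive \(a\)'s, whereas a classical finite automaton can only express Boolean properties such as "the word contains at least three consecutive \(a\)'s". Quantitative logics are designed to characterize exactly such weighted behaviours, in analogy to the classical correspondence between finite automata and monadic second-order logic.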
The planned qualification and supervision concept aims at providing the doctoral students with as much freedom as possible for their research work, while optimally preparing them for and supporting them in their research activities. In addition to the weekly research seminar, the curriculum consists of Reading Groups, a Summer School in the first year of every cohort, advanced lectures, and an annual workshop. Furthermore, the doctoral students will participate in soft-skills courses offered by the participating universities.
Publications and Technical Reports
2024
In applications of AI systems where exact definitions of the important notions of the application domain are hard to come by, the use of traditional logic-based knowledge representation languages such as Description Logics may lead to very large and unintuitive definitions, and high complexity of reasoning. To overcome this problem, we define new concept constructors that allow us to define concepts in an approximate way. To be more precise, we present a family \(\tau\mathcal{EL}(m)\) of extensions of the lightweight Description Logic \(\mathcal{EL}\) that use threshold constructors for this purpose. To define the semantics of these constructors we employ graded membership functions \(m\), which for each individual in an interpretation and concept yield a number in the interval [0,1] expressing the degree to which the individual belongs to the concept in the interpretation. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<, \leq, >, \geq\}\) then collect all the individuals that belong to \(C\) with degree \(\bowtie t\). The logic \(\tau\mathcal{EL}(m)\) extends \(\mathcal{EL}\) with threshold concepts whose semantics is defined relative to a function \(m\). To construct appropriate graded membership functions, we show how concept measures \(\sim\) (which are graded generalizations of subsumption or equivalence between concepts) can be used to define graded membership functions \(m_\sim\). Then we introduce a large class of concept measures, called simi-d, for which the logics \(\tau\mathcal{EL}(m_\sim)\) have good algorithmic properties. Basically, we show that reasoning in \(\tau\mathcal{EL}(m_\sim)\) is NP/coNP-complete without TBox, PSpace-complete w.r.t. acyclic TBoxes, and ExpTime-complete w.r.t. general TBoxes. The exception is the instance problem, which is already PSpace-complete without TBox w.r.t. combined complexity. While the upper bounds hold for all elements of simi-d, we could prove some of the hardness results only for a subclass of simi-d. This article considerably improves on and generalizes results we have shown in three previous conference papers and it provides detailed proofs of all our results.
@article{ BaFe-AIJ-24, author = {Franz {Baader} and Oliver {Fern{\'{a}}ndez Gil}}, doi = {https://doi.org/10.1016/j.artint.2023.104034}, journal = {Artificial Intelligence}, pages = {104034}, title = {Extending the description logic EL with threshold concepts induced by concept measures}, volume = {326}, year = {2024}, }
2023
OWL is a powerful language to formalize terminologies in an ontology. Its main strength lies in its foundation on description logics, allowing systems to automatically deduce implicit information through logical reasoning. However, since ontologies are often complex, understanding the outcome of the reasoning process is not always straightforward. Unlike already existing tools for exploring ontologies, our visualization tool Evonne is tailored towards explaining logical consequences. In addition, it supports the debugging of unwanted consequences and allows for an interactive comparison of the impact of removing statements from the ontology. Our visual approach combines (1) specialized views for the explanation of logical consequences and the structure of the ontology, (2) employing multiple layout modes for iteratively exploring explanations, (3) detailed explanations of specific reasoning steps, (4) cross-view highlighting and colour coding of the visualization components, (5) features for dealing with visual complexity and (6) comparison and exploration of possible fixes to the ontology. We evaluated Evonne in a qualitative study with 16 experts in logics, and their positive feedback confirms the value of our concepts for explaining reasoning and debugging ontologies.
@article{ MEALKOLABADA-CGF23, author = {Juli{\'a}n {M{\'e}ndez} and Christian {Alrabbaa} and Patrick {Koopmann} and Ricardo {Langner} and Franz {Baader} and Raimund {Dachselt}}, doi = {https://doi.org/10.1111/cgf.14730}, journal = {Computer Graphics Forum}, number = {6}, pages = {e14730}, title = {Evonne: A Visual Tool for Explaining Reasoning with {OWL} Ontologies and Supporting Interactive Debugging}, volume = {42}, year = {2023}, }
2022
Explanations for description logic (DL) entailments provide important support for the maintenance of large ontologies. The "justifications" usually employed for this purpose in ontology editors pinpoint the parts of the ontology responsible for a given entailment. Proofs for entailments make the intermediate reasoning steps explicit, and thus explain how a consequence can actually be derived. We present an interactive system for exploring description logic proofs, called Evonne, which visualizes proofs of consequences for ontologies written in expressive DLs. We describe the methods used for computing those proofs, together with a feature called signature-based proof condensation. Moreover, we evaluate the quality of generated proofs using real ontologies.
@inproceedings{ ALBABODAKOME-IJCAR22, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Raimund {Dachselt} and Patrick {Koopmann} and Juli\'an {M\'endez}}, booktitle = {Automated Reasoning - 11th International Joint Conference, {IJCAR} 2022}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {\textsc{Evonne}: Interactive Proof Visualization for Description Logics (System Description)}, year = {2022}, }
@inproceedings{ ALBOFRKOMEPO-XLoKR22, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Tom {Friese} and Patrick {Koopmann} and Juli\'an {M\'endez} and Alexej {Popovi\v{c}}}, booktitle = {Informal Proceedings of the Explainable Logic-Based Knowledge Representation (XLoKR 2022) workshop co-located with the 19th International Conference on Principles of Knowledge Representation and Reasoning {(KR} 2022), Haifa, Israel, July 31st}, title = {Explaining Description Logic Entailments with \textsc{Evee} and \textsc{Evonne}}, year = {2022}, }
When working with description logic ontologies, understanding entailments derived by a description logic reasoner is not always straightforward. So far, the standard ontology editor Protégé offers two services to help: (black-box) justifications for OWL 2 DL ontologies, and (glass-box) proofs for lightweight OWL EL ontologies, where the latter exploits the proof facilities of the reasoner Elk. Since justifications are often insufficient in explaining inferences, there is thus little tool support for explaining inferences in more expressive DLs. In this paper, we introduce Evee-libs, a Java library for computing proofs for DLs up to ALCH, and Evee-protege, a collection of Protégé plugins for displaying those proofs in Protégé. We also give a short glimpse of the latest version of Evonne, a more advanced standalone application for displaying and interacting with proofs computed with Evee-libs.
@inproceedings{ ALBOFRKOMEPO-DL22, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Tom {Friese} and Patrick {Koopmann} and Juli\'an {M\'endez} and Alexej {Popovi\v{c}}}, booktitle = {Proceedings of the 35th International Workshop on Description Logics {(DL} 2022)}, publisher = {CEUR-WS.org}, title = {On the Eve of True Explainability for OWL Ontologies: Description Logic Proofs with \textsc{Evee} and \textsc{Evonne}}, year = {2022}, }
Ontologies provide the logical underpinning for the Semantic Web, but their consequences can sometimes be surprising and must be explained to users. Proofs generated via automated reasoning are a promising kind of explanation. We report on a series of studies with the purpose of exploring how to explain such formal logical proofs to humans. We compare different representations, such as tree- vs. text-based visualizations, but also vary other parameters such as length, interactivity, and the shape of formulas. We did not find evidence to support our main hypothesis that different user groups can understand different proof representations better. Nevertheless, when participants directly compared proof representations, their subjective rankings showed some tendencies, such as that most people prefer short tree-shaped proofs. However, this did not impact the users' understanding of the proofs as measured by an objective performance measure.
@inproceedings{ AlBoHiKnKoRoWi-RuleML22, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Anke {Hirsch} and Nina {Knieriemen} and Alisa {Kovtunova} and Anna Milena {Rothermel} and Frederik {Wiehr}}, booktitle = {Rules and Reasoning - 6th International Joint Conference, RuleML+RR 2022, Virtual, September 26-28, 2022, Proceedings}, title = {In the Head of the Beholder:\texorpdfstring{\\}{} Comparing Different Proof Representations}, year = {2022}, }
In ontology-mediated query answering, access to incomplete data sources is mediated by a conceptual layer constituted by an ontology, which can be formulated in a description logic (DL) or using existential rules. Computing the answers to queries requires complex reasoning over the constraints expressed by the ontology. In the literature, there exists a multitude of complex techniques for incorporating ontological knowledge into queries. However, few of these approaches were designed for explainability of the query answers. We tackle this challenge by adapting an existing proof framework toward conjunctive query answering, based on the notion of universal models. We investigate the data and combined complexity of determining the existence of a proof below a given quality threshold, which can be measured in different ways. By distinguishing various parameters such as the shape of the query, we obtain an overview of the complexity of this problem for several Horn DLs.
@inproceedings{ AlBoKoKo-RuleML2022, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Rules and Reasoning - 6th International Joint Conference, RuleML+RR 2022, Virtual, September 26-28, 2022, Proceedings}, doi = {https://doi.org/10.1007/978-3-031-21541-4_11}, title = {Explaining Ontology-Mediated Query Answers using Proofs over Universal Models}, year = {2022}, }
@inproceedings{ AlBoKoKo-DL22, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Proceedings of the 35th International Workshop on Description Logics {(DL} 2022)}, publisher = {CEUR-WS.org}, title = {Finding Good Proofs for Answers to Conjunctive Queries Mediated by Lightweight Ontologies}, year = {2022}, }
@inproceedings{ AlBoKoKo-XLoKR22, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Informal Proceedings of the Explainable Logic-Based Knowledge Representation (XLoKR 2022) workshop co-located with the 19th International Conference on Principles of Knowledge Representation and Reasoning {(KR} 2022), Haifa, Israel, July 31st}, publisher = {CEUR-WS.org}, title = {Finding Good Proofs for Answers to Conjunctive Queries Mediated by Lightweight Ontologies (Extended Abstract)}, year = {2022}, }
Reasoning results computed by description logic systems can be hard to comprehend. When an ontology does not entail an expected subsumption relationship, generating an explanation of this non-entailment becomes necessary. In this paper, we use countermodels to explain non-entailments. More precisely, we devise relevant parts of canonical models of EL ontologies that serve as explanations and discuss the computational complexity of extracting these parts by means of model transformations. Furthermore, we provide an implementation of these transformations and evaluate it using real ontologies.
@inproceedings{ ALHI-IJCKG22, author = {Christian {Alrabbaa} and Willi {Hieke}}, booktitle = {IJCKG'22: The 11th International Joint Conference on Knowledge Graphs, Virtual Event, Hangzhou, China, October 27 - 29, 2022}, doi = {https://doi.org/10.1145/3579051.3579060}, publisher = {{ACM}}, title = {Explaining Non-Entailment by Model Transformation for the Description Logic {$\mathcal{EL}$}}, year = {2022}, }
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. To regain decidability of the DL ALC in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, were imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. We investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs.
@article{ BaaderRydvalJAR22, address = {Cham}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.1007/s10817-022-09626-2}, journal = {Journal of Automated Reasoning}, number = {3}, pages = {357--407}, publisher = {Springer International Publishing}, title = {Using Model Theory to Find Decidable and Tractable Description Logics with Concrete Domains}, volume = {66}, year = {2022}, }
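As a small illustration of the concrete-domain mechanism (our generic example, not one from the article): with a concrete domain over the rational numbers providing comparison predicates, one can define the concept of a minor as \(\mathsf{Human} \sqcap \exists \mathsf{age}.{<_{18}}\), or state via a GCI such as \(\mathsf{Human} \sqsubseteq \exists(\mathsf{age}, \mathsf{parent}\ \mathsf{age}).{<}\) that every human is younger than their parent. It is precisely the interaction of such GCIs with predicates over an infinite domain that can cause undecidability, and conditions like \(\omega\)-admissibility and p-admissibility are designed to tame it.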
Concrete domains have been introduced in the area of Description Logic (DL) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability.
To regain decidability of the DL \(\mathcal{ALC}\) in the presence of GCIs, quite strong restrictions, called \(\omega\)-admissibility, were imposed on the concrete domain. On the one hand, we generalize the notion of \(\omega\)-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate \(\omega\)-admissibility to well-known notions from model theory. In particular, we show that finitely bounded homogeneous structures yield \(\omega\)-admissible concrete domains. This allows us to show \(\omega\)-admissibility of concrete domains using existing results from model theory. When integrating concrete domains into lightweight DLs of the \(\mathcal{EL}\) family, achieving decidability of reasoning is not enough. One wants the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. We investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although \(\omega\)-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into \(\mathcal{ALC}\) yields decidable DLs.
DL systems that can handle concrete domains allow their users to employ a fixed set of predicates of one or more fixed concrete domains when modelling concepts. They do not provide their users with means for defining new predicates, let alone new concrete domains. The good news is that finitely bounded homogeneous structures offer precisely that. We show that integrating concrete domains based on finitely bounded homogeneous structures into \(\mathcal{ALC}\) yields decidable DLs even if we allow predicates specified by first-order formulas. This class of structures also provides effective means for defining new \(\omega\)-admissible concrete domains with at most binary predicates. The bad news is that defining \(\omega\)-admissible concrete domains with predicates of higher arities is computationally hard. We obtain two new lower bounds for this meta-problem, but leave its decidability open. In contrast, we prove that there is no algorithm that would facilitate defining p-admissible concrete domains already for binary signatures.
@thesis{ Rydval-Diss-2022, address = {Dresden, Germany}, author = {Jakub {Rydval}}, school = {Technische Universit\"{a}t Dresden}, title = {Using Model Theory to Find Decidable and Tractable Description Logics with Concrete Domains}, type = {Doctoral Thesis}, year = {2022}, }
2021
Logic-based approaches to AI have the advantage that their behavior can in principle be explained to a user. If, for instance, a Description Logic reasoner derives a consequence that triggers some action of the overall system, then one can explain such an entailment by presenting a proof of the consequence in an appropriate calculus. How comprehensible such a proof is depends not only on the employed calculus, but also on the properties of the particular proof, such as its overall size, its depth, the complexity of the employed sentences and proof steps, etc. For this reason, we want to determine the complexity of generating proofs that are below a certain threshold w.r.t. a given measure of proof quality. Rather than investigating this problem for a fixed proof calculus and a fixed measure, we aim for general results that hold for wide classes of calculi and measures. In previous work, we first restricted the attention to a setting where proof size is used to measure the quality of a proof. We then extended the approach to a more general setting, but important measures such as proof depth were not covered. In the present paper, we provide results for a class of measures called recursive, which yields lower complexities and also encompasses proof depth. In addition, we close some gaps left open in our previous work, thus providing a comprehensive picture of the complexity landscape.
@inproceedings{ AlBaBoKoKo-CADE2021, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Proceedings of the 28th International Conference on Automated Deduction (CADE-28), July 11--16, 2021, Virtual Event, United States}, doi = {https://doi.org/10.1007/978-3-030-79876-5_17}, editor = {Andr{\'e} {Platzer} and Geoff {Sutcliffe}}, pages = {291--308}, series = {Lecture Notes in Computer Science}, title = {Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures}, volume = {12699}, year = {2021}, }
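To see how such measures can differ (a toy example of ours): the entailment \(A_0 \sqsubseteq A_4\) follows from the chain \(A_0 \sqsubseteq A_1, A_1 \sqsubseteq A_2, A_2 \sqsubseteq A_3, A_3 \sqsubseteq A_4\) by three applications of transitivity. Applying them linearly yields a proof of depth 3, whereas first deriving \(A_0 \sqsubseteq A_2\) and \(A_2 \sqsubseteq A_4\) and then combining the two yields a proof with the same number of steps but depth 2. Proof size thus cannot distinguish the two proofs, while proof depth, one of the recursive measures covered here, rewards the balanced derivation.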
@inproceedings{ AlrabbaaBBKK21, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics {(DL} 2021) part of Bratislava Knowledge September {(BAKS} 2021), Bratislava, Slovakia, September 19th to 22nd, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Abstract)}, volume = {2954}, year = {2021}, }
@inproceedings{ AlrabbaaBKKRW21, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Nina {Knieriemen} and Alisa {Kovtunova} and Anna Milena {Rothermel} and Frederik {Wiehr}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics {(DL} 2021) part of Bratislava Knowledge September {(BAKS} 2021), Bratislava, Slovakia, September 19th to 22nd, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {In the Hand of the Beholder: Comparing Interactive Proof Visualizations}, volume = {2954}, year = {2021}, }
@inproceedings{ BoHiKoWi-XLoKR21, author = {Christian {Alrabbaa} and Stefan {Borgwardt} and Nina {Knieriemen} and Alisa {Kovtunova} and Anna Milena {Rothermel} and Frederik {Wiehr}}, booktitle = {Informal Proceedings of the Explainable Logic-Based Knowledge Representation (XLoKR 2021) workshop co-located with the 18th International Conference on Principles of Knowledge Representation and Reasoning {(KR} 2021), Online Event [Hanoi, Vietnam], November 3rd to 5th, 2021}, title = {In the Hand of the Beholder: Comparing Interactive Proof Visualizations (Extended Abstract)}, year = {2021}, }
When subsumption relationships unexpectedly fail to be detected by a description logic reasoner, the cause for this non-entailment need not be evident. In this case, succinct automatically generated explanations would be helpful. Reasoners for the description logic EL compute the canonical model of a TBox in order to perform subsumption tests. We devise substructures of such models as relevant parts for explanation and propose an approach based on graph transductions to extract such relevant parts from canonical models.
@inproceedings{ AlHiTu-FCR-2021, author = {Christian {Alrabbaa} and Willi {Hieke} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 7th Workshop on Formal and Cognitive Reasoning (FCR-2021) co-located with the 44th German Conference on Artificial Intelligence (KI-2021)}, editor = {Christoph {Beierle} and Marco {Ragni} and Frieder {Stolzenburg} and Matthias {Thimm}}, pages = {9--22}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Counter Model Transformation for Explaining Non-Subsumption in $\mathcal{EL}$}, volume = {2961}, year = {2021}, }
When subsumption relationships unexpectedly fail to be detected by a description logic reasoner, the cause for this non-entailment need not be evident. In this case, succinct automatically generated explanations would be helpful. We devise substructures of canonical models as relevant parts suitable for explaining the non-entailment to users of description logic systems.
@inproceedings{ AlHiTu-XLoKR-2021, author = {Christian {Alrabbaa} and Willi {Hieke} and Anni-Yasmin {Turhan}}, booktitle = {Informal Proceedings of the 2nd Workshop on Explainable Logic-Based Knowledge Representation (XLoKR-2021) co-located with the 18th international Conference on Principles of Knowledge Representation and Reasoning (KR-2021)}, title = {Relevant Parts of Counter Models as Explanations for {$\mathcal{EL}$} Non-Subsumptions (Extended Abstract)}, year = {2021}, }
Concrete domains have been introduced in Description Logics (DLs) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. To retain decidability when integrating a concrete domain into a decidable DL, the domain must satisfy quite strong restrictions. In previous work, we have analyzed the most prominent such condition, called omega-admissibility, from an algebraic point of view. This provided us with useful algebraic tools for proving omega-admissibility, which allowed us to find new examples for concrete domains whose integration leaves the prototypical expressive DL ALC decidable. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. In the present paper, we investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs.
@inproceedings{ BaRy-JELIA-21, address = {Cham}, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the 17th European Conference on Logics in Artificial Intelligence (JELIA 2021)}, editor = {Wolfgang {Faber} and Gerhard {Friedrich} and Martin {Gebser} and Michael {Morak}}, pages = {194--209}, publisher = {Springer International Publishing}, series = {Lecture Notes in Computer Science}, title = {An Algebraic View on p-Admissible Concrete Domains for Lightweight Description Logics}, volume = {12678}, year = {2021}, }
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of state-based operational models such as Markov decision processes. Description logics (DLs) provide a well-suited formalism to describe and reason about knowledge and are used as basis for the web ontology language (OWL). We investigate how such knowledge described by DLs can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose ontologized programs as a formalism that links ontologies to behaviors specified by probabilistic guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Through DL reasoning, inconsistent states in the modeled system can be detected. We present three ways to resolve these inconsistencies, leading to different Markov decision process semantics. We analyze the computational complexity of checking whether an ontologized program is consistent under these semantics. Further, we present and implement a technique for the quantitative analysis of ontologized programs relying on standard DL reasoning and PMC tools. This way, we enable the application of PMC techniques to analyze knowledge-intensive systems. We evaluate our approach and implementation on a multi-server system case study, where different DL ontologies are used to provide specifications of different server platforms and situations the system is executed in.
@article{ DuKoTu-FAC-2021, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.1007/s00165-021-00549-0}, journal = {Formal Aspects of Computing}, publisher = {Springer}, title = {Enhancing Probabilistic Model Checking with Ontologies}, year = {2021}, }
Many stream reasoning applications make use of ontology reasoning to deal with incomplete data. To this end, definitions of complex concepts and elaborate ontologies are already used in event descriptions and queries. On the other hand, model checking can support stream reasoning, e.g., by runtime verification to predict future events or by classical offline verification. For such a combined use of model checking and stream reasoning, it is crucial that events are modeled in a compatible way. We have recently introduced and implemented a framework for ontology-mediated probabilistic model checking, which would allow to use the same ontology and event descriptions in both: the stream reasoner and the model checker. In this short paper we present this framework and discuss and motivate its use in the context of stream reasoning.
@inproceedings{ DKT-SR-21, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {Informal proceedings of the Stream Reasoning Workshop}, editor = {Emanuele Della {Valle} and Thomas {Eiter} and Danh Le {Phuoc} and Konstantin {Schekotihin}}, title = {Supporting Ontology-Mediated Stream Reasoning with Model Checking}, year = {2021}, }
Classical regular path queries (RPQs) can be too restrictive for some applications, and answering such queries under approximate semantics, which relaxes the query, is desirable. While algorithms are available for answering regular path queries over graph databases under approximate semantics, such algorithms are scarce for the ontology-mediated setting. In this paper, we extend an approach for answering RPQs over graph databases that uses weighted transducers to approximate paths from the query, in two ways: first, to answering approximate conjunctive 2-way regular path queries (C2RPQs) over graph databases, and second, to answering C2RPQs over ELH and DL-LiteR ontologies. We provide results on the computational complexity of the underlying reasoning problems and devise approximate query answering algorithms.
@inproceedings{ FeAt-AAAI-21, author = {Oliver {Fern{\'a}ndez Gil} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI'21)}, pages = {6340--6348}, publisher = {{AAAI} Press}, title = {Answering Regular Path Queries Under Approximate Semantics in Lightweight Description Logics}, year = {2021}, }
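A small example of transducer-based relaxation (ours): consider the RPQ \(q(x,y) = \mathsf{train}^{*}\), asking for the cities \(y\) reachable from \(x\) by train connections. A weighted transducer might allow each \(\mathsf{train}\) edge of a path to be replaced by a \(\mathsf{bus}\) edge at cost 1, leaving exact matches at cost 0. Under approximate semantics with cost threshold 2, cities reachable with at most two bus legs then also become (approximate) answers to \(q\).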
Knowledge engineers might face situations in which an unwanted consequence is derivable from an ontology. It is then desired to revise the ontology such that it no longer entails the consequence. For this purpose, we introduce a novel technique for repairing TBoxes formulated in the description logic \(\mathcal{EL}\). Specifically, we first compute a canonical model of the TBox and then transform it into a countermodel to the unwanted consequence. As formalism for the model transformation we employ transductions. We then obtain a TBox repair as the axiomatization of the logical intersection of the original TBox and the theory of the countermodel. In fact, we construct a set of countermodels, each of which induces a TBox repair. For the actual computation of the repairs we use results from Formal Concept Analysis.
@inproceedings{ HiKrNu-DL-2021, author = {Willi {Hieke} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Repairing $\mathcal{EL}$ TBoxes by Means of Countermodels Obtained by Model Transformation}}, volume = {2954}, year = {2021}, }
Stream reasoning systems often rely on complex temporal queries that are answered over enriched data sources. While there are systems for answering such queries, automated support for building temporal queries is rare. We present in this paper initial results on how to derive a temporalized concept from a set of examples. The resulting concept can then be used to retrieve objects that change over time. We consider the temporalized description logic that extends the description logic EL with the LTL operators next (X) and global (G), and we present an approach that extends generalization inferences for classical EL.
@inproceedings{ TT-SR-21, author = {Satyadharma {Tirtarasa} and Anni-Yasmin {Turhan}}, booktitle = {Informal proceedings of the Stream Reasoning Workshop}, editor = {Emanuele Della {Valle} and Thomas {Eiter} and Danh Le {Phuoc} and Konstantin {Schekotihin}}, title = {Towards Reverse Engineering Temporal Queries: Generalizing $\mathcal{EL}$ Concepts with Next and Global}, year = {2021}, }
2020
Logic-based approaches to AI have the advantage that their behaviour can in principle be explained by providing their users with proofs for the derived consequences. However, if such proofs get very large, then it may be hard to understand a consequence even if the individual derivation steps are easy to comprehend. This motivates our interest in finding small proofs for Description Logic (DL) entailments. Instead of concentrating on a specific DL and proof calculus for this DL, we introduce a general framework in which proofs are represented as labeled, directed hypergraphs, where each hyperedge corresponds to a single sound derivation step. On the theoretical side, we investigate the complexity of deciding whether a certain consequence has a proof of size at most n along the following orthogonal dimensions: (i) the underlying proof system is polynomial or exponential; (ii) proofs may or may not reuse already derived consequences; and (iii) the number n is represented in unary or binary. We have determined the exact worst-case complexity of this decision problem for all but one of the possible combinations of these options. On the practical side, we have developed and implemented an approach for generating proofs for expressive DLs based on a non-standard reasoning task called forgetting. We have evaluated this approach on a set of realistic ontologies and compared the obtained proofs with proofs generated by the DL reasoner ELK, finding that forgetting-based proofs are often better w.r.t. different measures of proof complexity.
@inproceedings{ AlBaBoKoKo-20, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning}, doi = {https://doi.org/10.29007/nhpp}, editor = {Elvira {Albert} and Laura {Kovacs}}, pages = {32--67}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Finding Small Proofs for Description Logic Entailments: Theory and Practice}, volume = {73}, year = {2020}, }
If a Description Logic (DL) system derives a consequence, then one can in principle explain such an entailment by presenting a proof of the consequence in an appropriate calculus. Intuitively, good proofs are ones that are simple enough to be comprehensible to a user of the system. In recent work, we have introduced a general framework in which proofs are represented as labeled, directed hypergraphs, and have investigated the complexity of finding small proofs. However, large size is not the only reason that can make a proof complex. In the present paper, we introduce other measures for the complexity of a proof, and analyze the computational complexity of deciding whether a given consequence has a proof of complexity at most \(q\). This problem can be NP-complete even for \(\mathcal{EL}\), but we also identify measures where it can be decided in polynomial time.
@inproceedings{ AlBaBoKoKo-DL-20, author = {Christian {Alrabbaa} and Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Alisa {Kovtunova}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {On the Complexity of Finding Good Proofs for Description Logic Entailments}, year = {2020}, }
Abstract BibTeX Entry PDF File
The classical approach for repairing a Description Logic (DL) ontology in the sense of removing an unwanted consequence is to delete a minimal number of axioms from the ontology such that the resulting ontology no longer has the consequence. While there are automated tools for computing all possible such repairs, the user still needs to decide by hand which of the (potentially exponentially many) repairs to choose. In this paper, we argue that exploring a proof of the unwanted consequence may help us to locate other erroneous consequences within the proof, and thus allows us to make a more informed decision on which axioms to remove. In addition, we suggest that looking at the so-called atomic decomposition, which describes the modular structure of the ontology, enables us to judge the impact that removing a certain axiom has. Since both proofs and atomic decompositions of ontologies may be large, visual support for inspecting them is required. We describe a prototypical system that can visualize proofs and the atomic decomposition in an integrated visualization tool to support ontology debugging.
@inproceedings{ AlBaDaFlKo-DL-20, author = {Christian {Alrabbaa} and Franz {Baader} and Raimund {Dachselt} and Tamara {Flemisch} and Patrick {Koopmann}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {Visualising Proofs and the Modular Structure of Ontologies to Support Ontology Repair}, year = {2020}, }
Abstract BibTeX Entry DOI
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0-1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@inproceedings{ BaBo-ANDREI60-20, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {ANDREI-60. Automated New-era Deductive Reasoning Event in Iberia}, doi = {https://doi.org/10.29007/ltzn}, editor = {Laura {Kovacs} and Konstantin {Korovin} and Giles {Reger}}, pages = {1--25}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Description Logics That Count, and What They Can and Cannot Count}, volume = {68}, year = {2020}, }
Abstract BibTeX Entry PDF File
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0-1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@inproceedings{ BaDeBo-DL-20, address = {Online}, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proceedings of the 33rd International Workshop on Description Logics (DL'20)}, editor = {Stefan {Borgwardt} and Thomas {Meyer}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Description Logics that Count, and What They Can and Cannot Count (Extended Abstract)}, volume = {2663}, year = {2020}, }
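For illustration (with a schematic syntax that we use here for readability, rather than the exact one from the papers): such QFBAPA-based constraints allow a concept like \[ \mathit{succ}(\,|\mathit{friend} \cap \mathit{Female}| = |\mathit{friend} \cap \mathit{Male}|\,), \] collecting the individuals with exactly as many female as male friends. A comparison between two unbounded cardinalities of this kind is expressible neither with classical number restrictions nor in FOL, which is why bisimulations and 0-1 laws are needed to delineate the first-order fragments.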
Abstract BibTeX Entry DOI
The theory ACUI of an associative, commutative, and idempotent binary function symbol + with unit 0 was one of the first equational theories for which the complexity of testing solvability of unification problems was investigated in detail. In this paper, we investigate two extensions of ACUI. On the one hand, we consider approximate ACUI-unification, where we use appropriate measures to express how close a substitution is to being a unifier. On the other hand, we extend ACUI-unification to ACUIG-unification, i.e., unification in equational theories that are obtained from ACUI by adding a finite set G of ground identities. Finally, we combine the two extensions, i.e., consider approximate ACUIG-unification. For all cases we are able to determine the exact worst-case complexity of the unification problem.
@article{ BaMaMoOk-MSCS-20, author = {Franz {Baader} and Pavlos {Marantidis} and Antoine {Mottet} and Alexander {Okhotin}}, doi = {https://doi.org/10.1017/S0960129519000185}, journal = {Mathematical Structures in Computer Science}, number = {6}, pages = {597--626}, title = {Extensions of Unification Modulo {$\mathit{ACUI}$}}, volume = {30}, year = {2020}, }
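A small worked example (ours, for illustration): in ACUI, the problem \(x + a \approx^? a + b\) is solved by the unifier \(\{x \mapsto b\}\), since \(b + a\) equals \(a + b\) modulo associativity and commutativity, and also by \(\{x \mapsto a + b\}\), since idempotency collapses the duplicated \(a\). A problem such as \(x + a \approx^? b\) has no exact unifier, but in the approximate setting a substitution may still bring the two sides close w.r.t. the chosen measure; and adding a ground identity like \(a \approx b\) to the set G would make it exactly solvable again.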
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in Description Logics (DLs) to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. To retain decidability when integrating a concrete domain into a decidable DL, the domain must satisfy quite strong restrictions. In previous work, we have analyzed the most prominent such condition, called omega-admissibility, from an algebraic point of view. This provided us with useful algebraic tools for proving omega-admissibility, which allowed us to find new examples for concrete domains whose integration leaves the prototypical expressive DL ALC decidable. When integrating concrete domains into lightweight DLs of the EL family, achieving decidability is not enough. One wants reasoning in the resulting DL to be tractable. This can be achieved by using so-called p-admissible concrete domains and restricting the interaction between the DL and the concrete domain. In the present paper, we investigate p-admissibility from an algebraic point of view. Again, this yields strong algebraic tools for demonstrating p-admissibility. In particular, we obtain an expressive numerical p-admissible concrete domain based on the rational numbers. Although omega-admissibility and p-admissibility are orthogonal conditions that are almost exclusive, our algebraic characterizations of these two properties allow us to locate an infinite class of p-admissible concrete domains whose integration into ALC yields decidable DLs.
@techreport{ BaRy-LTCS-20-10, address = {Dresden, Germany}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.25368/2022.265}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-10}, title = {An Algebraic View on p-Admissible Concrete Domains for Lightweight Description Logics (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@inproceedings{ BaRy-IJCAR20, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the International Joint Conference on Automated Reasoning ({IJCAR} 2020)}, doi = {https://doi.org/10.1007/978-3-030-51074-9_24}, editor = {Viorica {Sofronie-Stokkermans} and Nicolas {Peltier}}, pages = {413--431}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Description Logics with Concrete Domains and General Concept Inclusions Revisited}, volume = {12166}, year = {2020}, }
Abstract BibTeX Entry PDF File
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@inproceedings{ BaRy-DL-20, address = {Online}, author = {Franz {Baader} and Jakub {Rydval}}, booktitle = {Proceedings of the 33rd International Workshop on Description Logics (DL'20)}, editor = {Stefan {Borgwardt} and Thomas {Meyer}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Description Logics with Concrete Domains and General Concept Inclusions Revisited (Extended Abstract)}, volume = {2663}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI
Concrete domains have been introduced in the area of Description Logic to enable reference to concrete objects (such as numbers) and predefined predicates on these objects (such as numerical comparisons) when defining concepts. Unfortunately, in the presence of general concept inclusions (GCIs), which are supported by all modern DL systems, adding concrete domains may easily lead to undecidability. One contribution of this paper is to strengthen the existing undecidability results further by showing that concrete domains even weaker than the ones considered in the previous proofs may cause undecidability. To regain decidability in the presence of GCIs, quite strong restrictions, in sum called omega-admissibility, need to be imposed on the concrete domain. On the one hand, we generalize the notion of omega-admissibility from concrete domains with only binary predicates to concrete domains with predicates of arbitrary arity. On the other hand, we relate omega-admissibility to well-known notions from model theory. In particular, we show that finitely bounded, homogeneous structures yield omega-admissible concrete domains. This allows us to show omega-admissibility of concrete domains using existing results from model theory.
@techreport{ BaRy-LTCS-20-01, address = {Dresden, Germany}, author = {Franz {Baader} and Jakub {Rydval}}, doi = {https://doi.org/10.25368/2022.259}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-01}, title = {Using Model-Theory to Find $\omega$-Admissible Concrete Domains}, type = {LTCS-Report}, year = {2020}, }
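As an illustrative example (ours; the concrete syntax for attaching concrete-domain predicates varies across the papers): with a numerical concrete domain over the rationals, one may state a GCI expressing that every human is younger than their parent, schematically \[ \mathit{Human} \sqsubseteq \exists(\mathit{age}, \mathit{parent}\,\mathit{age}).{<} \,. \] Whether adding such constraints in the presence of further GCIs preserves decidability is exactly what omega-admissibility controls; well-behaved orders such as \((\mathbb{Q},<)\) are the prototypical positive case, while seemingly weaker concrete domains can already cause undecidability.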
BibTeX Entry
@techreport{ BoPaRy-LTCS-20-04, address = {Dresden, Germany}, author = {Manuel {Bodirsky} and Wied {Pakusa} and Jakub {Rydval}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-04}, title = {Temporal Constraint Satisfaction Problems in Fixed-Point Logic}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI
Finite-domain constraint satisfaction problems are either solvable by Datalog, or not even expressible in fixed-point logic with counting. The border between the two regimes can be described by a strong height-one Maltsev condition. For infinite-domain CSPs, the situation is more complicated even if the template structure of the CSP is model-theoretically tame. We prove that there is no Maltsev condition that characterizes Datalog already for the CSPs of first-order reducts of (Q;<); such CSPs are called temporal CSPs and are of fundamental importance in infinite-domain constraint satisfaction. Our main result is a complete classification of temporal CSPs that can be expressed in one of the following logical formalisms: Datalog, fixed-point logic (with or without counting), or fixed-point logic with the Boolean rank operator. The classification shows that many of the equivalent conditions in the finite fail to capture expressibility in Datalog or fixed-point logic already for temporal CSPs.
@inproceedings{ BoRy-LICS20, address = {New York, NY, USA}, author = {Manuel {Bodirsky} and Wied {Pakusa} and Jakub {Rydval}}, booktitle = {Proceedings of the Symposium on Logic in Computer Science ({LICS} 2020)}, doi = {https://doi.org/10.1145/3373718.3394750}, pages = {237--251}, publisher = {Association for Computing Machinery}, title = {Temporal Constraint Satisfaction Problems in Fixed-Point Logic}, year = {2020}, }
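A concrete instance (a standard example, recalled here for illustration): the temporal CSP with template \((\mathbb{Q};<)\) asks whether a conjunction of constraints \(x_i < x_j\) is satisfiable over the rationals, which amounts to checking that the corresponding digraph has no cycle; this CSP is solvable by Datalog, since a Datalog program can compute the transitive closure of the \(<\)-constraints and detect a cycle. The classification in the paper determines, for every temporal CSP, whether it is expressible in Datalog, fixed-point logic (with or without counting), or fixed-point logic with the rank operator.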
Abstract BibTeX Entry PDF File
Recently, we have introduced ontologized programs as a formalism to link description logic ontologies with programs specified in the guarded command language, the de-facto standard input formalism for probabilistic model checking tools such as Prism, to allow for an ontology-mediated verification of stochastic systems. Central to our approach is a complete separation of the two formalisms involved: the guarded command language is used to describe the dynamics of the stochastic system, and description logics are used to model background knowledge about the system in an ontology. In ontologized programs, these two languages are loosely coupled by an interface that mediates between these two worlds. Under the original semantics defined for ontologized programs, a program may enter a state that is inconsistent with the ontology, which limits the capabilities of the description logic component. We approach this issue in different ways by introducing consistency notions, and discuss two alternative semantics for ontologized programs. Furthermore, we present complexity results for checking whether a program is consistent under the different semantics.
@inproceedings{ DuKoTu-DL-20, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {{DL 2020:} International Workshop on Description Logics}, publisher = {CEUR-WS.org}, series = {CEUR-WS}, title = {Give Inconsistency a Chance: Semantics for Ontology-Mediated Verification}, year = {2020}, }
Abstract BibTeX Entry DOI
Metric Temporal Logic (MTL) and Timed Propositional Temporal Logic (TPTL) are quantitative extensions of Linear Temporal Logic (LTL) that are prominent and widely used in the verification of real-timed systems. We study MTL and TPTL as specification languages for one-counter machines. It is known that model checking one-counter machines against formulas of Freeze LTL (FLTL), a strict fragment of TPTL, is undecidable. We prove that in our setting, MTL is strictly less expressive than TPTL, and incomparable in expressiveness to FLTL, so undecidability for MTL is not implied by the result for FLTL. We show, however, that the model-checking problem for MTL is undecidable. We further prove that the satisfiability problems for the unary fragments of TPTL and MTL are undecidable; for TPTL, this even holds for the fragment in which only one register and the finally modality are used. This is opposed to a known decidability result for the satisfiability problem for the same fragment of FLTL.
@article{ FengCFQ-TOCL-20, author = {Shiguang {Feng} and Claudia {Carapelle} and Oliver {Fern{\'{a}}ndez Gil} and Karin {Quaas}}, doi = {https://doi.org/10.1145/3372789}, journal = {{ACM} Trans. Comput. Log.}, number = {2}, pages = {12:1--12:34}, title = {{MTL} and {TPTL} for One-Counter Machines: Expressiveness, Model Checking, and Satisfiability}, volume = {21}, year = {2020}, }
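To illustrate the expressiveness landscape (a standard example, not specific to the setting of the paper): the MTL formula \(\mathsf{F}_{[0,5]}\, p\), requiring \(p\) within five time units, can be expressed in TPTL with a single register as \(x.\mathsf{F}(p \wedge x \leq 5)\), whereas a TPTL formula like \(x.\mathsf{F}(b \wedge \mathsf{F}(c \wedge x \leq 5))\), which bounds the total time over two nested eventualities, has no direct MTL counterpart. Separations of this kind are behind the strictness and incomparability results mentioned in the abstract.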
BibTeX Entry PDF File
@inproceedings{ FlLaAlDa-VOILA-20, author = {Tamara {Flemisch} and Ricardo {Langner} and Christian {Alrabbaa} and Raimund {Dachselt}}, booktitle = {Proceedings of the Fifth International Workshop on Visualization and Interaction for Ontologies and Linked Data co-located with the 19th International Semantic Web Conference {(ISWC} 2020), Virtual Conference (originally planned in Athens, Greece), November 02, 2020}, editor = {Valentina {Ivanova} and Patrick {Lambrix} and Catia {Pesquita} and Vitalis {Wiens}}, pages = {28--40}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Towards Designing a Tool For Understanding Proofs in Ontologies through Combined Node-Link Diagrams}, volume = {2778}, year = {2020}, }
Abstract BibTeX Entry PDF File
Models for Description Logic (DL) ontologies can be used for explanation purposes, but the models computed by standard DL reasoner systems are not necessarily suitable for this task. In this paper we investigate the general task of transforming an arbitrary model into a target model that admits the desired property. To this end, we introduce a general framework for model transformation that abstracts from the DL in use, the property of the target model and the transformation formalism. We consider instantiations of the framework for the DLs \(\mathcal{ALC}\) and \(\mathcal{EL}\) and use transductions as transformation formalism.
@inproceedings{ HiTu-FCR20, author = {Willi {Hieke} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 6th Workshop on Formal and Cognitive Reasoning (FCR-2020) co-located with 43rd German Conference on Artificial Intelligence (KI-2020)}, editor = {Christoph {Beierle} and Marco {Ragni} and Frieder {Stolzenburg} and Matthias {Thimm}}, pages = {69--82}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Towards Model Transformation in Description Logics --- Investigating the Case of Transductions}, volume = {2680}, year = {2020}, }
BibTeX Entry DOI
@article{ Pe-KI20, author = {Maximilian {Pensel}}, doi = {https://doi.org/10.1007/s13218-020-00644-z}, journal = {KI - K{\"{u}}nstliche Intelligenz}, title = {{A Lightweight Defeasible Description Logic in Depth - Quantification in Rational Reasoning and Beyond}}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File DOI (The final publication is available at link.springer.com) ©Springer Nature Switzerland AG 2019
In recent work we have extended the description logic (DL) \(\mathcal{ALC}\) by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL, called \(\mathcal{ALCSCC}\), is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of \(\mathcal{ALCSCC}\) is defined in terms of finitely branching interpretations, that is, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper, we first introduce a variant of \(\mathcal{ALCSCC}\), called \(\mathcal{ALCSCC}^\infty\), in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for \(\mathcal{ALCSCC}\) mentioned above are preserved. Nevertheless, like \(\mathcal{ALCSCC}\), \(\mathcal{ALCSCC}^\infty\) is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of \(\mathcal{ALCSCC}^\infty\). The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.
@inproceedings{ BaDeBo-FroCoS19, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proc. of the 12th International Symposium on Frontiers of Combining Systems ({FroCoS} 2019)}, doi = {https://doi.org/10.1007/978-3-030-29007-8_12}, editor = {Andreas {Herzig} and Andrei {Popescu}}, pages = {203--219}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets}, volume = {11715}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Simple counting quantifiers that can be used to compare the number of role successors of an individual or the cardinality of a concept with a fixed natural number have been employed in Description Logics (DLs) for more than two decades under the respective names of number restrictions and cardinality restrictions on concepts. Recently, we have considerably extended the expressivity of such quantifiers by allowing to impose set and cardinality constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA) on sets of role successors and concepts, respectively. We were able to prove that this extension does not increase the complexity of reasoning. In the present paper, we investigate the expressive power of the DLs obtained this way, using appropriate bisimulation characterizations and 0–1 laws as tools for distinguishing the expressiveness of different logics. In particular, we show that, in contrast to most classical DLs, these logics are no longer expressible in first-order predicate logic (FOL), and we characterize their first-order fragments. In most of our previous work on DLs with QFBAPA-based set and cardinality constraints we have employed finiteness restrictions on interpretations to ensure that the obtained sets are finite. Here we dispense with these restrictions to make the comparison with classical DLs, where one usually considers arbitrary models rather than finite ones, easier. It turns out that doing so does not change the complexity of reasoning.
@techreport{ BaBo-LTCS-19-09, address = {Dresden, Germany}, author = {Franz {Baader} and Filippo {De Bortoli}}, doi = {https://doi.org/10.25368/2022.258}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaBo-LTCS-19-09}}, number = {19-09}, title = {{On the Complexity and Expressiveness of Description Logics with Counting}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem since already subsumption w.r.t. general TBoxes is ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.
@inproceedings{ BaFeMa-DL19, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Matching in the Description Logic {FL0} with respect to General TBoxes (Extended abstract)}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. Description logics (DLs) provide a well-suited formalism to describe and reason about terminological knowledge, used in many areas to specify background knowledge on the domain. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
@inproceedings{ DKT-iFM-19, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni{-}Yasmin {Turhan}}, booktitle = {Proceedings of the 15th International Conference on Integrated Formal Methods (iFM'19)}, doi = {https://doi.org/10.1007/978-3-030-34968-4\_11}, editor = {Wolfgang {Ahrendt} and Silvia Lizeth {Tapia Tarifa}}, note = {Best Paper Award.}, pages = {194--211}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Ontology-Mediated Probabilistic Model Checking}, volume = {11918}, year = {2019}, }
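For a flavour of the input formalism (a schematic Prism-style guarded command, assumed for illustration and not taken from the case study): \[ [\,]\;\; s = \mathit{idle} \,\wedge\, \mathit{requests} > 0 \;\rightarrow\; 0.9 : (s' = \mathit{busy}) \;+\; 0.1 : (s' = \mathit{idle}) \] Such a command moves the system to a busy state with probability 0.9 and stays idle with probability 0.1. In ontology-mediated PMC, the predicates occurring in guards and updates are linked via an interface to concepts of the DL ontology, so that background knowledge, e.g. about server platforms, constrains which commands are enabled.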
Abstract BibTeX Entry PDF File Publication
Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. Together with the special case of matching, they were first investigated in detail for the DL FL0, where these problems can be reduced to solving certain language equations. In this thesis, we extend this service in two directions. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that “almost” unify, in order to account for potential small modelling errors. The meaning of “almost” is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for particular distance measures. Furthermore, we make a first step towards integrating background knowledge, formulated in so-called TBoxes, by investigating the special case of matching in the presence of TBoxes of different forms. We acquire a tight complexity bound for the general case, while we prove that the problem becomes easier in a restricted setting. To achieve these bounds, we take advantage of an equivalence characterization of FL0 concepts that is based on formal languages. In addition, we incorporate TBoxes in computing concept distances. Even though our results on the approximate setting cannot deal with TBoxes yet, we prepare the framework that future research can build on. Before we journey to the technical details of the above investigations, we showcase our program in the simpler setting of the equational theory ACUI, where we are able to also combine the two extensions. In the course of studying the above problems, we make heavy use of automata theory, where we also derive novel results that could be of independent interest.
@thesis{ Marantidis-Diss-2019, address = {Dresden, Germany}, author = {Pavlos {Marantidis}}, school = {Technische Universit\"{a}t Dresden}, title = {Quantitative Variants of Language Equations and their Applications to Description Logics}, type = {Doctoral Thesis}, year = {2019}, }
Abstract BibTeX Entry PDF File Publication
Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring implicit derivation of knowledge from explicitly known facts. A prominent example domain benefiting from these formalisms since the 1990s is the biomedical field. This area contributes an intangible amount of facts and relations between low- and high-level concepts such as the constitution of cells or interactions between studied illnesses, their symptoms and remedies. DLs are well-suited for handling large formal knowledge repositories and computing inferable coherences throughout such data, relying on their well-founded first-order semantics. In particular, DLs of reduced expressivity have proven a tremendous worth for handling large ontologies due to their computational tractability. In spite of these assets and prevailing influence, classical DLs are not well-suited to adequately model some of the most intuitive forms of reasoning. The capability for abductive reasoning is imperative for any field subjected to incomplete knowledge and the motivation to complete it with typical expectations. When such default expectations receive contradicting evidence, an abductive formalism is able to retract previously drawn, conflicting conclusions. Common examples often include human reasoning or a default characterisation of properties in biology, such as the normal arrangement of organs in the human body. Treatment of such defeasible knowledge must be aware of exceptional cases - such as a human suffering from the congenital condition situs inversus - and therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion. Specifically tailored non-monotonic semantics have been continuously investigated for DLs in the past 30 years. A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor for preferential (propositional) logics and Rational Closure (RC). The biggest advantages of RC are its well-behaviour in terms of formal inference postulates and the efficient computation of defeasible entailments, by relying on a tractable reduction to classical reasoning in the underlying formalism. A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation. This framework is then easily instantiated to provide the reduction method for RC in DLs as well as more advanced closure operators, such as Relevant or Lexicographic Closure. In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that only occur in the relational neighbourhood of the inspected elements. More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and describe aspects of a related concept in terms of existential and universal quantification. Previous approaches to RC (and more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification. The main contribution of this work is to introduce stronger semantics for the lightweight DL \(\mathcal{EL}_\bot\) with the capability to infer the expected entailments, while maintaining a close relation to the reduction method. We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information on its elements directly.
This allows us to compare the level of typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood. A typicality preference relation then provides the means to single out those sets of models with maximal typicality. Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the missing entailments under RC for arbitrarily nested quantified concepts. As a proof of versatility for our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations. An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional effort, while the selective semantics reside in the complexity of classical reasoning in the underlying DL, which remains tractable in our case.
@thesis{ Pensel-Diss-2019, address = {Dresden, Germany}, author = {Maximilian {Pensel}}, school = {Technische Universit\"{a}t Dresden}, title = {A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond}, type = {Doctoral Thesis}, year = {2019}, }
2018
Abstract BibTeX Entry PDF File
Even for small logic programs, the number of resulting answer-sets can be tremendous. In such cases, users might be incapable of comprehending the space of answer-sets as a whole, or of identifying a specific answer-set according to their needs. To overcome this difficulty, we propose a general formal framework that takes an arbitrary logic program as input, and allows for navigating the space of answer-sets in a systematic interactive way analogous to faceted browsing. The navigation is carried out stepwise, where each step narrows down the remaining solutions, eventually arriving at a single one. We formulate two navigation modes, one stringently conflict-avoiding, and a “free” mode, where conflicting selections of facets might occur. For the latter mode, we provide efficient algorithms for resolving the conflicts. We provide an implementation of our approach and demonstrate that our framework is able to handle logic programs for which it is currently infeasible to retrieve all answer-sets.
@inproceedings{ AlRuSc-RR18, author = {Christian {Alrabbaa} and Sebastian {Rudolph} and Lukas {Schweizer}}, booktitle = {Proceedings of Rules and Reasoning - Second International Joint Conference, RuleML+RR 2018}, editor = {Christoph {Benzm{\"{u}}ller} and Francesco {Ricca} and Xavier {Parent} and Dumitru {Roman}}, pages = {211--225}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Faceted Answer-Set Navigation}, volume = {11092}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics two decades ago, motivated by applications in the Classic system. Shortly afterwards, a polynomial-time matching algorithm was developed for the DL FL0. However, this algorithm cannot deal with general TBoxes (i.e., finite sets of general concept inclusions). Here we show that matching in FL0 w.r.t. general TBoxes is in ExpTime, which is the best possible complexity for this problem since already subsumption w.r.t. general TBoxes is ExpTime-hard in FL0. We also show that, w.r.t. a restricted form of TBoxes, the complexity of matching in FL0 can be lowered to PSpace.
@inproceedings{ BaFeMa18-LPAR, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Pavlos {Marantidis}}, booktitle = {LPAR-22. 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning}, doi = {https://doi.org/10.29007/q74p}, editor = {Gilles {Barthe} and Geoff {Sutcliffe} and Margus {Veanes}}, pages = {76--94}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Matching in the Description Logic $\mathcal{FL}_0$ with respect to General TBoxes}, volume = {57}, year = {2018}, }
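A minimal matching example (ours, for illustration): the FL0 pattern \(A \sqcap \forall r.X\), with concept variable \(X\), matches the concept description \(A \sqcap \forall r.(B \sqcap \forall s.A)\) under the matcher \(\sigma = \{X \mapsto B \sqcap \forall s.A\}\). In the presence of a general TBox, the instantiated pattern only needs to be equivalent to the given description modulo the TBox rather than modulo the empty TBox, which is what drives the complexity up to ExpTime.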
Abstract BibTeX Entry PDF File DOI
Although being quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.
@techreport{ BaFePe-LTCS-18-04, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Maximilian {Pensel}}, doi = {https://doi.org/10.25368/2022.240}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {18-04}, title = {Standard and Non-Standard Inferences in the Description Logic $\mathcal{FL}_0$ Using Tree Automata}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Although being quite inexpressive, the description logic (DL) FL0, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL0 concepts w.r.t. general TBoxes.
@inproceedings{ BaFePe-GCAI-18, author = {Franz {Baader} and Oliver {Fern\'andez Gil} and Maximilian {Pensel}}, booktitle = {{GCAI} 2018, 4th Global Conference on Artificial Intelligence, Luxembourg, September 2018.}, doi = {https://doi.org/10.29007/scbw}, editor = {Daniel {Lee} and Alexander {Steen} and Toby {Walsh}}, pages = {1--14}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Standard and Non-Standard Inferences in the Description Logic {$\mathcal{FL}_0$} Using Tree Automata}, volume = {55}, year = {2018}, }
BibTeX Entry PDF File
@inproceedings{ BaFeRo-DL-21, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Maximilian {Pensel}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics ({DL} 2018), Arizona, US, October 27--29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Standard and Non-Standard Inferences in the Description Logic {$\mathcal{FL}_0$} Using Tree Automata}, volume = {2211}, year = {2018}, }
Abstract BibTeX Entry PDF File
It is well-known that the unification problem for a binary associative-commutative-idempotent function symbol with a unit (ACUI-unification) is polynomial for unification with constants and NP-complete for general unification. We prove that the same is true if we add a finite set of ground identities. To be more precise, we first show that not only unification with constants, but also unification with linear constant restrictions is in P for any extension of ACUI with a finite set of ground identities. Using well-known combination results for unification algorithms, this then yields an NP-upper bound for general unification modulo such a theory. The matching lower bound can be shown as in the case without ground identities.
@inproceedings{ BaMaMo-UNIF2018, address = {Oxford, UK}, author = {Franz {Baader} and Pavlos {Marantidis} and Antoine {Mottet}}, booktitle = {Proceedings of the 32th International Workshop on Unification ({UNIF 2018})}, editor = {Mauricio {Ayala-Rinc{\'o}n} and Philippe {Balbiani}}, title = {ACUI Unification modulo Ground Theories}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Ontology-mediated query answering can be used to access large data sets through a mediating ontology. It has drawn considerable attention in the Description Logic (DL) community where both the complexity of query answering and practical query answering approaches based on rewriting were investigated in detail. Surprisingly, there is still a gap in what is known about the data complexity of query answering w.r.t. ontologies formulated in the inexpressive DL FL0. While it is known that the data complexity of answering conjunctive queries w.r.t. FL0 ontologies is coNP-complete, the exact complexity of answering instance queries was open until now. In the present paper, we show that answering instance queries w.r.t. FL0 ontologies is in P for data complexity. Together with the known lower bound of P-completeness for a fragment of FL0, this closes the gap mentioned above.
@inproceedings{ BaMaPe-RoD-18, author = {Franz {Baader} and Pavlos {Marantidis} and Maximilian {Pensel}}, booktitle = {Proc.\ of the Reasoning on Data Workshop (RoD'18), Companion of the The Web Conference 2018}, doi = {https://dx.doi.org/10.1145/3184558.3191618}, pages = {1603--1607}, publisher = {ACM}, title = {The Data Complexity of Answering Instance Queries in $\mathcal{FL}_0$}, year = {2018}, }
Abstract BibTeX Entry DOI
Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational closure according to the (propositional) KLM postulates. A well-known approach to lift this closure to DDLs is by so-called materialisation. Previously investigated algorithms for materialisation-based reasoning employ reductions to classical reasoning using all Boolean connectives. As a first result in this paper, we present a materialisation-based algorithm for the sub-Boolean DDL \(\mathcal{EL}_\bot\), using a reduction to reasoning in classical \(\mathcal{EL}_\bot\), rendering materialisation-based defeasible reasoning tractable. The main contribution of this article is a kind of canonical model construction, which can be used to decide defeasible subsumption and instance queries in \(\mathcal{EL}_\bot\) under rational and the stronger relevant entailment. Our so-called typicality models can reproduce the entailments obtained from materialisation-based rational and relevant closure and, more importantly, obtain stronger versions of rational and relevant entailment. These do not suffer from neglecting defeasible information for concepts appearing nested inside quantifications, which all materialisation-based approaches do. We also determine the computational complexity of defeasible subsumption and instance checking in our stronger rational and relevant semantics.
@article{ PeTu-IJAR-18, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.1016/j.ijar.2018.08.005}, journal = {International Journal of Approximate Reasoning ({IJAR})}, pages = {28--70}, title = {Reasoning in the Defeasible Description Logic $\mathcal{EL}_{\bot}$---Computing Standard Inferences under Rational and Relevant Semantics}, volume = {103}, year = {2018}, }
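To illustrate the quantification problem (a typical example in the spirit of the article, not quoted from it): given a defeasible concept inclusion stating that birds normally fly, one would expect an owner of a bird to defeasibly be an owner of a flying animal, i.e., \(\exists \mathit{owns}.\mathit{Bird}\) should defeasibly be subsumed by \(\exists \mathit{owns}.\mathit{FlyingAnimal}\). Materialisation-based rational and relevant closure do not draw this conclusion, since the defeasible information is neglected for the bird occurring as a role successor; the typicality models introduced in the article are designed precisely to recover such nested entailments.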
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com) ©Springer Nature Switzerland
The fixed-domain semantics for OWL and description logic has been introduced to open up the OWL modeling and reasoning tool landscape for use cases resembling constraint satisfaction problems. While standard reasoning under this new semantics is by now rather well-understood theoretically and supported practically, more elaborate tasks like the computation of justifications have not been considered so far, even though they are highly important in the modeling phase. In this paper, we compare three approaches to this problem: one using standard OWL technology employing an axiomatization of the fixed-domain semantics, one using our dedicated fixed-domain reasoner Wolpertinger in combination with standard justification computation technology, and one where the problem is encoded entirely into answer-set programming.
@inproceedings{ RuScTi-RR-18, address = {Cham}, author = {Sebastian {Rudolph} and Lukas {Schweizer} and Satyadharma {Tirtarasa}}, booktitle = {Rules and Reasoning}, editor = {Christoph {Benzm{\"u}ller} and Francesco {Ricca} and Xavier {Parent} and Dumitru {Roman}}, pages = {185--200}, publisher = {Springer International Publishing}, title = {Justifications for Description Logic Knowledge Bases Under the Fixed-Domain Semantics}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File
In a recent research paper, we have proposed an extension of the light-weight Description Logic (DL) EL in which concepts can be defined in an approximate way. To this purpose, the notion of a graded membership function m, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. Reasoning in the threshold DL \(\tau\mathcal{EL}(m)\) obtained this way of course depends on the employed graded membership function m. The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in \(\tau\mathcal{EL}(deg)\). In addition, it shows how concept similarity measures (CSMs) \(\sim\) satisfying certain properties can be used to define graded membership functions \(m_\sim\), but it does not investigate the complexity of reasoning in the induced threshold DLs \(\tau\mathcal{EL}(m_\sim)\). In the present paper, we start filling this gap. In particular, we show that computability of \(\sim\) implies decidability of \(\tau\mathcal{EL}(m_\sim)\), and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in \(\tau\mathcal{EL}(deg)\).
@inproceedings{ sacBaFe17, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {Proceedings of the 32nd Annual {ACM} Symposium on Applied Computing, Marrakech, Morocco, April 4-6, 2017}, pages = {983--988}, publisher = {{ACM}}, title = {Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures}, year = {2017}, }
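For illustration (notation as in the threshold-DL papers, example ours): for \(C = \exists \mathit{hasPart}.\mathit{Wheel} \sqcap \exists \mathit{hasPart}.\mathit{Motor}\), the threshold concept \(C_{\geq 0.8}\) collects all individuals that belong to \(C\) with membership degree at least 0.8 under the employed graded membership function; an individual with a wheel but no motor may thus still qualify, depending on the degree the function assigns to it.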
Abstract BibTeX Entry PDF File DOI
Recently introduced approaches for relaxed query answering, approximately defining concepts, and approximately solving unification problems in Description Logics have in common that they are based on the use of concept comparison measures together with a threshold construction. In this paper, we will briefly review these approaches, and then show how weighted automata working on infinite trees can be used to construct computable concept comparison measures for FL0 that are equivalence invariant w.r.t. general TBoxes. This is a first step towards employing such measures in the mentioned approximation approaches.
@inproceedings{ BaFM-LATA17, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 11th International Conference on Language and Automata Theory and Applications ({LATA 2017})}, doi = {https://doi.org/10.1007/978-3-319-53733-7_1}, editor = {Frank {Drewes} and Carlos {Mart{\'i}n{-}Vide} and Bianca {Truthe}}, pages = {3--26}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Approximation in Description Logics: How Weighted Tree Automata Can Help to Define the Required Concept Comparison Measures in $\mathcal{FL}_0$}, venue = {Ume{\aa}, Sweden}, volume = {10168}, year = {2017}, }
Abstract BibTeX Entry PDF File
Both matching and unification in the Description Logic FL0 can be reduced to solving certain formal language equations. In previous work, we have extended unification in FL0 to approximate unification, and have shown that approximate unification can be reduced to approximately solving language equations. An approximate solution of a language equation need not make the languages on the left- and right-hand side of the equation equal, but close w.r.t. a given distance function. In the present paper, we consider approximate matching. We show that, for a large class of distance functions, approximate matching is in NP. We then consider a particular distance function \(d_1(K,L) = 2^{-n}\), where \(n\) is the length of the shortest word in the symmetric difference of the languages \(K\) and \(L\), and show that w.r.t. this distance function approximate matching is polynomial.
@inproceedings{ BaMa-UNIF2017, address = {Oxford, UK}, author = {Franz {Baader} and Pavlos {Marantidis}}, booktitle = {Proceedings of the 31st International Workshop on Unification ({UNIF'17})}, editor = {Adri\`{a} {Gasc\'{o}n} and Christopher {Lynch}}, title = {Language equations for approximate matching in the Description Logic FL0}, year = {2017}, }
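A worked instance of this distance (our own numbers): for \(K = \{a, ab\}\) and \(L = \{a, ab, abb\}\), the symmetric difference is \(\{abb\}\), whose shortest word has length 3, so \(d_1(K,L) = 2^{-3} = 0.125\). The later the first disagreement between the two languages occurs, the smaller the distance, which is what makes \(d_1\) a natural measure of approximation.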
Abstract BibTeX Entry PDF File DOI
Defeasible Description Logics (DDLs) extend classical Description Logics with defeasible concept inclusions and thereby offer a form of non-monotonicity. Earlier approaches to reasoning in such settings often employed the rational closure according to the well-known KLM postulates (for propositional logic). If in DDLs that use quantification a defeasible subsumption relationship holds between two concepts, such a relationship might also hold if these concepts appear nested in existential restrictions. Earlier reasoning algorithms for DDLs do not detect such defeasible subsumption relationships. We devise a new form of canonical models that extend classical canonical models for \(\mathcal{EL}_\bot\) by elements that satisfy increasing amounts of defeasible knowledge. We show that reasoning based on these models yields the missing entailments.
@inproceedings{ PeTu-LPNMR-17, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 14th International Conference on Logic Programming and Nonmonotonic Reasoning - {LPNMR}}, doi = {https://doi.org/10.1007/978-3-319-61660-5_9}, editor = {Marcello {Balduccini} and Tomi {Janhunen}}, pages = {78--84}, publisher = {Springer}, title = {Including Quantification in Defeasible Reasoning for the Description Logic $\mathcal{EL}_{\bot}$}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational or relevant closure according to the (propositional) KLM postulates. If in DDLs with quantification a defeasible subsumption relationship holds between concepts, this relationship might also hold if these concepts appear in existential restrictions. Such nested defeasible subsumption relationships were not detected by earlier reasoning algorithms—neither for rational nor relevant closure. In this report, we present a new approach for \(\mathcal{EL}_\bot\) that alleviates this problem for relevant closure (the strongest form of preferential reasoning currently investigated) by the use of typicality models that extend classical canonical models by domain elements that individually satisfy any amount of consistent defeasible knowledge. We also show that a certain restriction on the domain of the typicality models in this approach yields inference results that correspond to the (weaker) more commonly known rational closure.
@techreport{ PeTu-LTCS-17-01, address = {Dresden, Germany}, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.231}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {17-01}, title = {Making Quantification Relevant again---the Case of Defeasible $\mathcal{EL}_{\bot}$}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File
Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational or relevant closure according to the (propositional) KLM postulates. If in DDLs with quantification a defeasible subsumption relationship holds between concepts, this relationship might also hold if these concepts appear in existential restrictions. Such nested defeasible subsumption relationships were not detected by earlier reasoning algorithms—neither for rational nor relevant closure. Recently, we devised a new approach for \(\mathcal{EL}_\bot\) that alleviates this problem for rational closure by the use of typicality models that extend classical canonical models by domain elements that individually satisfy any amount of consistent defeasible knowledge. In this paper we lift our approach to relevant closure and show that reasoning based on typicality models yields the missing entailments.
@inproceedings{ PeTu-DARE-17, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 4th International Workshop on Defeasible and Ampliative Reasoning - {DARe}}, editor = {Richard {Booth} and Giovanni {Casini} and Ivan {Varzinczak}}, publisher = {CEUR-WS.org}, title = {Making Quantification Relevant again---the Case of Defeasible $\mathcal{EL}_{\bot}$}, year = {2017}, }
2016
Abstract BibTeX Entry PDF File
We introduce an extension to Description Logics that allows us to use prototypes to define concepts. To accomplish this, we introduce the notion of a prototype distance function (pdf), which assigns to each element of an interpretation a distance value. Based on this, we define a new concept constructor of the form \(P_{\sim n}(d)\) for \(\sim\) being a relation from \(\{\leq, <, \geq, >\}\), which is interpreted as the set of all elements with a distance \(\sim n\) according to the pdf \(d\). We show how weighted alternating parity tree automata (wapta) over the integers can be used to define pdfs, and how this allows us to use both concepts and pointed interpretations as prototypes. Finally, we investigate the complexity of reasoning in ALCP(wapta), which extends the Description Logic ALC with prototype constructors for pdfs defined using wapta.
@inproceedings{ BaEc-LATA16, author = {Franz {Baader} and Andreas {Ecke}}, booktitle = {Proceedings of the 10th International Conference on Language and Automata Theory and Applications (LATA 2016)}, pages = {63--75}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Reasoning with Prototypes in the Description Logic ALC using Weighted Tree Automata}, venue = {Prague, Czech Republic}, volume = {9618}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
In a recent research paper, we have proposed an extension of the light-weight Description Logic (DL) EL in which concepts can be defined in an approximate way. To this purpose, the notion of a graded membership function m, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. Reasoning in the threshold DL \(\tau\mathcal{EL}(m)\) obtained this way of course depends on the employed graded membership function m. The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in \(\tau\mathcal{EL}(deg)\). In addition, it shows how concept similarity measures (CSMs) \(\sim\) satisfying certain properties can be used to define graded membership functions \(m_\sim\), but it does not investigate the complexity of reasoning in the induced threshold DLs \(\tau\mathcal{EL}(m_\sim)\). In the present paper, we start filling this gap. In particular, we show that computability of \(\sim\) implies decidability of \(\tau\mathcal{EL}(m_\sim)\), and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in \(\tau\mathcal{EL}(deg)\).
@techreport{ BaFe-LTCS-16-07, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, doi = {https://doi.org/10.25368/2022.229}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-07}, title = {Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures}, type = {LTCS-Report}, year = {2016}, }
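For intuition, a schematic threshold concept (our example, in the notation of the abstract): if \(C = \mathit{Car} \sqcap \exists\mathit{part}.\mathit{Engine}\), then

\[ C_{\ge 0.8} \;=\; \{\, a \mid m(a, C) \ge 0.8 \,\}, \]

the set of individuals that belong to \(C\) with membership degree at least 0.8 under the graded membership function \(m\).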
Abstract BibTeX Entry PDF File DOI
In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<,\le,>,\ge\}\) then collect all the individuals that belong to C with degree \(\bowtie t\). We have then investigated the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to \(\tau\mathcal{EL}(deg)\) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
@techreport{ BaFe-LTCS-16-02, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, doi = {https://doi.org/10.25368/2022.226}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-02}, title = {Extending the Description Logic $\tau\mathcal{EL}(deg)$ with Acyclic TBoxes}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File
In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<,\le,>,\ge\}\) then collect all the individuals that belong to C with degree \(\bowtie t\). We have then investigated the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to \(\tau\mathcal{EL}(deg)\) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
@inproceedings{ ecaiBaFe16, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {{ECAI} 2016 - 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence {(PAIS} 2016)}, pages = {1096--1104}, publisher = {{IOS} Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Extending the Description Logic $\tau\mathcal{EL}(deg)$ with Acyclic TBoxes}, volume = {285}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Recently introduced approaches for relaxed query answering, approximately defining concepts, and approximately solving unification problems in Description Logics have in common that they are based on the use of concept comparison measures together with a threshold construction. In this paper, we will briefly review these approaches, and then show how weighted automata working on infinite trees can be used to construct computable concept comparison measures for FL0 that are equivalence invariant w.r.t. general TBoxes. This is a first step towards employing such measures in the mentioned approximation approaches.
@techreport{ BaFM-LTCS-16-08, address = {Dresden, Germany}, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil} and Pavlos {Marantidis}}, doi = {https://doi.org/10.25368/2022.230}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-08}, title = {Approximation in Description Logics: How Weighted Tree Automata Can Help to Define the Required Concept Comparison Measures in $\mathcal{FL}_0$}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL \(\mathcal{FL}_0\), where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that "almost" unify. The meaning of "almost" is formalized using distance measures between concepts. We show that approximate unification in \(\mathcal{FL}_0\) can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
@techreport{ BaMaOk-LTCS-16-04, address = {Dresden, Germany}, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.228}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-04}, title = {Approximate Unification in the Description Logic {$\mathcal{FL}_0$}}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry
Unification in Description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL FL0, where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that "almost" unify. The meaning of "almost" is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
@inproceedings{ BaMaOk-JELIA16, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, booktitle = {Proc.\ of the 15th Eur.\ Conf.\ on Logics in Artificial Intelligence ({JELIA 2016})}, editor = {Loizos {Michael} and Antonis C. {Kakas}}, pages = {49--63}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Approximate Unification in the Description Logic {$\mathcal{FL}_0$}}, volume = {10021}, year = {2016}, }
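For background on the reduction mentioned in these abstracts (a standard fact about \(\mathcal{FL}_0\), not specific to these papers): every \(\mathcal{FL}_0\) concept can be written in the normal form \(\forall L_1.A_1 \sqcap \dots \sqcap \forall L_k.A_k\), where each \(L_i\) is a finite language over the role names and \(\forall\{w_1,\dots,w_m\}.A\) abbreviates \(\forall w_1.A \sqcap \dots \sqcap \forall w_m.A\). Unification problems thus become systems of language equations between these value languages. One natural distance for the approximate variant (our illustrative choice) compares value languages by the size of their symmetric difference, e.g.

\[ d(\forall r.A,\; \forall r.\forall r.A) \;=\; |\{r\} \,\triangle\, \{rr\}| \;=\; 2. \]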
Abstract BibTeX Entry PDF File DOI
Unification with constants modulo the theory of an associative (A), commutative (C) and idempotent (I) binary function symbol with a unit (U) corresponds to solving a very simple type of set equations. It is well-known that solvability of systems of such equations can be decided in polynomial time by reducing it to satisfiability of propositional Horn formulae. Here we introduce a modified version of this problem by no longer requiring all equations to be completely solved, but allowing for a certain number of violations of the equations. We introduce three different ways of counting the number of violations, and investigate the complexity of the respective decision problem, i.e., the problem of deciding whether there is an assignment that solves the system with at most \(\ell\) violations for a given threshold value \(\ell\).
@techreport{ BaMaOk-LTCS-16-03, address = {Dresden, Germany}, author = {Franz {Baader} and Pavlos {Marantidis} and Alexander {Okhotin}}, doi = {https://doi.org/10.25368/2022.227}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {16-03}, title = {Approximately Solving Set Equations}, type = {LTCS-Report}, year = {2016}, }
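A tiny example of approximate solvability (ours; for concreteness we count the number of violated equations, which need not coincide with the paper's three counting schemes): the system

\[ X \cup \{a\} = \{a\}, \qquad X \cup \{b\} = \{b,c\} \]

has no exact solution, since the first equation forces \(X \subseteq \{a\}\) while the second requires \(c \in X\). The assignment \(X = \emptyset\) violates only the second equation, so the system is solvable for any threshold \(\ell \ge 1\).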
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
@article{ BoDiKr-JANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }
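The basic notion behind this extraction (standard DL semantics, not specific to the paper): a GCI \(C \sqsubseteq D\) is valid in a finite interpretation \(\mathcal{I}\) iff

\[ C^{\mathcal{I}} \subseteq D^{\mathcal{I}}. \]

For instance, in a graph where every vertex with an \(r\)-edge to a \(B\)-labelled vertex is itself labelled \(A\), the GCI \(\exists r.B \sqsubseteq A\) is valid, and the method aims at computing a finite base of all such valid GCIs.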
Abstract BibTeX Entry PDF File DOI
Reasoning for Description Logics with concrete domains and w.r.t. general TBoxes easily becomes undecidable. However, with some restriction on the concrete domain, decidability can be regained. We introduce a novel way to integrate a concrete domain D into the well-known description logic ALC; we call the resulting logic ALCP(D). We then identify sufficient conditions on D that guarantee decidability of the satisfiability problem, even in the presence of general TBoxes. In particular, we show decidability of ALCP(D) for several domains over the integers, for which decidability was open. More generally, this result holds for all negation-closed concrete domains with the EHD-property, which stands for 'the existence of a homomorphism is definable'. This technique has recently been used to show decidability of CTL* with local constraints over the integers.
@techreport{ CaTu-LTCS-16-01, address = {Dresden, Germany}, author = {Claudia {Carapelle} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.225}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {16-01}, title = {Decidability of $\mathcal{ALC}^P\!(\mathcal{D})$ for concrete domains with the EHD-property}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Reasoning for Description Logics with concrete domains and w.r.t. general TBoxes easily becomes undecidable. However, with some restriction on the concrete domain, decidability can be regained. We introduce a novel way to integrate concrete domains D into the well-known description logic ALC; we call the resulting logic ALCP(D). We then identify sufficient conditions on D that guarantee decidability of the satisfiability problem, even in the presence of general TBoxes. In particular, we show decidability of ALCP(D) for several domains over the integers, for which decidability was open. More generally, this result holds for all negation-closed concrete domains with the EHD-property, which stands for 'the existence of a homomorphism is definable'. This technique has recently been used to show decidability of CTL* with local constraints over the integers.
@inproceedings{ CaTu-ECAI-16, address = {Dresden, Germany}, author = {Claudia {Carapelle} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 22nd European Conference on Artificial Intelligence}, doi = {https://doi.org/10.3233/978-1-61499-672-9-1440}, editor = {Gal A. {Kaminka} and Maria {Fox} and Paolo {Bouquet} and Eyke {H{\"{u}}llermeier} and Virginia {Dignum} and Frank {Dignum} and Frank van {Harmelen}}, pages = {1440--1448}, publisher = {{IOS} Press}, title = {Description Logics Reasoning w.r.t. general TBoxes is decidable for Concrete Domains with the EHD-property}, year = {2016}, }
Abstract BibTeX Entry PDF File
Description Logics (DLs) are a family of logic-based knowledge representation languages used to describe the knowledge of an application domain and reason about it in a formally well-defined way. They allow users to describe the important notions and classes of the knowledge domain as concepts, which formalize the necessary and sufficient conditions for individual objects to belong to that concept. A variety of different DLs exist, differing in the set of properties one can use to express concepts, the so-called concept constructors, as well as the set of axioms available to describe the relations between concepts or individuals. However, all classical DLs have in common that they can only express exact knowledge, and correspondingly only allow exact inferences. Either we can infer that some individual belongs to a concept, or we cannot; there is no in-between. In practice though, knowledge is rarely exact. Many definitions have their exceptions or are vaguely formulated in the first place, and people might not only be interested in exact answers, but also in alternatives that are "close enough".
This thesis is aimed at tackling how to express that something is "close enough", and how to integrate this notion into the formalism of Description Logics. To this end, we will use the notion of similarity and dissimilarity measures as a way to quantify how close exactly two concepts are. We will look at how useful measures can be defined in the context of DLs, and how they can be incorporated into the formal framework in order to generalize it. In particular, we will take a closer look at two applications of such measures to DLs: relaxed instance queries will incorporate a similarity measure in order to not just give the exact answer to some query, but all answers that are reasonably similar. Prototypical definitions, on the other hand, use a measure of dissimilarity or distance between concepts in order to allow the definition of, and reasoning with, concepts that capture not just those individuals that satisfy exactly the stated properties, but also those that are "close enough".
@thesis{ Ecke-PhD-2016, address = {Dresden, Germany}, author = {Andreas {Ecke}}, school = {Technische Universit\"{a}t Dresden}, title = {Quantitative Methods for Similarity in Description Logics}, type = {Doctoral Thesis}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in databases towards deductive entailment since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As the query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as the ontology language we use the expressive DL SHQ. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem. In the course of this investigation, we also establish the complexity of consistency of Boolean knowledge bases in SHQ.
@article{ BaBL-JWS15, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.websem.2014.11.008}, journal = {Journal of Web Semantics}, pages = {71--93}, title = {Temporal Query Entailment in the Description Logic {$\mathcal{SHQ}$}}, volume = {33}, year = {2015}, }
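A schematic temporal query in the spirit of the abstract (our example; the predicate names are made up): using LTL operators over conjunctive queries, one can express situation-awareness patterns such as

\[ \Box\big(\exists x.\,\mathit{Overheats}(x) \rightarrow \mathbf{X}\,\exists x.\,\mathit{ShutDown}(x)\big), \]

i.e., whenever some component overheats, some component is shut down at the next time point.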
Abstract BibTeX Entry PDF File
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<,\le,>,\ge\}\) then collect all the individuals that belong to C with degree \(\bowtie t\). We generalize a well-known characterization of membership in EL concepts to construct a specific graded membership function deg, and investigate the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which extends EL by threshold concepts defined using deg. We also compare the instance problem for threshold concepts of the form \(C_{> t}\) in \(\tau\mathcal{EL}(deg)\) with the relaxed instance queries of Ecke et al.
@inproceedings{ BBG-FROCOS-15, address = {Wroclaw, Poland}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver {Fern\'andez Gil}}, booktitle = {Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)}, editor = {Carsten {Lutz} and Silvio {Ranise}}, pages = {33--48}, publisher = {Springer-Verlag}, series = {Lectures Notes in Artificial Intelligence}, title = {Adding Threshold Concepts to the Description Logic {$\mathcal{EL}$}}, volume = {9322}, year = {2015}, }
Abstract BibTeX Entry PDF File
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function which, for each individual and concept, yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts then collect all the individuals that belong to an EL concept C with degree less than, less than or equal to, greater than, or greater than or equal to some r in [0,1]. We generalize a well-known characterization of membership in EL concepts to obtain an appropriate graded membership function deg, and investigate the complexity of reasoning in the Description Logic which extends EL by threshold concepts defined using deg.
@inproceedings{ BBGIL-DL-15, address = {Athens, Greece}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver {Fern\'andez Gil}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics (DL-2015)}, editor = {Diego {Calvanese} and Boris {Konev}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Adding Threshold Concepts to the Description Logic {EL} (extended abstract)}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts \(C_{\bowtie t}\) for \(\bowtie \in \{<,\le,>,\ge\}\) then collect all the individuals that belong to C with degree \(\bowtie t\). We generalize a well-known characterization of membership in EL concepts to construct a specific graded membership function deg, and investigate the complexity of reasoning in the Description Logic \(\tau\mathcal{EL}(deg)\), which extends EL by threshold concepts defined using deg. We also compare the instance problem for threshold concepts of the form \(C_{> t}\) in \(\tau\mathcal{EL}(deg)\) with the relaxed instance queries of Ecke et al.
@techreport{ BaBrF-LTCS-15-09, address = {Dresden, Germany}, author = {Franz {Baader} and Gerhard {Brewka} and Oliver Fern{\'a}ndez {Gil}}, doi = {https://doi.org/10.25368/2022.215}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-09}, title = {Adding Threshold Concepts to the Description Logic $\mathcal{E\!L}$}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File
Within formal concept analysis, attribute exploration is a powerful tool to semi-automatically check data for completeness with respect to a given domain. However, the classical formulation of attribute exploration does not take into account possible errors which are present in the initial data. To remedy this, we present in this work a generalization of attribute exploration based on the notion of confidence, that will allow for the exploration of implications which are not necessarily valid in the initial data, but instead enjoy a minimal confidence therein.
@inproceedings{ Borc-ICFCA15, author = {Daniel {Borchmann}}, booktitle = {Proceedings of the 13th International Conference on Formal Concept Analysis {(ICFCA'15)}}, editor = {Jaume {Baixeries} and Christian {Sacarea} and Manuel {Ojeda{-}Aciego}}, pages = {219--235}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {{Exploring Faulty Data}}, volume = {9113}, year = {2015}, }
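The notion of confidence used here is the standard one from data mining, transferred to formal contexts (the formula is standard; \(A'\) denotes the set of objects possessing all attributes in \(A\)):

\[ \mathrm{conf}(A \to B) \;=\; \frac{|(A \cup B)'|}{|A'|} \qquad (A' \neq \emptyset), \]

so an implication explored with minimal confidence 0.95 may fail on up to 5 percent of the objects that satisfy its premise.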
Abstract BibTeX Entry PDF File DOI
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We shall also present first experimental results showing that our approach has the potential to be useful for practical applications.
@techreport{ BoDiKr-LTCS-15-13, address = {Dresden, Germany}, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.219}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKr-LTCS-15-13}}, number = {15-13}, title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics (DLs) can be used to represent and reason with vague knowledge. This family of logical formalisms is very diverse, each member being characterized by a specific choice of constructors, axioms, and triangular norms, which are used to specify the semantics. Unfortunately, it has recently been shown that the consistency problem in many fuzzy DLs with general concept inclusion axioms is undecidable. In this paper, we present a proof framework that allows us to extend these results to cover large classes of fuzzy DLs. On the other hand, we also provide matching decidability results for most of the remaining logics. As a result, we obtain a near-universal classification of fuzzy DLs according to the decidability of their consistency problem.
@article{ BoDP-AI15, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1016/j.artint.2014.09.001}, journal = {Artificial Intelligence}, pages = {23--55}, title = {The Limits of Decidability in Fuzzy Description Logics with General Concept Inclusions}, volume = {218}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows one to query a database using the language of an ontology, abstracting from the actual relations of the database. OBDA can sometimes be realized by compiling the information of the ontology into the query and the database. The resulting query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a generic temporal query language that combines linear temporal logic with queries over ontologies. This language is well-suited for expressing temporal properties of dynamic systems and is useful in context-aware applications that need to detect specific situations. We show that, if atemporal queries are rewritable in the sense described above, then the corresponding temporal queries are also rewritable such that we can answer them over a temporal database. We present three approaches to answering the resulting queries.
@article{ BoLT-JWS15, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, doi = {http://dx.doi.org/10.1016/j.websem.2014.11.007}, journal = {Journal of Web Semantics}, pages = {50--70}, title = {Temporalizing Rewritable Query Languages over Knowledge Bases}, volume = {33}, year = {2015}, }
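The rewriting idea mentioned above, in its simplest atemporal form (a standard DL-Lite-style example, not taken from the paper): given the ontology axiom \(\mathit{Teacher} \sqsubseteq \exists \mathit{teaches}\), the query \(q(x) = \exists y.\,\mathit{teaches}(x,y)\) is rewritten into

\[ q'(x) \;=\; \exists y.\,\mathit{teaches}(x,y) \;\vee\; \mathit{Teacher}(x), \]

which can then be answered directly over the database, with the ontology compiled away.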
Abstract BibTeX Entry PDF File DOI
In Description Logic (DL) knowledge bases (KBs), information is typically captured by clear-cut concepts. For many practical applications, querying the KB by crisp concepts is too restrictive; a user might be willing to lose some precision in the query in exchange for a larger selection of answers. Similarity measures can offer a controlled way of gradually relaxing a query concept within a user-specified limit. In this paper we formalize the task of instance query answering for DL KBs using concepts relaxed by concept similarity measures (CSMs). We investigate computation algorithms for this task in the DL EL, their complexity, and the properties of the employed CSMs, depending on whether unfoldable or general TBoxes are used. For the case of general TBoxes we define a family of CSMs that take the full TBox information into account when assessing the similarity of concepts.
@article{ EcPeTu-JAL15, author = {Andreas {Ecke} and Rafael {Pe\~naloza} and Anni-Yasmin {Turhan}}, doi = {http://dx.doi.org/10.1016/j.jal.2015.01.002}, journal = {Journal of Applied Logic}, note = {Special Issue for the Workshop on Weighted Logics for {AI} 2013}, number = {4, Part 1}, pages = {480--508}, title = {Similarity-based Relaxed Instance Queries}, volume = {13}, year = {2015}, }
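In schematic terms (our rendering of the task described above; the name \(\mathsf{Relax}\) is hypothetical notation): given a query concept \(Q\), a CSM \(\sim\) and a threshold \(t \in (0,1]\), the relaxed answers are

\[ \mathsf{Relax}^{t}_{\sim}(Q) \;=\; \{\, a \mid \mathcal{K} \models X(a) \text{ for some concept } X \text{ with } Q \sim X > t \,\}, \]

i.e., the individuals that are instances of some concept sufficiently similar to \(Q\).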
Abstract BibTeX Entry PDF File
Recently, an approach has been devised for employing concept similarity measures (CSMs) to relax instance queries over EL ontologies in a controlled way. The approach relies on similarity measures between pointed interpretations to yield CSMs with certain properties. We report in this paper on ELASTIQ, a first implementation of this approach, and propose initial optimizations for this novel inference. We also provide a first evaluation of ELASTIQ on the Gene Ontology.
@inproceedings{ EcPeTu-DL15, author = {Andreas {Ecke} and Maximilian {Pensel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL-2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, month = {June}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {ELASTIQ: Answering Similarity-threshold Instance Queries in {$\mathcal{EL}$}}, venue = {Athens, Greece}, volume = {1350}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File
Black-box optimization problems are of practical importance throughout science and engineering. Hundreds of algorithms and heuristics have been developed to solve them. However, none of them outperforms any other on all problems. The success of a particular heuristic is always relative to a class of problems. So far, these problem classes are elusive and it is not known what algorithm to use on a given problem. Here we describe the use of Formal Concept Analysis (FCA) to extract implications about problem classes and algorithm performance from databases of empirical benchmarks. We explain the idea in a small example and show that FCA produces meaningful implications. We further outline the use of attribute exploration to identify problem features that predict algorithm performance.
@inproceedings{ AsBoSbWa-FCA4AI-14, author = {Josefine {Asmus} and Daniel {Borchmann} and Ivo F. {Sbalzarini} and Dirk {Walther}}, booktitle = {Proceedings of the 3rd International Workshop on "What can FCA do for Artificial Intelligence?" ({FCA4AI'14})}, editor = {Sergei O. {Kuznetsov} and Amedeo {Napoli} and Sebastian {Rudolph}}, pages = {35--42}, series = {CEUR Workshop Proceedings}, title = {Towards an FCA-based Recommender System for Black-Box Optimization}, volume = {1257}, year = {2014}, }
Abstract BibTeX Entry PDF File DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
@techreport{ BaLi-LTCS-14-01, address = {Dresden, Germany}, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.203}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {14-01}, title = {Runtime Verification Using a Temporal Description Logic Revisited}, type = {LTCS-Report}, year = {2014}, }
Abstract BibTeX Entry DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
@article{ BaLi-JAL14, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.jal.2014.09.001}, journal = {Journal of Applied Logic}, number = {4}, pages = {584--613}, title = {Runtime Verification Using the Temporal Description Logic $\mathcal{ALC}$-LTL Revisited}, volume = {12}, year = {2014}, }
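For orientation (our paraphrase of the monitor construction described in these abstracts): a monitor is a deterministic finite automaton over the observed prefixes whose states are labelled with one of three verdicts,

\[ \top \;(\text{every continuation satisfies } \varphi), \qquad \bot \;(\text{none does}), \qquad ? \;(\text{both are still possible}), \]

so that after reading a prefix, the label of the current state answers the runtime-verification question in constant time per observation.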
Abstract BibTeX Entry PDF File
We provide experience in applying methods from formal concept analysis to the problem of classifying software bug reports characterized by distinguished features. More specifically, we investigate the situation where we are given a set of already processed bug reports together with the components of the program that contained the corresponding error. The task is the following: given a new bug report with specific features, provide a list of components of the program based on the bug reports already processed that are likely to contain the error. To this end, we investigate several approaches that employ the idea of implications between features and program components. We describe these approaches in detail, and apply them to real-world data for evaluation. The best of our approaches is capable of identifying in just a fraction of a second the component causing a bug with an accuracy of over 70 percent.
@article{ BoPW-ICFCA14, author = {Daniel {Borchmann} and Rafael {Pe{\~n}aloza} and Wenqian {Wang}}, journal = {Studia Universitatis Babe{\c{s}}-Bolyai Informatica}, month = {June}, note = {Suplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA'14)}, pages = {10--27}, title = {Classifying Software Bug Reports Using Methods from Formal Concept Analysis}, volume = {59}, year = {2014}, }
Abstract BibTeX Entry PDF File
The complexity of reasoning in fuzzy description logics (DLs) over a finite lattice L usually does not exceed that of the underlying classical DLs. This has recently been shown for the logics between L-IALC and L-ISCHI using a combination of automata- and tableau-based techniques. In this paper, this approach is modified to deal with nominals and constants in L-ISCHOI. Reasoning w.r.t. general TBoxes is ExpTime-complete, and PSpace-completeness is shown under the restriction to acyclic terminologies in two sublogics. The latter implies two previously unknown complexity results for the classical DLs ALCHO and SO.
@inproceedings{ Borg-DL14, author = {Stefan {Borgwardt}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {58--70}, series = {CEUR Workshop Proceedings}, title = {Fuzzy DLs over Finite Lattices with Nominals}, volume = {1193}, year = {2014}, }
Abstract BibTeX Entry Publication
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hajek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
@thesis{ Borgwardt-Diss-2014, address = {Dresden, Germany}, author = {Stefan {Borgwardt}}, school = {Technische Universit\"{a}t Dresden}, title = {Fuzzy Description Logics with General Concept Inclusions}, type = {Doctoral Thesis}, year = {2014}, }
BibTeX Entry PDF File
@inproceedings{ BoCP-PRUV14, address = {Vienna, Austria}, author = {Stefan {Borgwardt} and Marco {Cerami} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 1st International Workshop on Logics for Reasoning about Preferences, Uncertainty, and Vagueness ({PRUV'14})}, editor = {Thomas {Lukasiewicz} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, pages = {52--58}, series = {CEUR Workshop Proceedings}, title = {Many-Valued Horn Logic is Hard}, volume = {1205}, year = {2014}, }
Abstract BibTeX Entry PDF File
In the last few years, there has been a large effort for analyzing the computational properties of reasoning in fuzzy description logics. This has led to a number of papers studying the complexity of these logics, depending on the chosen semantics. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics w.r.t. witnessed models over the Gödel t-norm. We show that in the logic G-IALC, reasoning cannot be restricted to finitely-valued models in general. Despite this negative result, we also show that all the standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC.
@inproceedings{ BoDP-KR14, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, pages = {228--237}, publisher = {AAAI Press}, title = {Decidable {G}{\"o}del description logics without the finitely-valued model property}, year = {2014}, }
Abstract BibTeX Entry PDF File
In the last few years, the complexity of reasoning in fuzzy description logics has been studied in depth. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics using the Gödel t-norm. It was recently shown that in the logic G-IALC under witnessed model semantics, all standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC. We show that this also holds under general model semantics.
@inproceedings{ BoDP-DL14, address = {Vienna, Austria}, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {391--403}, series = {CEUR Workshop Proceedings}, title = {G\"odel Description Logics with General Models}, volume = {1193}, year = {2014}, }
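For reference, the Gödel t-norm and its residuum, which underlie the semantics discussed in these entries (a standard definition):

\[ x \otimes y = \min(x,y), \qquad x \Rightarrow y = \begin{cases} 1 & \text{if } x \le y,\\ y & \text{otherwise,} \end{cases} \]

where conjunction is interpreted by \(\otimes\) and implication by its residuum \(\Rightarrow\).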
Abstract BibTeX Entry PDF File
We study the fuzzy extension of FL0 with semantics based on the Gödel t-norm. We show that gfp-subsumption w.r.t. a finite set of primitive definitions can be characterized by a relation on weighted automata, and use this result to provide tight complexity bounds for reasoning in this logic.
@inproceedings{ BoLP-DL14, address = {Vienna, Austria}, author = {Stefan {Borgwardt} and Jos{\'e} A. {Leyva Galano} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {71--82}, series = {CEUR Workshop Proceedings}, title = {G{\"o}del {$\mathcal{FL}_0$} with Greatest Fixed-Point Semantics}, volume = {1193}, year = {2014}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
We study the fuzzy extension of the Description Logic FL0 with semantics based on the Gödel t-norm. We show that subsumption w.r.t. a finite set of primitive definitions, using greatest fixed-point semantics, can be characterized by a relation on weighted automata. We use this result to provide tight complexity bounds for reasoning in this logic, showing that it is PSpace-complete. If the definitions do not contain cycles, subsumption becomes co-NP-complete.
@inproceedings{ BoLP-JELIA14, address = {Funchal, Portugal}, author = {Stefan {Borgwardt} and Jos{\'e} A. {Leyva Galano} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA'14)}, editor = {Eduardo {Ferm{\'e}} and Jo{\~a}o {Leite}}, pages = {62--76}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {The Fuzzy Description Logic {\textsf{G}-$\mathcal{FL}_0$} with Greatest Fixed-Point Semantics}, volume = {8761}, year = {2014}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics have been widely studied as a formalism for representing and reasoning with vague knowledge. One of the most basic reasoning tasks in (fuzzy) Description Logics is to decide whether an ontology representing a knowledge domain is consistent. Surprisingly, not much is known about the complexity of this problem for semantics based on complete De Morgan lattices. To cover this gap, in this paper we study the consistency problem for the fuzzy Description Logic L-SHI and its sublogics in detail. The contribution of the paper is twofold. On the one hand, we provide a tableaux-based algorithm for deciding consistency when the underlying lattice is finite. The algorithm generalizes the one developed for classical SHI. On the other hand, we identify decidable and undecidable classes of fuzzy Description Logics over infinite lattices. For all the decidable classes, we also provide tight complexity bounds.
@article{ BoPe-IJAR14, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1016/j.ijar.2013.07.006}, journal = {International Journal of Approximate Reasoning}, number = {9}, pages = {1917--1938}, title = {Consistency Reasoning in Lattice-Based Fuzzy Description Logics}, volume = {55}, year = {2014}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
We consider the fuzzy description logic ALCOI with semantics based on a finite residuated De Morgan lattice. We show that reasoning in this logic is ExpTime-complete w.r.t. general TBoxes. In the sublogics ALCI and ALCO, it is PSpace-complete w.r.t. acyclic TBoxes. This matches the known complexity bounds for reasoning in classical description logics between ALC and ALCOI.
@inproceedings{ BoPe-URSW3, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Uncertainty Reasoning for the Semantic Web III}, editor = {F. {Bobillo} and R.N. {Carvalho} and P.C.G. {da Costa} and C. {d'Amato} and N. {Fanizzi} and K.B. {Laskey} and K.J. {Laskey} and Th. {Lukasiewicz} and M. {Nickles} and M. {Pool}}, note = {Revised Selected Papers from the ISWC International Workshops URSW 2011 - 2013}, pages = {122--141}, publisher = {Springer-Verlag}, series = {LNCS}, title = {Finite Lattices Do Not Make Reasoning in ALCOI Harder}, volume = {8816}, year = {2014}, }
Abstract BibTeX Entry PDF File
Description Logic (DL) knowledge bases (KBs) allow one to express knowledge about concepts and individuals in a formal way. This knowledge is typically crisp, i.e., an individual either is an instance of a given concept or it is not. However, in practice this is often too restrictive: when querying for instances, one may often also want to find suitable alternatives, i.e., individuals that are not instances of the query concept, but could still be considered 'good enough'. Relaxed instance queries have been introduced to gradually relax this inference in a controlled way via the use of concept similarity measures (CSMs). So far, those algorithms only work for the DL EL, which has limited expressive power. In this paper, we introduce a suitable CSM for EL++ concepts. EL++ adds nominals, role inclusion axioms, and concrete domains to EL. We extend the algorithm to compute relaxed instance queries w.r.t. this new CSM, and thus to work for general EL++ KBs.
@inproceedings{ Ecke-PRUV-14, author = {Andreas {Ecke}}, booktitle = {Proceedings of the First Workshop on Logics for Reasoning about Preferences, Uncertainty, and Vagueness}, editor = {Thomas {Lukasiewicz} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, pages = {101--113}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Similarity-based Relaxed Instance Queries in $\mathcal{EL}^{++}$}, volume = {1205}, year = {2014}, }
Abstract BibTeX Entry PDF File
In Description Logic (DL) knowledge bases (KBs), information is typically captured by crisp concepts. For many applications, querying the KB by crisp query concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of concept similarity. In this paper we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by concept similarity measures (CSM). For the DL \(\mathcal{EL}\) we investigate computation algorithms for this task, their complexity, and the properties of the employed CSM in case unfoldable TBoxes or general TBoxes are used.
@inproceedings{ EcPeTu-KR14, address = {Vienna, Austria}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning ({KR'14})}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, pages = {248--257}, publisher = {AAAI Press}, title = {Answering Instance Queries Relaxed by Concept Similarity}, year = {2014}, }
Abstract BibTeX Entry PDF File DOI
Description Logics (DLs) are a well-established family of knowledge representation formalisms. One of its members, the DL \(\mathcal{ELOR}\) has been successfully used for representing knowledge from the bio-medical sciences, and is the basis for the OWL 2 EL profile of the standard ontology language for the Semantic Web. Reasoning in this DL can be performed in polynomial time through a completion-based algorithm. In this paper we study the logic Prob-\(\mathcal{ELOR}\), that extends \(\mathcal{ELOR}\) with subjective probabilities, and present a completion-based algorithm for polynomial time reasoning in a restricted version, Prob-\(\mathcal{ELOR}^c_{01}\), of Prob-\(\mathcal{ELOR}\). We extend this algorithm to computation algorithms for approximations of (i) the most specific concept, which generalizes a given individual into a concept description, and (ii) the least common subsumer, which generalizes several concept descriptions into one. Thus, we also obtain methods for these inferences for the OWL 2 EL profile. These two generalization inferences are fundamental for building ontologies automatically from examples. The feasibility of our approach is demonstrated empirically by our prototype system GEL.
@article{ EcPeTu-IJAR-14, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, doi = {http://dx.doi.org/10.1016/j.ijar.2014.03.001}, journal = {International Journal of Approximate Reasoning}, number = {9}, pages = {1939--1970}, publisher = {Elsevier}, title = {Completion-based Generalization Inferences for the Description Logic $\mathcal{ELOR}$ with Subjective Probabilities}, volume = {55}, year = {2014}, }
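To recall the two generalization inferences (a standard EL example w.r.t. the empty TBox, not from the paper): the least common subsumer generalizes several concepts into the least concept subsuming all of them, e.g.

\[ \mathrm{lcs}(\exists r.A,\; \exists r.B) \;=\; \exists r.\top \quad \text{for distinct concept names } A, B, \]

while the most specific concept generalizes an individual \(a\) into the least concept \(C\) with \(\mathcal{K} \models C(a)\).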
BibTeX Entry PDF File
@inproceedings{ EcPT-DL14, address = {Vienna, Austria}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {526--529}, series = {CEUR Workshop Proceedings}, title = {Mary, What's Like All Cats?}, volume = {1193}, year = {2014}, }
2013
Abstract BibTeX Entry
Adequate encodings for high-level constraints are a key ingredient for the application of SAT technology. In particular, cardinality constraints state that at most (at least, or exactly) \(k\) out of \(n\) propositional variables can be true. They are crucial in many applications. Although sophisticated encodings for cardinality constraints exist, it is well known that for small \(n\) and \(k\) straightforward encodings without auxiliary variables sometimes behave better, and that the choice of the right trade-off between minimizing either the number of variables or the number of clauses is highly application-dependent. Here we build upon previous work on Cardinality Networks to get the best of several worlds: we develop an arc-consistent encoding that, by recursively decomposing the constraint into smaller ones, allows one to decide which encoding to apply to each sub-constraint. This process minimizes a function \(\lambda \cdot \mathit{numvars} + \mathit{numclauses}\), where \(\lambda\) is a parameter that can be tuned by the user. Our careful experimental evaluation shows that (e.g., for \(\lambda = 5\)) this new technique produces much smaller encodings in variables and clauses, and indeed strongly improves SAT solvers' performance.
@inproceedings{ parametricCardinalityConstraint, author = {Ignasi {Ab{\'\i}o} and Robert {Nieuwenhuis} and Albert {Oliveras} and Enric {Rodr\'{\i}guez-Carbonell}}, booktitle = {19th International Conference on Principles and Practice of Constraint Programming}, series = {CP'13}, title = {{A Parametric Approach for Smaller and Better Encodings of Cardinality Constraints}}, year = {2013}, }
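The "straightforward encodings without auxiliary variables" mentioned in the abstract are, for at-most-\(k\), the binomial encoding (standard folklore, not specific to this paper): one clause per \((k{+}1)\)-subset of the variables,

\[ \mathrm{AtMost}_k(x_1,\dots,x_n) \;\leadsto\; \bigwedge_{\substack{S \subseteq \{1,\dots,n\} \\ |S| = k+1}} \bigvee_{i \in S} \lnot x_i, \]

which introduces no auxiliary variables but \(\binom{n}{k+1}\) clauses. Encodings such as Cardinality Networks trade auxiliary variables for far fewer clauses; the \(\lambda\)-parameterized recursion described above chooses between such options per sub-constraint.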
BibTeX Entry
@inproceedings{ encodeOrPropagate, author = {Ignasi {Ab{\'\i}o} and Robert {Nieuwenhuis} and Albert {Oliveras} and Enric {Rodr\'{\i}guez-Carbonell} and Peter J. {Stuckey}}, booktitle = {19th International Conference on Principles and Practice of Constraint Programming}, series = {CP'13}, title = {{To Encode or to Propagate? The Best Choice for Each Constraint in SAT}}, year = {2013}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
@inproceedings{ BaBL-CADE13, address = {Lake Placid, NY, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 24th International Conference on Automated Deduction (CADE-24)}, editor = {Maria Paola {Bonacina}}, pages = {330--344}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Temporalizing Ontology-Based Data Access}, volume = {7898}, year = {2013}, }
Abstract BibTeX Entry
To extract terminological knowledge from data, Baader and Distel have proposed an effective method that allows for the extraction of a base of all valid general concept inclusions of a given finite interpretation. In previous works, to be able to handle small amounts of errors in our data, we have extended this approach to also extract general concept inclusions which are "almost valid" in the interpretation. This has been done by demanding that general concept inclusions which are "almost valid" are those having only an allowed percentage of counterexamples in the interpretation. In this work, we shall further extend our previous work to allow the interpretation to contain both trusted and untrusted individuals, i.e. individuals from which we know and do not know that they are correct, respectively. The problem we then want to solve is to find a compact representation of all terminological knowledge that is valid for all trusted individuals and is almost valid for all others.
@inproceedings{ Borc-DL13, author = {Daniel {Borchmann}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, month = {July}, pages = {65--79}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Axiomatizing $\mathcal{E\!L}^{\bot}_{\mathrm{gfp}}$-General Concept Inclusions in the Presence of Untrusted Individuals}, venue = {Ulm, Germany}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry
In a recent approach, Baader and Distel proposed an algorithm to axiomatize all terminological knowledge that is valid in a given data set and is expressible in the description logic \(\mathcal{EL}^{\bot}\). This approach is based on the mathematical theory of formal concept analysis. However, this algorithm requires the initial data set to be free of errors, an assumption that normally cannot be made for real-world data. In this work, we propose a first extension of the work of Baader and Distel to handle errors in the data set. The approach we present here is based on the notion of confidence, as it has been developed and used in the area of data mining.
@inproceedings{ Borc-KCAP13, author = {Daniel {Borchmann}}, booktitle = {Proceedings of the Seventh International Conference on Knowledge Capture}, pages = {1--8}, publisher = {ACM}, title = {Axiomatizing $\mathcal{E}\!\mathcal{L}^{\bot}$-Expressible Terminological Knowledge from Erroneous Data}, venue = {Banff, Canada}, year = {2013}, }
Abstract BibTeX Entry PDF File
In the work of Baader and Distel, a method has been proposed to axiomatize all general concept inclusions (GCIs) expressible in the description logic \(\mathcal{EL}^{\bot}\) and valid in a given interpretation \(\mathcal{I}\). This provides us with an effective method to learn \(\mathcal{EL}^{\bot}\)-ontologies from interpretations. In this work, we want to extend this approach in the direction of handling errors, which might be present in the data set. We shall do so by not only considering valid GCIs but also those whose confidence is above a given threshold \(c\). We shall give the necessary definitions and show some first results on the axiomatization of all GCIs with confidence at least \(c\). Finally, we shall provide some experimental evidence based on real-world data that supports our approach.
@inproceedings{ Borc-ICFCA13, author = {Daniel {Borchmann}}, booktitle = {Formal Concept Analysis, 11th International Conference, ICFCA 2013, Dresden, Germany, May 21-24, 2013. Proceedings}, editor = {Peggy {Cellier} and Felix {Distel} and Bernhard {Ganter}}, pages = {60--75}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Towards an Error-Tolerant Construction of $\mathcal{EL}^{\bot}$ -Ontologies from Data Using Formal Concept Analysis}, volume = {7880}, year = {2013}, }
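The confidence measure used throughout this line of work is the one known from association rule mining, transferred to GCIs: the confidence of \(C \sqsubseteq D\) in a finite interpretation \(\mathcal{I}\) is the fraction of elements satisfying \(C\) that also satisfy \(D\),
\[ \mathrm{conf}_{\mathcal{I}}(C \sqsubseteq D) \;=\; \frac{|(C \sqcap D)^{\mathcal{I}}|}{|C^{\mathcal{I}}|} \qquad \text{(for } C^{\mathcal{I}} \neq \emptyset\text{)}. \]
As an invented numerical example: if 20 elements belong to \(C^{\mathcal{I}}\) and 19 of them also belong to \(D^{\mathcal{I}}\), then \(\mathrm{conf}_{\mathcal{I}}(C \sqsubseteq D) = 0.95\), so the GCI is retained for any threshold \(c \leq 0.95\) despite the single counterexample.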
Abstract BibTeX Entry PDF File DOI
We present a general form of attribute exploration, a knowledge completion algorithm from formal concept analysis. The aim of this generalization is to extend the applicability of attribute exploration by giving a description that abstracts from particular settings. Additionally, it allows different existing variants of attribute exploration, such as exploration on partial contexts, to be viewed as instances of one general form.
@techreport{ Borc-LTCS-13-02, address = {Dresden, Germany}, author = {Daniel {Borchmann}}, doi = {https://doi.org/10.25368/2022.192}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-02}, title = {{A General Form of Attribute Exploration}}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
Within formal concept analysis, attribute exploration is a powerful tool to semi-automatically check data for completeness with respect to a given domain. However, the classical formulation of attribute exploration does not take into account possible errors which are present in the initial data. We present in this work a generalization of attribute exploration based on the notion of confidence, which will allow for the exploration of implications which are not necessarily valid in the initial data, but instead enjoy a minimal confidence therein.
@techreport{ Borch-LTCS-13-04, address = {Dresden, Germany}, author = {Daniel {Borchmann}}, doi = {https://doi.org/10.25368/2022.194}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-04}, title = {{Exploration by Confidence}}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
We present an extension of our previous work on axiomatizing confident general concept inclusions in given finite interpretations. Within this extension, we allow external experts to interactively provide counterexamples to general concept inclusions that otherwise have sufficient confidence in the given data. This extension allows us to distinguish between erroneous counterexamples in the data and rare, but valid, counterexamples.
@techreport{ Borc-LTCS-13-11, address = {Dresden, Germany}, author = {Daniel {Borchmann}}, doi = {https://doi.org/10.25368/2022.201}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-11}, title = {{Model Exploration by Confidence with Completely Specified Counterexamples}}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Ontology-based data access (OBDA) generalizes query answering in relational databases. It makes it possible to query a database using the language of an ontology, abstracting from the actual relations of the database. For ontologies formulated in Description Logics of the DL-Lite family, OBDA can be realized by rewriting the query into a classical first-order query, e.g., an SQL query, by compiling the information of the ontology into the query. The query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a temporal query language that combines a linear temporal logic with queries over \(\textit{DL-Lite}_{\mathit{core}}\) ontologies. This language is well-suited to express temporal properties of dynamical systems and is useful in context-aware applications that need to detect specific situations. Using a first-order rewriting approach, we transform our temporal queries into queries over a temporal database. We then present three approaches to answering the resulting queries, all having different advantages and drawbacks.
@inproceedings{ BoLiTh-FroCoS13, address = {Nancy, France}, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, booktitle = {Proceedings of the 9th International Symposium on Frontiers of Combining Systems ({FroCoS 2013})}, editor = {Pascal {Fontaine} and Christophe {Ringeissen} and Renate A. {Schmidt}}, pages = {165--180}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Temporal Query Answering in the Description Logic {\textit{DL-Lite}}}, volume = {8152}, year = {2013}, }
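To sketch the rewriting step with an invented example: under the \(\textit{DL-Lite}\) axiom \(\exists \mathsf{supervises} \sqsubseteq \mathsf{Manager}\), the query atom \(\mathsf{Manager}(x)\) is rewritten into the union
\[ \mathsf{Manager}(x) \;\vee\; \exists y.\,\mathsf{supervises}(x,y), \]
which can be evaluated directly over the database without further ontology reasoning. In the temporal setting, the LTL operators of the query remain untouched while the embedded conjunctive queries are rewritten in this fashion, so that the result can be answered over a temporal database.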
BibTeX Entry PDF File
@inproceedings{ BoLiTh-DL13, address = {Ulm, Germany}, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Temporal Query Answering in {\textit{DL-Lite}}}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File
The Description Logic EL is used to formulate several large biomedical ontologies. Fuzzy extensions of EL can express the vagueness inherent in many biomedical concepts. We consider fuzzy EL with semantics based on general t-norms, and study the reasoning problems of deciding positive subsumption and 1-subsumption and computing the best subsumption degree.
@inproceedings{ BoPe-DL13, address = {Ulm, Germany}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 2013 International Workshop on Description Logics ({DL'13})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, pages = {526--538}, series = {CEUR-WS}, title = {About Subsumption in Fuzzy $\mathcal{EL}$}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File ©IJCAI
The Description Logic EL is used to formulate several large biomedical ontologies. Fuzzy extensions of EL can express the vagueness inherent in many biomedical concepts. We study the reasoning problem of deciding positive subsumption in fuzzy EL with semantics based on general t-norms. We show that the complexity of this problem depends on the specific t-norm chosen. More precisely, if the t-norm has zero divisors, then the problem is co-NP-hard; otherwise, it can be decided in polynomial time. We also show that the best subsumption degree cannot be computed in polynomial time if the t-norm contains the Łukasiewicz t-norm.
@inproceedings{ BoPe-IJCAI13, address = {Beijing, China}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13)}, editor = {Francesca {Rossi}}, pages = {789--795}, publisher = {AAAI Press}, title = {Positive Subsumption in Fuzzy $\mathcal{EL}$ with General t-norms}, year = {2013}, }
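For readers unfamiliar with t-norms, the three standard continuous t-norms, and the property driving the dichotomy above, are
\[ x \otimes_{\mathrm{G}} y = \min(x,y), \qquad x \otimes_{\mathrm{P}} y = x \cdot y, \qquad x \otimes_{\mathrm{L}} y = \max(0,\, x + y - 1). \]
The Gödel and product t-norms have no zero divisors, so positive subsumption is decidable in polynomial time for them, while the Łukasiewicz t-norm has zero divisors (e.g., \(0.5 \otimes_{\mathrm{L}} 0.5 = 0\)), which is the source of the co-NP-hardness.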
Abstract BibTeX Entry PDF File
Ontologies are used to represent and share knowledge. Numerous ontologies have been developed so far, especially in knowledge-intensive areas such as the biomedical domain. As the size of ontologies increases, their continued development and maintenance is becoming more challenging as well. Detecting and representing semantic differences between versions of ontologies is an important task for which automated tool support is needed. In this paper we investigate the logical difference problem using a hypergraph representation of EL-terminologies. We focus solely on the concept difference w.r.t. a signature. For computing this difference it suffices to check the existence of simulations between hypergraphs, whereas previous approaches required a combination of different methods.
@inproceedings{ EcLuWa-DChanges2013, author = {Andreas {Ecke} and Michel {Ludwig} and Dirk {Walther}}, booktitle = {Proceedings of the International workshop on (Document) Changes: modeling, detection, storage and visualization ({DChanges 2013})}, series = {CEUR-WS}, title = {The Concept Difference for $\mathcal{EL}$-Terminologies using Hypergraphs}, venue = {Florence, Italy}, volume = {1008}, year = {2013}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report)
Description Logics (DLs) are a family of knowledge representation formalisms that provide the theoretical basis for the standard web ontology language OWL. Generalization services like the least common subsumer (lcs) and the most specific concept (msc) are the basis of several ontology design methods, and form the core of similarity measures. For the DL ELOR, which covers most of the OWL 2 EL profile, the lcs and msc need not exist in general, but they always exist if restricted to a given role-depth. We present algorithms that compute these role-depth bounded generalizations. Our method is easy to implement, as it is based on the polynomial-time completion algorithm for ELOR.
@inproceedings{ EcPeTu-KI-13, address = {Koblenz, Germany}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 36th German Conference on Artificial Intelligence (KI 2013)}, editor = {Ingo J. {Timm} and Matthias {Thimm}}, pages = {49--60}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Computing Role-depth Bounded Generalizations in the Description Logic {$\mathcal{ELOR}$}}, volume = {8077}, year = {2013}, }
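A standard example of why the role-depth bound is needed (concept names invented): w.r.t. the cyclic TBox \(\{A \sqsubseteq \exists r.A,\; B \sqsubseteq \exists r.B\}\), the concepts \(A\) and \(B\) have the infinite chain of ever more specific common subsumers
\[ \top, \quad \exists r.\top, \quad \exists r.\exists r.\top, \quad \dots \]
so no least common subsumer exists. Bounding the role depth cuts this chain off: for bound \(k = 2\), the role-depth bounded lcs is \(\exists r.\exists r.\top\).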
Abstract BibTeX Entry PDF File
Completion-based algorithms can be employed for computing the least common subsumer of two concepts up to a given role-depth, in extensions of the lightweight DL EL. This approach has also been applied to the probabilistic DL Prob-EL, which is a variant of EL with subjective probabilities. In this paper we extend the completion-based lcs-computation algorithm to nominals, yielding a procedure for the DL Prob-\(\mathcal{ELO}^{01}\).
@inproceedings{ EcPeTu-DL-13, address = {Ulm, Germany}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, pages = {670--688}, series = {CEUR-WS}, title = {Role-depth bounded Least Common Subsumer in Prob-{$\mathcal{EL}$} with Nominals}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File
In Description Logic (DL) knowledge bases (KBs), information is typically captured by crisp concept descriptions. However, for many practical applications, querying the KB by crisp concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of similarity measures. To this end, we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by similarity measures. We identify relevant properties for the similarity measure and give first results on a computation algorithm.
@inproceedings{ EcPeTu-WL4AI-13, address = {Beijing, China}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Workshop on {W}eighted {L}ogics for {AI} (in conjunction with IJCAI'13)}, title = {Towards Instance Query Answering for Concepts Relaxed by Similarity Measures}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
The notion of concept similarity is central to several ontology tasks and can be employed to realize relaxed versions of classical reasoning services. In this paper we investigate the reasoning service of answering instance queries in a relaxed fashion, where the query concept is relaxed by means of a concept similarity measure (CSM). To this end we investigate CSMs that assess the similarity of EL-concepts defined w.r.t. a general EL-TBox. We derive such a family of CSMs from a family of similarity measures for finite interpretations and show in both cases that the resulting measures enjoy a collection of formal properties. These properties allow us to devise an algorithm for computing relaxed instances w.r.t. general EL-TBoxes, where users can specify the appropriate notion of similarity by instantiating our CSM accordingly.
@techreport{ EcTu-TR-13, address = {Dresden, Germany}, author = {Andreas {Ecke} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.202}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {13-12}, title = {Similarity Measures for Computing Relaxed Instances w.r.t.\ General $\mathcal{EL}$-{TBoxes}}, type = {LTCS-Report}, year = {2013}, }
Introduction
The DFG Collaborative Research Center (CRC/SFB 912) "Highly Adaptive Energy Efficient Computing (HAEC)" complements this cluster as Path H. HAEC focuses on large-scale, multi-chip computing platforms with disruptive wireless and optical inter-chip interconnects and on hardware/software adaptation methods for a new quality of energy-efficient computing. The understanding generated here can have major impacts on the other System-Oriented Paths and will be challenged in a much wider context. For example, new devices from the Materials-Inspired Paths could be integrated into HAEC's computing platform. Therefore, the CRC is scientifically linked to cfaed, even though it is organizationally independent (see CRC 912).
Investigators
- Speaker: Prof. Dr.-Ing. Gerhard Fettweis
- Representative Speakers: Prof. Dr. Christel Baier, Prof. Dr.-Ing. Wolfgang Lehner, Prof. Dr. Wolfgang E. Nagel, Prof. Dr.-Ing. Dirk Plettemeier
- Prof. Dr. Uwe Aßmann
- Prof. Dr.-Ing. Franz Baader
- Prof. Dr. Christel Baier
- Prof. Dr.-Ing. Dr. h.c. Karlheinz Bock
- Prof. Dr.-Ing. Jerónimo Castrillón (cfaed Strategic Professorship)
- PD Dr.-Ing. habil. Waltenegus Dargie
- Dr. Meik Dörpinghaus (cfaed Research Group Leader - Dörpinghaus Group)
- Prof. Dr. sc. techn. habil. Dipl. Betriebswissenschaften Frank Ellinger
- Prof. Dr.-Ing. Dr. h.c. Gerhard Fettweis
- Prof. Dr. rer. nat. habil. Andreas Fischer
- Prof. Dr.-Ing. Dr. h.c. Frank H. P. Fitzek
- Dr.-Ing. Elke Franz
- Prof. Dr. Hermann Härtig
- Prof. Dr.-Ing. Eduard Jorswieck
- Prof. Dr. Markus Krötzsch
- Prof. Dr.-Ing. Wolfgang Lehner
- Prof. Dr. Wolfgang E. Nagel
- Prof. Dr.-Ing. Dirk Plettemeier
- Prof. Dr. Silvia Santini
- Prof. Dr. rer. nat. habil. Dr. h.c. Alexander Schill
- Prof. Dr.-Ing. Michael Schröter
- PD Dr.-Ing. Anni-Yasmin Turhan
Research Program
Since 2007, the energy consumption of the global Internet's servers and networks, and the corresponding CO2 emissions, have exceeded those of worldwide air traffic. To address this growing energy demand and its ecological impact, the Collaborative Research Center HAEC aims to research technologies that enable computing systems of high energy efficiency without compromising high performance. A straightforward way to improve energy efficiency is to reduce the energy consumption of every individual hardware component in a system.
However, rather than ignoring the characteristics of applications, user communities, and contexts, HAEC strives for a comprehensive application-aware approach. Since computational problems require the parallel execution of computational operations with complex and problem-specific intercommunication patterns, HAEC's research focuses on highly adaptive hardware interconnection systems capable of adapting to software needs, achieving a substantially higher level of efficiency than currently possible with fixed connections. In addition, energy efficiency is addressed by novel software-level adaptation schemes that focus on the utility of applications under different environmental conditions. This requires monitoring the system states of applications and hardware, and devising run-time strategies that improve application utility by taking into account optimizations from both ends.
On the hardware side, a novel, highly adaptive and energy-efficient interconnection architecture will be investigated and demonstrated in the third (i.e., final) 4-year phase of HAEC (2019–2023). While state-of-the-art CMOS technology will be considered initially, new insights on efficient communication and computation components from other Paths shall be used to model the energy/performance trade-offs when available. On the software side, HAEC's goal is to provide comprehensive energy-adaptive software technology that exploits the versatile interconnection architecture for high application utility and that will require many innovations for run-time optimization.
Publications and Technical Reports
2020
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining description logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there is a growing interest in extending this work to the quantitative case. In this article, we complement existing results on the combination of DLs with metric temporal logics by introducing interval-rigid concept and role names. Elements included in an interval-rigid concept or role name are required to stay in it for some specified amount of time. We investigate several combinations of (metric) temporal logics with ALC by either allowing temporal operators only on the level of axioms or also applying them to concepts. In contrast to most existing work on the topic, we consider a timeline based on the integers and also allow assertional axioms. We show that the worst-case complexity does not increase beyond the previously known bound of 2-ExpSpace and investigate in detail how this complexity can be reduced by restricting the temporal logic and the occurrences of interval-rigid names.
@article{ BaBoKoOzTh-TOCL20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.1145/3399443}, journal = {ACM Transactions on Computational Logic}, month = {August}, number = {4}, pages = {30:1--30:46}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {21}, year = {2020}, }
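As an invented example of the interval-rigidity constraint: declaring the concept name \(\mathsf{Rebooting}\) to be 3-rigid requires every element that enters \(\mathsf{Rebooting}\) to stay in it for at least 3 consecutive time points. Combined with a metric operator, an axiom such as
\[ \mathsf{Overloaded} \;\sqsubseteq\; \Diamond_{[1,5]}\,\mathsf{Rebooting} \]
states that an overloaded element starts rebooting after at least 1 and at most 5 time steps, and the interval-rigidity of \(\mathsf{Rebooting}\) then guarantees that the rebooting phase lasts at least 3 time points.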
Abstract BibTeX Entry DOI
The project "Semantic Technologies for Situation Awareness" was concerned with detecting certain critical situations from data obtained by observing a complex hard- and software system, in order to trigger actions that allow this system to save energy. The general idea was to formalize situations as ontology-mediated queries, but in order to express the relevant situations, both the employed ontology language and the query language had to be extended. In this paper we sketch the general approach and then concentrate on reporting the formal results obtained for reasoning in these extensions, but do not describe the application that triggered these extensions in detail.
@article{ BBKTT-KI20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Veronika {Thost} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.1007/s13218-020-00694-3}, journal = {{KI} -- K{\"u}nstliche Intelligenz}, number = {4}, pages = {291--301}, title = {Semantic Technologies for Situation Awareness}, volume = {34}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI
We present a method for answering ontology-mediated queries for DL-Lite extended with a concrete domain, where we allow concrete domain predicates to be used in the query as well. Our method is based on query rewriting, a well-known technique for ontology-based query answering (OBQA), where the knowledge provided by the ontology is compiled into the query so that the rewritten query can be evaluated directly over a database. This technique reduces the problem of query answering w.r.t. an ontology to query evaluation over a database instance. Specifically, we consider members of the DL-Lite family extended with unary and binary concrete domain predicates over the real numbers. While approaches for query rewriting DL-Lite with these concrete domains have been investigated theoretically, they follow a combined approach in which the data is processed as well, and they require the concrete domain values occurring in the data to be known in advance, which makes the procedure data-dependent. In contrast, we show how rewritings can be computed in a data-independent fashion.
@inproceedings{ AlKoTu-GCAI-19, author = {Christian {Alrabbaa} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence}, doi = {https://doi.org/10.29007/gqll}, editor = {Diego {Calvanese} and Luca {Iocchi}}, pages = {15--27}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Practical Query Rewriting for DL-Lite with Numerical Predicates}, volume = {65}, year = {2019}, }
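A sketch of the kind of rewriting involved, with invented names: given the axiom \(\mathsf{FeverPatient} \sqsubseteq \exists \mathsf{temp}.{>}_{38}\) and the query \(q(x) = \exists v.\,\mathsf{temp}(x,v) \wedge v > 37\), every model assigns each fever patient some temperature value above 38, and hence above 37, so the rewriting adds a disjunct that does not mention any concrete value:
\[ q'(x) \;=\; \big(\exists v.\,\mathsf{temp}(x,v) \wedge v > 37\big) \;\vee\; \mathsf{FeverPatient}(x). \]
The point of the approach above is precisely that such rewritings can be computed without inspecting the concrete values that occur in the data.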
Abstract BibTeX Entry PDF File DOI
We present a method for answering ontology-mediated queries for DL-Lite extended with a concrete domain, where we allow concrete domain predicates to be used in the query as well. Our method is based on query rewriting, a well-known technique for ontology-based query answering (OBQA), where the knowledge provided by the ontology is compiled into the query so that the rewritten query can be evaluated directly over a database. This technique reduces the problem of query answering w.r.t. an ontology to query evaluation over a database instance. Specifically, we consider members of the DL-Lite family extended with unary and binary concrete domain predicates over the real numbers. While approaches for query rewriting DL-Lite with these concrete domains have been investigated theoretically, they follow a combined approach in which the data is processed as well, and they require the concrete domain values occurring in the data to be known in advance, which makes the procedure data-dependent. In contrast, we show how rewritings can be computed in a data-independent fashion.
@techreport{ AlKoTu-LTCS-19-06, address = {Dresden, Germany}, author = {Christian {Alrabbaa} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.255}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {19-06}, title = {Practical Query Rewriting for \textit{DL-Lite} with Numerical Predicates (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File
Stream-based reasoning systems process data stemming from different sources that are received over time. In such applications, reasoning needs to cope with the temporal dimension and should be resilient against inconsistencies in the data. Motivated by such settings, this paper addresses the problem of handling inconsistent data in a temporal version of ontology-mediated query answering. We consider a recently proposed temporal query language that combines conjunctive queries with operators of propositional linear temporal logic (LTL), and study these queries under three inconsistency-tolerant semantics that have been introduced for querying inconsistent description logic knowledge bases. We investigate their complexity for temporal \(\mathcal{EL}_\bot\) and \(\text{DL-Lite}_\mathcal{R}\) knowledge bases. In particular, we consider two different cases, depending on the presence of negations in the query. Furthermore, we complete the complexity picture for the consistent case. We also provide two approaches toward practical algorithms for inconsistency-tolerant temporal query answering.
@article{ BouKooTur2019, author = {Camille {Bourgaux} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, journal = {Semantic Web}, number = {3}, pages = {475--521}, title = {Ontology-mediated query answering over temporal and inconsistent data}, volume = {10}, year = {2019}, }
Abstract BibTeX Entry PDF File
Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently, one approach is to rewrite the ontology into Datalog, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from \(\mathcal{ELH}\) to Horn-\(\mathcal{SHIQ}\). We go one step further and present one such data-independent rewriting technique for Horn-\(\mathcal{SRIQ}_\sqcap\), the extension of Horn-\(\mathcal{SHIQ}\) that supports role chain axioms, an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size, and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.
@inproceedings{ CaGoKo-DL19, address = {Oslo, Norway}, author = {David {Carral} and Larry {Gonz\'alez} and Patrick {Koopmann}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {From {Horn}-{$\mathcal{SRIQ}$} to {Datalog}: A Data-Independent Transformation that Preserves Assertion Entailment (Abstract)}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) Evaluation Files
Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently, one approach is to rewrite the ontology into Datalog, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from \(\mathcal{ELH}\) to Horn-\(\mathcal{SHIQ}\). We go one step further and present one such data-independent rewriting technique for Horn-\(\mathcal{SRIQ}_\sqcap\), the extension of Horn-\(\mathcal{SHIQ}\) that supports role chain axioms, an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size, and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.
@inproceedings{ CaGoKo-AAAI-19, author = {David {Carral} and Larry {Gonz\'alez} and Patrick {Koopmann}}, booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI'19)}, editor = {Pascal Van {Hentenryck} and Zhi-Hua {Zhou}}, pages = {2736--2743}, publisher = {{AAAI} Press}, title = {From {Horn}-{$\mathcal{SRIQ}$} to {Datalog}: {A} Data-Independent Transformation that Preserves Assertion Entailment}, year = {2019}, }
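To convey the flavor of such a rewriting with invented axioms: the Horn axioms
\[ A \sqsubseteq \exists r.B, \qquad B \sqsubseteq C, \qquad \exists r.C \sqsubseteq D \]
together entail \(A \sqsubseteq D\), but plain Datalog cannot introduce the anonymous \(r\)-successor required by the first axiom. A data-independent rewriting therefore compiles the interaction of the axioms into existential-free rules such as \(D(x) \leftarrow A(x)\), which preserve exactly the entailed assertions over any data set.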
Abstract BibTeX Entry PDF File DOI
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. Description logics (DLs) provide a well-suited formalism to describe and reason about terminological knowledge, used in many areas to specify background knowledge on the domain. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
@inproceedings{ DKT-iFM-19, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni{-}Yasmin {Turhan}}, booktitle = {Proceedings of the 15th International Conference on Integrated Formal Methods (iFM'19)}, doi = {https://doi.org/10.1007/978-3-030-34968-4\_11}, editor = {Wolfgang {Ahrendt} and Silvia Lizeth {Tapia Tarifa}}, note = {Best Paper Award.}, pages = {194--211}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Ontology-Mediated Probabilistic Model Checking}, volume = {11918}, year = {2019}, }
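For readers unfamiliar with the input formalism: a probabilistic guarded command has the shape \(\mathit{guard} \to p_1 : \mathit{update}_1 + \dots + p_k : \mathit{update}_k\); an invented command in the spirit of the multi-server case study is
\[ \mathit{load} > 8 \;\to\; 0.9 : (\mathit{mode}' = \mathit{fast}) \;+\; 0.1 : (\mathit{mode}' = \mathit{slow}). \]
In ontology-mediated PMC, DL reasoning over the ontology determines, for instance, which platform or situation concepts the modelled system satisfies, and thereby which commands apply in the analysed context.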
Abstract BibTeX Entry PDF File DOI
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. On the other hand, description logics (DLs) provide a well-suited formalism to describe and reason about static knowledge, used in many areas to specify domain knowledge in an ontology. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
@techreport{ DuKoTu-LTCS-19-05, address = {Dresden, Germany}, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.254}, institution = {TU Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {19-05}, title = {Ontology-Mediated Probabilistic Model Checking (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report)
We present some initial results on ontology-based query answering with description logic ontologies that may employ temporal and probabilistic operators on concepts and axioms. Specifically, we consider description logics extended with operators from linear temporal logic (LTL), as well as subjective probability operators, and an extended query language in which conjunctive queries can be combined using these operators. We first show some complexity results for the setting in which either only temporal operators or only probabilistic operators may be used, both in the ontology and in the query, and then show a 2-ExpSpace lower bound for the setting in which both types of operators can be used together.
@inproceedings{ Ko-DL19b, address = {Oslo, Norway}, author = {Patrick {Koopmann}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Maybe Eventually? Towards Combining Temporal and Probabilistic Description Logics and Queries}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report)
We investigate ontology-based query answering for data that are both temporal and probabilistic, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework for representing temporal probabilistic data, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexities of evaluating queries in this language in various settings. While in some cases, combining the temporal and the probabilistic dimension in such a way comes at the cost of increased complexity, we also determine cases for which this increase can be avoided.
@inproceedings{ Ko-AAAI-19, author = {Patrick {Koopmann}}, booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI'19)}, editor = {Pascal Van {Hentenryck} and Zhi-Hua {Zhou}}, pages = {2903--2910}, publisher = {{AAAI} Press}, title = {Ontology-Based Query Answering for Probabilistic Temporal Data}, year = {2019}, }
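An invented query in the combined language described above, mixing an LTL operator and a probability operator over a conjunctive query:
\[ P_{\geq 0.8}\Big(\Diamond\,\big(\exists y.\,\mathsf{detects}(x,y) \wedge \mathsf{Anomaly}(y)\big)\Big) \]
asks for all sensors \(x\) that, with probability at least 0.8, eventually detect an anomaly; the ontology can contribute, e.g., that certain kinds of readings count as anomalies.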
Abstract BibTeX Entry PDF File
We investigate ontology-based query answering for data that are both temporal and probabilistic, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework for representing temporal probabilistic data, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexities of evaluating queries in this language in various settings. While in some cases, combining the temporal and the probabilistic dimension in such a way comes at the cost of increased complexity, we also determine cases for which this increase can be avoided.
@inproceedings{ Ko-DL19, address = {Oslo, Norway}, author = {Patrick {Koopmann}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Ontology-Based Query Answering for Probabilistic Temporal Data (Abstract)}, volume = {2373}, year = {2019}, }
2018
Abstract BibTeX Entry PDF File
Even for small logic programs, the number of resulting answer sets can be tremendous. In such cases, users might be incapable of comprehending the space of answer sets as a whole, or of identifying a specific answer set according to their needs. To overcome this difficulty, we propose a general formal framework that takes an arbitrary logic program as input and allows for navigating the space of answer sets in a systematic, interactive way analogous to faceted browsing. The navigation is carried out stepwise, where each step narrows down the remaining solutions, eventually arriving at a single one. We formulate two navigation modes, a stringent conflict-avoiding mode and a "free" mode in which conflicting selections of facets might occur. For the latter mode, we provide efficient algorithms for resolving the conflicts. We provide an implementation of our approach and demonstrate that our framework is able to handle logic programs for which it is currently infeasible to retrieve all answer sets.
@inproceedings{ AlRuSc-RR18, author = {Christian {Alrabbaa} and Sebastian {Rudolph} and Lukas {Schweizer}}, booktitle = {Proceedings of Rules and Reasoning - Second International Joint Conference, RuleML+RR 2018}, editor = {Christoph Benzm{\"{u}}ller and Francesco Ricca and Xavier {Parent} and Dumitru {Roman}}, pages = {211--225}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Faceted Answer-Set Navigation}, volume = {11092}, year = {2018}, }
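A toy example of the navigation idea (program invented): the disjunctive program
\[ a \vee b. \qquad c \leftarrow a. \]
has exactly the two answer sets \(\{a, c\}\) and \(\{b\}\). Activating the facet \(a\) narrows the space to the single answer set \(\{a, c\}\); in the conflict-avoiding mode the facet \(b\) is then no longer offered, whereas in the free mode it may still be selected, creating a conflict that the proposed algorithms resolve.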
Abstract BibTeX Entry PDF File DOI
Finding suitable candidates for clinical trials is a labor-intensive task that requires expert medical knowledge. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. We investigate the issues involved in designing formal query languages for selecting patients that are eligible for a given clinical trial, leveraging existing ontology-based query answering techniques. In particular, we propose to use a temporal extension of existing approaches for accessing data through ontologies written in Description Logics. We sketch how such a query answering system could work and show that eligibility criteria and patient data can be adequately modeled in our formalism.
@inproceedings{ BaBF-HQA18, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 1st Int.\ Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'18), Companion of the The Web Conference 2018}, doi = {https://doi.org/10.1145/3184558.3191538}, pages = {1069--1074}, publisher = {ACM}, title = {Patient Selection for Clinical Trials Using Temporalized Ontology-Mediated Query Answering}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Ontology-mediated query answering can be used to access large data sets through a mediating ontology. It has drawn considerable attention in the Description Logic (DL) community where both the complexity of query answering and practical query answering approaches based on rewriting were investigated in detail. Surprisingly, there is still a gap in what is known about the data complexity of query answering w.r.t. ontologies formulated in the inexpressive DL FL0. While it is known that the data complexity of answering conjunctive queries w.r.t. FL0 ontologies is coNP-complete, the exact complexity of answering instance queries was open until now. In the present paper, we show that answering instance queries w.r.t. FL0 ontologies is in P for data complexity. Together with the known lower bound of P-completeness for a fragment of FL0, this closes the gap mentioned above.
@inproceedings{ BaMaPe-RoD-18, author = {Franz {Baader} and Pavlos {Marantidis} and Maximilian {Pensel}}, booktitle = {Proc.\ of the Reasoning on Data Workshop (RoD'18), Companion of the The Web Conference 2018}, doi = {https://dx.doi.org/10.1145/3184558.3191618}, pages = {1603--1607}, publisher = {ACM}, title = {The Data Complexity of Answering Instance Queries in $\mathcal{FL}_0$}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Evaluation Files
Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently, one approach is to rewrite the ontology into Datalog, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from \(\mathcal{ELH}\) to Horn-\(\mathcal{SHIQ}\). We go one step further and present one such data-independent rewriting technique for Horn-\(\mathcal{SRIQ}_\sqcap\), the extension of Horn-\(\mathcal{SHIQ}\) that supports role chain axioms, an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size, and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.
@techreport{ CaGoKo-18-14, author = {David {Carral} and Larry {Gonz\'alez} and Patrick {Koopmann}}, doi = {https://doi.org/10.25368/2022.249}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-14}, title = {From {Horn}-{$\mathcal{SRIQ}$} to {Datalog}: A Data-Independent Transformation that Preserves Assertion Entailment (Extended Version)}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Evaluation Files
Modeling context-dependent systems for their analysis is challenging, as verification tools usually rely on an input language close to imperative programming languages, which need not support the description of contexts well. We introduce the concept of contextualized programs, where operational behaviors and context knowledge are modeled separately using domain-specific formalisms. For behaviors specified in a stochastic guarded-command language and contextual knowledge given by OWL description logic ontologies, we develop a technique to efficiently incorporate contextual information into behavioral descriptions by reasoning about the ontology. We show how our presented concepts support and facilitate the quantitative analysis of context-dependent systems using probabilistic model checking. For this, we evaluate our implementation on a multi-server system case study.
@techreport{ DuKoTu-LTCS-18-09, address = {Dresden, Germany}, author = {Clemens {Dubslaff} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.244}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-09}, title = {Contextualized Programs for Ontology-Mediated Probabilistic System Analysis}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Description Logic actions specify adaptations of description logic interpretations based on some preconditions defined using a description logic. We consider DL actions in which preconditions can be specified using DL axioms as well as conjunctive queries, and combinations thereof. We investigate complexity bounds for the executability and the projection problem for these actions, which respectively ask whether an action can be executed on models of an interpretation, and which entailments are satisfied after an action has been executed on this model. In addition, we consider a set of new reasoning tasks concerned with conflicts and interactions that may arise if two actions are executed at the same time. Since these problems have not been investigated before for Description Logic actions, we investigate the complexity of these tasks both for actions with conjunctive queries and without them. Finally, we consider the verification problem for Golog programs formulated over our family of actions. Our complexity analysis considers several expressive DLs, and we provide tight complexity bounds for those for which the exact complexity of conjunctive query entailment is known.
@techreport{ Ko-LTCS-18-08, address = {Dresden, Germany}, author = {Patrick {Koopmann}}, doi = {https://doi.org/10.25368/2022.243}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-08}, title = {Actions with Conjunctive Queries: Projection, Conflict Detection and Verification}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
We investigate ontology-based query answering for data that are both temporal and probabilistic, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework for representing temporal probabilistic data, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexities of evaluating queries in this language in various settings. While in some cases, combining the temporal and the probabilistic dimension in such a way comes at the cost of increased complexity, we also determine cases for which this increase can be avoided.
@techreport{ Koopmann18tr, address = {Dresden, Germany}, author = {Patrick {Koopmann}}, doi = {https://doi.org/10.25368/2022.248}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-13}, title = {Ontology-Based Query Answering for Probabilistic Temporal Data (Extended Version)}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report)
Especially in the field of stream reasoning, there is an increased interest in reasoning about temporal data in order to detect situations of interest or complex events. Ontologies have proved to be a useful means to infer missing information from incomplete data, or simply to allow a higher-order vocabulary to be used in the event descriptions. Motivated by this, ontology-based temporal query answering has been proposed as a means for the recognition of situations and complex events. But often, the data to be processed contain not only temporal information, but also probabilistic information, for example because of uncertain sensor measurements. While there has been a plethora of research on ontology-based temporal query answering, little is known so far about querying temporal probabilistic data using ontologies. This work addresses this problem by introducing a temporal query language that extends a well-investigated temporal query language with probability operators, and by investigating the complexity of answering queries in this language together with ontologies formulated in the description logic EL.
@inproceedings{ Ko-DKB-KIK-18, author = {Patrick {Koopmann}}, booktitle = {Proceedings of the 7th Workshop on Dynamics of Knowledge and Belief (DKB-2018) and the 6th Workshop KI \& Kognition (KIK-2018)}, pages = {68--79}, publisher = {CEUR-WS.org}, series = {CEUR-Workshop Proceedings}, title = {Ontology-Mediated Query Answering for Probabilistic Temporal Data with {$\mathcal{EL}$} Ontologies}, volume = {2194}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Especially in the field of stream reasoning, there is an increased interest in reasoning about temporal data in order to detect situations of interest or complex events. Ontologies have proved to be a useful means to infer missing information from incomplete data, or simply to allow a higher-order vocabulary to be used in the event descriptions. Motivated by this, ontology-based temporal query answering has been proposed as a means for the recognition of situations and complex events. But often, the data to be processed contain not only temporal information, but also probabilistic information, for example because of uncertain sensor measurements. While there has been a plethora of research on ontology-based temporal query answering, little is known so far about querying temporal probabilistic data using ontologies. This work addresses this problem by introducing a temporal query language that extends a well-investigated temporal query language with probability operators, and by investigating the complexity of answering queries in this language together with ontologies formulated in the description logic EL.
@techreport{ Ko-LTCS-18-07, address = {Dresden, Germany}, author = {Patrick {Koopmann}}, doi = {https://doi.org/10.25368/2022.242}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-07}, title = {Ontology-Mediated Query Answering for Probabilistic Temporal Data with {$\mathcal{EL}$} Ontologies (Extended Version)}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI
Golog programs allow modeling complex behaviour of agents by combining primitive actions defined in a Situation Calculus theory using imperative and non-deterministic programming language constructs. In general, verifying temporal properties of Golog programs is undecidable. One way to establish decidability is to restrict the logic used by the program to a Description Logic (DL), for which some complexity upper bounds for the verification problem have recently been established. However, so far it was open whether these results are tight, and lightweight DLs such as EL have not been studied at all. Furthermore, these results only apply to a setting where actions do not consume time and the properties to be verified refer to the timeline only in a qualitative way. In many applications, this is an unrealistic assumption. In this work, we study the verification problem for timed Golog programs, in which actions can be assigned differing durations and temporal properties are specified in a metric branching-time logic. This makes it possible to annotate temporal properties with time intervals over which they are evaluated, for example to specify that some property should hold for at least n time units, or should become satisfied within some specified time window. We establish tight complexity bounds for the verification problem for both expressive and lightweight DLs. Our lower bounds already apply to a very limited fragment of the verification problem, and close open complexity bounds for the non-metrical cases studied before.
@techreport{ KoZa-LTCS-18-06, address = {Dresden, Germany}, author = {Patrick {Koopmann} and Benjamin {Zarrie{\ss}}}, doi = {https://doi.org/10.25368/2022.241}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-06}, title = {On the Complexity of Verifying Timed {Golog} Programs over Description Logic Actions (Extended Version)}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File ©AAAI
Querying large datasets with incomplete and vague data is still a challenge. Ontology-based query answering extends standard database query answering by background knowledge from an ontology to augment incomplete data. We focus on ontologies written in rough description logics (DLs), which allow vague knowledge to be represented by partitioning the domain of discourse into classes of indiscernible elements. In this paper, we extend the combined approach for ontology-based query answering to a variant of the DL \(\mathcal{ELH}_\bot\) augmented with rough concept constructors. We show that this extension preserves the good computational properties of classical EL and can be implemented by standard database systems.
@inproceedings{ PeThTu-KR-18, author = {Rafael {Pe\~naloza} and Veronika {Thost} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of 16. International Conference on Principles of Knowledge Representation and Reasoning (KR 2018)}, editor = {Michael {Thielscher} and Francesca {Toni}}, pages = {399--408}, series = {AAAI}, title = {Query Answering for Rough $\mathcal{EL}$ Ontologies}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File ©Springer-Verlag
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@inproceedings{ BaBoKoOzTh-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {60--76}, series = {Lecture Notes in Computer Science}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {10483}, year = {2017}, }
Abstract BibTeX Entry PDF File
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@inproceedings{ BBK+-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@techreport{ BaBoKoOzTh-LTCS-17-03, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.233}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-03}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File ©IJCAI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@inproceedings{ BaBL-IJCAI17, address = {Melbourne, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {786--792}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains}, year = {2017}, }
BibTeX Entry PDF File
@inproceedings{ BaBL-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@techreport{ BaBL-LTCS-17-04, address = {Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.234}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-04}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
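As a toy illustration of what n-ary built-in predicates add (our example, not from the paper): over the integers with the binary comparison <, a conjunctive query can ask for employees whose salary is below their bonus, e.g. salary(x, s) ∧ bonus(x, b) ∧ s < b, whereas unary built-in predicates can only compare a value against fixed constants, as in salary(x, s) ∧ >_1000(s).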
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI (The final publication is available at link.springer.com) ©Springer International Publishing
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@inproceedings{ BaKoTu-FroCoS-17, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {Frontiers of Combining Systems: 11th International Symposium}, doi = {https://doi.org/10.1007/978-3-319-66167-4_5}, pages = {77--94}, publisher = {Springer International Publishing}, series = {Lecture Notes in Computer Science}, title = {Using Ontologies to Query Probabilistic Numerical Data}, volume = {10483}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@techreport{ BaKoTu-LTCS-17-05, address = {Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.235}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {See \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-05}, title = {Using Ontologies to Query Probabilistic Numerical Data (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
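A minimal worked example of the kind of probability computed here (our own numbers): if an uncertain sensor reading is modelled by a Gaussian distribution, temp(s1) ∼ N(25, 2²), then the probability that a query atom requiring temp(s1) > 27 holds is obtained by integrating the density, P = ∫_27^∞ (1/(2√(2π))) · e^(−(x−25)²/8) dx ≈ 0.16, something that a discrete distribution over finitely many exact values could only approximate.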
Abstract BibTeX Entry PDF File
Probabilistic databases (PDBs) are usually incomplete, e.g., contain only the facts that have been extracted from the Web with high confidence. However, missing facts are often treated as being false, which leads to unintuitive results when querying PDBs. Recently, open-world probabilistic databases (OPDBs) were proposed to address this issue by allowing probabilities of unknown facts to take any value from a fixed probability interval. In this paper, we extend OPDBs by Datalog+/- ontologies, under which both upper and lower probabilities of queries become even more informative, enabling us to distinguish queries that were indistinguishable before. We show that the dichotomy between P and PP in (Open)PDBs can be lifted to the case of first-order rewritable positive programs (without negative constraints); and that the problem can become NP^PP-complete, once negative constraints are allowed. We also propose an approximating semantics that circumvents the increase in complexity caused by negative constraints.
@inproceedings{ BoCL-AAAI17, address = {San Francisco, USA}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 31st AAAI Conf.\ on Artificial Intelligence (AAAI'17)}, editor = {Satinder {Singh} and Shaul {Markovitch}}, pages = {1063--1069}, publisher = {AAAI Press}, title = {Ontology-Mediated Queries for Probabilistic Databases}, year = {2017}, }
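To sketch the open-world effect (toy example of ours): in an OPDB with threshold λ = 0.3, a fact missing from the database may take any probability in [0, 0.3] instead of being fixed to 0, so a query is assigned an interval of probabilities [P_min, P_max]. An ontological rule such as employs(x, y) → person(y) then forces an unknown fact person(b) to hold with probability at least that of a known fact employs(a, b), tightening the lower bound of queries about b.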
BibTeX Entry PDF File
@inproceedings{ BoCL-DL17, address = {Montpellier, France}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Ontology-Mediated Queries for Probabilistic Databases (Extended Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy description logics (FDLs) are knowledge representation formalisms capable of dealing with imprecise knowledge by allowing intermediate membership degrees in the interpretation of concepts and roles. One option for dealing with these intermediate degrees is to use the so-called Gödel semantics, under which conjunction is interpreted by the minimum of the degrees of the conjuncts. Despite its apparent simplicity, developing reasoning techniques for expressive FDLs under this semantics is a hard task. In this paper, we introduce two new algorithms for reasoning in very expressive FDLs under Gödel semantics. They combine the ideas of a previous automata-based algorithm for Gödel FDLs with the known crispification and tableau approaches for FDL reasoning. The results are the first two practical algorithms capable of reasoning in infinitely valued FDLs supporting general concept inclusions.
@article{ BoPe-IJAR17, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1016/j.ijar.2016.12.014}, journal = {International Journal of Approximate Reasoning}, pages = {60--101}, title = {Algorithms for Reasoning in Very Expressive Description Logics under Infinitely Valued {G}{\"o}del Semantics}, volume = {83}, year = {2017}, }
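For readers unfamiliar with this semantics, the Gödel t-norm and its residuum interpret conjunction and implication on degrees d, e ∈ [0,1] as d ⊗ e = min(d, e) and d ⇒ e = 1 if d <= e, and e otherwise; so an individual that belongs to C with degree 0.7 and to D with degree 0.4 belongs to C ⊓ D with degree min(0.7, 0.4) = 0.4.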
Abstract BibTeX Entry PDF File
In ontology-based systems that process data stemming from different sources and that is received over time, as in context-aware systems, reasoning needs to cope with the temporal dimension and should be resilient against inconsistencies in the data. Motivated by such settings, this paper addresses the problem of handling inconsistent data in a temporal version of ontology-based query answering. We consider a recently proposed temporal query language that combines conjunctive queries with operators of propositional linear temporal logic and extend to this setting three inconsistency-tolerant semantics that have been introduced for querying inconsistent description logic knowledge bases. We investigate their complexity for DL-LiteR temporal knowledge bases, and furthermore complete the picture for the consistent case.
@inproceedings{ BoTu-ISWC-17, author = {Camille {Bourgaux} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 16th International Semantic Web Conference (ISWC 2017)}, editor = {Claudia {d'Amato} and Miriam {Fernandez}}, series = {LNCS}, title = {Temporal Query Answering in DL-Lite over Inconsistent Data}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
In ontology-based systems that process data stemming from different sources and that is received over time, as in context-aware systems, reasoning needs to cope with the temporal dimension and should be resilient against inconsistencies in the data. Motivated by such settings, this paper addresses the problem of handling inconsistent data in a temporal version of ontology-based query answering. We consider a recently proposed temporal query language that combines conjunctive queries with operators of propositional linear temporal logic and extend to this setting three inconsistency-tolerant semantics that have been introduced for querying inconsistent description logic knowledge bases. We investigate their complexity for DL-LiteR temporal knowledge bases, and furthermore complete the picture for the consistent case.
@techreport{ BoTu-LTCS-17-06, address = {Dresden, Germany}, author = {Camille {Bourgaux} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.236}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{http://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-06}, title = {Temporal Query Answering in {DL-Lite} over Inconsistent Data}, type = {LTCS-Report}, year = {2017}, }
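Inconsistency-tolerant query answering is commonly based on repairs, i.e., inclusion-maximal subsets of the data that are consistent with the ontology. As a toy illustration (our example): with facts A(a) and B(a) and the axiom A ⊑ ¬B, the repairs are {A(a)} and {B(a)}; under the commonly considered semantics, a query holds bravely if it holds w.r.t. some repair, holds under AR semantics if it holds w.r.t. every repair, and holds under IAR semantics if it holds w.r.t. the intersection of all repairs, so here A(a) is a brave but not an AR consequence.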
Abstract BibTeX Entry PDF File ©IJCAI
Forming the foundations of large-scale knowledge bases, probabilistic databases have been widely studied in the literature. In particular, probabilistic query evaluation has been investigated intensively as a central inference mechanism. However, despite its power, query evaluation alone cannot extract all the relevant information encompassed in large-scale knowledge bases. To exploit this potential, we study two inference tasks, namely finding the most probable database and the most probable hypothesis for a given query. As natural counterparts of most probable explanations (MPE) and maximum a posteriori hypotheses (MAP) in probabilistic graphical models, they can be used in a variety of applications that involve prediction or diagnosis tasks. We investigate these problems relative to a variety of query languages, ranging from conjunctive queries to ontology-mediated queries, and provide a detailed complexity analysis.
@inproceedings{ CeBL-IJCAI17, address = {Melbourne, Australia}, author = {Ismail Ilkan {Ceylan} and Stefan {Borgwardt} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {950--956}, title = {Most Probable Explanations for Probabilistic Database Queries}, year = {2017}, }
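A tiny worked instance of the MPE task (numbers ours): take a tuple-independent probabilistic database with facts a : 0.6 and b : 0.7 and the query Q = a ∨ b. The possible databases have probabilities P({a, b}) = 0.6 · 0.7 = 0.42, P({b}) = 0.4 · 0.7 = 0.28, P({a}) = 0.6 · 0.3 = 0.18, and P(∅) = 0.12; all but ∅ satisfy Q, so the most probable database for Q is {a, b}.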
BibTeX Entry PDF File
@inproceedings{ CeBL-DL17, address = {Montpellier, France}, author = {Ismail Ilkan {Ceylan} and Stefan {Borgwardt} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Most Probable Explanations for Probabilistic Database Queries (Extended Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File ©Springer International Publishing
While running times of ontology reasoners have been studied extensively, studies on the energy consumption of reasoning are scarce, and the energy efficiency of ontology reasoning is not yet fully understood. Earlier empirical studies on the energy consumption of ontology reasoners focused on reasoning on smartphones and used measurement methods prone to noise and side effects. This paper presents an evaluation of the energy efficiency of five state-of-the-art OWL reasoners on an ARM single-board computer that has built-in sensors to measure the energy consumption of CPUs and memory precisely. Using such a machine gives full control over installed and running software, active clusters, and CPU frequencies, allowing for a more precise and detailed picture of the energy consumption of ontology reasoning. Besides evaluating the energy consumption of reasoning, our study further explores the relationship between the computation power of the CPU, reasoning time, and energy consumption.
@inproceedings{ KoHaTu-JIST-2017, author = {Patrick {Koopmann} and Marcus {H\"ahnel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of JIST 2017}, publisher = {Springer International Publishing}, title = {Energy-Efficiency of {OWL} Reasoners---Frequency Matters}, year = {2017}, }
Abstract BibTeX Entry PDF File
In modelling real-world knowledge, there often arises a need to represent and reason with meta-knowledge. To equip description logics (DLs) for dealing with such ontologies, we enrich DL concepts and roles with finite sets of attribute–value pairs, called annotations, and allow concept inclusions to express constraints on annotations. We show that this may lead to increased complexity or even undecidability, and we identify cases where this increased expressivity can be achieved without incurring increased complexity of reasoning. In particular, we describe a tractable fragment based on the lightweight description logic EL.
@inproceedings{ KMOT-DL17, author = {Markus {Kr{\"{o}}tzsch} and Maximilian {Marx} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL 2017)}, month = {July}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Reasoning with Attributed Description Logics}, year = {2017}, }
Abstract BibTeX Entry PDF File
Graph-structured data is used to represent large information collections, called knowledge graphs, in many applications. Their exact format may vary, but they often share the concept that edges can be annotated with additional information, such as validity time or provenance information. Property Graph is a popular graph database format that also provides this feature. We give a formalisation of a generalised notion of Property Graphs, called multi-attributed relational structures (MARS), and introduce a matching knowledge representation formalism, multi-attributed predicate logic (MAPL). We analyse the expressive power of MAPL and suggest a simpler, rule-based fragment of MAPL that can be used for ontological reasoning on Property Graphs. To the best of our knowledge, this is the first approach to making Property Graphs and related data structures accessible to symbolic AI.
@inproceedings{ MKT-IJCAI17, author = {Maximilian {Marx} and Markus {Kr{\"{o}}tzsch} and Veronika {Thost}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {1188--1194}, publisher = {International Joint Conferences on Artificial Intelligence}, title = {Logic on {MARS:} Ontologies for generalised property graphs}, year = {2017}, }
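To convey the flavour of the data model (our rendering; details may differ from the paper): a property-graph edge becomes a fact carrying a set of attribute-value pairs, e.g. spouse(ann, bob)@{start: 1990, source: registry}, and MAPL rules may then quantify over such annotation sets, for instance to propagate a validity interval from asserted edges to inferred ones.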
Abstract BibTeX Entry PDF File DOI
Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational or relevant closure according to the (propositional) KLM postulates. If in DDLs with quantification a defeasible subsumption relationship holds between concepts, this relationship might also hold if these concepts appear in existential restrictions. Such nested defeasible subsumption relationships were not detected by earlier reasoning algorithms, neither for rational nor for relevant closure. In this report, we present a new approach for EL⊥ that alleviates this problem for relevant closure (the strongest form of preferential reasoning currently investigated) by the use of typicality models that extend classical canonical models by domain elements that individually satisfy any amount of consistent defeasible knowledge. We also show that a certain restriction on the domain of the typicality models in this approach yields inference results that correspond to the (weaker) more commonly known rational closure.
@techreport{ PeTu-LTCS-17-01, address = {Dresden, Germany}, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.231}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {17-01}, title = {Making Quantification Relevant again---the Case of Defeasible $\mathcal{EL}_{\bot}$}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File
Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational or relevant closure according to the (propositional) KLM postulates. If in DDLs with quantification a defeasible subsumption relationship holds between concepts, this relationship might also hold if these concepts appear in existential restrictions. Such nested defeasible subsumption relationships were not detected by earlier reasoning algorithms, neither for rational nor for relevant closure. Recently, we devised a new approach for EL⊥ that alleviates this problem for rational closure by the use of typicality models that extend classical canonical models by domain elements that individually satisfy any amount of consistent defeasible knowledge. In this paper we lift our approach to relevant closure and show that reasoning based on typicality models yields the missing entailments.
@inproceedings{ PeTu-DARE-17, author = {Maximilian {Pensel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 4th International Workshop on Defeasible and Ampliative Reasoning - {DARe}}, editor = {Richard {Booth} and Giovanni {Casini} and Ivan {Varzinczak}}, publisher = {CEUR-WS.org}, title = {Making Quantification Relevant again---the Case of Defeasible $\mathcal{EL}_{\bot}$}, year = {2017}, }
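The classic instance of the problem addressed here (a standard example, our phrasing): from the defeasible inclusion Bird ⊑ Flies (read: birds typically fly), one would expect the nested entailment that ∃owns.Bird defeasibly subsumes ∃owns.Flies, yet earlier materialisation-based procedures miss it because elements introduced by existential restrictions are never treated as typical. Typicality models add domain elements that satisfy maximal consistent sets of defeasible inclusions, so the expected entailment is recovered.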
Abstract BibTeX Entry Publication
Ontology-based data access (OBDA) augments classical query answering in databases by including domain knowledge provided by an ontology. An ontology captures the terminology of an application domain and describes domain knowledge in a machine-processable way. Formal ontology languages additionally provide semantics to these specifications. Systems for OBDA may thus apply logical reasoning to answer queries; they use the ontological knowledge to infer new information, which is only implicitly given in the data. Moreover, they usually employ the open-world assumption, which means that knowledge not stated explicitly in the data or inferred from it is assumed to be neither true nor false. However, classical OBDA regards the knowledge only w.r.t. a single moment, which means that information about time is not used for reasoning and hence lost; in particular, the queries generally cannot express temporal aspects. We investigate temporal query languages that allow accessing temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering regarding ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing, such as context recognition.
@thesis{ Thost-Diss-2017, address = {Dresden, Germany}, author = {Veronika {Thost}}, school = {Technische Universit\"{a}t Dresden}, title = {Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information}, type = {Doctoral Thesis}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
Ontologies may capture the terminology of an application domain and describe domain knowledge in a machine-processable way. Formal ontology languages, such as description logics, additionally provide semantics to these specifications. Systems for ontology-based data access (OBDA) may thus apply logical reasoning to answer queries over given data; they use the ontological knowledge to infer new information that is implicit in the data. However, the classical OBDA setting regards only a single moment, which means that information about time is not used for reasoning and that the queries cannot express temporal aspects. We investigate temporal query languages that allow accessing temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering regarding ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing.
@article{ Thost-KIJ2017, author = {Veronika {Thost}}, doi = {https://doi.org/10.1007/s13218-017-0510-z}, journal = {{KI}}, number = {4}, pages = {377--380}, title = {Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information (Extended Abstract)}, volume = {31}, year = {2017}, }
2016
Abstract BibTeX Entry PDF File
In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<, <=, >, >=} then collect all the individuals that belong to C with degree ~ t. We have then investigated the complexity of reasoning in the Description Logic τEL(deg), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to τEL(deg) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
@inproceedings{ ecaiBaFe16, author = {Franz {Baader} and Oliver {Fern{\'a}ndez Gil}}, booktitle = {{ECAI} 2016 - 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence {(PAIS} 2016)}, pages = {1096--1104}, publisher = {{IOS} Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Extending the Description Logic $\tau\mathcal{EL}(deg)$ with Acyclic TBoxes}, volume = {285}, year = {2016}, }
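A small illustrative example (ours, with made-up degrees): for C = Tall ⊓ ∃hasChild.Happy, an individual d satisfying only the first conjunct fully might be assigned deg(d, C) = 0.5 (the exact value depends on the membership function); d then belongs to the threshold concept C>=0.5 but not to C>=0.8, while C<0.8 collects exactly the individuals whose degree of membership in C falls below 0.8.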
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we investigate in this work an approach which allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
@article{ BoDiKr-JANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics (DLs) provide a means for representing vague knowledge about an application domain. In this paper, we study fuzzy extensions of conjunctive queries (CQs) over the DL SROIQ based on finite chains of degrees of truth. To answer such queries, we extend a well-known technique that reduces the fuzzy ontology to a classical one, and use classical DL reasoners as a black box. We improve the complexity of previous reduction techniques for finitely valued fuzzy DLs, which allows us to prove tight complexity results for answering certain kinds of fuzzy CQs. We conclude with an experimental evaluation of a prototype implementation, showing the feasibility of our approach.
@article{ BMPT-JoDS16, author = {Stefan {Borgwardt} and Theofilos {Mailis} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, doi = {http://dx.doi.org/10.1007/s13740-015-0055-y}, journal = {Journal on Data Semantics}, number = {2}, pages = {55--75}, title = {Answering Fuzzy Conjunctive Queries over Finitely Valued Fuzzy Ontologies}, volume = {5}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Reasoning for Description Logics with concrete domains and w.r.t. general TBoxes easily becomes undecidable. However, with some restriction on the concrete domain, decidability can be regained. We introduce a novel way to integrate a concrete domain D into the well-known description logic ALC; we call the resulting logic ALCP(D). We then identify sufficient conditions on D that guarantee decidability of the satisfiability problem, even in the presence of general TBoxes. In particular, we show decidability of ALCP(D) for several domains over the integers, for which decidability was open. More generally, this result holds for all negation-closed concrete domains with the EHD-property, which stands for 'the existence of a homomorphism is definable'. This technique has recently been used to show decidability of CTL* with local constraints over the integers.
@techreport{ CaTu-LTCS-16-01, address = {Dresden, Germany}, author = {Claudia {Carapelle} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.225}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {16-01}, title = {Decidability of $\mathcal{ALC}^P\!(\mathcal{D})$ for concrete domains with the EHD-property}, type = {LTCS-Report}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI
Reasoning for Description Logics with concrete domains and w.r.t. general TBoxes easily becomes undecidable. However, with some restriction on the concrete domain, decidability can be regained. We introduce a novel way to integrate concrete domains D into the well-known description logic ALC; we call the resulting logic ALCP(D). We then identify sufficient conditions on D that guarantee decidability of the satisfiability problem, even in the presence of general TBoxes. In particular, we show decidability of ALCP(D) for several domains over the integers, for which decidability was open. More generally, this result holds for all negation-closed concrete domains with the EHD-property, which stands for 'the existence of a homomorphism is definable'. This technique has recently been used to show decidability of CTL* with local constraints over the integers.
@inproceedings{ CaTu-ECAI-16, address = {Dresden, Germany}, author = {Claudia {Carapelle} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 22nd European Conference on Artificial Intelligence}, doi = {https://doi.org/10.3233/978-1-61499-672-9-1440}, editor = {Gal A. {Kaminka} and Maria {Fox} and Paolo {Bouquet} and Eyke {H{\"{u}}llermeier} and Virginia {Dignum} and Frank {Dignum} and Frank van {Harmelen}}, pages = {1440--1448}, publisher = {{IOS} Press}, title = {Description Logics Reasoning w.r.t. general TBoxes is decidable for Concrete Domains with the EHD-property}, year = {2016}, }
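A hedged sketch of the kind of constraint involved (our toy example): over a concrete domain of the integers with predicates such as < and =, one can model processes whose counter value strictly increases along a role succ, roughly Process ⊑ ∃succ.Process together with a constraint requiring the counter of the succ-successor to be greater than the current one; whether satisfiability w.r.t. general TBoxes stays decidable then hinges on the domain being negation-closed and having the EHD-property.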
Abstract BibTeX Entry PDF File
Large-scale knowledge graphs (KGs) are widely used in industry and academia, and provide excellent use-cases for ontologies. We find, however, that popular ontology languages, such as OWL and Datalog, cannot express even the most basic relationships on the normalised data format of KGs. Existential rules are more powerful, but may make reasoning undecidable. Normalising them to suit KGs often also destroys syntactic restrictions that ensure decidability and low complexity. We study this issue for several classes of existential rules and derive new syntactic criteria to recognise well-behaved rule-based ontologies over KGs.
@inproceedings{ KT-ISWC2016, author = {Markus {Kr{\"{o}}tzsch} and Veronika {Thost}}, booktitle = {Proceedings of the 15th International Semantic Web Conference (ISWC 2016)}, editor = {Yolanda {Gil} and Elena {Simperl} and Paul {Groth} and Freddy {Lecue} and Markus {Kr{\"{o}}tzsch} and Alasdair {Gray} and Marta {Sabou} and Fabian {Fl{\"{o}}ck} and Hideaki {Takeda}}, publisher = {Springer}, series = {LNCS}, title = {Ontologies for Knowledge Graphs: Breaking the Rules}, year = {2016}, }
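To illustrate the mismatch (our toy example): an existential rule such as person(x) → ∃y. hasParent(x, y) ∧ person(y) asserts the existence of new individuals, which OWL and Datalog cannot express; and rewriting such rules into the normalised triple format of a KG, say splitting an atom hasParent(x, y) into triple(x, hasParent, y)-style atoms, can destroy syntactic guards, so that standard decidability criteria for the original rule set no longer apply.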
2015
Abstract BibTeX Entry PDF File ©Springer-Verlag
In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.
@inproceedings{ BaBL-AI15, address = {Canberra, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 28th Australasian Joint Conference on Artificial Intelligence (AI'15)}, editor = {Bernhard {Pfahringer} and Jochen {Renz}}, pages = {21--33}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Temporal Conjunctive Queries in Expressive Description Logics with Transitive Roles}, volume = {9457}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.
@techreport{ BaBL-LTCS-15-17, address = {Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.222}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, number = {15-17}, title = {Temporal Conjunctive Queries in Expressive {DLs} with Non-simple Roles}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in databases towards deductive entailment since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As the query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as the ontology language we use the expressive DL SHQ. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem. In the course of this investigation, we also establish the complexity of consistency of Boolean knowledge bases in SHQ.
@article{ BaBL-JWS15, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.websem.2014.11.008}, journal = {Journal of Web Semantics}, pages = {71--93}, title = {Temporal Query Entailment in the Description Logic {$\mathcal{SHQ}$}}, volume = {33}, year = {2015}, }
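To make the query language concrete (toy query of ours): a TCQ places conjunctive queries inside LTL formulas, e.g. φ(x) = Critical(x) ∧ ◇⁻ ∃y (runs(x, y) ∧ Faulty(y)) asks for all systems that are critical now and at some previous time point ran a faulty component; entailment then asks whether φ(a) holds in every model of the temporal knowledge base.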
Abstract BibTeX Entry PDF File DOI
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the semantic web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we investigate in this work an approach which allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We also present first experimental results showing that our approach has the potential to be useful for practical applications.
@techreport{ BoDiKr-LTCS-15-13, address = {Dresden, Germany}, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.219}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKr-LTCS-15-13}}, number = {15-13}, title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics (DLs) can be used to represent and reason with vague knowledge. This family of logical formalisms is very diverse, each member being characterized by a specific choice of constructors, axioms, and triangular norms, which are used to specify the semantics. Unfortunately, it has recently been shown that the consistency problem in many fuzzy DLs with general concept inclusion axioms is undecidable. In this paper, we present a proof framework that allows us to extend these results to cover large classes of fuzzy DLs. On the other hand, we also provide matching decidability results for most of the remaining logics. As a result, we obtain a near-universal classification of fuzzy DLs according to the decidability of their consistency problem.
@article{ BoDP-AI15, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1016/j.artint.2014.09.001}, journal = {Artificial Intelligence}, pages = {23--55}, title = {The Limits of Decidability in Fuzzy Description Logics with General Concept Inclusions}, volume = {218}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows querying a database by using the language of an ontology, abstracting from the actual relations of the database. OBDA can sometimes be realized by compiling the information of the ontology into the query and the database. The resulting query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a generic temporal query language that combines linear temporal logic with queries over ontologies. This language is well-suited for expressing temporal properties of dynamic systems and is useful in context-aware applications that need to detect specific situations. We show that, if atemporal queries are rewritable in the sense described above, then the corresponding temporal queries are also rewritable such that we can answer them over a temporal database. We present three approaches to answering the resulting queries.
@article{ BoLT-JWS15, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, doi = {http://dx.doi.org/10.1016/j.websem.2014.11.007}, journal = {Journal of Web Semantics}, pages = {50--70}, title = {Temporalizing Rewritable Query Languages over Knowledge Bases}, volume = {33}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics (FDLs) combine classical Description Logics with the semantics of Fuzzy Logics in order to represent and reason with vague knowledge. Most FDLs using truth values from the interval [0,1] have been shown to be undecidable in the presence of a negation constructor and general concept inclusions. One exception is the class of FDLs whose semantics is based on the infinitely valued Gödel t-norm (G). We extend previous decidability results for the FDL G-ALC to deal with complex role inclusions, nominals, inverse roles, and qualified number restrictions. Our novel approach is based on a combination of the known crispification technique for finitely valued FDLs and an automata-based procedure for reasoning in G-ALC.
@techreport{ BoPe-LTCS-15-11, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, doi = {https://doi.org/10.25368/2022.217}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-11}, title = {Infinitely Valued G{\"o}del Semantics for Expressive Description Logics}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Fuzzy Description Logics (FDLs) combine classical Description Logics with the semantics of Fuzzy Logics in order to represent and reason with vague knowledge. Most FDLs using truth values from the interval [0,1] have been shown to be undecidable in the presence of a negation constructor and general concept inclusions. One exception is the class of FDLs whose semantics is based on the infinitely valued Gödel t-norm (G). We extend previous decidability results for the FDL G-ALC to deal with complex role inclusions, nominals, inverse roles, and qualified number restrictions. Our novel approach is based on a combination of the known crispification technique for finitely valued FDLs and an automata-based procedure for reasoning in G-ALC.
@inproceedings{ BoPe-FroCoS15, address = {Wroclaw, Poland}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)}, editor = {Carsten {Lutz} and Silvio {Ranise}}, pages = {49--65}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Reasoning in Expressive Description Logics under Infinitely Valued G{\"o}del Semantics}, volume = {9322}, year = {2015}, }
Abstract BibTeX Entry PDF File
Ontology-based query answering augments classical query answering in databases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We investigate temporal query answering w.r.t. ontologies formulated in DL-Lite, a family of description logics that captures the conceptual features of relational databases and was tailored for efficient query answering. We consider a recently proposed temporal query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL). In particular, we consider negation in the ontology and query language, and study both data and combined complexity of query entailment.
@inproceedings{ BoTh-GCAI15, author = {Stefan {Borgwardt} and Veronika {Thost}}, booktitle = {Proceedings of the 1st Global Conference on Artificial Intelligence (GCAI'15)}, editor = {Georg {Gottlob} and Geoff {Sutcliffe} and Andrei {Voronkov}}, pages = {51--65}, publisher = {EasyChair}, series = {EasyChair Proceedings in Computing}, title = {Temporal Query Answering in \textit{{DL-Lite}} with Negation}, volume = {36}, year = {2015}, }
Abstract BibTeX Entry PDF File ©IJCAI
Context-aware systems use data collected at runtime to recognize certain predefined situations and trigger adaptations. This can be implemented using ontology-based data access (OBDA), which augments classical query answering in databases by adopting the open-world assumption and including domain knowledge provided by an ontology. We investigate temporalized OBDA w.r.t. ontologies formulated in EL, a description logic that allows for efficient reasoning and is successfully used in practice. We consider a recently proposed temporalized query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL), and study both data and combined complexity of query entailment in this setting. We also analyze the satisfiability problem in the similar formalism EL-LTL.
@inproceedings{ BoTh-IJCAI15, address = {Buenos Aires, Argentina}, author = {Stefan {Borgwardt} and Veronika {Thost}}, booktitle = {Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15)}, editor = {Qiang {Yang} and Michael {Wooldridge}}, pages = {2819--2825}, publisher = {AAAI Press}, title = {Temporal Query Answering in the Description Logic {$\mathcal{EL}$}}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Context-aware systems use data about their environment for adaptation at runtime, e.g., for optimization of power consumption or user experience. Ontology-based data access (OBDA) can be used to support the interpretation of the usually large amounts of data. OBDA augments query answering in databases by dropping the closed-world assumption (i.e., the data is not assumed to be complete any more) and by including domain knowledge provided by an ontology. We focus on a recently proposed temporalized query language that allows combining conjunctive queries with the operators of the well-known propositional temporal logic LTL. In particular, we investigate temporalized OBDA w.r.t. ontologies in the DL EL, which allows for efficient reasoning and has been successfully applied in practice. We study both data and combined complexity of the query entailment problem.
@techreport{ BoTh-LTCS-15-08, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.214}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-08}, title = {Temporal Query Answering in {$\mathcal{EL}$}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File
Context-aware systems use data collected at runtime to recognize certain predefined situations and trigger adaptations. This can be implemented using ontology-based data access (OBDA), which augments classical query answering in databases by adopting the open-world assumption and including domain knowledge provided by an ontology. We investigate temporalized OBDA w.r.t. ontologies formulated in EL, a description logic that allows for efficient reasoning and is successfully used in practice. We consider a recently proposed temporalized query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL), and study both data and combined complexity of query entailment in this setting. We also analyze the satisfiability problem in the similar formalism EL-LTL.
@inproceedings{ BoTh-DL15, address = {Athens, Greece}, author = {Stefan {Borgwardt} and Veronika {Thost}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics (DL 2015)}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {83--87}, series = {CEUR Workshop Proceedings}, title = {Temporal Query Answering in {$\mathcal{EL}$}}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based query answering augments classical query answering in databases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We investigate temporal query answering w.r.t. ontologies formulated in DL-Lite, a family of description logics that captures the conceptual features of relational databases and was tailored for efficient query answering. We consider a recently proposed temporal query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL). In particular, we consider negation in the ontology and query language, and study both data and combined complexity of query entailment.
@techreport{ BoTh-LTCS-15-16, address = {Germany}, author = {Stefan {Borgwardt} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.221}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, number = {15-16}, title = {Temporal Query Answering in {{\textit{DL-Lite}}} with Negation}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
We investigate a combination of EL-axioms with operators of the linear temporal logic LTL. The complexity of deciding satisfiability is reduced when compared to ALC-LTL, but we also identify one setting where it remains the same. We additionally investigate the setting where GCIs are restricted to hold globally (at all time points), in which case the problem is PSpace-complete.
@techreport{ BoTh-LTCS-15-07, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.213}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-07}, title = {{LTL} over {$\mathcal{EL}$} Axioms}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
In Description Logic (DL) knowledge bases (KBs), information is typically captured by clear-cut concepts. For many practical applications, querying the KB by crisp concepts is too restrictive; a user might be willing to lose some precision in the query in exchange for a larger selection of answers. Similarity measures can offer a controlled way of gradually relaxing a query concept within a user-specified limit. In this paper we formalize the task of instance query answering for DL KBs using concepts relaxed by concept similarity measures (CSMs). We investigate computation algorithms for this task in the DL EL, their complexity, and the properties of the CSMs employed, depending on whether unfoldable or general TBoxes are used. For the case of general TBoxes we define a family of CSMs that take the full TBox information into account when assessing the similarity of concepts.
@article{ EcPeTu-JAL15, author = {Andreas {Ecke} and Rafael {Pe\~naloza} and Anni-Yasmin {Turhan}}, doi = {http://dx.doi.org/10.1016/j.jal.2015.01.002}, journal = {Journal of Applied Logic}, note = {Special Issue for the Workshop on Weighted Logics for {AI} 2013}, number = {4, Part 1}, pages = {480--508}, title = {Similarity-based Relaxed Instance Queries}, volume = {13}, year = {2015}, }
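A brief illustration of the relaxation (our example): given a CSM s and threshold t = 0.7, the relaxed query for C accepts an individual a whenever a is an instance of some concept D with s(C, D) >= 0.7; thus a query for ∃serves.Espresso may additionally return individuals known only to serve Ristretto, provided the measure rates the two descriptions as sufficiently similar.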
Abstract BibTeX Entry PDF File
Recently, an approach has been devised for employing concept similarity measures (CSMs) to relax instance queries over EL ontologies in a controlled way. The approach relies on similarity measures between pointed interpretations to yield CSMs with certain properties. In this paper we report on ELASTIQ, a first implementation of this approach, and propose initial optimizations for this novel inference. We also provide a first evaluation of ELASTIQ on the Gene Ontology.
@inproceedings{ EcPeTu-DL15, author = {Andreas {Ecke} and Maximilian {Pensel} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL-2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, month = {June}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {ELASTIQ: Answering Similarity-threshold Instance Queries in {$\mathcal{EL}$}}, venue = {Athens, Greece}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File
Fuzzy Description Logics (FDLs) generalize crisp ones by providing membership degree semantics. To offer efficient query answering for FDLs, it is desirable to extend the rewriting-based approach for DL-Lite to its fuzzy variants. For answering conjunctive queries over fuzzy DL-LiteR ontologies, we present an approach that employs the crisp rewriting as a black-box procedure and treats the degrees in a second rewriting step. This pragmatic approach yields a sound procedure for the Gödel-based fuzzy semantics, which we have implemented in the FLite reasoner that employs the Ontop system. A first evaluation of FLite suggests that one pays only a linear overhead for fuzzy queries.
@inproceedings{ MaTuZe-DL15, author = {Theofilos {Mailis} and Anni-Yasmin {Turhan} and Erik {Zenker}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL-2015})}, month = {June}, title = {A Pragmatic Approach to Answering CQs over Fuzzy \textit{DL-Lite}-ontologies - introducing FLite}, venue = {Athens, Greece}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
Ontology-based data access augments classical query answering over fact bases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We implemented temporal query answering w.r.t. ontologies formulated in the Description Logic DL-Lite. Focusing on temporal conjunctive queries (TCQs), which combine conjunctive queries via the operators of propositional linear temporal logic, we consider three approaches for answering them: an iterative algorithm that considers all data available; a window-based algorithm; and a rewriting approach, which translates the TCQs to be answered into SQL queries. Since the relevant ontological knowledge is already encoded into the latter queries, they can be answered by a standard database system. Our evaluation especially shows that implementations of both the iterative and the window-based algorithm answer TCQs within a few milliseconds, and that the former achieves constant performance, even if data is growing over time.
@techreport{ THOe-LTCS-15-12, address = {Dresden, Germany}, author = {Veronika {Thost} and Jan {Holste} and \"Ozg\"ur {\"Oz\c{c}ep}}, doi = {https://doi.org/10.25368/2022.218}, institution = {Chair of Automata Theory, TU Dresden}, number = {15-12}, title = {On Implementing Temporal Query Answering in {\textit{DL-Lite}}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File
Ontology-based data access augments classical query answering over fact bases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We implemented temporal query answering w.r.t. ontologies formulated in the Description Logic DL-Lite. Focusing on temporal conjunctive queries (TCQs), which combine conjunctive queries via the operators of propositional linear temporal logic, we consider three approaches for answering them: an iterative algorithm that considers all data available; a window-based algorithm; and a rewriting approach, which translates the TCQs to be answered into SQL queries. Since the relevant ontological knowledge is already encoded into the latter queries, they can be answered by a standard database system. Our evaluation especially shows that implementations of both the iterative and the window-based algorithm answer TCQs within a few milliseconds, and that the former achieves constant performance, even if data is growing over time.
@inproceedings{ THOe-DL15, address = {Athens, Greece}, author = {Veronika {Thost} and Jan {Holste} and \"Ozg\"ur {\"Oz\c{c}ep}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics (DL-2015)}, publisher = {CEUR Workshop Proceedings}, title = {On Implementing Temporal Query Answering in {\textit{DL-Lite}} (extended abstract)}, year = {2015}, }
Abstract BibTeX Entry PDF File
For reasoning over streams of data, ontology-based data access is a common approach. In this setting, conjunctive queries (CQs) over DL-Lite ontologies are answered by rewriting the query and evaluating the resulting query with a database engine. For stream-based applications, the classical expressivity of DL-Lite lacks means to handle fuzzy and temporal information. In this paper we report on a combination of a recently proposed pragmatic approach for answering CQs over fuzzy DL-Lite ontologies with answering of CQs over sequences of ABoxes, resulting in a system that supplies rewritings for query answering over temporal fuzzy DL-Lite ontologies.
@inproceedings{ TuZe-HiDest-15, author = {Anni{-}Yasmin {Turhan} and Erik {Zenker}}, booktitle = {Proceedings of the 1st Workshop on High-Level Declarative Stream Processing (HiDest'15)}, editor = {Daniela {Nicklas} and {\"{O}}zg{\"{u}}r L{\"{u}}tf{\"{u}} {{\"{O}}z{\c{c}}ep}}, pages = {56--69}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Towards Temporal Fuzzy Query Answering on Stream-based Data}, volume = {1447}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File
Our understanding of the notion "dynamic system" is a rather broad one: such a system has states, which can change over time. Ontologies are used to describe the states of the system, possibly in an incomplete way. Monitoring is then concerned with deciding whether some run of the system or all of its runs satisfy a certain property, which can be expressed by a formula of an appropriate temporal logic. We consider different instances of this broad framework, which can roughly be classified into two cases. In one instance, the system is assumed to be a black box, whose inner working is not known, but whose states can be (partially) observed during a run of the system. In the second instance, one has (partial) knowledge about the inner working of the system, which provides information on which runs of the system are possible. In this paper, we will review some of our recent work that can be seen as instances of this general framework of ontology-based monitoring of dynamic systems. We will also mention possible extensions towards probabilistic reasoning and the integration of mathematical modeling of dynamical systems.
@inproceedings{ Ba-KR-2014, address = {Vienna, Austria}, author = {Franz {Baader}}, booktitle = {Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, note = {Invited contribution.}, pages = {678--681}, publisher = {AAAI Press}, title = {Ontology-Based Monitoring of Dynamic Systems}, year = {2014}, }
Abstract BibTeX Entry PDF File DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
@techreport{ BaLi-LTCS-14-01, address = {Dresden, Germany}, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.203}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {14-01}, title = {Runtime Verification Using a Temporal Description Logic Revisited}, type = {LTCS-Report}, year = {2014}, }
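The idea of a monitor giving definitive verdicts on prefixes can be illustrated with a toy three-valued monitor (hand-built here for two propositional properties; the paper constructs monitors of optimal size from Büchi automata for ALC-LTL formulae over incompletely described states):

# A toy three-valued runtime monitor, not the Buechi-based construction
# from the paper. Verdicts "true"/"false" are definitive for *every*
# continuation of the observed prefix; "unknown" means both outcomes
# are still possible.

def make_monitor(kind):
    """kind = 'eventually' monitors F done; kind = 'always' monitors G not-error."""
    state = {"verdict": "unknown"}
    def step(observation):            # observation: set of propositions seen now
        if state["verdict"] != "unknown":
            return state["verdict"]   # definitive verdicts never change
        if kind == "eventually" and "done" in observation:
            state["verdict"] = "true"
        if kind == "always" and "error" in observation:
            state["verdict"] = "false"
        return state["verdict"]
    return step

monitor = make_monitor("always")
for obs in [set(), {"work"}, {"error"}]:
    print(monitor(obs))               # unknown, unknown, false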
Abstract BibTeX Entry DOI
Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
@article{ BaLi-JAL14, author = {Franz {Baader} and Marcel {Lippmann}}, doi = {http://dx.doi.org/10.1016/j.jal.2014.09.001}, journal = {Journal of Applied Logic}, number = {4}, pages = {584--613}, title = {Runtime Verification Using the Temporal Description Logic $\mathcal{ALC}$-LTL Revisited}, volume = {12}, year = {2014}, }
Abstract BibTeX Entry Publication
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
@thesis{ Borgwardt-Diss-2014, address = {Dresden, Germany}, author = {Stefan {Borgwardt}}, school = {Technische Universit\"{a}t Dresden}, title = {Fuzzy Description Logics with General Concept Inclusions}, type = {Doctoral Thesis}, year = {2014}, }
Abstract BibTeX Entry PDF File
In the last few years, there has been a large effort for analyzing the computational properties of reasoning in fuzzy description logics. This has led to a number of papers studying the complexity of these logics, depending on the chosen semantics. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics w.r.t. witnessed models over the Gödel t-norm. We show that in the logic G-IALC, reasoning cannot be restricted to finitely-valued models in general. Despite this negative result, we also show that all the standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC.
@inproceedings{ BoDP-KR14, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, pages = {228--237}, publisher = {AAAI Press}, title = {Decidable {G}{\"o}del description logics without the finitely-valued model property}, year = {2014}, }
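For readers unfamiliar with the semantics, the two basic Gödel operations are easy to state. The following sketch shows the standard textbook definitions (not code from the paper) of the t-norm and its residuum, the algebraic operations underlying the G-IALC semantics discussed above:

# The Goedel t-norm and its residuum on truth degrees in [0, 1]
# (standard definitions, shown here only for illustration).

def goedel_tnorm(x, y):
    """Conjunction under the Goedel t-norm: the minimum of the degrees."""
    return min(x, y)

def goedel_residuum(x, y):
    """Implication x => y: 1 if x <= y, otherwise y."""
    return 1.0 if x <= y else y

# Unlike the Lukasiewicz t-norm max(0, x + y - 1), the Goedel t-norm is
# idempotent: conjoining a degree with itself does not decrease it.
assert goedel_tnorm(0.7, 0.7) == 0.7
print(goedel_residuum(0.8, 0.5))  # 0.5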
Abstract BibTeX Entry PDF File
In the last few years, the complexity of reasoning in fuzzy description logics has been studied in depth. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics using the Gödel t-norm. It was recently shown that in the logic G-IALC under witnessed model semantics, all standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC. We show that this also holds under general model semantics.
@inproceedings{ BoDP-DL14, address = {Vienna, Austria}, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {391--403}, series = {CEUR Workshop Proceedings}, title = {G\"odel Description Logics with General Models}, volume = {1193}, year = {2014}, }
Abstract BibTeX Entry PDF File
Several researchers have developed properties that ensure compatibility of a concept similarity or dissimilarity measure with the formal semantics of Description Logics. While these authors have highlighted the relevance of the triangle inequality, none of their proposed dissimilarity measures satisfies it. In this work we present a theoretical framework for dissimilarity measures with this property. Our approach is based on concept relaxations, operators that perform stepwise generalizations on concepts. We prove that from any relaxation we can derive a dissimilarity measure that satisfies a number of properties that are important when comparing concepts.
@inproceedings{ DiAtBl-KR14, address = {Vienna, Austria}, author = {Felix {Distel} and Jamal {Atif} and Isabelle {Bloch}}, booktitle = {Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning ({KR'14})}, note = {Short Paper. To appear.}, publisher = {AAAI Press}, title = {Concept Dissimilarity with Triangle Inequality}, year = {2014}, }
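As a toy illustration of the kind of guarantee at stake (a drastic simplification of the relaxation-based framework, with concepts treated as plain sets of atoms rather than DL concept descriptions):

# A toy dissimilarity on concepts viewed as sets of atoms: the size of
# the symmetric difference. This is a metric, so the triangle
# inequality holds by construction; real DL measures must additionally
# respect the semantics (e.g. equivalence soundness).

def dissimilarity(concept_c, concept_d):
    """Size of the symmetric difference of two atom sets."""
    return len(concept_c ^ concept_d)

C = {"Person", "Tall"}
D = {"Person", "Rich"}
E = {"Person"}
# Triangle inequality: d(C, D) <= d(C, E) + d(E, D)
assert dissimilarity(C, D) <= dissimilarity(C, E) + dissimilarity(E, D)
print(dissimilarity(C, D))  # 2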
Abstract BibTeX Entry PDF File
A number of similarity measures for comparing description logic concepts have been proposed. Criteria have been developed to evaluate a measure's fitness for an application. These criteria include, on the one hand, those that ensure compatibility with the semantics, such as equivalence soundness, and, on the other hand, the properties of a metric, such as the triangle inequality. In this work we present two classes of dissimilarity measures that are at the same time equivalence sound and satisfy the triangle inequality: a simple dissimilarity measure, based on description trees for the lightweight description logic \(\mathcal{EL}\); and an instantiation of a general framework, presented in our previous work, using dilation operators from mathematical morphology, which exploits the link between the Hausdorff distance and dilations using balls of the ground distance as structuring elements.
@inproceedings{ DiAtBl-ECAI14, address = {Prague, Czech Republic}, author = {Felix {Distel} and Jamal {Atif} and Isabelle {Bloch}}, booktitle = {Proceedings of the 21st European Conference on Artificial Intelligence (ECAI'14)}, editor = {Torsten {Schaub}}, title = {Concept Dissimilarity Based on Tree Edit Distances and Morphological Dilations}, year = {2014}, }
Abstract BibTeX Entry PDF File
In Description Logic (DL) knowledge bases (KBs), information is typically captured by crisp concepts. For many applications, querying the KB by crisp query concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of concept similarity. In this paper we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by concept similarity measures (CSMs). For the DL \(\mathcal{EL}\) we investigate computation algorithms for this task, their complexity, and properties of the employed CSM in case unfoldable TBoxes or general TBoxes are used.
@inproceedings{ EcPeTu-KR14, address = {Vienna, Austria}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning ({KR'14})}, editor = {Chitta {Baral} and Giuseppe {De Giacomo} and Thomas {Eiter}}, pages = {248--257}, publisher = {AAAI Press}, title = {Answering Instance Queries Relaxed by Concept Similarity}, year = {2014}, }
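The following sketch conveys the basic shape of relaxed instance querying (our toy version with hypothetical similarity values; the paper computes similarity from the KB via a CSM and analyses the algorithmic details):

# Sketch of relaxed instance query answering: an individual answers
# the relaxed query if some concept it is known to belong to is
# similar enough to the query concept.

def relaxed_instances(query_concept, individuals, similarity, threshold):
    """individuals: dict mapping each individual to its known concepts."""
    return [ind for ind, concepts in individuals.items()
            if any(similarity(query_concept, c) >= threshold
                   for c in concepts)]

# Toy similarity on sets of concept names (Jaccard index), standing in
# for a proper concept similarity measure.
def jaccard(c, d):
    return len(c & d) / len(c | d)

abox = {"mary": [frozenset({"Cat", "Pet"})], "rex": [frozenset({"Dog", "Pet"})]}
query = frozenset({"Cat"})
print(relaxed_instances(query, abox, jaccard, threshold=0.4))  # ['mary']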
Abstract BibTeX Entry PDF File DOI
Description Logics (DLs) are a well-established family of knowledge representation formalisms. One of its members, the DL \(\mathcal{ELOR}\) has been successfully used for representing knowledge from the bio-medical sciences, and is the basis for the OWL 2 EL profile of the standard ontology language for the Semantic Web. Reasoning in this DL can be performed in polynomial time through a completion-based algorithm. In this paper we study the logic Prob-\(\mathcal{ELOR}\), that extends \(\mathcal{ELOR}\) with subjective probabilities, and present a completion-based algorithm for polynomial time reasoning in a restricted version, Prob-\(\mathcal{ELOR}^c_{01}\), of Prob-\(\mathcal{ELOR}\). We extend this algorithm to computation algorithms for approximations of (i) the most specific concept, which generalizes a given individual into a concept description, and (ii) the least common subsumer, which generalizes several concept descriptions into one. Thus, we also obtain methods for these inferences for the OWL 2 EL profile. These two generalization inferences are fundamental for building ontologies automatically from examples. The feasibility of our approach is demonstrated empirically by our prototype system GEL.
@article{ EcPeTu-IJAR-14, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, doi = {http://dx.doi.org/10.1016/j.ijar.2014.03.001}, journal = {International Journal of Approximate Reasoning}, number = {9}, pages = {1939--1970}, publisher = {Elsevier}, title = {Completion-based Generalization Inferences for the Description Logic $\mathcal{ELOR}$ with Subjective Probabilities}, volume = {55}, year = {2014}, }
BibTeX Entry PDF File
@inproceedings{ EcPT-DL14, address = {Vienna, Austria}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {526--529}, series = {CEUR Workshop Proceedings}, title = {Mary, What's Like All Cats?}, volume = {1193}, year = {2014}, }
Abstract BibTeX Entry PDF File
Regarding energy efficiency, resource management in complex hardware and software systems that is based on the information typically available to the OS alone does not yield the best results. Nevertheless, general-purpose resource management should stay independent of application-specific information. To resolve this dilemma, we propose a generic, ontology-based approach to resource scheduling that is context-aware and takes information about running applications into account. The central task here is to recognize situations that might necessitate an adaptation of resource scheduling. This task is performed by logical reasoning over OWL ontologies. Our initial study shows that current OWL 2 EL reasoner systems can recognize exemplary situations relevant to resource management within 4 seconds.
@inproceedings{ HaMeTT-ARM-14, address = {Bordeaux, France}, author = {Marcus {H\"ahnel} and Julian {Mendez} and Veronika {Thost} and Anni-Yasmin {Turhan}}, booktitle = {Workshop on Adaptive and Reflective Middleware'14}, month = {December}, title = {Bridging the Application Knowledge Gap}, year = {2014}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
Fuzzy Description Logics (DLs) generalize crisp ones by providing membership-degree semantics for concepts and roles. A popular technique for reasoning in fuzzy DL ontologies is to provide a reduction to crisp DLs and then employ reasoning in the crisp DL. In this paper we adopt this approach to solve conjunctive query (CQ) answering problems for fuzzy DLs. We give reductions for Gödel and Łukasiewicz variants of fuzzy SROIQ and two kinds of fuzzy CQs. The correctness of the proposed reduction is proved and its complexity is studied for different fuzzy variants of SROIQ.
@inproceedings{ MaPe-RR-14, author = {Theofilos {Mailis} and Rafael {Pe\~naloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 8th International Conference on Web Reasoning and Rule Systems (RR 2014)}, editor = {Roman {Kontchakov} and Marie-Laure {Mugnier}}, pages = {124--139}, publisher = {Springer}, title = {Conjunctive Query Answering in Finitely-valued Fuzzy Description Logics}, volume = {8741}, year = {2014}, }
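Schematically, such reductions represent each fuzzy concept \(C\) by crisp "cut" concepts \(C_{\geq q}\), one per degree \(q\) in the finite chain, and proceed structurally over the concept. Under the Gödel t-norm, for example, one would expect clauses of the form (notation illustrative; the paper gives the precise reduction for Gödel and Łukasiewicz variants of fuzzy SROIQ):

\[ \rho(C \sqcap D, {\geq} q) = \rho(C, {\geq} q) \sqcap \rho(D, {\geq} q), \qquad \rho(\exists r.C, {\geq} q) = \exists r_{\geq q}.\,\rho(C, {\geq} q), \]

together with hierarchy axioms \(C_{\geq q} \sqsubseteq C_{\geq q'}\) whenever \(q \geq q'\), so that crisp reasoning over the cut concepts simulates the finitely-valued semantics.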
Abstract BibTeX Entry PDF File ©Springer-Verlag
Recently, answering of conjunctive queries has been investigated and implemented in optimized reasoner systems based on the rewriting approach for crisp DLs. In this paper we investigate how to employ such existing implementations for query answering in DL-LiteR over fuzzy ontologies. To this end we give an extended rewriting algorithm for the case of fuzzy DL-LiteR ABoxes that employs the one for crisp DL-LiteR, and we investigate the limitations of this approach. We also tested the performance of our prototype implementation FLite of this method.
@inproceedings{ MaTu-JIST-14, author = {Theofilos {Mailis} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 4th Joint International Semantic Technology Conference (JIST2014)}, editor = {Thepchai {Supnithi} and Takahira {Yamaguchi}}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Employing DL-LiteR-Reasoners for Fuzzy Query Answering}, year = {2014}, }
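The core idea can be sketched as follows (our simplification, assuming Gödel semantics; the paper spells out the precise conditions and limitations): evaluate the crisp rewriting of the CQ as usual, then aggregate the membership degrees of the fuzzy ABox assertions used by each match with the t-norm (min) to obtain the answer's degree.

# Sketch of degree aggregation for one answer of the crisp rewriting,
# under Goedel (min) semantics. Names and data are illustrative only.

def answer_degree(match, fuzzy_abox):
    """match: the ABox assertions used by one answer of the crisp
    rewriting; fuzzy_abox: dict mapping assertion -> degree."""
    return min(fuzzy_abox[assertion] for assertion in match)

fuzzy_abox = {("ServesHotDrink", "cafe1"): 0.9, ("Cozy", "cafe1"): 0.6}
match = [("ServesHotDrink", "cafe1"), ("Cozy", "cafe1")]
print(answer_degree(match, fuzzy_abox))  # 0.6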
Abstract BibTeX Entry PDF File
In the context of Description Logics (DLs), concrete domains allow modelling concepts and facts using concrete values and predicates between them. For reasoning in the DL ALC with general TBoxes, concrete domains may cause undecidability. Under certain restrictions of the concrete domains, decidability can be regained. Typically, the concrete domain predicates are crisp, which is a limitation for some applications. In this paper we investigate crisp ALC in combination with fuzzy concrete domains for general TBoxes, devise conditions for decidability, and give a tableau-based reasoning algorithm.
@inproceedings{ MePeTu-KI-14, author = {Dorian {Merz} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 37th German Conference on Artificial Intelligence (KI'14)}, editor = {Carsten {Lutz} and Michael {Thielscher}}, pages = {171--182}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Reasoning in {$\mathcal{ALC}$} with Fuzzy Concrete Domains}, volume = {8736}, year = {2014}, }
BibTeX Entry PDF File
@inproceedings{ PeTT-DL14, address = {Vienna, Austria}, author = {Rafael {Pe{\~n}aloza} and Veronika {Thost} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {709--712}, series = {CEUR Workshop Proceedings}, title = {Certain Answers in a Rough World}, volume = {1193}, year = {2014}, }
2013
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
@inproceedings{ BaBL-CADE13, address = {Lake Placid, NY, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 24th International Conference on Automated Deduction (CADE-24)}, editor = {Maria Paola {Bonacina}}, pages = {330--344}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Temporalizing Ontology-Based Data Access}, volume = {7898}, year = {2013}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows querying a database using the language of an ontology, abstracting from the actual relations of the database. For ontologies formulated in Description Logics of the DL-Lite family, OBDA can be realized by rewriting the query into a classical first-order query, e.g. an SQL query, thereby compiling the information of the ontology into the query. The query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a temporal query language that combines a linear temporal logic with queries over DL-Litecore ontologies. This language is well-suited to express temporal properties of dynamical systems and is useful in context-aware applications that need to detect specific situations. Using a first-order rewriting approach, we transform our temporal queries into queries over a temporal database. We then present three approaches to answering the resulting queries, all having different advantages and drawbacks.
@inproceedings{ BoLiTh-FroCoS13, address = {Nancy, France}, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, booktitle = {Proceedings of the 9th International Symposium on Frontiers of Combining Systems ({FroCoS 2013})}, editor = {Pascal {Fontaine} and Christophe {Ringeissen} and Renate A. {Schmidt}}, pages = {165--180}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Temporal Query Answering in the Description Logic {\textit{DL-Lite}}}, volume = {8152}, year = {2013}, }
BibTeX Entry PDF File
@inproceedings{ BoLiTh-DL13, address = {Ulm, Germany}, author = {Stefan {Borgwardt} and Marcel {Lippmann} and Veronika {Thost}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Temporal Query Answering in {\textit{DL-Lite}}}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File ©IEEE Press
For service management systems, the early recognition of situations that necessitate a rebinding or a migration of services is an important task. To describe these situations on differing levels of detail, and to allow their recognition even if only incomplete information is available, we employ the ontology language OWL 2 and the reasoning services defined for it. In this paper we provide a case study on the performance of state-of-the-art OWL 2 reasoning systems for answering class queries and conjunctive queries modeling the relevant situations for service rebinding or migration in the differing OWL 2 profiles.
@inproceedings{ DelMe13, address = {San Diego, California}, author = {Waltenegus {Dargie} and {Eldora} and Julian {Mendez} and Christoph {M{\"o}bius} and Kateryna {Rybina} and Veronika {Thost} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 10th IEEE Workshop on Context Modeling and Reasoning 2013}, month = {March}, pages = {31--36}, publisher = {IEEE Computer Society}, title = {Situation Recognition for Service Management Systems Using OWL 2 Reasoners}, year = {2013}, }
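To give the flavour of this modelling style (our toy example, not one of the paper's situations): a situation is described as an OWL/DL class, and recognition amounts to instance retrieval over possibly incomplete data, e.g.

\[ \mathit{MigrationCandidate} \equiv \mathit{Service} \sqcap \exists \mathit{deployedOn}.(\mathit{Host} \sqcap \exists \mathit{hasLoad}.\mathit{High}), \]

so any service asserted, or merely inferred, to run on a highly loaded host is recognized as a migration candidate, even when other facts about it are missing.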
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report)
Description Logics (DLs) are a family of knowledge representation formalisms that provides the theoretical basis for the standard web ontology language OWL. Generalization services like the least common subsumer (lcs) and the most specific concept (msc) are the basis of several ontology design methods and form the core of similarity measures. For the DL ELOR, which covers most of the OWL 2 EL profile, the lcs and msc need not exist in general, but they always exist if restricted to a given role-depth. We present algorithms that compute these role-depth bounded generalizations. Our method is easy to implement, as it is based on the polynomial-time completion algorithm for ELOR.
@inproceedings{ EcPeTu-KI-13, address = {Koblenz, Germany}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 36th German Conference on Artificial Intelligence (KI 2013)}, editor = {Ingo J. {Timm} and Matthias {Thimm}}, pages = {49--60}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {Computing Role-depth Bounded Generalizations in the Description Logic {$\mathcal{ELOR}$}}, volume = {8077}, year = {2013}, }
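A naive sketch of the role-depth bounded lcs for plain EL concept descriptions may help to convey the idea (the paper works on completion sets for ELOR; this toy version operates directly on description trees):

# An EL concept description as a pair (set of concept names,
# {role: [successor concepts]}). The lcs up to depth k keeps the
# shared names and generalizes every pair of successors for shared
# roles, cutting off at role depth k.

def lcs(c, d, k):
    """Least common subsumer of c and d up to role depth k."""
    names = c[0] & d[0]                      # shared concept names
    successors = {}
    if k > 0:
        roles = c[1].keys() & d[1].keys()    # shared roles only
        for r in roles:
            # cross product: generalize every pair of r-successors
            successors[r] = [lcs(c2, d2, k - 1)
                             for c2 in c[1][r] for d2 in d[1][r]]
    return (names, successors)

cat = (frozenset({"Pet"}), {"eats": [(frozenset({"Fish"}), {})]})
dog = (frozenset({"Pet"}), {"eats": [(frozenset({"Meat"}), {})]})
print(lcs(cat, dog, k=2))
# (frozenset({'Pet'}), {'eats': [(frozenset(), {})]})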
Abstract BibTeX Entry PDF File
Completion-based algorithms can be employed for computing the least common subsumer of two concepts up to a given role-depth, in extensions of the lightweight DL EL. This approach has also been applied to the probabilistic DL Prob-EL, which is a variant of EL with subjective probabilities. In this paper we extend the completion-based lcs-computation algorithm to nominals, yielding a procedure for the DL Prob-ELO⁰¹.
@inproceedings{ EcPeTu-DL-13, address = {Ulm, Germany}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, pages = {670--688}, series = {CEUR-WS}, title = {Role-depth bounded Least Common Subsumer in Prob-{$\mathcal{EL}$} with Nominals}, volume = {1014}, year = {2013}, }
Abstract BibTeX Entry PDF File
In Description Logics (DL) knowledge bases (KBs) information is typically captured by crisp concept descriptions. However, for many practical applications querying the KB by crisp concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of similarity measures. To this end we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by similarity measures. We identify relevant properties for the similarity measure and give first results on a computation algorithm.
@inproceedings{ EcPeTu-WL4AI-13, address = {Beijing, China}, author = {Andreas {Ecke} and Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Workshop on {W}eighted {L}ogics for {AI} (in conjunction with IJCAI'13)}, title = {Towards Instance Query Answering for Concepts Relaxed by Similarity Measures}, year = {2013}, }
Abstract BibTeX Entry PDF File
Runtime variability management of component-based software systems allows the current context of a system to be taken into account for system configuration, in order to achieve energy efficiency. For optimizing the system configuration at runtime, the early recognition of situations that call for reconfiguration is an important task. To describe these situations on differing levels of detail, and to allow their recognition even if only incomplete information is available, we employ the ontology language OWL 2 and the reasoning services defined for it. In this paper, we show that the relevant situations for optimizing the current system configuration can be modeled in the different OWL 2 profiles. We further provide a case study on the performance of state-of-the-art OWL 2 reasoning systems for answering concept queries and conjunctive queries modeling the situations to be detected.
@inproceedings{ GoMeT13, address = {Montpellier, France}, author = {Sebastian {Goetz} and Julian {Mendez} and Veronika {Thost} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 10th OWL: Experiences and Directions Workshop (OWLED 2013)}, editor = {Kavitha {Srinivas} and Simon {Jupp}}, month = {May}, title = {OWL 2 Reasoning To Detect Energy-Efficient Software Variants From Context}, year = {2013}, }
Abstract BibTeX Entry PDF File
Energy efficiency of software is an increasingly important topic. To achieve energy efficiency, a system should automatically optimize itself to provide the best possible utility to the user for the least possible cost in terms of energy consumption. To reach this goal, the system has to continuously decide whether and how to adapt itself, which takes time and consumes energy by itself. During this time, the system could be in an inefficient state and waste energy. We envision the application of predictive situation recognition to initiate decision making before it is actually needed. Thus, the time of the system being in an inefficient state is reduced, leading to a more energy-efficient reconfiguration.
@article{ GoScWiMeAs13, author = {Sebastian {G{\"o}tz} and Ren{\'e} {Sch{\"o}ne} and Claas {Wilke} and Julian {Mendez} and Uwe {A{\ss}mann}}, journal = {2nd Workshop EASED@BUIS 2013}, pages = {11}, title = {Towards Predictive Self-optimization by Situation Recognition}, year = {2013}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag (The final publication is available at link.springer.com)
For practical ontology-based applications, representing and reasoning with probabilities is an essential task. For Description Logics with subjective probabilities, reasoning procedures for testing instance relations based on the completion method have been developed. In this paper we extend this technique to devise algorithms for solving non-standard inferences for EL and its probabilistic extension Prob-EL01: computing the most specific concept of an individual and finding explanations for instance relations.
@inproceedings{ PeTu12, author = {Rafael {Pe{\~n}aloza} and Anni-Yasmin {Turhan}}, booktitle = {Uncertainty Reasoning for the Semantic Web II, International Workshops URSW 2008-2010 Held at ISWC and UniDL 2010 Held at FLoC, Revised Selected Papers}, editor = {Fernando {Bobillo} and Paulo C. G. {Costa} and Claudia {d'Amato} and Nicola {Fanizzi} and Kathryn B. {Laskey} and Kenneth J. {Laskey} and Thomas {Lukasiewicz} and Matthias {Nickles} and Michael {Pool}}, number = {7123}, pages = {80--98}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Instance-based Non-standard Inferences in $\mathcal{EL}$ with Subjective Probabilities}, year = {2013}, }
Abstract BibTeX Entry PDF File
Recently, exact conditions for the existence of the least common subsumer (lcs) computed w.r.t. general \(\mathcal{EL}\)-TBoxes have been devised. This paper extends these results and provides necessary and sufficient conditions for the existence of the lcs w.r.t. \(\mathcal{EL}^+\)-TBoxes. We show decidability of the existence in PTime and polynomial bounds on the maximal role-depth of the lcs, which in turn yields a computation algorithm for the lcs w.r.t. \(\mathcal{EL}^+\)-TBoxes.
@inproceedings{ TuZa-DL13, address = {Ulm, Germany}, author = {Anni-Yasmin {Turhan} and Benjamin {Zarrie{\ss}}}, booktitle = {Proceedings of the 26th International Workshop on Description Logics ({DL-2013})}, editor = {Thomas {Eiter} and Birte {Glimm} and Yevgeny {Kazakov} and Markus {Kr{\"o}tzsch}}, month = {July}, pages = {477--488}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Computing the lcs w.r.t.\ General $\mathcal{E\!L}^+$ {TB}oxes}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
In the area of Description Logics the least common subsumer (lcs) and the most specific concept (msc) are inferences that generalize a set of concepts or an individual, respectively, into a single concept. If computed w.r.t. a general \(\mathcal{EL}\)-TBox, neither the lcs nor the msc need to exist. So far, in this setting no exact conditions for the existence of lcs- or msc-concepts are known. This paper provides necessary and sufficient conditions for the existence of these two kinds of concepts. For the lcs of a fixed number of concepts and the msc, we show decidability of the existence in PTime and polynomial bounds on the maximal role-depth of the lcs- and msc-concepts. The latter allows computing the lcs and the msc, respectively.
@techreport{ ZaTu-LTCS-13-06, address = {Dresden, Germany}, author = {Benjamin {Zarrie{\ss}} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.196}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {13-06}, title = {Most Specific Generalizations w.r.t.\ General $\mathcal{EL}$-TBoxes}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File ©IJCAI
In the area of Description Logics the least common subsumer (lcs) and the most specific concept (msc) are inferences that generalize a set of concepts or an individual, respectively, into a single concept. If computed w.r.t. a general \(\mathcal{EL}\)-TBox, neither the lcs nor the msc need to exist. So far, in this setting no exact conditions for the existence of lcs- or msc-concepts are known. This paper provides necessary and sufficient conditions for the existence of these two kinds of concepts. For the lcs of a fixed number of concepts and the msc, we show decidability of the existence in PTime and polynomial bounds on the maximal role-depth of the lcs- and msc-concepts. The latter allows computing the lcs and the msc, respectively.
@inproceedings{ ZaTu-IJCAI13, address = {Beijing, China}, author = {Benjamin {Zarrie{\ss}} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13)}, publisher = {AAAI Press}, title = {Most Specific Generalizations w.r.t.\ General $\mathcal{EL}$-TBoxes}, year = {2013}, }
2012
Abstract BibTeX Entry PDF File ©Springer-Verlag
Fuzzy Description Logics (DLs) with t-norm semantics have been studied as a means for representing and reasoning with vague knowledge. Recent work has shown that even fairly inexpressive fuzzy DLs become undecidable for a wide variety of t-norms. We complement those results by providing a class of t-norms and an expressive fuzzy DL for which ontology consistency is linearly reducible to crisp reasoning, and thus has the same complexity. Surprisingly, in these same logics crisp models are insufficient for deciding fuzzy subsumption.
@inproceedings{ BoDP-IJCAR-12, address = {Manchester, UK}, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)}, editor = {Bernhard {Gramlich} and Dale {Miller} and Ulrike {Sattler}}, pages = {82--96}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {How Fuzzy is my Fuzzy Description Logic?}, volume = {7364}, year = {2012}, }
Abstract BibTeX Entry PDF File
Ontology consistency has been shown to be undecidable for a wide variety of fairly inexpressive fuzzy Description Logics (DLs). In particular, for any t-norm "starting with" the Łukasiewicz t-norm, consistency of crisp ontologies (w.r.t. witnessed models) is undecidable in any fuzzy DL with conjunction, existential restrictions, and (residual) negation. In this paper we show that for any t-norm with Gödel negation, that is, any t-norm not starting with Łukasiewicz, ontology consistency for a variant of fuzzy SHOI is linearly reducible to crisp reasoning, and hence decidable in exponential time. Our results hold even if reasoning is not restricted to the class of witnessed models only.
@inproceedings{ BoDP-DL12, address = {Rome, Italy}, author = {Stefan {Borgwardt} and Felix {Distel} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 2012 International Workshop on Description Logics ({DL'12})}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, pages = {103--113}, series = {CEUR-WS}, title = {{G{\"o}}del Negation Makes Unwitnessed Consistency Crisp}, volume = {846}, year = {2012}, }
Abstract BibTeX Entry PDF File
Recent results show that ontology consistency is undecidable for a wide variety of fuzzy Description Logics (DLs). Most notably, undecidability arises for a family of inexpressive fuzzy DLs using only conjunction, existential restrictions, and residual negation, even if the ontology itself is crisp. All those results depend on restricting reasoning to witnessed models. In this paper, we show that ontology consistency for inexpressive fuzzy DLs using any t-norm starting with the Łukasiewicz t-norm is also undecidable w.r.t. general models.
@inproceedings{ BoPe-DL12, address = {Rome, Italy}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 2012 International Workshop on Description Logics ({DL'12})}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, pages = {411--421}, series = {CEUR-WS}, title = {Non-{G{\"o}}del Negation Makes Unwitnessed Consistency Undecidable}, volume = {846}, year = {2012}, }
Abstract BibTeX Entry PDF File ©AAAI
Fuzzy description logics (DLs) have been investigated for over two decades, due to their capacity to formalize and reason with imprecise concepts. Very recently, it has been shown that for several fuzzy DLs, reasoning becomes undecidable. Although the proofs of these results differ in the details of each specific logic considered, they are all based on the same basic idea. In this paper, we formalize this idea and provide sufficient conditions for proving undecidability of a fuzzy DL. We demonstrate the effectiveness of our approach by strengthening all previously-known undecidability results and providing new ones. In particular, we show that undecidability may arise even if only crisp axioms are considered.
@inproceedings{ BoPe-KR12, address = {Rome, Italy}, author = {Stefan {Borgwardt} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 13th International Conference on Principles of Knowledge Representation and Reasoning (KR 2012)}, editor = {Gerhard {Brewka} and Thomas {Eiter} and Sheila A. {McIlraith}}, pages = {232--242}, publisher = {AAAI Press}, title = {Undecidability of Fuzzy Description Logics}, year = {2012}, }
Abstract BibTeX Entry PDF File
Computing the least common subsumer (lcs) yields a generalization of a collection of concepts; computing such generalizations is a useful reasoning task for many ontology-based applications. Since the lcs need not exist if computed w.r.t. general TBoxes, an approximative approach, the role-depth bounded lcs, has been proposed. Recently, this approach has been extended to the Description Logic \(\mathcal{EL}^+\), which covers most of the OWL 2 EL profile. In this paper we present two kinds of optimizations for the computation of such approximative lcs: one to obtain succinct rewritings of \(\mathcal{EL}^+\)-concepts and the other to speed up the k-lcs computation. The evaluation of these optimizations gives evidence that they can improve the computation of the role-depth bounded lcs by orders of magnitude.
@inproceedings{ EcTu-OWLED-12, author = {Andreas {Ecke} and Anni-Yasmin {Turhan}}, booktitle = {Proc. of the 9th OWL: Experiences and Directions Workshop (OWLED 2012)}, editor = {Matthew {Horridge} and Pavel {Klinov}}, title = {Optimizations for the role-depth bounded least common subsumer in $\mathcal{EL}^+$}, volume = {849}, year = {2012}, }
Abstract BibTeX Entry PDF File
For \(\mathcal{EL}\) the least common subsumer (lcs) need not exist if computed w.r.t. general TBoxes. In case the role-depth of the lcs concept description is bounded, an approximate solution can be obtained. In this paper we extend the completion-based method for computing such approximate solutions to \(\mathcal{ELI}\) and \(\mathcal{EL}^+\). For \(\mathcal{ELI}\), the extension needs to be able to treat complex node labels. For \(\mathcal{EL}^+\), a naive method generates highly redundant concept descriptions, for which we devise a heuristic that produces smaller, but equivalent, concept descriptions. We demonstrate the usefulness of this heuristic by an evaluation.
@inproceedings{ EcTu-DL-12, author = {Andreas {Ecke} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 2012 International Workshop on Description Logics ({DL'12})}, editor = {Yevgeny {Kazakov} and Domenico {Lembo} and Frank {Wolter}}, series = {CEUR-WS}, title = {Role-depth Bounded Least Common Subsumers for $\mathcal{EL}^+$ and $\mathcal{ELI}$}, volume = {846}, year = {2012}, }
Abstract BibTeX Entry PDF File
Similarity measures for concepts written in Description Logics (DLs) are often devised based on the syntax of concepts or simply by adjusting them to a set of instance data. These measures do not take the semantics of the concepts into account and can thus lead to unintuitive results. It even remains unclear how these measures behave if applied to new domains or new sets of instance data. In this paper we develop a framework for similarity measures for ELH-concept descriptions based on the semantics of the DL ELH. We show that our framework ensures that the measures resulting from instantiations fulfill fundamental properties, such as equivalence invariance, yet the framework provides the flexibility to adjust measures to specifics of the modelled domain.
@inproceedings{ LeTu-Jelia12, author = {Karsten {Lehmann} and Anni-Yasmin {Turhan}}, booktitle = {Proceedings of the 13th European Conference on Logics in Artificial Intelligence}, editor = {Luis Fari{\~n}as del {Cerro} and Andreas {Herzig} and J{\'e}r{\^o}me {Mengin}}, pages = {307--319}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence}, title = {A Framework for Semantic-based Similarity Measures for {$\mathcal{ELH}$}-Concepts}, year = {2012}, }
A1: Verification of Non-Terminating Action Programs
The action language GOLOG has been used, among other things, for the specification of the behaviour of mobile robots. Since the task of such autonomous systems is typically open-ended, their GOLOG programs are usually non-terminating. To ensure that the program will let the robot exhibit the intended behaviour, it is often desirable to be able to formally specify and then verify the desired properties, which are often of a temporal nature. This task has been studied within our preliminary work from two perspectives: On the one hand, the problem was tackled for very expressive specification and action program formalisms, but without the goal of achieving decidability, i.e. the developed verification methods were not guaranteed to terminate. On the other hand, the verification problem was studied for action formalisms based on decidable description logics and very limited means of specifying admissible sequences of actions, which allowed us to show decidability and complexity results for the verification problem. The purpose of this project is to combine the advantages of both approaches by, on one hand, developing verification methods for GOLOG programs that are effective and practically feasible and, on the other hand, going beyond the formalisms with very limited expressiveness to enhance their usefulness. Among other things, both qualitative and quantitative temporal program properties will be addressed.
Partners
- Prof. Dr.-Ing. Franz Baader
- Prof. Gerhard Lakemeyer, Ph.D.
Researchers
- Jens Claßen (RWTH Aachen, KBSG)
- Benjamin Zarrieß (TU Dresden, LAT)
Publications and Technical Reports
2018
Abstract BibTeX Entry PDF File DOI
Golog programs allow modelling complex behaviour of agents by combining primitive actions defined in a Situation Calculus theory using imperative and non-deterministic programming language constructs. In general, verifying temporal properties of Golog programs is undecidable. One way to establish decidability is to restrict the logic used by the program to a Description Logic (DL), for which some complexity upper bounds for the verification problem have recently been established. However, so far it was open whether these results are tight, and lightweight DLs such as EL have not been studied at all. Furthermore, these results only apply to a setting where actions do not consume time, and the properties to be verified only refer to the timeline in a qualitative way. In a lot of applications, this is an unrealistic assumption. In this work, we study the verification problem for timed Golog programs, in which actions can be assigned differing durations, and temporal properties are specified in a metric branching time logic. This allows annotating temporal properties with time intervals over which they are evaluated, to specify, for example, that some property should hold for at least n time units, or should become satisfied within some specified time window. We establish tight complexity bounds of the verification problem for both expressive and lightweight DLs. Our lower bounds already apply to a very limited fragment of the verification problem, and close open complexity bounds for the non-metrical cases studied before.
@techreport{ KoZa-LTCS-18-06, address = {Dresden, Germany}, author = {Patrick {Koopmann} and Benjamin {Zarrie{\ss}}}, doi = {https://doi.org/10.25368/2022.241}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {18-06}, title = {On the Complexity of Verifying Timed {Golog} Programs over Description Logic Actions (Extended Version)}, type = {LTCS-Report}, year = {2018}, }
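For illustration, metric properties of the kind described might be written as follows (notation schematic, not necessarily the paper's):

\[ \mathsf{A}\, \Box_{[0,n]}\, \mathit{safe} \qquad\text{and}\qquad \mathsf{E}\, \Diamond_{[0,10]}\, \mathit{goalReached}, \]

where the first formula requires safety to hold along every execution for at least \(n\) time units, and the second requires some execution to reach the goal within a window of 10 time units.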
Abstract BibTeX Entry PDF File ©AAAI
We consider an action language extended with quantitative notions of uncertainty. In our setting, the initial beliefs of an agent are represented as a probabilistic knowledge base with axioms formulated in the Description Logic ALCO. Action descriptions describe the possibly context-sensitive and non-deterministic effects of actions and provide likelihood distributions over the different possible outcomes of actions. In this paper, we prove decidability of the projection problem, which is the basic reasoning task needed for predicting the outcome of action sequences. Furthermore, we investigate how the non-determinism in the action model affects the complexity of the projection problem.
@inproceedings{ Za-KR2018, address = {USA}, author = {Benjamin {Zarrie{\ss}}}, booktitle = {Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, {KR} 2018, Tempe, Arizona, 30 October - 2 November 2018}, editor = {Frank {Wolter} and Michael {Thielscher} and Francesca {Toni}}, pages = {514--523}, publisher = {{AAAI} Press}, title = {{Complexity of Projection with Stochastic Actions in a Probabilistic Description Logic}}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File Publication
Golog is a powerful programming language for logic-based agents. The primitives of the language are actions whose preconditions and effects are defined in a Situation Calculus action theory using first-order logic. To describe possible courses of actions the programmer can freely combine imperative control structures with constructs for non-deterministic choice, leaving it to the system to resolve the non-determinism in a suitable manner. Golog has been successfully used for high-level decision making in the area of cognitive robotics. Obviously, it is important to verify certain properties of a Golog program before executing it on a physical robot. However, due to the high expressiveness of the language the verification problem is in general undecidable. In this thesis, we study the verification problem for Golog programs over actions defined in action languages based on Description Logics and explore the boundary between decidable and undecidable fragments.
@thesis{ Zarriess-Diss-2017, address = {Dresden, Germany}, author = {Benjamin {Zarrie{\ss}}}, school = {Technische Universit\"{a}t Dresden}, title = {Verification of Golog Programs over Description Logic Actions}, type = {Doctoral Thesis}, year = {2017}, }
2016
Abstract BibTeX Entry
The Golog action programming language is a powerful means to express high-level behaviours in terms of programs over actions defined in a Situation Calculus theory. In particular for physical systems, verifying that the program satisfies certain desired temporal properties is often crucial, but undecidable in general, the latter being due to the language's high expressiveness in terms of first-order quantification and program constructs. So far, approaches to achieve decidability involved restrictions where action effects either had to be context-free (i.e. not depend on the current state), local (i.e. only affect objects mentioned in the action's parameters), or at least bounded (i.e. only affect a finite number of objects). In this paper, we present a new, more general class of action theories (called acyclic) that allows for context-sensitive, non-local, unbounded effects, i.e. actions that may affect an unbounded number of possibly unnamed objects in a state-dependent fashion. We contribute to the further exploration of the boundary between decidability and undecidability for Golog, showing that for acyclic theories in the two-variable fragment of first-order logic, verification of CTL* properties of programs over ground actions is decidable.
@inproceedings{ ZC2016, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, booktitle = {Proceedings of the Thirtieth {AAAI} Conference on Artificial Intelligence (AAAI-16)}, editor = {Dale {Schuurmans} and Michael {Wellman}}, month = {February}, publisher = {AAAI Press}, title = {Decidable Verification of Golog Programs over Non-Local Effect Actions}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File DOI
The Golog action programming language is a powerful means to express high-level behaviours in terms of programs over actions defined in a Situation Calculus theory. In particular for physical systems, verifying that the program satisfies certain desired temporal properties is often crucial, but undecidable in general, the latter being due to the language's high expressiveness in terms of first-order quantification and program constructs. So far, approaches to achieve decidability involved restrictions where action effects either had to be context-free (i.e. not depend on the current state), local (i.e. only affect objects mentioned in the action's parameters), or at least bounded (i.e. only affect a finite number of objects). In this paper, we present a new, more general class of action theories (called acyclic) that allows for context-sensitive, non-local, unbounded effects, i.e. actions that may affect an unbounded number of possibly unnamed objects in a state-dependent fashion. We contribute to the further exploration of the boundary between decidability and undecidability for Golog, showing that for acyclic theories in the two-variable fragment of first-order logic, verification of CTL* properties of programs over ground actions is decidable.
@techreport{ ZaCla-LTCS-15-19, address = {Dresden, Germany}, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, doi = {https://doi.org/10.25368/2022.224}, institution = {Chair of Automata Theory, TU Dresden}, note = {Extended version. See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {15-19}, title = {Decidable Verification of Golog Programs over Non-Local Effect Actions}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File DOI
A knowledge-based program defines the behavior of an agent by combining primitive actions, programming constructs and test conditions that make explicit reference to the agent's knowledge. In this paper we consider a setting where an agent is equipped with a Description Logic (DL) knowledge base providing general domain knowledge and an incomplete description of the initial situation. We introduce a corresponding new DL-based action language that allows for representing both physical and sensing actions, and that we then use to build knowledge-based programs with test conditions expressed in the epistemic DL. After proving undecidability for the general case, we then discuss a restricted fragment where verification becomes decidable. The provided proof is constructive and comes with an upper bound on the procedure's complexity.
@techreport{ ZaCl-LTCS-15-10, address = {Dresden, Germany}, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, doi = {https://doi.org/10.25368/2022.216}, institution = {Chair of Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {15-10}, title = {Verification of Knowledge-Based Programs over Description Logic Actions}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry
A knowledge-based program defines the behavior of an agent by combining primitive actions, programming constructs and test conditions that make explicit reference to the agent's knowledge. In this paper we consider a setting where an agent is equipped with a Description Logic (DL) knowledge base providing general domain knowledge and an incomplete description of the initial situation. We introduce a corresponding new DL-based action language that allows for representing both physical and sensing actions, that we then use to build knowledge-based programs with test conditions expressed in an epistemic DL. After proving undecidability for the general case, we then discuss a restricted fragment where verification becomes decidable. The provided proof is constructive and comes with an upper bound on the procedure's complexity.
@inproceedings{ ZC2015, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, booktitle = {Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15)}, editor = {Qiang {Yang} and Michael {Wooldridge}}, pages = {3278--3284}, publisher = {AAAI Press}, title = {Verification of Knowledge-Based Programs over Description Logic Actions}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File
The high-level action programming language Golog is a useful means for modeling the behavior of autonomous agents such as mobile robots. It relies on a representation given in terms of a logic-based action theory in the Situation Calculus (SC). To guarantee that the possibly non-terminating execution of a Golog program leads to the desired behavior of the agent, it is desirable to (automatically) verify that it satisfies certain requirements given in terms of temporal formulas. However, due to the high (first-order) expressiveness of the Golog language, the verification problem is in general undecidable. In this paper we show that for a fragment of the Golog language defined on top of the decidable logic C2, the verification problem for linear time temporal properties becomes decidable, which extends earlier results to a more expressive fragment of the input formalism. Moreover, we justify the involved restrictions on program constructs and action theory by showing that relaxing any of these restrictions instantly renders the verification problem undecidable again.
@inproceedings{ ZaCla-KRR14, address = {Palo Alto, California, USA}, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, booktitle = {Technical Report of the AAAI 2014 Spring Symposium: Knowledge Representation and Reasoning in Robotics (KRR{\textquoteright}14)}, publisher = {AAAI Press}, title = {On the Decidability of Verifying LTL Properties of Golog Programs}, year = {2014}, }
Abstract BibTeX Entry
Golog is a high-level action programming language for controlling autonomous agents such as mobile robots. It is defined on top of a logic-based action theory expressed in the Situation Calculus. Before a program is deployed onto an actual robot and executed in the physical world, it is desirable, if not crucial, to verify that it meets certain requirements (typically expressed through temporal formulas) and thus indeed exhibits the desired behaviour. However, due to the high (first-order) expressiveness of the language, the corresponding verification problem is in general undecidable. In this paper, we extend earlier results to identify a large, non-trivial fragment of the formalism where verification is decidable. In particular, we consider properties expressed in a first-order variant of the branching-time temporal logic CTL*. Decidability is obtained by (1) resorting to the decidable first-order fragment C2 as underlying base logic, (2) using a fragment of Golog with ground actions only, and (3) requiring the action theory to only admit local effects.
@inproceedings{ ZC-ECAI14, address = {Prague, Czech Republic}, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, booktitle = {Proceedings of the Twenty-First European Conference on Artificial Intelligence (ECAI 2014)}, title = {Verifying CTL* Properties of Golog Programs over Local-Effect Actions}, year = {2014}, }
2013
BibTeX Entry PDF File DOI
@techreport{ BaZa-LTCS-13-08, address = {Dresden, Germany}, author = {Franz {Baader} and Benjamin {Zarrie{\ss}}}, doi = {https://doi.org/10.25368/2022.198}, institution = {Chair of Automata Theory, TU Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-08}, title = {Verification of Golog Programs over Description Logic Actions}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry PDF File ©Springer-Verlag
High-level action programming languages such as Golog have successfully been used to model the behavior of autonomous agents. In addition to a logic-based action formalism for describing the environment and the effects of basic actions, they enable the construction of complex actions using typical programming language constructs. To ensure that the execution of such complex actions leads to the desired behavior of the agent, one needs to specify the required properties in a formal way, and then verify that these requirements are met by any execution of the program. Due to the expressiveness of the action formalism underlying Golog (Situation Calculus), the verification problem for Golog programs is in general undecidable. Action formalisms based on Description Logic (DL) try to achieve decidability of inference problems such as the projection problem by restricting the expressiveness of the underlying base logic. However, until now these formalisms have not been used within Golog programs. In the present paper, we introduce a variant of Golog where basic actions are defined using such a DL-based formalism, and show that the verification problem for such programs is decidable. This improves on our previous work on verifying properties of infinite sequences of DL actions in that it considers (finite and infinite) sequences of DL actions that correspond to (terminating and non-terminating) runs of a Golog program rather than just infinite sequences accepted by a Büchi automaton abstracting the program.
@inproceedings{ BaZa-FroCoS13, address = {Nancy, France}, author = {Franz {Baader} and Benjamin {Zarrie{\ss}}}, booktitle = {Proceedings of the 9th International Symposium on Frontiers of Combining Systems ({FroCoS 2013})}, editor = {Pascal {Fontaine} and Christophe {Ringeissen} and Renate A. {Schmidt}}, month = {September}, pages = {181--196}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Verification of Golog Programs over Description Logic Actions}, volume = {8152}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
Golog is a high-level action programming language for controlling autonomous agents such as mobile robots. It is defined on top of a logic-based action theory expressed in the Situation Calculus. Before a program is deployed onto an actual robot and executed in the physical world, it is desirable, if not crucial, to verify that it meets certain requirements (typically expressed through temporal formulas) and thus indeed exhibits the desired behaviour. However, due to the high (first-order) expressiveness of the language, the corresponding verification problem is in general undecidable. In this paper, we extend earlier results to identify a large, non-trivial fragment of the formalism where verification is decidable. In particular, we consider properties expressed in a first-order variant of the branching-time temporal logic CTL*. Decidability is obtained by (1) resorting to the decidable first-order fragment C2 as underlying base logic, (2) using a fragment of Golog with ground actions only, and (3) requiring the action theory to only admit local effects.
@techreport{ ZaCla-LTCS-13-10, address = {Dresden, Germany}, author = {Benjamin {Zarrie{\ss}} and Jens {Cla{\ss}en}}, doi = {https://doi.org/10.25368/2022.200}, institution = {Chair of Automata Theory, TU Dresden}, note = {Extended version. See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-10}, title = {On the Decidability of Verifying LTL Properties of Golog Programs}, type = {LTCS-Report}, year = {2013}, }
A3: Probabilistic Description Logics Based on the Aggregating Semantics and the Principle of Maximum Entropy
Description Logics (DLs) are a well-investigated family of logic-based knowledge representation languages which are tailored towards representing terminological knowledge. This terminological knowledge is represented in the TBox using general concept inclusions (GCIs), whereas knowledge about individuals (assertional knowledge) is stated in the ABox. Probabilistic extensions of DLs are motivated by the fact that, in many application domains, knowledge is not always certain. In such extensions, there is a need for treating assertional knowledge differently from terminological knowledge. In principle, probabilistic terminological knowledge has a statistical flavour whereas probabilistic assertional knowledge has a subjective flavour. However, in order to reason with respect to a knowledge base containing both kinds of knowledge, one needs a common semantic framework covering both aspects. Previous work in this area has not addressed this dual need in a completely satisfactory way.
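As a reminder of the two kinds of knowledge involved (a textbook-style example, not project data): a GCI in the TBox expresses a general rule such as

\[
\exists\mathsf{hasParent}.\mathsf{Bird} \sqsubseteq \mathsf{Bird},
\]

stating that everything with a bird parent is itself a bird, whereas an ABox assertion such as \(\mathsf{Bird}(\mathsf{tweety})\) concerns a specific named individual. Probabilistic variants of the former are naturally read statistically, while probabilistic variants of the latter express degrees of belief.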
The main idea underlying this project is to adapt and extend the recently developed aggregating semantics from a restricted first-order case to DLs by generalizing ABox assertions and GCIs to closed and open probabilistic conditionals, respectively. This semantics combines subjective probabilities with population-based statements on the basis of a possible-worlds semantics, thus providing a common semantic framework for both subjective and statistical probabilities. As a second main feature, we apply the principle of maximum entropy on top of the aggregating semantics. This overcomes the pitfall of obtaining large and uninformative intervals for inferred probabilities, which is a common feature of many approaches that reason with respect to sets of probability distributions.
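Sketched roughly (our paraphrase of the published definition, not a verbatim quote): under the aggregating semantics, a probability distribution \(P\) over possible worlds accepts an open conditional \((D \mid C)[p]\) if the probabilities of its ground instances, aggregated over the whole universe, match the stated proportion:

\[
\frac{\sum_{a} P\big(C(a) \wedge D(a)\big)}{\sum_{a} P\big(C(a)\big)} \;=\; p,
\]

where \(a\) ranges over the (finite) universe. Closed conditionals about named individuals are read as ordinary conditional probabilities, so the statistical and the subjective view live in one framework.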
Whereas the semantic properties of the approach have been investigated in some detail for a fragment of first-order logic, only preliminary work has been done on its algorithmic and computational properties. To be useful in practice, the probabilistic DLs obtained by applying this approach need to be equipped with effective reasoning procedures. Thus, the main emphasis of this project will be on investigating computational properties (decidability and complexity) of the probabilistic logics obtained by instantiating the approach, in particular with DLs of different expressive power. In addition to showing decidability and complexity results, we will develop practical algorithms for some of the investigated DLs and provide prototypical implementations. Another major challenge will be to extend the approach from universes of fixed finite size to the infinite case, by either considering limit probabilities for universes of growing size or considering a countably infinite universe. Furthermore, in addition to the basic approach, we will also investigate extensions, such as using probabilities within concepts as well, and allowing for additional constraints in the knowledge base and for independence assumptions.
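The principle of maximum entropy selects, among all distributions satisfying the knowledge base, the one with maximal Shannon entropy. A minimal numerical sketch of this idea (invented propositional toy data, not the project's first-order setting with the aggregating semantics):

```python
# Compute the maximum-entropy distribution over the four worlds of two
# features C and D, subject to the single constraint P(D | C) = 0.9.
import numpy as np
from scipy.optimize import minimize

worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]  # truth values of (C, D)

def neg_entropy(p):
    # Negative Shannon entropy; the small constant guards against log(0).
    return float(np.sum(p * np.log(p + 1e-12)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},            # normalisation
    # P(C and D) - 0.9 * P(C) = 0 encodes the conditional P(D|C) = 0.9.
    {"type": "eq", "fun": lambda p: p[3] - 0.9 * (p[2] + p[3])},
]

result = minimize(neg_entropy, x0=np.full(4, 0.25), method="SLSQP",
                  bounds=[(0.0, 1.0)] * 4, constraints=constraints)
for (c, d), pr in zip(worlds, result.x):
    print(f"P(C={c}, D={d}) = {pr:.4f}")
```

For this single conditional the solver converges to the least biased distribution compatible with \(P(D \mid C) = 0.9\), rather than an interval of admissible probabilities.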
Partners
- Prof. Dr.-Ing. Franz Baader
- Prof. Dr. Gabriele Kern-Isberner
Publications and Technical Reports
2020
Abstract BibTeX Entry DOI
We introduce and investigate the expressive description logic (DL) ALCSCC++, in which the global and local cardinality constraints introduced in previous papers can be mixed. We prove that the added expressivity does not increase the complexity of satisfiability checking and other standard inference problems. However, reasoning in ALCSCC++ becomes undecidable if inverse roles are added or conjunctive query entailment is considered. We prove that decidability of querying can be regained if global and local constraints are not mixed and the global constraints are appropriately restricted. In this setting, query entailment can be shown to be EXPTIME-complete and hence not harder than reasoning in ALC.
@inproceedings{ BaBaRu-ECAI20, author = {Franz {Baader} and Bartosz {Bednarczyk} and Sebastian {Rudolph}}, booktitle = {Proceedings of the 24th European Conference on Artificial Intelligence ({ECAI} 2020)}, doi = {https://doi.org/10.3233/FAIA200146}, pages = {616--623}, publisher = {IOS Press}, series = {Frontiers in Artificial Intelligence and Applications}, title = {Satisfiability and Query Answering in Description Logics with Global and Local Cardinality Constraints}, volume = {325}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File
In two previous publications we have, on the one hand, extended the description logic (DL) ALCQ by more expressive number restrictions using numerical and set constraints expressed in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). The resulting DL was called ALCSCC. On the other hand, we have extended the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general cardinality constraints expressed in QFBAPA, which we called extended cardinality constraints. Here, we combine the two extensions, i.e., we consider extended cardinality constraints on ALCSCC concepts. We show that this does not increase the complexity of reasoning, which is NExpTime-complete both for extended cardinality constraints in ALC and ALCSCC. The same is true for a restricted version of such cardinality constraints, where the complexity of reasoning decreases to ExpTime, not just for ALC, but also for ALCSCC.
@inproceedings{ BaaderSAC19, author = {Franz {Baader}}, booktitle = {Proceedings of the 34th Annual {ACM} Symposium on Applied Computing ({SAC}'19)}, pages = {1123--1131}, publisher = {{ACM}}, title = {Expressive Cardinality Constraints on {$\mathcal{ALCSCC}$} Concepts}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
In two previous publications we have, on the one hand, extended the description logic (DL) ALCQ by more expressive number restrictions using numerical and set constraints expressed in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). The resulting DL was called ALCSCC. On the other hand, we have extended the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general cardinality constraints expressed in QFBAPA, which we called extended cardinality constraints. Here, we combine the two extensions, i.e., we consider extended cardinality constraints on ALCSCC concepts. We show that this does not increase the complexity of reasoning, which is NExpTime-complete both for extended cardinality constraints in the DL ALC and in its extension ALCSCC. The same is true for a restricted version of such cardinality constraints, where the complexity of reasoning decreases to ExpTime, not just for ALC, but also for ALCSCC.
@article{ Baader-ACR-19, author = {Franz {Baader}}, doi = {https://doi.org/10.1145/3372001.3372002}, journal = {ACM SIGAPP Applied Computing Review}, pages = {5--17}, publisher = {ACM}, title = {Expressive Cardinality Restrictions on Concepts in a Description Logic with Expressive Number Restrictions}, volume = {19}, year = {2019}, }
Abstract BibTeX Entry PDF File
We consider an expressive description logic (DL) in which the global and local cardinality constraints introduced in previous papers can be mixed. On the one hand, we show that this does not increase the complexity of satisfiability checking and other standard inference problems. On the other hand, conjunctive query entailment in this DL turns out to be undecidable. We prove that decidability of querying can be regained if global and local constraints are not mixed and the global constraints are appropriately restricted.
@inproceedings{ BaBR-DL-19, author = {Franz {Baader} and Bartosz {Bednarczyk} and Sebastian {Rudolph}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Satisfiability Checking and Conjunctive Query Answering in Description Logics with Global and Local Cardinality Constraints}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI (The final publication is available at link.springer.com) ©Springer Nature Switzerland AG 2019
In recent work we have extended the description logic (DL) \(\mathcal{ALCQ}\) by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL, called \(\mathcal{ALCSCC}\), is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of \(\mathcal{ALCSCC}\) is defined in terms of finitely branching interpretations, that is, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper, we first introduce a variant of \(\mathcal{ALCSCC}\), called \(\mathcal{ALCSCC}^\infty\), in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for \(\mathcal{ALCSCC}\) mentioned above are preserved. Nevertheless, like \(\mathcal{ALCSCC}\), \(\mathcal{ALCSCC}^\infty\) is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of \(\mathcal{ALCSCC}^\infty\). The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.
@inproceedings{ BaDeBo-FroCoS19, author = {Franz {Baader} and Filippo {De Bortoli}}, booktitle = {Proc. of the 12th International Symposium on Frontiers of Combining Systems ({FroCoS} 2019)}, doi = {https://doi.org/10.1007/978-3-030-29007-8_12}, editor = {Andreas {Herzig} and Andrei {Popescu}}, pages = {203--219}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets}, volume = {11715}, year = {2019}, }
Abstract BibTeX Entry PDF File (The final publication is available at link.springer.com)
The probabilistic Description Logic ALCME is an extension of the Description Logic ALC that allows for uncertain conditional statements of the form "if C holds, then D holds with probability p," together with probabilistic assertions about individuals. In ALCME, probabilities are understood as an agent's degree of belief. Probabilistic conditionals are formally interpreted based on the so-called aggregating semantics, which combines a statistical interpretation of probabilities with a subjective one. Knowledge bases of ALCME are interpreted over a fixed finite domain and based on their maximum entropy (ME) model. We prove that checking consistency of such knowledge bases can be done in time polynomial in the cardinality of the domain, and in exponential time in the size of a binary encoding of this cardinality. If the size of the knowledge base is also taken into account, the combined complexity of the consistency problem is NP-complete for unary encoding of the domain cardinality and NExpTime-complete for binary encoding.
@inproceedings{ BaEKW19, author = {Franz {Baader} and Andreas {Ecke} and Gabriele {Kern{-}Isberner} and Marco {Wilhelm}}, booktitle = {Proc. of the 12th International Symposium on Frontiers of Combining Systems ({FroCoS} 2019)}, editor = {Andreas {Herzig} and Andrei {Popescu}}, pages = {167--184}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {The Complexity of the Consistency Problem in the Probabilistic Description Logic {$\mathcal{ALC}^{\mathsf{ME}}$}}, volume = {11715}, year = {2019}, }
Abstract BibTeX Entry PDF File Publication
Recent research in the field of Description Logic (DL) investigated the complexity of the satisfiability problem for description logics that are obtained by enriching the well-known DL ALCQ with more complex set and cardinality constraints over role successors. The algorithms that have been proposed so far, despite providing worst-case optimal decision procedures for the concept satisfiability problem (both without and with a terminology), lack the efficiency needed to obtain usable implementations. In particular, the algorithm for the case without a terminology is non-deterministic, and the one for the case with a terminology is also best-case exponential. The goal of this thesis is to use well-established techniques from the field of numerical optimization, such as column generation, in order to obtain more practical algorithms. As a starting point, efficient approaches for dealing with counting quantifiers over unary predicates based on SAT-based column generation should be considered.
@thesis{ DeBo-Mas-19, address = {Dresden, Germany}, author = {Filippo {De Bortoli}}, school = {Technische Universit\"{a}t Dresden}, title = {Integrating Reasoning Services for Description Logics with Cardinality Constraints with Numerical Optimization Techniques}, type = {Master's Thesis}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
We present \(\mathcal{ALC}^{\mathsf{ME}}\), a probabilistic variant of the Description Logic \(\mathcal{ALC}\) that allows for representing and processing conditional statements of the form ``if \(E\) holds, then \(F\) follows with probability \(p\)'' under the principle of maximum entropy. Probabilities are understood as degrees of belief and formally interpreted by the aggregating semantics. We prove that both checking consistency and drawing inferences based on approximations of the maximum entropy distribution is possible in \(\mathcal{ALC}^{\mathsf{ME}}\) in time polynomial in the domain size. A major problem for probabilistic reasoning from such conditional knowledge bases is to count models and individuals. To achieve our complexity results, we develop sophisticated counting strategies on interpretations aggregated with respect to the so-called conditional impacts of types, which refine their conditional structure.
@inproceedings{ WiKeEcBa-JELIA19, author = {Marco {Wilhelm} and Gabriele {Kern-Isberner} and Andreas {Ecke} and Franz {Baader}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_28}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {434--449}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Counting Strategies for the Probabilistic Description Logic $\mathcal{ALC}^{\mathsf{ME}}$ Under the Principle of Maximum Entropy}}, volume = {11468}, year = {2019}, }
2017
Abstract BibTeX Entry PDF File
We introduce a new description logic that extends the well-known logic ALCQ by allowing the statement of constraints on role successors that are more general than the qualified number restrictions of ALCQ. To formulate these constraints, we use the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA), in which one can express Boolean combinations of set constraints and numerical constraints on the cardinalities of sets. Though our new logic is considerably more expressive than ALCQ, we are able to show that the complexity of reasoning in it is the same as in ALCQ, both without and with TBoxes.
@inproceedings{ Baader-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {43--59}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {A New Description Logic with Set Constraints and Cardinality Constraints on Role Successors}, volume = {10483}, year = {2017}, }
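To convey the added expressiveness (an illustrative example with invented names, loosely following the paper's notation): the \(\mathcal{ALCQ}\) number restriction \((\geqslant 2\ \mathsf{child}.\mathsf{Male})\) corresponds to the cardinality constraint \(|\mathsf{child} \cap \mathsf{Male}| \geq 2\) on the set of role successors, while \(\mathcal{ALCSCC}\) additionally allows constraints that relate such cardinalities to each other, e.g.

\[
\mathit{succ}\big(|\mathsf{child} \cap \mathsf{Male}| > |\mathsf{child} \cap \mathsf{Female}|\big),
\]

describing individuals with more sons than daughters, which is not expressible by qualified number restrictions alone.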
Abstract BibTeX Entry PDF File DOI
We introduce a new description logic that extends the well-known logic ALCQ by allowing the statement of constraints on role successors that are more general than the qualified number restrictions of ALCQ. To formulate these constraints, we use the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA), in which one can express Boolean combinations of set constraints and numerical constraints on the cardinalities of sets. Though our new logic is considerably more expressive than ALCQ, we are able to show that the complexity of reasoning in it is the same as in ALCQ, both without and with TBoxes.
@techreport{ Baad-LTCS-17-02, address = {Dresden, Germany}, author = {Franz {Baader}}, doi = {https://doi.org/10.25368/2022.232}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html}, number = {17-02}, title = {Concept Descriptions with Set Constraints and Cardinality Constraints}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File
We extend the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). In QFBAPA one can formulate Boolean combinations of inclusion constraints and numerical constraints on the cardinalities of sets. Our new formalism extends, on the one hand, so-called cardinality restrictions on concepts, which were introduced two decades ago, and on the other hand the recently introduced statistical knowledge bases. Though considerably more expressive, our formalism has the same complexity (NExpTime) as cardinality restrictions on concepts. We also introduce a restricted version of our formalism for which the complexity is ExpTime. This yields the previously unknown exact complexity of the consistency problem for statistical knowledge bases.
@inproceedings{ BaEc-GCAI17, author = {Franz {Baader} and Andreas {Ecke}}, booktitle = {{GCAI} 2017. 3rd Global Conference on Artificial Intelligence}, pages = {6--19}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Extending the Description Logic ALC with More Expressive Cardinality Constraints on Concepts}, volume = {50}, year = {2017}, }
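An invented example of such an extended cardinality constraint: the statistical statement "at least 90% of all birds fly" can be written as a QFBAPA constraint on the cardinalities of concept extensions,

\[
10 \cdot |\mathsf{Bird} \sqcap \mathsf{Flies}| \;\geq\; 9 \cdot |\mathsf{Bird}|,
\]

which goes beyond both classical cardinality restrictions (such as \(|\mathsf{Penguin}| \leq 17\)) and plain GCIs.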
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI (The final publication is available at link.springer.com) ©Springer International Publishing
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@inproceedings{ BaKoTu-FroCoS-17, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, booktitle = {Frontiers of Combining Systems: 11th International Symposium}, doi = {https://doi.org/10.1007/978-3-319-66167-4_5}, pages = {77--94}, publisher = {Springer International Publishing}, series = {Lecture Notes in Computer Science}, title = {Using Ontologies to Query Probabilistic Numerical Data}, volume = {10483}, year = {2017}, }
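A small numerical sketch of the underlying idea (invented names and parameters; the papers' query language and complexity analysis are far richer): with a continuous model of sensor uncertainty, the probability that a reading satisfies a comparison is an integral over a density rather than a lookup of a discrete probabilistic fact.

```python
# Sketch: the probability that an uncertain sensor reading exceeds a
# threshold, with the reading modelled as a normal distribution
# (distribution and parameters are invented for illustration).
from scipy.stats import norm

reading = norm(loc=38.2, scale=0.3)   # uncertain body temperature in °C
p_fever = reading.sf(38.5)            # P(temperature > 38.5), survival function
print(f"P(temperature > 38.5) = {p_fever:.3f}")
```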
Abstract BibTeX Entry PDF File DOI
We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.
@techreport{ BaKoTu-LTCS-17-05, address = {Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Anni-Yasmin {Turhan}}, doi = {https://doi.org/10.25368/2022.235}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {See \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-05}, title = {Using Ontologies to Query Probabilistic Numerical Data (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
Abstract BibTeX Entry PDF File
Maximum entropy reasoning (ME-reasoning) based on relational conditionals combines the capability of ME-distributions to express uncertain knowledge in a way that fits commonsense reasoning well with the great expressivity of an underlying first-order logic. The drawbacks of this approach are its high complexity, which is generally paired with a costly dependency on the domain size, and its non-transparency, owing to the absence of a priori independence assumptions such as those in Bayesian networks. In this paper we present some independence results for ME-reasoning based on the aggregating semantics for relational conditionals that help to disentangle the composition of ME-distributions, and therefore lead to a problem reduction and provide structural insights into ME-reasoning.
@inproceedings{ WiKeEc-GCAI17, author = {Marco {Wilhelm} and Gabriele {Kern-Isberner} and Andreas {Ecke}}, booktitle = {{GCAI} 2017. 3rd Global Conference on Artificial Intelligence}, pages = {36--50}, publisher = {EasyChair}, series = {EPiC Series in Computing}, title = {Basic Independence Results for Maximum Entropy Reasoning Based on Relational Conditionals}, volume = {50}, year = {2017}, }
B1: Automatic Generation of Description Logic-based Biomedical Ontologies
Ontologies such as the Gene Ontology and SNOMED CT play a major role in biology and medicine since they facilitate data integration and the consistent exchange of information between different entities. They can also be used to index and annotate data and literature, thus enabling efficient search and analysis. Unfortunately, creating the required ontologies manually is a complex, error-prone, time- and personnel-consuming effort. For this reason, approaches that try to learn ontologies automatically from text and data have been developed. The ontologies generated by these approaches are, however, usually not formal ontologies, i.e., the concepts learned by these approaches are not equipped with a formal definition. The goal of this project is to combine the expertise in ontology learning from text of Prof. Schroeder’s group with the Description Logic expertise of Prof. Baader’s group in order to develop approaches for learning Description Logic-based ontologies from text and data. The main idea is to apply non-standard Description Logic inferences developed in Prof. Baader’s group to the results of the ontology learning approach developed in Prof. Schroeder’s group in order to generate concept definitions and additional constraints (general concept inclusions). The envisioned approach is hybrid since the non-standard inferences will be modified so that they can take into account numerical information on the quality of the results produced by the ontology learning approaches.
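To give a flavour of the intended output (a textbook-style example, not a result produced by the project): from a textual definition such as "endocarditis is an inflammation of the endocardium", the envisioned approach should generate a formal concept definition like

\[
\mathsf{Endocarditis} \equiv \mathsf{Inflammation} \sqcap \exists\,\mathsf{hasLocation}.\mathsf{Endocardium},
\]

where the non-standard inferences additionally take into account confidence values attached to the extracted parts.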
Partners
- Prof. Dr.-Ing. Franz Baader
- Prof. Dr. Michael Schroeder
Publications and Technical Reports
2015
Abstract BibTeX Entry DOI
Background: Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions. Results: We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we analyze the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., restrictions on relations' domain and range, affect the quality of the generated definitions? (3) How do different machine learning algorithms compare to each other on the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm, and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable for formalizing textual definitions. Conclusions: The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval into complex reasoning. The method is implemented and publicly accessible at https://github.com/alifahsyamsiyah/learningDL.
@article{ Petrovaetal-BioSem, author = {Alina {Petrova} and Yue {Ma} and George {Tsatsaronis} and Maria {Kissa} and Felix {Distel} and Franz {Baader} and Michael {Schroeder}}, doi = {https://doi.org/10.1186/s13326-015-0015-3}, journal = {Journal of Biomedical Semantics}, number = {22}, title = {Formalizing Biomedical Concepts from Textual Definitions}, volume = {6}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File
We introduce the notion of an ontology excerpt as being a fixed-size subset of an ontology that preserves as much knowledge as possible about the terms in a given vocabulary as described in the ontology. We consider different extraction techniques for ontology excerpts based on methods from Information Retrieval. To evaluate these techniques, we measure the degree of incompleteness of the resulting excerpts using the notion of logical difference. We provide an experimental evaluation of the extraction techniques by applying them on the biomedical ontology SNOMED CT.
@inproceedings{ ChLuMaWa-DL14, author = {Jieying {Chen} and Michel {Ludwig} and Yue {Ma} and Dirk {Walther}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {471--482}, series = {CEUR Workshop Proceedings}, title = {Evaluation of Extraction Techniques for Ontology Excerpts}, volume = {1193}, year = {2014}, }
Abstract BibTeX Entry PDF File
The action programming language Golog has been found useful for the control of autonomous agents such as mobile robots. In scenarios like these, tasks are often open-ended so that the respective control programs are non-terminating. Before deploying such programs on a robot, it is often desirable to verify that they meet certain requirements. For this purpose, Claßen and Lakemeyer recently introduced algorithms for the verification of temporal properties of Golog programs. However, given the expressiveness of Golog, their verification procedures are not guaranteed to terminate. In this paper, we show how decidability can be obtained by suitably restricting the underlying base logic, the effect axioms for primitive actions, and the use of actions within Golog programs. Moreover, we show that dropping any of these restrictions immediately leads to undecidability of the verification problem.
@inproceedings{ ClaLLZ-AAAI14, address = {Quebec City, Quebec, Canada}, author = {Jens {Cla{\ss}en} and Martin {Liebenberg} and Gerhard {Lakemeyer} and Benjamin {Zarrie{\ss}}}, booktitle = {Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014)}, pages = {1012--1019}, publisher = {AAAI Press}, title = {Exploring the Boundaries of Decidable Verification of Non-Terminating Golog Programs}, type = {inproceedings}, year = {2014}, }
Abstract BibTeX Entry PDF File DOI
Ontology repair remains one of the main bottlenecks for the development of ontologies for practical use. Many automated methods have been developed for suggesting potential repairs, but ultimately human intervention is required for selecting the adequate one, and the human expert might be overwhelmed by the amount of information delivered to her. We propose a decomposition of ontologies into smaller components that can be repaired in parallel. We show the utility of our approach for ontology repair, provide algorithms for computing this decomposition through standard reasoning, and study the complexity of several associated problems.
@techreport{ MaPe-LTCS-14-05, address = {Dresden, Germany}, author = {Yue {Ma} and Rafael {Pe{\~n}aloza}}, doi = {https://doi.org/10.25368/2022.207}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See \url{http://lat.inf.tu-dresden.de/research/reports.html}.}, number = {14-05}, title = {Towards Parallel Repair Using Decompositions}, type = {LTCS-Report}, year = {2014}, }
Abstract BibTeX Entry PDF File
Ontology repair remains one of the main bottlenecks for the development of ontologies for practical use. Many automated methods have been developed for suggesting potential repairs, but ultimately human intervention is required for selecting the adequate one, and the human expert might be overwhelmed by the amount of information delivered to her. We propose a decomposition of ontologies into smaller components that can be repaired in parallel. We show the utility of our approach for ontology repair, provide algorithms for computing this decomposition through standard reasoning, and study the complexity of several associated problems.
@inproceedings{ MaPe-DL14, address = {Vienna, Austria}, author = {Yue {Ma} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 27th International Workshop on Description Logics ({DL'14})}, editor = {Meghyn {Bienvenu} and Magdalena {Ortiz} and Riccardo {Rosati} and Mantas {Simkus}}, pages = {633--645}, series = {CEUR Workshop Proceedings}, title = {Towards Parallel Repair: An Ontology Decomposition-based Approach}, volume = {1193}, year = {2014}, }
2013
Abstract BibTeX Entry
In recent years, approaches for extracting formal definitions from natural language have been developed. These approaches typically use methods from natural language processing, such as relation extraction or syntax parsing, and make only limited use of description logic reasoning. We propose a hybrid approach combining natural language processing methods and description logic reasoning. In a first step, description candidates are obtained using a natural language processing method; description logic reasoning is then used in a post-processing step to select good-quality candidate definitions. We identify the corresponding reasoning problem and examine its complexity.
@inproceedings{ DiMa-DL13, address = {Ulm, Germany}, author = {Felix {Distel} and Yue {Ma}}, booktitle = {Proceedings of the 2013 International Workshop on Description Logics ({DL'13})}, series = {CEUR-WS}, title = {A hybrid approach for learning SNOMED CT definitions from text}, year = {2013}, }
Abstract BibTeX Entry
Knowledge base metrics provide a useful way to analyze and compare knowledge bases. For example, inconsistency measurements have been proposed to distinguish different inconsistent knowledge bases. Whilst inconsistency degrees have been widely developed, the incompleteness of a knowledge base is rarely studied due to the difficulty of formalizing incompleteness. To this end, we propose an incompleteness degree based on multi-valued semantics and show that it satisfies some desired properties. Moreover, we develop an algorithm to compute the proposed metric by reducing the problem to an instance of the partial MaxSAT problem, so that we can benefit from highly optimized partial MaxSAT solvers. We finally examine the approach on a set of knowledge bases from real applications, which experimentally shows that the proposed incompleteness metric can be computed practically.
@inproceedings{ MaChang-Ecsquaru13, address = {Dresden, Germany}, author = {Yue {Ma} and Qingfeng {Chang}}, booktitle = {The 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty}, title = {Measuring Incompleteness under Multi-Valued Semantics by Partial MaxSAT Solvers}, year = {2013}, }
Abstract BibTeX Entry PDF File
A handful of natural language processing and machine learning approaches exist for extracting Description Logic concept definitions from natural language texts. Typically, for a single target concept several textual sentences are used, from which candidate concept descriptions are obtained. These candidate descriptions may have confidence values associated with them. In a final step, the candidates need to be combined into a single concept, in the easiest case by selecting a relevant subset and taking its conjunction. However, concept descriptions generated in this manner can contain false information, which is harmful when added to a formal knowledge base. In this paper, we claim that this can be improved by considering formal constraints that the target concept needs to satisfy. We first formalize a reasoning problem for the selection of relevant candidates and examine its computational complexity. Then, we show how it can be reduced to SAT, yielding a practical algorithm for its solution. Furthermore, we describe two ways to construct formal constraints: one automatic and the other interactive. Applying this approach to the SNOMED CT ontology construction scenario, we show that the proposed framework brings a visible benefit for SNOMED CT development.
@inproceedings{ MaDi-KCap13, author = {Yue {Ma} and Felix {Distel}}, booktitle = {Proceedings of the 7th International Conference on Knowledge Capture}, editor = {Mathieu {d'Aquin} and Andrew {Gordon}}, publisher = {ACM}, title = {Concept Adjustment for Description Logics}, year = {2013}, }
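The selection step described above can be pictured as a small combinatorial optimization problem: pick the subset of candidate conjuncts with maximal total confidence whose conjunction still satisfies the formal constraints. The toy sketch below (invented data; the paper reduces the problem to SAT, whereas plain brute force stands in for the solver here) illustrates the shape of the problem:

```python
# Choose the highest-confidence subset of candidate conjuncts whose
# conjunction passes a (stubbed) formal constraint check.
from itertools import combinations

candidates = {  # hypothetical candidate conjuncts with confidence values
    "Inflammation": 0.9,
    "hasLocation.Endocardium": 0.8,
    "hasLocation.Pericardium": 0.3,
}

def consistent(subset):
    # Stub for the DL constraint check; here: at most one location allowed.
    return sum(a.startswith("hasLocation.") for a in subset) <= 1

best = max(
    (s for r in range(len(candidates) + 1)
       for s in combinations(candidates, r) if consistent(s)),
    key=lambda s: sum(candidates[a] for a in s),
)
print("selected conjuncts:", best)
```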
Abstract BibTeX Entry PDF File ©Springer-Verlag (The final publication is available at link.springer.com)
Snomed CT is a widely used medical ontology which is formally expressed in a fragment of the Description Logic EL++. The underlying logic allows for expressive querying, yet makes it costly to maintain and extend the ontology. In this paper we present an approach for the extraction of Snomed CT definitions from natural language text. We test and evaluate the approach using two types of texts.
@inproceedings{ MaDi-AIME13, author = {Yue {Ma} and Felix {Distel}}, booktitle = {Artificial Intelligence in Medicine}, editor = {Niels {Peek} and Roque {Mar{\'i}n Morales} and Mor {Peleg}}, pages = {73--77}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Learning Formal Definitions for Snomed CT from Text}, volume = {7885}, year = {2013}, }
Abstract BibTeX Entry PDF File DOI
Snomed CT is a widely used medical ontology which is formally expressed in a fragment of the Description Logic EL++. The underlying logic allows for expressive querying, yet makes it costly to maintain and extend the ontology. Existing approaches for ontology generation mostly focus on learning superclass or subclass relations and therefore cannot be used to generate Snomed CT definitions. In this paper, we present an approach for the extraction of Snomed CT definitions from natural language texts, based on a distant-supervision approach to relation extraction. Benefiting from a relatively large amount of textual data for the medical domain and the rich content of Snomed CT, such an approach has the advantage that no manually labelled corpus is required. We also show that the type information for Snomed CT concepts is an important feature to be examined for such a system. We test and evaluate the approach using two types of texts. Experimental results show that the proposed approach is promising for assisting Snomed CT development.
@techreport{ MaDi-LTCS-13-03, address = {Dresden, Germany}, author = {Yue {Ma} and Felix {Distel}}, doi = {https://doi.org/10.25368/2022.193}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {See http://lat.inf.tu-dresden.de/research/reports.html.}, number = {13-03}, title = {{Learning Formal Definitions for Snomed CT from Text}}, type = {LTCS-Report}, year = {2013}, }
Abstract BibTeX Entry
As Big Data becomes increasingly helpful for different applications, the problem of obtaining reliable data becomes important. This importance is even more evident for domain-specific applications because of their abstruse domain knowledge. Most Big Data-based techniques manipulate datasets directly, under the assumption that data quantity alone leads to good system quality. In this paper, we show that quality can be improved by automatically enriching a given dataset with more high-quality data beforehand. This is achieved by a tractable reasoning technique over the widely used biomedical ontology SNOMED CT. Our approach is evaluated in the scenario of formal definition generation from natural language texts, where the average precision of learned definitions is improved by 5.3%.
@inproceedings{ MaMe-AIBD13, address = {Dresden, Germany}, author = {Yue {Ma} and Julian {Mendez}}, booktitle = {International Workshop on {A}rtificial {I}ntelligence for {B}ig {D}ata (in conjunction with IJCAI'13)}, note = {To appear.}, title = {High Quality Data Generation: An Ontology Reasoning based Approach}, year = {2013}, }
Completed Research Projects
More and more information on individuals (e.g., persons, events, biological objects) is available electronically in structured or semi-structured form. Manually selecting individuals that satisfy certain constraints based on such data is a complex, error-prone, time- and personnel-consuming effort. For this reason, tools need to be developed that can automatically or semi-automatically answer questions based on the available data. While simple questions can be expressed and answered directly using natural-language keywords, complex questions that refer to type and relational information increase the precision of the retrieved results, and thus reduce the effort of subsequent manual verification. One example of this situation is the setting where electronic patient records are used to find patients satisfying non-trivial combinations of certain properties, such as eligibility criteria for clinical trials. Another example, which will also be considered as a use case in this project, is the setting where a student asks the examination office questions about study and examination regulations. In both cases, the original question is formulated in natural language.

In the GoAsq project, we will investigate, compare, and finally combine two different approaches for answering questions formulated in natural language over textual, semi-structured, and structured data. One approach uses the expertise in text-based question answering of the French partners to answer natural language questions directly using natural language processing and information retrieval techniques. The other tries to translate the natural language questions into formal, database-like queries and then answer these formal queries w.r.t. a domain-dependent ontology using database techniques. The automatic translation is required since it would be quite hard for the people asking the questions (medical doctors, students) to formulate them as formal queries. The ontology makes it possible to overcome the semantic mismatch between the person producing the source data (e.g., the GPs writing the clinical notes) and the person formulating the question (e.g., the researcher formulating the trial criteria). GoAsq can thus leverage recent advances obtained in the ontology community on accessing data through ontologies, called ontology-based query answering (OBQA).

More precisely, in Task 1 of the project we investigate the two use cases mentioned above (eligibility criteria; study regulations). In Task 2 we will introduce and analyze extensions to existing formal query languages that are required by these use cases. Task 3 will develop techniques for extracting formal queries from textual queries, and Task 4 will evaluate the approach obtained this way, compare it with approaches for text-based question answering, and develop a hybrid approach that combines the advantages of both.
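To make the target of the translation concrete (an invented example in the spirit of the eligibility-criteria use case): a criterion such as "female patients diagnosed with diabetes" would correspond to a conjunctive query like

\[
q(x) \leftarrow \mathsf{Patient}(x) \wedge \mathsf{Female}(x) \wedge \mathsf{diagnosedWith}(x,y) \wedge \mathsf{Diabetes}(y),
\]

answered over the patient data together with a medical ontology that, for instance, classifies "type 2 diabetes mellitus" as a kind of diabetes.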
Publications & Technical Reports
2021
Abstract BibTeX Entry PDF File DOI
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumption, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. Many real-world applications, like processing electronic health records (EHRs), also contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. Our contribution consists of two main parts: In the first part we introduce a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_{\bot}\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity. In the second part, we extend this minimal-world semantics for answering metric temporal conjunctive queries with negation over the lightweight temporal logic \(\mathcal{TELH}_{\bot}^{♢c,lhs,-}\) and obtain similar rewritability and complexity results.
@article{ BoFK-TPLP21, author = {Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova}}, doi = {https://doi.org/10.1017/S1471068421000119}, journal = {Theory and Practice of Logic Programming}, title = {Temporal Minimal-World Query Answering over Sparse {AB}oxes}, year = {2021}, }
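A hedged illustration of why negation is delicate here (invented example): the query

\[
q(x) \leftarrow \mathsf{Patient}(x) \wedge \neg\,\mathsf{diagnosedWith}(x, \mathsf{kidneyDisease})
\]

asks for patients with no recorded kidney disease. Under the usual open-world certain-answer semantics such a query typically returns no answers, since a model in which the diagnosis holds can rarely be excluded; the minimal-world semantics above instead evaluates negated atoms in the minimal canonical model, so consequences of the ontology (including those involving anonymous objects) still count against an answer, while mere possibilities do not.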
2020
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow us to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining description logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there is a growing interest in extending this work to the quantitative case. In this article, we complement existing results on the combination of DLs with metric temporal logics by introducing interval-rigid concept and role names. Elements included in an interval-rigid concept or role name are required to stay in it for some specified amount of time. We investigate several combinations of (metric) temporal logics with ALC by either allowing temporal operators only on the level of axioms or also applying them to concepts. In contrast to most existing work on the topic, we consider a timeline based on the integers and also allow assertional axioms. We show that the worst-case complexity does not increase beyond the previously known bound of 2-ExpSpace and investigate in detail how this complexity can be reduced by restricting the temporal logic and the occurrences of interval-rigid names.
@article{ BaBoKoOzTh-TOCL20, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.1145/3399443}, journal = {ACM Transactions on Computational Logic}, month = {August}, number = {4}, pages = {30:1--30:46}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {21}, year = {2020}, }
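An invented example in the spirit of the article: declaring the concept name \(\mathsf{Inpatient}\) interval-rigid with duration 3 forces every element of \(\mathsf{Inpatient}\) to remain in it for at least 3 consecutive time points, and a metric axiom such as

\[
\mathsf{ConfirmedInfection} \sqsubseteq \lozenge_{[0,7]}\,\mathsf{Treated}
\]

requires that treatment follows a confirmed infection within 7 time points, rather than merely eventually.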
Abstract BibTeX Entry PDF File Publication
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumption, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. Many real-world applications, like processing electronic health records (EHRs), also contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. Our contribution consists of three main parts: Firstly, we introduce a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH⊥, which is based on the minimal universal model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity. Secondly, we introduce a new temporal variant of ELH⊥ that features a convexity operator. We extend this minimal-world semantics for answering metric temporal conjunctive queries with negation over the logic and obtain similar rewritability and complexity results. Thirdly, apart from the theoretical results, we evaluate minimal-world semantics in practice by selecting patients, based on their EHRs, that match given criteria.
@thesis{ Forkel-Diss-20, address = {Dresden, Germany}, author = {Walter {Forkel}}, school = {Technische Universit\"{a}t Dresden}, title = {Closed-World Semantics for Query Answering in Temporal Description Logics}, type = {Doctoral Thesis}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File
Selecting patients for clinical trials is very labor-intensive. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. In this paper we summarize our recent advances towards such a system: First, we present the challenges involved when representing electronic health records and eligibility criteria for clinical trials in a formal language. Second, we introduce temporal conjunctive queries with negation as a formal language suitable to represent clinical trials. Third, we describe our methodology for automatic translation of clinical trial eligibility criteria from natural language into our query language. The evaluation of our prototypical implementation shows promising results. Finally, we discuss the parts we are currently working on and the challenges involved.
@inproceedings{ BBFKXZHQA19, address = {Marina del Rey, USA}, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova} and Chao {Xu} and Beihai {Zhou}}, booktitle = {2nd International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'19)}, editor = {Franz {Baader} and Brigitte {Grau} and Yue {Ma}}, title = {Temporalized Ontology-Mediated Query Answering under Minimal-World Semantics}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.
@techreport{ BoCL-LTCS-19-10, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, doi = {https://doi.org/10.25368/2023.221}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {19-10}, title = {Ontology-Mediated Query Answering over Log-Linear Probabilistic Data (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI
Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.
@inproceedings{ BoCL-AAAI19, address = {Honolulu, USA}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI'19)}, doi = {https://doi.org/10.1609/aaai.v33i01.33012711}, editor = {Pascal Van {Hentenryck} and Zhi-Hua {Zhou}}, pages = {2711--2718}, publisher = {AAAI Press}, title = {Ontology-Mediated Query Answering over Log-linear Probabilistic Data}, year = {2019}, }
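A minimal sketch of the baseline that these papers extend (a tuple-independent probabilistic database is assumed here for simplicity; the papers employ a more general log-linear model, and real systems avoid the exponential enumeration below): the probability of a Boolean query is the total weight of the possible worlds that satisfy it.

```python
# Brute-force probability of a Boolean query over a tuple-independent
# probabilistic database (toy, invented data; exponential in the number
# of facts, for illustration only).
from itertools import product

facts = [
    (("WorksFor", "ada", "acme"), 0.9),
    (("WorksFor", "bob", "acme"), 0.4),
    (("Manager", "ada"), 0.7),
]

def holds(world):
    # Boolean query q: exists x. Manager(x) and WorksFor(x, "acme")
    return any(f[0] == "WorksFor" and f[2] == "acme" and ("Manager", f[1]) in world
               for f in world)

p_query = 0.0
for bits in product([0, 1], repeat=len(facts)):
    world = {fact for (fact, _), b in zip(facts, bits) if b}
    weight = 1.0
    for (_, p), b in zip(facts, bits):
        weight *= p if b else 1.0 - p
    if holds(world):
        p_query += weight

print(f"P(q) = {p_query:.4f}")   # 0.9 * 0.7 = 0.63 for this toy instance
```

Ontology-mediated query answering adds implicit facts on top of such a model, e.g. an axiom stating that every manager is an employee would let a query about employees match worlds containing only the Manager fact.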
Abstract BibTeX Entry PDF File
Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.
@inproceedings{ BoCL-DL19, address = {Oslo, Norway}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Ontology-Mediated Query Answering over Log-linear Probabilistic Data (Abstract)}, volume = {2373}, year = {2019}, }
BibTeX Entry PDF File DOI
@inproceedings{ BoFo-IJCAI19, address = {Macao, China}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19)}, doi = {https://doi.org/10.24963/ijcai.2019/849}, editor = {Sarit {Kraus}}, note = {Invited contribution to Sister Conference Best Paper Track.}, pages = {6131--6135}, publisher = {IJCAI}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI ©Springer-Verlag
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumptions, and are therefore not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_\bot\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity.
@inproceedings{ BoFo-JELIA19, address = {Rende, Italy}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 16th European Conf.\ on Logics in Artificial Intelligence (JELIA'19)}, doi = {https://doi.org/10.1007/978-3-030-19570-0_24}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {371--386}, publisher = {Springer}, series = {Lecture Notes in Artificial Intelligence}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies}, volume = {11468}, year = {2019}, }
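For intuition, a conjunctive query with a negated atom of the kind treated in this work could look as follows (a toy example of ours; the concept and role names are not taken from the paper):
\[ q(x) \;=\; \exists y.\; \mathsf{Patient}(x) \wedge \mathsf{hasDiagnosis}(x,y) \wedge \neg\,\mathsf{Diabetes}(y). \]
Under the proposed semantics, the negated atom is evaluated closed-world in the minimal canonical model, so anonymous objects introduced by existential axioms are still handled.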
Abstract BibTeX Entry PDF File
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumptions, and are therefore not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_\bot\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity.
@inproceedings{ BoFo-DL19, address = {Oslo, Norway}, author = {Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proceedings of the 32nd International Workshop on Description Logics (DL'19)}, editor = {Mantas {Simkus} and Grant {Weddell}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies (Extended Abstract)}, volume = {2373}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumptions, and are therefore not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic \(\mathcal{ELH}_\bot\), which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity.
@techreport{ BoFo-LTCS-19-11, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Walter {Forkel}}, doi = {https://doi.org/10.25368/2023.222}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {19-11}, title = {Closed-World Semantics for Conjunctive Queries with Negation over {$\mathcal{ELH}_\bot$} Ontologies (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File PDF File (Extended Technical Report) DOI ©Springer-Verlag
Lightweight temporal ontology languages have become a very active field of research in recent years. Many real-world applications, like processing electronic health records (EHRs), inherently contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. In this paper, we introduce a temporal extension of the tractable language \(\mathcal{ELH}_\bot\), which features a new class of convex diamond operators that can be used to bridge temporal gaps. We develop a completion algorithm for our logic, which shows that entailment remains tractable. Based on this, we develop a minimal-world semantics for answering metric temporal conjunctive queries with negation. We show that query answering is combined first-order rewritable, and hence in polynomial time in data complexity.
@inproceedings{ BoFoKo-RRML, address = {Bolzano, Italy}, author = {Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova}}, booktitle = {Proc.\ of the 3rd International Joint Conference on Rules and Reasoning (RuleML+RR'19)}, doi = {https://doi.org/10.1007/978-3-030-31095-0_1}, editor = {Paul {Fodor} and Marco {Montali} and Diego {Calvanese} and Dumitru {Roman}}, pages = {3--18}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Finding New Diamonds: {T}emporal Minimal-World Query Answering over Sparse {AB}oxes}, volume = {11784}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Lightweight temporal ontology languages have become a very active field of research in recent years. Many real-world applications, like processing electronic health records (EHRs), inherently contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. In this paper, we introduce a temporal extension of the tractable language \(\mathcal{ELH}_\bot\), which features a new class of convex diamond operators that can be used to bridge temporal gaps. We develop a completion algorithm for our logic, which shows that entailment remains tractable. Based on this, we develop a minimal-world semantics for answering metric temporal conjunctive queries with negation. We show that query answering is combined first-order rewritable, and hence in polynomial time in data complexity.
@techreport{ BoFK-LTCS-19-12, address = {Dresden, Germany}, author = {Stefan {Borgwardt} and Walter {Forkel} and Alisa {Kovtunova}}, doi = {https://doi.org/10.25368/2023.223}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {19-12}, title = {Finding New Diamonds: {T}emporal Minimal-World Query Answering over Sparse {AB}oxes (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
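As a toy illustration of how a metric diamond can bridge gaps in sparse data (our own example and notation, intended only to convey the flavor): a concept inclusion such as
\[ \Diamond_{[0,3]}\,\exists \mathsf{hasDiagnosis}.\mathsf{Diabetes} \;\sqsubseteq\; \mathsf{DiabeticPatient} \]
would state that a patient whose diabetes diagnosis was recorded at most three time points ago still counts as diabetic now, even if no data exists for the intermediate time points.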
Abstract BibTeX Entry PDF File
Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.
@inproceedings{ XFB-ODLS15, address = {Graz, Austria}, author = {Chao {Xu} and Walter {Forkel} and Stefan {Borgwardt} and Franz {Baader} and Beihai {Zhou}}, booktitle = {Proc.\ of the 9th Workshop on Ontologies and Data in Life Sciences (ODLS'19), part of The Joint Ontology Workshops (JOWO'19)}, editor = {Martin {Boeker} and Ludger {Jansen} and Frank {Loebe} and Stefan {Schulz}}, series = {CEUR Workshop Proceedings}, title = {Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries}, volume = {2518}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI
Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.
@techreport{ XFBBZ-LTCS-19-13, address = {Dresden, Germany}, author = {Chao {Xu} and Walter {Forkel} and Stefan {Borgwardt} and Franz {Baader} and Beihai {Zhou}}, doi = {https://doi.org/10.25368/2023.224}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {19-13}, title = {Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries (Extended Version)}, type = {LTCS-Report}, year = {2019}, }
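A hypothetical example of the kind of translation targeted here (the criterion is invented, and the concept, role, and feature names are ours): the inclusion criterion "female patients aged 18 or older" could be mapped to the query
\[ q(x) \;=\; \mathsf{Patient}(x) \wedge \mathsf{Female}(x) \wedge \exists v.\,\bigl(\mathsf{hasAge}(x,v) \wedge v \ge 18\bigr). \]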
2018
Abstract BibTeX Entry PDF File DOI
Finding suitable candidates for clinical trials is a labor-intensive task that requires expert medical knowledge. Our goal is to design (semi-)automated techniques that can support clinical researchers in this task. We investigate the issues involved in designing formal query languages for selecting patients that are eligible for a given clinical trial, leveraging existing ontology-based query answering techniques. In particular, we propose to use a temporal extension of existing approaches for accessing data through ontologies written in Description Logics. We sketch how such a query answering system could work and show that eligibility criteria and patient data can be adequately modeled in our formalism.
@inproceedings{ BaBF-HQA18, author = {Franz {Baader} and Stefan {Borgwardt} and Walter {Forkel}}, booktitle = {Proc.\ of the 1st Int.\ Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge (HQA'18), Companion of The Web Conference 2018}, doi = {https://doi.org/10.1145/3184558.3191538}, pages = {1069--1074}, publisher = {ACM}, title = {Patient Selection for Clinical Trials Using Temporalized Ontology-Mediated Query Answering}, year = {2018}, }
Abstract BibTeX Entry PDF File
We give a survey on recent advances at the forefront of research on probabilistic knowledge bases for representing and querying large-scale automatically extracted data. We concentrate especially on increasing the semantic expressivity of formalisms for representing and querying probabilistic knowledge (i) by giving up the closed-world assumption, (ii) by allowing for commonsense knowledge (and in parallel giving up the tuple-independence assumption), and (iii) by giving up the closed-domain assumption, while preserving some computational properties of query answering in such formalisms.
@inproceedings{ BoCL-IJCAI18, address = {Stockholm, Sweden}, author = {Stefan {Borgwardt} and Ismail Ilkan {Ceylan} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI'18)}, editor = {J{\'e}r{\^o}me {Lang}}, note = {Survey paper.}, pages = {5420--5426}, publisher = {IJCAI}, title = {Recent Advances in Querying Probabilistic Knowledge Bases}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File ©Springer-Verlag
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This allows one to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@inproceedings{ BaBoKoOzTh-FroCoS17, address = {Bras{\'i}lia, Brazil}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)}, editor = {Clare {Dixon} and Marcelo {Finger}}, pages = {60--76}, series = {Lecture Notes in Computer Science}, title = {Metric Temporal Description Logics with Interval-Rigid Names}, volume = {10483}, year = {2017}, }
Abstract BibTeX Entry PDF File
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This allows one to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@inproceedings{ BBK+-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This allows one to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.
@techreport{ BaBoKoOzTh-LTCS-17-03, address = {Dresden, Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Patrick {Koopmann} and Ana {Ozaki} and Veronika {Thost}}, doi = {https://doi.org/10.25368/2022.233}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-03}, title = {Metric Temporal Description Logics with Interval-Rigid Names (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
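Roughly, interval-rigidity can be stated as follows (our paraphrase of the idea, not a verbatim definition from the paper): if \(A\) is interval-rigid with duration \(k\), then every element of \(A\) belongs to \(A\) for at least \(k\) consecutive time points, i.e.,
\[ d \in A^{\mathcal{I}_i} \;\Longrightarrow\; \exists j \le i:\; i < j + k \ \text{ and } \ d \in A^{\mathcal{I}_t} \ \text{for all } t \in \{j, \dots, j+k-1\}. \]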
Abstract BibTeX Entry PDF File ©IJCAI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@inproceedings{ BaBL-IJCAI17, address = {Melbourne, Australia}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {786--792}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains}, year = {2017}, }
BibTeX Entry PDF File
@inproceedings{ BaBL-DL17, address = {Montpellier, France}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Abstract)}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI
We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
@techreport{ BaBL-LTCS-17-04, address = {Germany}, author = {Franz {Baader} and Stefan {Borgwardt} and Marcel {Lippmann}}, doi = {https://doi.org/10.25368/2022.234}, institution = {Chair for Automata Theory, Technische Universit{\"a}t Dresden}, note = {see \url{https://lat.inf.tu-dresden.de/research/reports.html}}, number = {17-04}, title = {Query Rewriting for \textit{{DL-Lite}} with {$n$}-ary Concrete Domains (Extended Version)}, type = {LTCS-Report}, year = {2017}, }
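For illustration (the names are ours), the move from unary to \(n\)-ary built-in predicates allows queries that compare two concrete values of the same individual, such as
\[ q(x) \;=\; \exists v_1, v_2.\; \mathsf{systolicBP}(x, v_1) \wedge \mathsf{diastolicBP}(x, v_2) \wedge v_1 > v_2, \]
where \(>\) is a binary predicate of the concrete domain; with only unary predicates, each value could be compared against fixed constants, but not against another value.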
Abstract BibTeX Entry PDF File DOI
Fuzzy Description Logics (DLs) are a family of knowledge representation formalisms designed to represent and reason about vague and imprecise knowledge that is inherent to many application domains. Previous work has shown that the complexity of reasoning in a fuzzy DL using finitely many truth degrees is usually not higher than that of the underlying classical DL. We show that this does not hold for fuzzy extensions of the light-weight DL \(\mathcal{EL}\), which is used in many biomedical ontologies, under the finitely valued Łukasiewicz semantics. More precisely, the complexity of reasoning increases from P to ExpTime, even if only one additional truth value is introduced. When adding complex role inclusions and inverse roles, the logic even becomes undecidable. Even more surprisingly, when considering the infinitely valued Łukasiewicz semantics, reasoning in fuzzy \(\mathcal{EL}\) is undecidable.
@article{ BoCP-IJAR17, author = {Stefan {Borgwardt} and Marco {Cerami} and Rafael {Pe{\~n}aloza}}, doi = {http://dx.doi.org/10.1016/j.ijar.2017.09.005}, journal = {International Journal of Approximate Reasoning}, pages = {179--201}, title = {The Complexity of Fuzzy {$\mathcal{EL}$} under the {L}ukasiewicz T-norm}, volume = {91}, year = {2017}, }
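For reference, the Łukasiewicz operations on the unit interval mentioned above are the standard ones:
\[ x \otimes y = \max(0,\, x + y - 1), \qquad x \Rightarrow y = \min(1,\, 1 - x + y). \]
In particular, conjunction is not idempotent (\(x \otimes x < x\) for \(0 < x < 1\)), which is one source of the hardness phenomena discussed in these papers.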
Abstract BibTeX Entry PDF File
Fuzzy Description Logics have been proposed as formalisms for representing and reasoning about imprecise knowledge by introducing intermediate truth degrees. Unfortunately, it has been shown that reasoning in these logics easily becomes undecidable when infinitely many truth degrees are considered and conjunction is not idempotent. In this paper, we take those results to the extreme, and show that subsumption in fuzzy \(\mathcal{EL}\) under Łukasiewicz semantics is undecidable. This provides the first instance of a Horn-style logic with polynomial-time reasoning whose fuzzy extension becomes undecidable.
@inproceedings{ BoCP-DL17, address = {Montpellier, France}, author = {Stefan {Borgwardt} and Marco {Cerami} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {{L}ukasiewicz Fuzzy {$\mathcal{EL}$} is Undecidable}, volume = {1879}, year = {2017}, }
Abstract BibTeX Entry PDF File ©IJCAI
Forming the foundations of large-scale knowledge bases, probabilistic databases have been widely studied in the literature. In particular, probabilistic query evaluation has been investigated intensively as a central inference mechanism. However, despite its power, query evaluation alone cannot extract all the relevant information encompassed in large-scale knowledge bases. To exploit this potential, we study two inference tasks; namely finding the most probable database and the most probable hypothesis for a given query. As natural counterparts of most probable explanations (MPE) and maximum a posteriori hypotheses (MAP) in probabilistic graphical models, they can be used in a variety of applications that involve prediction or diagnosis tasks. We investigate these problems relative to a variety of query languages, ranging from conjunctive queries to ontology-mediated queries, and provide a detailed complexity analysis.
@inproceedings{ CeBL-IJCAI17, address = {Melbourne, Australia}, author = {Ismail Ilkan {Ceylan} and Stefan {Borgwardt} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)}, editor = {Carles {Sierra}}, pages = {950--956}, title = {Most Probable Explanations for Probabilistic Database Queries}, year = {2017}, }
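Schematically (our notation), for a query \(q\) over a probabilistic database inducing a distribution \(P\) on possible worlds, the first task asks for
\[ \mathrm{MPE}(q) \;=\; \operatorname*{arg\,max}_{W \,\models\, q} P(W), \]
i.e., a most probable database that satisfies \(q\); the second task asks analogously for a most probable hypothesis, maximizing over partial instantiations rather than complete databases (again, our paraphrase of the setup).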
BibTeX Entry PDF File
@inproceedings{ CeBL-DL17, address = {Montpellier, France}, author = {Ismail Ilkan {Ceylan} and Stefan {Borgwardt} and Thomas {Lukasiewicz}}, booktitle = {Proceedings of the 30th International Workshop on Description Logics (DL'17)}, editor = {Alessandro {Artale} and Birte {Glimm} and Roman {Kontchakov}}, publisher = {CEUR-WS}, series = {CEUR Workshop Proceedings}, title = {Most Probable Explanations for Probabilistic Database Queries (Extended Abstract)}, volume = {1879}, year = {2017}, }
- Doctoral Project: Construction and Extension of Description Logic Knowledge Bases with Methods of Formal Concept Analysis
- Principal Investigator: Prof. Dr.-Ing. Franz Baader
- Involved Person: Dipl.-Math. Francesco Kriegel
- Start Date: October 2013
- Funded by Technische Universität Dresden
Description
Description Logics (DLs) are a formally well-founded and frequently used family of knowledge representation languages, which provide their users with automated inference services that can derive implicit knowledge from the explicitly represented knowledge. They have been used in various application domains, but their most notable success is the fact that they provide the logical underpinning of the Web Ontology Language (OWL) and its various dialects.
Formal Concept Analysis (FCA) is a successful field of applied algebra, which uses lattice theory to extract knowledge from data represented by tables (called formal contexts). This approach has been utilized in many application domains such as conceptual clustering, data visualization, automatic and semi-automatic knowledge acquisition, etc. In spite of the different notions of concepts used in FCA and in DLs, there has been a very fruitful interaction between these two research areas.
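As a minimal illustration of these notions (our own toy example): consider a formal context with objects \(o_1, o_2\) and attributes \(a, b, c\):
\[ \begin{array}{c|ccc} & a & b & c \\ \hline o_1 & \times & \times & \\ o_2 & \times & \times & \times \end{array} \]
Here the implication \(\{c\} \to \{a, b\}\) is valid, since every object having \(c\) also has \(a\) and \(b\), whereas \(\{a\} \to \{c\}\) is not, as witnessed by \(o_1\).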
In this project, I describe how methods from FCA can be used to support the (semi-)automatic construction and extension of terminological knowledge, represented in lightweight DLs, from data.
Publications and Technical Reports
2024
Abstract BibTeX Entry PDF File DOI ©AAAI Extended Version Addendum
We present an FCA-based axiomatization method that produces a complete \(\mathcal{EL}\) TBox (the terminological part of an OWL 2 EL ontology) from a graph dataset in at most exponential time. We describe technical details that allow for an efficient implementation, as well as variations that dispense with the computation of extremely large axioms, thereby rendering the approach applicable, albeit at the cost of some completeness. Moreover, we evaluate the prototype on real-world datasets.
@inproceedings{ Kr-AAAI2024, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence {(AAAI 2024)}, February 20--27, 2024, Vancouver, Canada}, doi = {https://doi.org/10.1609/aaai.v38i9.28930}, pages = {10597--10606}, title = {Efficient Axiomatization of OWL 2 EL Ontologies from Data by means of Formal Concept Analysis}, year = {2024}, }
BibTeX Entry PDF File PDF File (ceur-ws.org) Full Conference Paper
@inproceedings{ Kr-DL2024, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 37th International Workshop on Description Logics ({DL} 2024), Bergen, Norway, June 18--21, 2024}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Efficient Axiomatization of OWL 2 EL Ontologies from Data by means of Formal Concept Analysis (Extended Abstract)}, volume = {3739}, year = {2024}, }
2023
Abstract BibTeX Entry PDF File DOI Conference Article Addendum
We present an FCA-based axiomatization method that produces a complete \(\mathcal{EL}\) TBox (the terminological part of an OWL 2 EL ontology) from a graph dataset in at most exponential time. We describe technical details that allow for an efficient implementation, as well as variations that dispense with the computation of extremely large axioms, thereby rendering the approach applicable, albeit at the cost of some completeness. Moreover, we evaluate the prototype on real-world datasets.
@techreport{ Kr-LTCS-23-01, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2023.214}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {23-01}, title = {Efficient Axiomatization of OWL 2 EL Ontologies from Data by means of Formal Concept Analysis (Extended Version)}, type = {LTCS-Report}, year = {2023}, }
2020
Abstract BibTeX Entry PDF File DOI Doctoral Thesis
My thesis describes how methods from Formal Concept Analysis can be used for constructing and extending description logic ontologies. In particular, it is shown how concept inclusions can be axiomatized from data in the description logics \(\mathcal{E\mkern-1.618mu L}\), \(\mathcal{M}\), \(\textsf{Horn-}\mathcal{M}\), and \(\textsf{Prob-}\mathcal{E\mkern-1.618mu L}\). All proposed methods are not only sound but also complete, i.e., the result not only consists of valid concept inclusions but also entails each valid concept inclusion. Moreover, a lattice-theoretic view on the description logic \(\mathcal{E\mkern-1.618mu L}\) is provided. For instance, it is shown how upper and lower neighbors of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions can be computed and further it is proven that the set of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions forms a graded lattice with a non-elementary rank function.
@article{ Kr-KI20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1007/s13218-020-00673-8}, journal = {KI - K{\"{u}}nstliche Intelligenz}, pages = {399--403}, title = {{Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis: A Dissertation Summary}}, volume = {34}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI ©Elsevier
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern-1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.
@article{ Kr-DAM20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1016/j.dam.2019.01.029}, journal = {Discrete Applied Mathematics}, pages = {172--204}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern-1.618mu L}$}}, volume = {273}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File Publication Summary
Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large part: there is always a trade-off between expressivity and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that they constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web.
Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that allows one to analyze data sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data set in the form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which moreover contains a minimal number of implications among all sound and complete implication sets.
In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
@thesis{ Kriegel-Diss-2019, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, school = {Technische Universit\"{a}t Dresden}, title = {Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis}, type = {Doctoral Thesis}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI Conference Article
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in the form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@techreport{ Kr-LTCS-19-02, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.251}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-19-02}}, number = {19-02}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in the form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@inproceedings{ Kr-ICFCA19, author = {Francesco {Kriegel}}, booktitle = {15th International Conference on Formal Concept Analysis, {ICFCA} 2019, Frankfurt, Germany, June 25-28, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-21462-3_9}, editor = {Diana {Cristea} and Florence {Le Ber} and Bar\i{}\c{s} {Sertkaya}}, pages = {110--129}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic}}, volume = {11511}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge, without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. To resolve this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs, the vertices and edges of which are labeled, and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern-1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@inproceedings{ Kr-JELIA19, author = {Francesco {Kriegel}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_26}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {399--417}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs}}, volume = {11468}, year = {2019}, }
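In symbols (our notation, following the repeated-experiment reading above), a probabilistic interpretation can be written as
\[ \mathcal{I} \;=\; \bigl(\Delta, (\cdot^{\mathcal{I}_w})_{w \in W}, \mu\bigr), \qquad \mu \colon W \to [0,1], \quad \sum_{w \in W} \mu(w) = 1, \]
where each world \(w\) yields an ordinary interpretation over the shared domain \(\Delta\), and \(\mu\) assigns a probability to each world.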
2018
Abstract BibTeX Entry PDF File DOI ©Springer Extended Version
For a probabilistic extension of the description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs, the vertices and edges of which are labeled, and on which a discrete probability measure is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed, and its soundness and completeness are justified.
@inproceedings{ Kr-KI18, address = {Berlin, Germany}, author = {Francesco {Kriegel}}, booktitle = {{{KI} 2018: Advances in Artificial Intelligence - 41st German Conference on AI, Berlin, Germany, September 24-28, 2018, Proceedings}}, doi = {https://doi.org/10.1007/978-3-030-00111-7_5}, editor = {Frank {Trollmann} and Anni-Yasmin {Turhan}}, pages = {46--53}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Acquisition of Terminological Knowledge in Probabilistic Description Logic}}, volume = {11117}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Conference Article
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge, without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. To resolve this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs, the vertices and edges of which are labeled, and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern-1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@techreport{ Kr-LTCS-18-12, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.247}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-12}}, number = {18-12}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Journal Article
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern-1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.
@techreport{ Kr-LTCS-18-11, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.246}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-11}, accepted for publication in {Discrete Applied Mathematics}}, number = {18-11}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern-1.618mu L}$}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Conference Article
For a probabilistic extension of the description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs, the vertices and edges of which are labeled, and on which a discrete probability measure is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed, and its soundness and completeness are justified.
@techreport{ Kr-LTCS-18-03, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.239}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-03}}, number = {18-03}, title = {{Terminological Knowledge Acquisition in Probabilistic Description Logic}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI Conference Article
For the description logic \(\mathcal{E\mkern-1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@techreport{ Kr-LTCS-18-10, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.245}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-10}}, number = {18-10}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern-1.618mu L}$ Concept Descriptions and its Neighborhood Relation (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org) Extended Version
For the description logic \(\mathcal{E\mkern-1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern-1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@inproceedings{ Kr-CLA18, address = {Olomouc, Czech Republic}, author = {Francesco {Kriegel}}, booktitle = {{Proceedings of the 14th International Conference on Concept Lattices and Their Applications ({CLA 2018})}}, editor = {Dmitry I. {Ignatov} and Lhouari {Nourine}}, pages = {267--278}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern-1.618mu L}$ Concept Descriptions and its Neighborhood Relation}}, volume = {2123}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File DOI ©Springer
The Web Ontology Language (OWL) has gained considerable attention since its standardization in 2004, and it is heavily used in applications requiring representation of, as well as reasoning with, knowledge. It is the language of the Semantic Web, and it has a strong logical underpinning by means of so-called Description Logics (DLs). DLs are a family of conceptual languages suitable for knowledge representation and reasoning due to their strong logical foundation, and for which the decidability and complexity of common reasoning problems are widely explored. In particular, the reasoning tasks allow for the deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms have been developed, optimized, and implemented, e.g., tableaux algorithms and completion algorithms. In this document, we present a technique for the acquisition of terminological knowledge from social networks. More specifically, we show how OWL axioms, i.e., concept inclusions and role inclusions in DLs, can be obtained from social graphs in a sound and complete manner. A social graph is simply a directed graph, the vertices of which describe the entities, e.g., persons, events, messages, etc.; and the edges of which describe the relationships between the entities, e.g., friendship between persons, attendance of a person to an event, a person liking a message, etc. Furthermore, the vertices of social graphs are labeled, e.g., to describe properties of the entities, and also the edges are labeled to specify the concrete relationships. As an exemplary social network we consider Facebook, and show that it fits our use case.
@incollection{ Kr-FCAoSN17, address = {Cham}, author = {Francesco {Kriegel}}, booktitle = {Formal Concept Analysis of Social Networks}, doi = {https://doi.org/10.1007/978-3-319-64167-6_5}, editor = {Rokia {Missaoui} and Sergei O. {Kuznetsov} and Sergei {Obiedkov}}, pages = {97--142}, publisher = {Springer International Publishing}, series = {Lecture Notes in Social Networks ({LNSN})}, title = {Acquisition of Terminological Knowledge from Social Networks in Description Logic}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Springer
Entropy is a measure of the uninformativeness or randomness of a data set, i.e., the higher the entropy, the lower the amount of information. In the field of propositional logic it has proven to constitute a suitable measure to be maximized when dealing with models of probabilistic propositional theories. More specifically, it was shown that the maximum entropy model of a probabilistic propositional theory allows for the deduction of other formulae which are somehow expected by humans, i.e., allows for some kind of common sense reasoning. In order to carry the technique of maximum entropy entailment over to the field of Formal Concept Analysis, we define the notion of entropy of a formal context with respect to the frequency of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. This entailment technique is then utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such a ME-base is provided.
@inproceedings{ Kr-ICFCA17b, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_10}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {155--167}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {First Notes on Maximum Entropy Entailment for Quantified Implications}, volume = {10308}, year = {2017}, }
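For reference, the entropy being maximized is the usual Shannon entropy, here taken with respect to the frequencies \(p(B)\) of the object intents \(B\) of the formal context (the formula is the standard one, in our notation):
\[ H \;=\; -\sum_{B} p(B)\,\log p(B). \]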
Abstract BibTeX Entry PDF File DOI ©Springer
We consider the task of acquisition of terminological knowledge from given assertional data. However, when evaluating data of real-world applications we often encounter situations where it is impractical to deduce only crisp knowledge, due to the presence of exceptions or errors. It is rather appropriate to allow for degrees of uncertainty within the derived knowledge. Consequently, suitable methods for knowledge acquisition in a probabilistic framework should be developed. In particular, we consider data which is given as a probabilistic formal context, i.e., as a triadic incidence relation between objects, attributes, and worlds, which is furthermore equipped with a probability measure on the set of worlds. We define the notion of a probabilistic attribute as a probabilistically quantified set of attributes, and define the notion of validity of implications over probabilistic attributes in a probabilistic formal context. Finally, a technique for the axiomatization of such implications from probabilistic formal contexts is developed. This is done in a sound and complete manner, i.e., all derived implications are valid, and all valid implications are deducible from the derived implications. In case of finiteness of the input data to be analyzed, the constructed axiomatization is finite, too, and can be computed in finite time.
@inproceedings{ Kr-ICFCA17a, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_11}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {168--183}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {Implications over Probabilistic Attributes}, volume = {10308}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
A probabilistic formal context is a triadic context the third dimension of which is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications with respect to probabilistic formal contexts, and provides a construction for a base of implications the probabilities of which exceed a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields the fact that both measures do not coincide. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions the probabilities of which are greater than a pre-defined lower bound is proposed. Additionally, we consider so-called probabilistic attributes over probabilistic formal contexts, and provide a method for the axiomatization of implications over probabilistic attributes.
@article{ Kr-IJGS17, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1080/03081079.2017.1349575}, journal = {International Journal of General Systems}, number = {5}, pages = {511--546}, title = {Probabilistic Implication Bases in {FCA} and Probabilistic Bases of GCIs in $\mathcal{E\mkern-1.618mu L}^{\bot}$}, volume = {46}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
The canonical base of a formal context plays a distinguished role in Formal Concept Analysis, as it is the only minimal implicational base known so far that can be described explicitly. Consequently, several algorithms for the computation of this base have been proposed. However, all those algorithms work sequentially by computing only one pseudo-intent at a time – a fact that heavily impairs the practicability in real-world applications. In this paper, we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner with respect to arbitrary implicational background knowledge. First experimental evaluations show that for sufficiently large data sets the speed-up is proportional to the number of available CPU cores.
@article{ KrBo-IJGS17, author = {Francesco {Kriegel} and Daniel {Borchmann}}, doi = {https://doi.org/10.1080/03081079.2017.1349570}, journal = {International Journal of General Systems}, number = {5}, pages = {490--510}, title = {NextClosures: Parallel Computation of the Canonical Base with Background Knowledge}, volume = {46}, year = {2017}, }
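To make the machinery behind NextClosures concrete, here is a minimal sequential sketch of Ganter's classical NextClosure enumeration of all intents in Python (our own illustration; the papers' actual contribution, parallelization and the handling of background knowledge, is not reflected in this toy):

# Toy formal context: each object is mapped to its attribute set.
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "c"},
}
attributes = ["a", "b", "c"]  # fixed linear (lectic) order

def extent(attrs):
    """All objects possessing every attribute in attrs."""
    return {o for o, atts in context.items() if attrs <= atts}

def closure(attrs):
    """Intent of the extent of attrs, i.e. the smallest closed superset."""
    ext = extent(attrs)
    if not ext:
        return set(attributes)  # convention: empty extent yields all attributes
    return set.intersection(*(context[o] for o in ext))

def next_closure(a_set):
    """Ganter's NextClosure: the lectically next closed set after a_set."""
    for i in reversed(range(len(attributes))):
        m = attributes[i]
        if m in a_set:
            a_set = a_set - {m}
        else:
            b_set = closure(a_set | {m})
            # b_set is the successor iff no attribute before m was newly added
            if not any(attributes.index(n) < i for n in b_set - a_set):
                return b_set
    return None  # a_set was the lectically largest closed set

current = closure(set())
while current is not None:
    print(sorted(current))  # prints every concept intent, in lectic order
    current = next_closure(current)

On the toy context above this enumerates all eight intents, from the empty set up to \(\{a, b, c\}\); the papers show how such an enumeration can be distributed over several CPU cores and constrained by background knowledge.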
2016
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
@article{ BoDiKr-JANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
We propose applications that utilize the infimum and the supremum of closure operators that are induced by structures occurring in the field of Description Logics. More specifically, we consider the closure operators induced by interpretations as well as closure operators induced by TBoxes, and show how we can learn GCIs from streams of interpretations, and how an error-tolerant axiomatization of GCIs from an interpretation guided by a hand-crafted TBox can be achieved.
@inproceedings{ Kr-FCA4AI16, address = {The Hague, The Netherlands}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 5th International Workshop "What can {FCA} do for Artificial Intelligence?" ({FCA4AI} 2016) co-located with the European Conference on Artificial Intelligence ({ECAI} 2016)}, editor = {Sergei {Kuznetsov} and Amedeo {Napoli} and Sebastian {Rudolph}}, pages = {9--16}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Axiomatization of General Concept Inclusions from Streams of Interpretations with Optional Error Tolerance}, volume = {1703}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
In a previous paper, the algorithm NextClosures for computing the set of all formal concepts as well as the canonical base for a given formal context has been introduced. Here, this algorithm shall be generalized to a setting where the data set is described by means of a closure operator in a complete lattice, and furthermore it shall be extended with the possibility to handle constraints that are given in the form of a second closure operator. As a special case, constraints may be predefined as implicational background knowledge. Additionally, we show how the algorithm can be modified in order to perform parallel Attribute Exploration for unconstrained closure operators, as well as give a reason for the impossibility of (parallel) Attribute Exploration for constrained closure operators if the constraint is not compatible with the data set.
@inproceedings{ Kr-CLA16, address = {Moscow, Russia}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 13th International Conference on Concept Lattices and Their Applications ({CLA 2016})}, editor = {Marianne {Huchard} and Sergei {Kuznetsov}}, pages = {231--243}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {NextClosures with Constraints}, volume = {1624}, year = {2016}, }
The canonical base of a formal context is a minimal set of implications that is sound and complete. A recent paper has provided a new algorithm for the parallel computation of canonical bases. An important extension is the integration of expert interaction for Attribute Exploration in order to explore implicational bases of inaccessible formal contexts. This paper presents and analyzes an algorithm that allows for Parallel Attribute Exploration.
@inproceedings{ Kr-ICCS16, address = {Annecy, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 22nd International Conference on Conceptual Structures ({ICCS 2016})}, doi = {http://dx.doi.org/10.1007/978-3-319-40985-6_8}, editor = {Ollivier {Haemmerl{\'{e}}} and Gem {Stapleton} and Catherine {Faron{-}Zucker}}, pages = {91--106}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Parallel Attribute Exploration}, volume = {9717}, year = {2016}, }
2015
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it is desirable to at least get some support from machines during this process. To this end, we investigate in this work an approach that allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labelled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We also present first experimental results showing that our approach has the potential to be useful for practical applications.
@techreport{ BoDiKr-LTCS-15-13, address = {Dresden, Germany}, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.219}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKr-LTCS-15-13}}, number = {15-13}, title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}}, type = {LTCS-Report}, year = {2015}, }
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced, and then utilized to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatization of a base of GCIs is well-known.
@inproceedings{ Kr-KI15, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 38th German Conference on Artificial Intelligence ({KI 2015})}, doi = {http://dx.doi.org/10.1007/978-3-319-24489-1_10}, editor = {Steffen {H{\"o}lldobler} and Sebastian {Rudolph} and Markus {Kr{\"o}tzsch}}, pages = {124--136}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence ({LNAI})}, title = {Axiomatization of General Concept Inclusions in Probabilistic Description Logics}, volume = {9324}, year = {2015}, }
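To give a feel for the probabilistic setting (an invented toy example): take a probabilistic interpretation with two worlds \(w_1, w_2\) of probability \(1/2\) each over the shared domain \(\{d\}\), where \(d\) belongs to \(\mathrm{Bird}\) in both worlds but to \(\mathrm{Flies}\) only in \(w_1\). The GCI \(\mathrm{Bird} \sqsubseteq \mathrm{Flies}\) then holds exactly in world \(w_1\), i.e. with probability \(1/2\), so a base for threshold \(1/2\) would contain a corresponding axiom while a base for any higher threshold would not.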
A description graph is a directed graph that has labeled vertices and edges. This document proposes a method for extracting a knowledge base from a description graph. The technique is presented for the description logic \(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q\mkern-1.618mu R}(\mathsf{Self})\), which allows for conjunctions, primitive negation, existential restrictions, value restrictions, qualified number restrictions, existential self restrictions, and complex role inclusion axioms, but sublogics may also be chosen to express the axioms in the knowledge base. The extracted knowledge base entails all statements that can be expressed in the chosen description logic and are encoded in the input graph.
@inproceedings{ Kr-SNAFCA15, address = {Nerja, Spain}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis ({SNAFCA 2015}) in conjunction with the 13th International Conference on Formal Concept Analysis ({ICFCA} 2015)}, editor = {Sergei O. {Kuznetsov} and Rokia {Missaoui} and Sergei A. {Obiedkov}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Extracting $\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q\mkern-1.618mu R}(\mathsf{Self})$-Knowledge Bases from Graphs}, volume = {1534}, year = {2015}, }
Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal \(\mathcal{E\mkern-1.618mu L}\)-TBoxes from models, so-called bases of GCIs. However, no technique for adjusting an existing \(\mathcal{E\mkern-1.618mu L}\)-TBox w.r.t. a new model is available, i.e., upon a model change the complete TBox has to be recomputed. This document proposes a method for the computation of a minimal extension of a TBox w.r.t. a new model. The method is then utilised to formulate an incremental learning algorithm that takes as input a stream of interpretations and an expert to guide the learning process.
@inproceedings{ Kr-DL15, address = {Athens, Greece}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL 2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {452--464}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis}, volume = {1350}, year = {2015}, }
A probabilistic formal context is a triadic context whose third dimension is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces the probability of implications and provides a construction for a base of implications whose probabilities satisfy a given lower threshold. A comparison between confidence and probability of implications is drawn, showing that the two measures neither coincide nor can be compared. Furthermore, the results are extended towards the light-weight description logic \(\mathcal{E\mkern-1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions whose probabilities fulfil a certain lower bound is proposed.
@inproceedings{ Kr-CLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, pages = {193--204}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in $\mathcal{E\mkern-1.618mu L}^{\bot}$}, volume = {1466}, year = {2015}, }
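The difference between the two measures can be seen from a small invented example, assuming the natural reading that the probability of an implication is the measure of the set of worlds in which it holds: take objects \(g_1, g_2\), attributes \(m, n\) and worlds \(w_1, w_2\) of probability \(1/2\) each, where \(g_1\) has \(\{m, n\}\) in both worlds while \(g_2\) has \(\{m\}\) in \(w_1\) and \(\{m, n\}\) in \(w_2\). The implication \(\{m\} \rightarrow \{n\}\) holds only in world \(w_2\), so its probability is \(1/2\), whereas its confidence over all four object-world incidences is \(3/4\); as the paper notes that the measures cannot be compared, the opposite relation can occur as well.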
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatisation of a base of GCIs is well-known.
@techreport{ Kr-LTCS-15-14, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.220}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-14}}, number = {15-14}, title = {{Learning General Concept Inclusions in Probabilistic Description Logics}}, type = {LTCS-Report}, year = {2015}, }
Model-based most specific concept descriptions are a useful means to compactly represent all knowledge about a certain individual of an interpretation that is expressible in the underlying description logic. Existing approaches only cover their construction in the case of \(\mathcal{E\mkern-1.618mu L}\) and \(\mathcal{F\mkern-1.618mu L\mkern-1.618mu E}\) w.r.t. greatest fixpoint semantics and in the case of \(\mathcal{E\mkern-1.618mu L}\) w.r.t. a role-depth bound. This document extends the results towards the more expressive description logic \(\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q}^{\geq}\mkern-1.618mu\mathcal{N}^{\leq}\mkern-1.618mu(\mathsf{Self})\) w.r.t. role-depth bounds and also gives a method for the computation of least common subsumers.
@techreport{ Kr-LTCS-15-02, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-02}}, number = {15-02}, title = {{Model-Based Most Specific Concept Descriptions and Least Common Subsumers in $\mathcal{A\mkern-1.618mu L\mkern-1.618mu E\mkern-1.618mu Q}^{\geq}\mkern-1.618mu\mathcal{N}^{\leq}\mkern-1.618mu(\mathsf{Self})$}}, type = {LTCS-Report}, year = {2015}, }
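A concrete (invented) instance of why the role-depth bound matters, restricted to the \(\mathcal{E\mkern-1.618mu L}\) fragment for simplicity: in the interpretation \(\mathcal{I}\) with \(\Delta^{\mathcal{I}} = \{d\}\), \(A^{\mathcal{I}} = \{d\}\) and \(r^{\mathcal{I}} = \{(d, d)\}\), the individual \(d\) has no most specific \(\mathcal{E\mkern-1.618mu L}\)-concept description, since each of \(A\), \(A \sqcap \exists r.A\), \(A \sqcap \exists r.(A \sqcap \exists r.A), \ldots\) is strictly more specific than its predecessor. With role-depth bound \(k = 2\), however, the model-based most specific concept description of \(d\) is simply \(A \sqcap \exists r.(A \sqcap \exists r.A)\).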
It is well-known that the canonical implicational base of all implications valid w.r.t. a closure operator can be obtained from the set of all pseudo-closures. NextClosures is a parallel algorithm to compute all closures and pseudo-closures of a closure operator in a graded lattice, e.g., in a powerset. Furthermore, the closures and pseudo-closures can be constrained, and partially known closure operators can be explored.
@techreport{ Kr-LTCS-15-01, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-01}}, number = {15-01}, title = {{NextClosures -- Parallel Exploration of Constrained Closure Operators}}, type = {LTCS-Report}, year = {2015}, }
The canonical base of a formal context plays a distinguished role in formal concept analysis. This is because it is the only minimal base so far that can be described explicitly. For the computation of this base several algorithms have been proposed. However, all those algorithms work sequentially, by computing only one pseudo-intent at a time - a fact which heavily impairs the practicability of using the canonical base in real-world applications. In this paper we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner. First experimental evaluations show that for sufficiently large data-sets the speedup is proportional to the number of available CPUs.
@inproceedings{ KrBo-CLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel} and Daniel {Borchmann}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, note = {Best Paper Award.}, pages = {182--192}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {NextClosures: Parallel Computation of the Canonical Base}, volume = {1466}, year = {2015}, }
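For contrast with the parallel approach: the classical sequential method visits the closed sets one at a time in the lectic order, and this is exactly the dependency the paper removes. A minimal Python sketch of the standard NextClosure step (Ganter's algorithm in its textbook form, with an invented example operator):

def next_closure(a, m, closure):
    # Lectically next closed set after a; m lists the attributes in a
    # fixed linear order.
    a = set(a)
    for i in reversed(range(len(m))):
        if m[i] in a:
            a.discard(m[i])
        else:
            b = closure(a | {m[i]})
            # b is the lectic successor iff it adds nothing below m[i].
            if not any(x in b and x not in a for x in m[:i]):
                return b
    return None  # a was the lectically largest closed set

m = [1, 2, 3]
cl = lambda x: frozenset(x) | ({2} if 1 in x else set())  # respects 1 -> 2
a = cl(frozenset())
while a is not None:
    print(sorted(a))
    a = next_closure(a, m, cl)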
2014
Suppose a formal context \(\mathbb{K}=(G,M,I)\) is given, whose concept lattice \(\mathfrak{B}(\mathbb{K})\) with an attribute-additive concept diagram is already known, and an attribute column \(\mathbb{C}=(G,\{n\},J)\) is to be inserted into or removed from it. This paper introduces an incremental update algorithm for both tasks and proves its correctness.
@article{ Kr-ICFCA14, address = {Cluj-Napoca, Romania}, author = {Francesco {Kriegel}}, journal = {Studia Universitatis Babe{\c{s}}-Bolyai Informatica}, note = {Supplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA 2014), Cluj-Napoca, Romania}, pages = {45--61}, title = {Incremental Computation of Concept Diagrams}, volume = {59}, year = {2014}, }
DFG Project: Reasoning in Fuzzy Description Logics with General Concept Inclusion Axioms
Principal Investigators: F. Baader, R. Peñaloza
Involved person: S. Borgwardt
Start date: May 1, 2012
Duration: 3 years (+ 4 months parental leave)
Funded by: DFG
Description logics (DLs) are a family of logic-based knowledge representation languages that are tailored towards representing terminological knowledge, by allowing the knowledge engineer to define the relevant concepts of an application domain within this logic and then reason about these definitions using terminating inference algorithms. In order to deal with applications where the boundaries between members and non-members of concepts (e.g., “tall man,” “high blood pressure,” or “heavy network load”) are blurred, DLs have been combined with fuzzy logics, resulting in fuzzy description logics (fuzzy DLs). Considering the literature on fuzzy description logics of the last 20 years, one could get the impression that, from an algorithmic point of view, fuzzy DLs behave very similarly to their crisp counterparts: for fuzzy DLs based on simple t-norms such as the Gödel t-norm, black-box procedures that call reasoners for the corresponding crisp DLs can be used, whereas fuzzy DLs based on more complicated t-norms (such as the product and Łukasiewicz t-norms) can be dealt with by appropriately modifying the tableau-based reasoners for the crisp DLs. However, it has recently turned out that, in the presence of so-called general concept inclusion axioms (GCIs), the published extensions of tableau-based reasoners to fuzzy DLs do not work correctly. In fact, we were able to show that GCIs can cause undecidability for certain fuzzy DLs based on the product t-norm. However, for most fuzzy DLs, the decidability status of reasoning w.r.t. GCIs is still open. The purpose of this project is to investigate the border between decidability and undecidability for fuzzy DLs with GCIs. On the one hand, we will try to show more undecidability results for specific fuzzy DLs, and then attempt to derive from these results general criteria that imply undecidability. On the other hand, we will try to determine decidable special cases, by extending tableau- and automata-based decision procedures for DLs to the fuzzy case, and also by looking at other reasoning approaches for inexpressive DLs.
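For readers unfamiliar with the t-norm-based setting, the semantics at issue can be sketched as follows (these are the standard definitions, not contributions of the project): concepts are interpreted as fuzzy sets \(C^{\mathcal{I}} : \Delta^{\mathcal{I}} \rightarrow [0,1]\), a t-norm \(\otimes\) generalises conjunction, and a GCI \(C \sqsubseteq D\) is evaluated as \(\inf_{d \in \Delta^{\mathcal{I}}} (C^{\mathcal{I}}(d) \Rightarrow D^{\mathcal{I}}(d))\), where \(\Rightarrow\) is the residuum of \(\otimes\). Under the Gödel t-norm, \(x \otimes y = \min(x,y)\) and \(x \Rightarrow y\) equals \(1\) if \(x \leq y\) and \(y\) otherwise; under the Łukasiewicz t-norm, \(x \otimes y = \max(0, x+y-1)\) and \(x \Rightarrow y = \min(1, 1-x+y)\); under the product t-norm, \(x \otimes y = x \cdot y\) and \(x \Rightarrow y\) equals \(1\) if \(x \leq y\) and \(y/x\) otherwise.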
Publications on the topics of this project can be found on our publications web page.
PhD Project: Temporalised Description Logics with Good Algorithmic Properties, and Their Application for Monitoring Partially Observable Events
Principal Investigator: F. Baader
Involved person: M. Lippmann
Start date: July 1, 2010
Funded by: TU Dresden
Using the temporalised Description Logic ALC-LTL as a starting point, this work aims to investigate different kinds of extensions of this logic. The main focus will be on decidability results, the complexity of the satisfiability problem, and their usefulness for runtime monitoring. To this end, it is essential to understand the formal properties of such temporal Description Logics.
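As a hedged illustration (with invented names) of the kind of property such logics can express: in ALC-LTL, the temporal operators of LTL are applied to DL axioms and assertions rather than to atomic propositions, so a monitoring property might read \(\Box\bigl((\exists \mathrm{hasSymptom}.\mathrm{Critical})(p) \rightarrow \Diamond\, (\exists \mathrm{treatedBy}.\mathrm{Doctor})(p)\bigr)\), stating that whenever the monitored patient \(p\) exhibits a critical symptom, treatment by a doctor eventually follows.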
Publications on the topics of this project can be found on our publications web page.
DFG Project: Unification in Description Logics for Avoiding Redundancies in Medical Ontologies
Principal Investigator: F. Baader
Involved persons: B. Morawska, S. Borgwardt, J. Mendez
Start date: July 1, 2009
Duration: 3 years
Funded by: DFG
Unification in Description Logics can be used to discover redundancies in ontologies. Up to now, a unification procedure has been available only for the description logic FL0, which has no applications in medical ontologies. Unification in FL0 has high computational complexity, and all attempts to extend the procedure to other description logics had failed so far. We have recently developed an algorithm for unification in the description logic EL. This procedure has better complexity than that for FL0. Medical ontologies (e.g., SNOMED CT) use EL as their knowledge representation language, and the problem of redundancy occurs in them in practice.
In this project we will optimize and implement the new algorithm for the unification in EL. We will show, with the examples from SNOMED CT, how the redundancies can be discovered and removed. We will also attempt to extend the algorithm to some extensions of EL that are important for practical applications. We will define and analyze equational theories for which the procedure for EL-unification can be extended, in order to discover possible syntactic criteria that enable such extensions.
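A small invented example of how unification reveals redundancy: suppose one author defines \(\mathrm{HeadInjury}_1 \equiv \mathrm{Injury} \sqcap \exists \mathrm{findingSite}.\mathrm{Head}\), while another writes \(\mathrm{HeadInjury}_2 \equiv X \sqcap \exists \mathrm{findingSite}.\mathrm{Head}\) with an underspecified concept name \(X\). Treating \(X\) as a variable, the substitution \(\sigma = \{X \mapsto \mathrm{Injury}\}\) is an EL-unifier that makes the two definitions equivalent, flagging the concepts as potentially redundant.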
Publications on the topics of this project can be found on our publications web page.
DFG Project: Completing knowledge bases using Formal Concept Analysis
Principal Investigator: F. Baader
Involved persons: J. Mendez, B. Sertkaya
Start date: November 15, 2007
Duration: 2 years
Funded by: DFG
Description Logics are employed in various application domains, such as natural language processing, configuration, databases, and bio-medical ontologies, but their most notable success so far is due to the fact that DLs provide the logical underpinning of OWL, the standard ontology language for the semantic web. As a consequence of this standardization, ontologies written in OWL are employed in more and more applications. As the size of these ontologies grows, tools that support improving their quality become more important. The tools available until now use DL reasoning to detect inconsistencies and to infer consequences, i.e., implicit knowledge that can be deduced from the explicitly represented knowledge. These approaches address the quality dimension of soundness of an ontology, both within itself (consistency) and w.r.t. the intended application domain. In this project we are concerned with a different quality dimension: completeness. We aim to develop formally well-founded techniques and tools that support the ontology engineer in checking whether an ontology contains all the relevant information about the application domain, and in extending the ontology appropriately if this is not the case.
Literature:
Daniel Borchmann and Felix Distel: Mining of EL-GCIs. In The 11th IEEE International Conference on Data Mining Workshops. Vancouver, Canada, IEEE Computer Society, 11 December 2011.
Felix Distel: Learning Description Logic Knowledge Bases from Data using Methods from Formal Concept Analysis. PhD Thesis, TU Dresden, Germany, April 2011.
Felix Distel and Barış Sertkaya: On the complexity of enumerating pseudo-intents. Discrete Applied Mathematics, 159(6):450–466, 2011.
Felix Distel: An Approach to Exploring Description Logic Knowledge Bases. In Barış Sertkaya and Léonard Kwuida, editors, Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010), volume 5986 of Lecture Notes in Artificial Intelligence, pages 209–224. Springer, 2010.
Felix Distel: Hardness of Enumerating Pseudo-Intents in the Lectic Order. In Barış Sertkaya and Léonard Kwuida, editors, Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010), volume 5986 of Lecture Notes in Artificial Intelligence, pages 124–137. Springer, 2010.
Bernardo Cuenca Grau, Christian Halaschek-Wiener, Yevgeny Kazakov, and Boontawee Suntisrivaraporn: Incremental classification of description logics ontologies. J. of Automated Reasoning, 44(4):337–369, 2010.
Rafael Peñaloza and Barış Sertkaya: On the Complexity of Axiom Pinpointing in the EL Family of Description Logics. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010). AAAI Press, 2010.
Franz Baader and Felix Distel: Exploring Finite Models in the Description Logic ELgfp. In Sébastien Ferré and Sebastian Rudolph, editors, Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009), volume 5548 of Lecture Notes in Artificial Intelligence, pages 146–161. Springer Verlag, 2009.
Franz Baader and Barış Sertkaya: Usability Issues in Description Logic Knowledge Base Completion. In Sébastien Ferré and Sebastian Rudolph, editors, Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009), volume 5548 of Lecture Notes in Artificial Intelligence, pages 1–21. Springer Verlag, 2009.
Barış Sertkaya: OntoComP System Description. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, Proceedings of the 2009 International Workshop on Description Logics (DL2009), volume 477 of CEUR-WS, 2009.
Barış Sertkaya: OntoComP: A Protégé Plugin for Completing OWL Ontologies. In Proceedings of the 6th European Semantic Web Conference (ESWC 2009), volume 5554 of Lecture Notes in Computer Science, pages 898–902. Springer Verlag, 2009.
Barış Sertkaya: Some Computational Problems Related to Pseudo-intents. In Sébastien Ferré and Sebastian Rudolph, editors, Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009), volume 5548 of Lecture Notes in Artificial Intelligence, pages 130–145. Springer Verlag, 2009.
Barış Sertkaya: Towards the Complexity of Recognizing Pseudo-intents. In Frithjof Dau and Sebastian Rudolph, editors, Proceedings of the 17th International Conference on Conceptual Structures, (ICCS 2009), pages 284–292, 2009.
Franz Baader and Felix Distel: A Finite Basis for the Set of EL-Implications Holding in a Finite Model. In Raoul Medina and Sergei Obiedkov, editors, Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008), volume 4933 of Lecture Notes in Artificial Intelligence, pages 46–61. Springer, 2008.
Franz Baader and Boontawee Suntisrivaraporn: Debugging SNOMED CT Using Axiom Pinpointing in the Description Logic EL+. In Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED, volume 410 of CEUR-WS, 2008.
Miki Hermann and Barış Sertkaya: On the Complexity of Computing Generators of Closed Sets. In Raoul Medina and Sergei A. Obiedkov, editors, Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008), volume 4933 of Lecture Notes in Computer Science, pages 158–168. Springer Verlag, 2008.
Boontawee Suntisrivaraporn: Module Extraction and Incremental Classification: A Pragmatic Approach for EL+ Ontologies. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors, Proceedings of the 5th European Semantic Web Conference (ESWC'08), volume 5021 of Lecture Notes in Computer Science, pages 230–244. Springer-Verlag, 2008.
Franz Baader, Bernhard Ganter, Ulrike Sattler, and Baris Sertkaya: Completing Description Logic Knowledge Bases using Formal Concept Analysis. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07). AAAI Press, 2007.
Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: Pinpointing in the Description Logic EL. In Proceedings of the 30th German Conference on Artificial Intelligence (KI2007), volume 4667 of Lecture Notes in Artificial Intelligence, pages 52–67. Osnabrück, Germany, Springer-Verlag, 2007.
PhD Project: Knowledge Acquisition in Description Logics by Means of Formal Concept Analysis
Principal Investigator: F. Baader
Involved persons: F. Distel, D. Borchmann
Start date: May 1, 2007
Funded by: Cusanuswerk e.V. (until April 30, 2009) and TU Dresden
The objective of this work is to make the capabilities of Formal Concept Analysis applicable in Description Logics. The major interest is in supporting ontology engineers in defining new concepts. At first glance, Formal Concept Analysis appears to be a good starting point for this. However, a deeper examination shows that there are grave differences between concepts in FCA and concepts in DL. These differences make it necessary to extend and modify the theory of Formal Concept Analysis. The major discrepancies lie in expressiveness with respect to intensional concept descriptions and in the contrast between open-world and closed-world semantics. We try to expand Formal Concept Analysis in this direction.
Literature:
Felix Distel: Learning Description Logic Knowledge Bases from Data using Methods from Formal Concept Analysis. PhD Thesis, TU Dresden, Germany, April 2011.
Felix Distel: An Approach to Exploring Description Logic Knowledge Bases. In Barış Sertkaya and Léonard Kwuida, editors, Proceedings of the 8th International Conference on Formal Concept Analysis (ICFCA 2010), volume 5986 of Lecture Notes in Artificial Intelligence, pages 209–224. Springer, 2010.
Franz Baader and Felix Distel: Exploring Finite Models in the Description Logic ELgfp. In Sébastien Ferré and Sebastian Rudolph, editors, Proceedings of the 7th International Conference on Formal Concept Analysis (ICFCA 2009), volume 5548 of Lecture Notes in Artificial Intelligence, pages 146–161. Springer Verlag, 2009.
Franz Baader and Felix Distel: A Finite Basis for the Set of EL-Implications Holding in a Finite Model. In Raoul Medina and Sergei Obiedkov, editors, Proceedings of the 6th International Conference on Formal Concept Analysis (ICFCA 2008), volume 4933 of Lecture Notes in Artificial Intelligence, pages 46–61. Springer, 2008.
DFG Project: Description Logics with Existential Quantifiers and Polynomial Subsumption Problem and Their Applications in Bio-Medical Ontologies
Principal Investigator: F. Baader
Involved persons: M. Lippmann, C. Lutz, B. Suntisrivaraporn.
Start date: June 1, 2006
Duration: 2 years + 1 year extension
Funded by: Deutsche Forschungsgemeinschaft (DFG), Project BA 1122/11-1
Description logics (DLs) with value restrictions have so far been well investigated. In particular, very expressive DLs, together with practical algorithms, have been developed. Despite their high worst-case complexity, highly optimized implementations of these algorithms behave well in practice. However, it has turned out that, in bio-medical ontology applications, inexpressive DLs with existential restrictions, but without value restrictions, suffice. In the scope of this project, DLs with existential restrictions shall be investigated, both theoretically and practically. This includes identifying the borders of polynomiality for the subsumption problem, developing optimizations for the subsumption algorithms, and evaluating them against realistic large-scale bio-medical ontologies. Moreover, supplemental reasoning problems (e.g., conjunctive queries) shall be investigated.
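The polynomial-time subsumption methods investigated here are, at their core, completion-based: after normalising the TBox to axioms of the forms A ⊑ B, A1 ⊓ A2 ⊑ B, A ⊑ ∃r.B and ∃r.A ⊑ B, subsumers are collected by saturating a small set of rules. The following Python sketch shows this classification core under those assumptions (a toy illustration with invented concept names, not the CEL implementation):

def classify(concepts, roles, simple, conj, exists_rhs, exists_lhs):
    # Completion-based classification for a normalised EL TBox.
    #   simple:     set of (A, B)      meaning  A <= B
    #   conj:       set of (A1, A2, B) meaning  A1 n A2 <= B
    #   exists_rhs: set of (A, r, B)   meaning  A <= Er.B
    #   exists_lhs: set of (r, A, B)   meaning  Er.A <= B
    # Returns S with B in S[A] iff the TBox entails A <= B.
    S = {x: {x} for x in concepts}
    R = {r: set() for r in roles}
    changed = True
    while changed:  # naive fixpoint iteration, polynomial overall
        changed = False
        for x in concepts:
            for (a, b) in simple:
                if a in S[x] and b not in S[x]:
                    S[x].add(b); changed = True
            for (a1, a2, b) in conj:
                if a1 in S[x] and a2 in S[x] and b not in S[x]:
                    S[x].add(b); changed = True
            for (a, r, b) in exists_rhs:
                if a in S[x] and (x, b) not in R[r]:
                    R[r].add((x, b)); changed = True
        for (r, a, b) in exists_lhs:
            for (x, y) in list(R[r]):
                if a in S[y] and b not in S[x]:
                    S[x].add(b); changed = True
    return S

S = classify(
    concepts={'Pericarditis', 'Inflammation', 'Heart', 'HeartDisease'},
    roles={'hasLocation'},
    simple={('Pericarditis', 'Inflammation')},
    conj=set(),
    exists_rhs={('Inflammation', 'hasLocation', 'Heart')},
    exists_lhs={('hasLocation', 'Heart', 'HeartDisease')},
)
print('HeartDisease' in S['Pericarditis'])  # True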
Literature:
Julian Mendez: A classification algorithm for ELHIfR+. Master Thesis, TU Dresden, Germany, 2011.
Julian Mendez, Andreas Ecke, and Anni-Yasmin Turhan: Implementing completion-based inferences for the EL-family. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, Proceedings of the International Description Logics Workshop. CEUR, 2011.
Franz Baader, Meghyn Bienvenu, Carsten Lutz, and Frank Wolter: Query and Predicate Emptiness in Description Logics. In Fangzhen Lin and Ulrike Sattler, editors, Proceedings of the 12th International Conference on Principles of Knowledge Representation and Reasoning (KR2010). AAAI Press, 2010.
Franz Baader, Carsten Lutz, and Anni-Yasmin Turhan: Small is again Beautiful in Description Logics. KI – Künstliche Intelligenz, 24(1):25–33, 2010.
Boris Konev, Carsten Lutz, Denis Ponomaryov, and Frank Wolter: Decomposing description logic ontologies. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR2010). AAAI Press, 2010.
Carsten Lutz and Frank Wolter. Deciding inseparability and conservative extensions in the description logic EL. Journal of Symbolic Computation, 45(2):194–228, 2010.
Roman Kontchakov, Carsten Lutz, David Toman, Frank Wolter, and Michael Zakharyaschev: The combined approach to query answering in DL-Lite. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR2010). AAAI Press, 2010.
Franz Baader, Stefan Schulz, Kent Spackman, and Boontawee Suntisrivaraporn: How Should Parthood Relations be Expressed in SNOMED CT? In Proceedings of 1. Workshop des GI-Arbeitskreises Ontologien in Biomedizin und Lebenswissenschaften (OBML 2009), 2009.
Carsten Lutz, David Toman, and Frank Wolter: Conjunctive query answering in the description logic EL using a relational database system. In Craig Boutilier, editor, Proc. of the 21st Int. Joint Conf. on Artificial Intelligence (IJCAI 2009), pages 2070–2075. IJCAI/AAAI, 2009.
Julian Mendez and Boontawee Suntisrivaraporn: Reintroducing CEL as an OWL 2 EL Reasoner. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, Proceedings of the 2009 International Workshop on Description Logics (DL2009), volume 477 of CEUR-WS, 2009.
Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, Zhe Wu, and Carsten Lutz, editors: OWL 2 Web Ontology Language: Profiles. W3C Recommendation, 27 October 2009. Available at http://www.w3.org/TR/owl-profiles/.
Stefan Schulz, Boontawee Suntisrivaraporn, Franz Baader, and Martin Boeker: SNOMED reaching its adolescence: Ontologists' and logicians' health check. International Journal of Medical Informatics, 78(Supplement 1):S86–S94, 2009.
Franz Baader, Sebastian Brandt, and Carsten Lutz: Pushing the EL Envelope Further. In Kendall Clark and Peter F. Patel-Schneider, editors, In Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions, 2008.
Christoph Haase and Carsten Lutz: Complexity of Subsumption in the EL Family of Description Logics: Acyclic and Cyclic TBoxes. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08), volume 178 of Frontiers in Artificial Intelligence and Applications, pages 25–29. IOS Press, 2008.
Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: Semantic Modularity and Module Extraction in Description Logics. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08), volume 178 of Frontiers in Artificial Intelligence and Applications, pages 55–59. IOS Press, 2008.
Boontawee Suntisrivaraporn: Module Extraction and Incremental Classification: A Pragmatic Approach for EL+ Ontologies. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors, Proceedings of the 5th European Semantic Web Conference (ESWC'08), volume 5021 of Lecture Notes in Computer Science, pages 230–244. Springer-Verlag, 2008.
Quoc Huy Vu: Subsumption in the description logic ELHIfR+. Master Thesis, TU Dresden, Germany, 2008.
A. Artale, R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: Temporalising Tractable Description Logics. In Proceedings of the Fourteenth International Symposium on Temporal Representation and Reasoning. IEEE Computer Society Press, 2007.
Adila Krisnadhi and Carsten Lutz: Data Complexity in the EL family of Description Logics. In Nachum Dershowitz and Andrei Voronkov, editors, Proceedings of the 14th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR2007), volume 4790 of Lecture Notes in Artificial Intelligence, pages 333–347. Springer-Verlag, 2007.
Christoph Haase: Complexity of subsumption in extensions of EL. Master Thesis, TU Dresden, Germany, 2007.
Carsten Lutz, Dirk Walther, and Frank Wolter: Conservative Extensions in Expressive Description Logics. In Manuela Veloso, editor, Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07), pages 453–458. AAAI Press, 2007.
Carsten Lutz and Frank Wolter: Conservative Extensions in the Lightweight Description Logic EL. In Frank Pfenning, editor, Proceedings of the 21st Conference on Automated Deduction (CADE-21), volume 4603 of Lecture Notes in Artificial Intelligence, pages 84–99. Springer-Verlag, 2007.
Stefan Schulz, Boontawee Suntisrivaraporn, and Franz Baader: SNOMED CT's Problem List: Ontologists' and Logicians' Therapy Suggestions. In Proceedings of the Medinfo 2007 Congress, Studies in Health Technology and Informatics (SHTI-series). IOS Press, 2007.
Boontawee Suntisrivaraporn, Franz Baader, Stefan Schulz, and Kent Spackman: Replacing SEP-Triplets in SNOMED CT using Tractable Description Logic Operators. In Riccardo Bellazzi, Ameen Abu-Hanna, and Jim Hunter, editors, Proceedings of the 11th Conference on Artificial Intelligence in Medicine (AIME'07), Lecture Notes in Computer Science. Springer-Verlag, 2007.
F. Baader, C. Lutz, and B. Suntisrivaraporn: Efficient Reasoning in EL+. In Proceedings of the 2006 International Workshop on Description Logics (DL2006), in CEUR-WS, 2006.
F. Baader, C. Lutz, and B. Suntisrivaraporn: CEL—A Polynomial-time Reasoner for Life Science Ontologies. In U. Furbach and N. Shankar, editors, Proceedings of the 3rd International Joint Conference on Automated Reasoning (IJCAR'06), volume 4130 of Lecture Notes in Artificial Intelligence, pages 287–291. Springer-Verlag, 2006.
F. Baader, C. Lutz, and B. Suntisrivaraporn: Is Tractable Reasoning in Extensions of the Description Logic EL Useful in Practice?. In Proceedings of the Methods for Modalities Workshop (M4M-05), 2005.
DFG Project: Explaining Ontology Consequences
Principal Investigator: F. Baader
Involved person: R. Peñaloza
Start date: April 1, 2006
Duration: 2.5 years
Funded by: Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg GRK 446
The objective of this work is to develop methods for finding small (preferably minimal) sub-ontologies from which a given consequence follows. These sub-ontologies are called explanations. The approach followed is to modify the procedures used to detect the consequence so that the ontology axioms responsible for it can be tracked. The major interest is in supporting knowledge engineers in diagnosing and correcting errors in the ontologies they build.
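A minimal worked example of what counts as an explanation here: from the ontology \(\{A \sqsubseteq B,\ B \sqsubseteq C,\ A \sqsubseteq C\}\), the consequence \(A \sqsubseteq C\) has two minimal explanations (often called MinAs), namely \(\{A \sqsubseteq C\}\) itself and \(\{A \sqsubseteq B, B \sqsubseteq C\}\); pinpointing procedures instrument the reasoning procedure so that exactly such minimal axiom sets can be read off.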
Literature:
Franz Baader and Rafael Peñaloza: Automata-Based Axiom Pinpointing. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, Proceedings of the 4th International Joint Conference on Automated Reasoning (IJCAR 2008), volume 5195 of Lecture Notes in Artificial Intelligence, pages 226–241. Springer, 2008.
Franz Baader and Boontawee Suntisrivaraporn: Debugging SNOMED CT Using Axiom Pinpointing in the Description Logic EL+. In Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED, volume 410 of CEUR-WS, 2008.
Rafael Peñaloza: Automata-based Pinpointing for DLs. In Proceedings of the 2008 International Workshop on Description Logics (DL2008), volume 353 of CEUR-WS, 2008.
Franz Baader and Rafael Peñaloza: Axiom Pinpointing in General Tableaux. In N. Olivetti, editor, Proceedings of the 16th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX 2007), volume 4548 of Lecture Notes in Computer Science, pages 11–27. Aix-en-Provence, France, Springer-Verlag, 2007.
Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: Pinpointing in the Description Logic EL. In Proceedings of the 2007 International Workshop on Description Logics (DL2007), in CEUR-WS, 2007.
DFG Project: Action Formalisms with Description Logic
Principal Investigators: F. Baader, M. Thielscher
Involved persons: C. Drescher, H. Liu, C. Lutz, M. Lippmann, M. Milicic
Start date: September 1, 2005
Duration: 2 years + 2 years extension
Project Partners: University of Leipzig (Germany), Aachen University of Technology (Germany), University of Freiburg (Germany)
Funded by: Deutsche Forschungsgemeinschaft (DFG), Project BA 1122/13
The aim of this project is the integration of description logic and action formalisms. The motivation for this integration is twofold. On the one hand, general action calculi like the fluent calculus and the situation calculus are based on full first-order logic. This entails undecidability in general of basic reasoning tasks such as checking state consistency, action applicability, or computing updated states. By identifying suitable description logics for describing the current world state, these issues may be addressed. On the other hand, the need for integrating some kind of action representation into description logics has arisen. Description logics are a highly successful static knowledge representation formalism with applications, e.g., in the Semantic Web or in the life sciences. Clearly, it is desirable to have the means to model semantic web services or reason about dynamic domains in the life sciences, such as clinical protocols.
Another objective of this project is to develop a version of the logic programming paradigm designed specifically for programming intelligent agents. This may be thought of as adapting the successful constraint-logic programming scheme CLP(X) to CLP(Sigma), where Sigma is a domain axiomatization in an action calculus. Of course, for this it is of paramount importance that the special atoms of the logic program can effectively be decided via the underlying domain axiomatization. The resulting scheme, instantiated with the action calculi developed in the aforementioned steps, can then be implemented by combining the mature technologies of both plain Prolog and description logic reasoners. The system will be evaluated by modeling semantic web services or clinical protocols.
Literature:
Hongkai Liu, Carsten Lutz, Maja Milicic, and Frank Wolter: Foundations of instance level updates in expressive description logics. Artificial Intelligence, 175(18):2170–2197, 2011.
M. Thielscher: A unifying action calculus. Artificial Intelligence, 175(1):120–141, 2011.
Franz Baader, Marcel Lippmann, and Hongkai Liu: Using Causal Relationships to Deal with the Ramification Problem in Action Formalisms Based on Description Logics. In Christian G. Fermüller and Andrei Voronkov, editors, Proceedings of the 17th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR-17), volume 6397 of Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science), pages 82–96. Yogyakarta, Indonesia, Springer-Verlag, October 2010.
Franz Baader, Hongkai Liu, and Anees ul Mehdi: Verifying Properties of Infinite Sequences of Description Logic Actions. In Helder Coelho, Rudi Studer, and Michael Wooldridge, editors, Proceedings of the 19th European Conference on Artificial Intelligence (ECAI10), volume 215 of Frontiers in Artificial Intelligence and Applications, pages 53–58. IOS Press, 2010.
C. Drescher: Action Logic Programs — How to Specify Strategic Behavior in Dynamic Domains Using Logical Rules. PhD thesis, TU Dresden, Germany, 2010.
Hongkai Liu: Updating Description Logic ABoxes. PhD thesis, TU Dresden, Germany, 2010.
Franz Baader, Andreas Bauer, and Marcel Lippmann: Runtime Verification Using a Temporal Description Logic. In Silvio Ghilardi and Roberto Sebastiani, editors, Proceedings of the 7th International Symposium on Frontiers of Combining Systems (FroCoS 2009), volume 5749 of Lecture Notes in Computer Science, pages 149–164. Springer-Verlag, 2009.
Conrad Drescher, Hongkai Liu, Franz Baader, Steffen Guhlemann, Uwe Petersohn, Peter Steinke, and Michael Thielscher: Putting ABox Updates into Action. In Silvio Ghilardi and Roberto Sebastiani, editors, The Seventh International Symposium on Frontiers of Combining Systems (FroCoS-2009), volume 5749 of Lecture Notes in Computer Science, pages 149–164. Springer-Verlag, 2009.
Conrad Drescher, Hongkai Liu, Franz Baader, Peter Steinke, and Michael Thielscher: Putting ABox Updates into Action. In Proceedings of the 8th IJCAI International Workshop on Nonmonotonic Reasoning, Action and Change (NRAC-09), 2009.
C. Drescher, S. Schiffel, and M. Thielscher: A declarative agent programming language based on action theories. In Ghilardi, S. and Sebastiani, R., editors, Proceedings of the Seventh International Symposium on Frontiers of Combining Systems (FroCoS 2009), volume 5749 of LNCS, pages 230–245, Trento, Italy. Springer, 2009.
A. ul Mehdi: Integrate action formalisms into linear temporal logics. Master thesis, TU Dresden, Germany, 2009.
Franz Baader, Silvio Ghilardi, and Carsten Lutz: LTL over Description Logic Axioms. In Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR2008), 2008.
C. Drescher and M. Thielscher: A fluent calculus semantics for ADL with plan constraints. In Hölldobler, S., Lutz, C., and Wansing, H., editors, Proceedings of the 11th European Conference on Logics in Artificial Intelligence (JELIA08), volume 5293 of LNCS, pages 140–152, Dresden, Germany. Springer, 2008.
Hongkai Liu, Carsten Lutz, and Maja Milicic: The Projection Problem for EL Actions. In Proceedings of the 2008 International Workshop on Description Logics (DL2008), volume 353 of CEUR-WS, 2008.
Y. Bong: Description Logic ABox Updates Revisited. Master thesis, TU Dresden, Germany, 2007.
C. Drescher and M. Thielscher: Integrating action calculi and description logics. In Hertzberg, J., Beetz, M., and Englert, R., editors, Proceedings of the 30th Annual German Conference on Artificial Intelligence (KI 2007), volume 4667 of LNCS, pages 68–83, Osnabrück, Germany. Springer, 2007.
Conrad Drescher and Michael Thielscher: Reasoning about actions with description logics. In P. Peppas and M.-A. Williams, editors, Proceedings of the 7th IJCAI International Workshop on Nonmonotonic Reasoning, Action and Change (NRAC 2007), Hyderabad, India, January 2007.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: Description Logic Actions with general TBoxes: a Pragmatic Approach. In Proceedings of the 2006 International Workshop on Description Logics (DL2006), 2006.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: Reasoning about Actions using Description Logics with general TBoxes. In Michael Fisher, Wiebe van der Hoek, Boris Konev, and Alexei Lisitsa, editors, Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA 2006), volume 4160 of Lecture Notes in Artificial Intelligence, pages 266–279. Springer-Verlag, 2006.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: Updating Description Logic ABoxes. In Patrick Doherty, John Mylopoulos, and Christopher Welty, editors, Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06), pages 46–56. AAAI Press, 2006.
Michael Thielscher and Thomas Witkowski: The Features-and-Fluents semantics for the fluent calculus. In P. Doherty, J. Mylopoulos, and C. Welty, editors, Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), pages 362–370, Lake District, UK, June 2006.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: A Description Logic Based Approach to Reasoning about Web Services. In Proceedings of the WWW 2005 Workshop on Web Service Semantics (WSS2005), 2005.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: Integrating Description Logics and Action Formalisms: First Results. In Proceedings of the 2005 International Workshop on Description Logics (DL2005), number 147 in CEUR-WS, 2005.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: Integrating Description Logics and Action Formalisms: First Results. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), 2005.
EU Project: Thinking Ontologies (TONES)
Principal Investigator: F. Baader
Involved persons: C. Lutz, M. Milicic, B. Sertkaya, B. Suntisrivaraporn, A.-Y. Turhan.
Start date: September 1, 2005
Duration: 3 years
Project Partners: Free University of Bolzano (Italy), Università degli Studi di Roma "La Sapienza" (Italy), The University of Manchester (UK), Technische Universität Hamburg-Harburg (Germany)
Funded by: EU (FP6-7603)
Ontologies are seen as the key technology used to describe the semantics of information at various sites, overcoming the problem of implicit and hidden knowledge and thus enabling exchange of semantic contents. As such, they have found applications in key growth areas, such as e-commerce, bio-informatics, Grid computing, and the Semantic Web.
The aim of the project is to study and develop automated reasoning techniques for both offline and online tasks associated with ontologies, either seen in isolation or as a community of interoperating systems, and devise methodologies for the deployment of such techniques, on the one hand in advanced tools supporting ontology design and management, and on the other hand in applications supporting software agents in operating with ontologies.
Reports on the TONES project appeared in the following news:
- ACM Tech News (Dec. 2009)
- L'ATELIER, a French online magazine (Nov. 2009)
DFG Project: Logik-Algorithmen in der Wissensrepräsentation
Principal Investigator: F. Baader
Involved persons: S. Tobies, J. Hladik
Funded by: Deutsche Forschungsgemeinschaft (DFG)
The aim of this project is the construction of decision procedures and the study of complexity issues of decision problems which are relevant for applications in the area of knowledge representation. In contrast to well-known explorations in the context of the classical Decision Problem of mathematical logic (prefix signature classes), the relevant classes of formulae are characterized by different criteria: on the one hand, the restriction to formulae with few variables or limited quantification is important, on the other hand, certain constructs (fixed points, transitive closure, number restrictions...) which are not dealt with in the classical framework are of interest.
During the first phase of this project, guarded logics, in particular the "Guarded Fragment" (GF) and its extensions, were identified as a class of logics which are relevant for knowledge representation and very expressive but retain stable decidability properties. Moreover, practical tableau-based decision procedures for GF and expressive description logics were developed and implemented.
The practical aim of the second phase is the comparison and combination of different approaches to the development of decision procedures for logics. In particular, tableau- and automata-based procedures for GF, modal, and description logics are to be examined, with the goal of a uniform algorithmic approach that combines the advantages of both kinds of procedures; the resulting algorithms are again to be prototypically implemented and evaluated. Another goal is the development of efficient model-checking procedures for these logics. Finally, for the logics considered here, the relationship between the structure of formulae and their algorithmic properties is to be analysed; such results should serve to isolate efficiently decidable fragments of these logics and provide a basis for constructing probability distributions on formulae under which the average-case behaviour of decision procedures can be analysed.
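To make the guardedness condition concrete (a standard textbook example, not one specific to this project): in GF, every quantifier must be relativised by an atom that contains all free variables of its scope. Thus \(\forall x \forall y\,(E(x,y) \rightarrow \exists z\, E(y,z))\) is guarded, whereas transitivity, \(\forall x \forall y \forall z\,(E(x,y) \wedge E(y,z) \rightarrow E(x,z))\), is not, since no single atom guards \(x\), \(y\) and \(z\) together.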
Literature:
J. Hladik: A Tableau System for the Description Logic SHIO. In Ulrike Sattler, editor, Contributions to the Doctoral Programme of IJCAR 2004. CEUR, 2004. Available from ceur-ws.org
J. Hladik: Spinoza's Ontology. In G. Büchel, B. Klein, and T. Roth-Berghofer, editors, Proceedings of the 1st Workshop on Philosophy and Informatics (WSPI 2004), number RR-04-02 in DFKI Research Reports, 2004.
J. Hladik and J. Model: Tableau Systems for SHIO and SHIQ. In V. Haarslev and R. Möller, editors, Proceedings of the 2004 International Workshop on Description Logics (DL 2004). CEUR, 2004. Available from ceur-ws.org
F. Baader, J. Hladik, C. Lutz, and F. Wolter: From Tableaux to Automata for Description Logics. In Moshe Vardi and Andrei Voronkov, editors, Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003), volume 2850 of Lecture Notes in Computer Science, pages 1–32. Springer, 2003.
Franz Baader, Jan Hladik, Carsten Lutz, and Frank Wolter: From Tableaux to Automata for Description Logics. Fundamenta Informaticae, 57:1–33, 2003.
J. Hladik and U. Sattler: A Translation of Looping Alternating Automata to Description Logics. In Proc. of the 19th Conference on Automated Deduction (CADE-19), volume 2741 of Lecture Notes in Artificial Intelligence. Springer Verlag, 2003.
Jan Hladik: Reasoning about Nominals with FaCT and RACER. In Proceedings of the 2003 International Workshop on Description Logics (DL2003), in CEUR-WS, 2003.
J. Hladik: Implementation and Optimisation of a Tableau Algorithm for the Guarded Fragment. In U. Egly and C. G. Fermüller, editors, Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2002), volume 2381 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 2002.
G. Pan, U. Sattler, and M. Y. Vardi: BDD-Based Decision Procedures for K. In Proceedings of the Conference on Automated Deduction, volume 2392 of Lecture Notes in Artificial Intelligence. Springer Verlag, 2002.
F. Baader and S. Tobies: The Inverse Method Implements the Automata Approach for Modal Satisfiability. In Proceedings of the International Joint Conference on Automated Reasoning IJCAR'01, volume 2083 of Lecture Notes in Artificial Intelligence, pages 92–106. Springer-Verlag, 2001.
J. Hladik: Implementierung eines Entscheidungsverfahrens für das Bewachte Fragment der Prädikatenlogik. Diploma thesis, RWTH Aachen, Germany, 2001.
S. Tobies: Complexity Results and Practical Algorithms for Logics in Knowledge Representation. PhD thesis, RWTH Aachen, 2001.
Stephan Tobies: PSPACE Reasoning for Graded Modal Logics. Journal of Logic and Computation, 11(1):85–106, 2001.
F. Baader and U. Sattler: Tableau Algorithms for Description Logics. In R. Dyckhoff, editor, Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2000), volume 1847 of Lecture Notes in Artificial Intelligence, pages 1–18. St Andrews, Scotland, UK, Springer-Verlag, 2000.
C. Hirsch and S. Tobies: A Tableau Algorithm for the Clique Guarded Fragment. In Proceedings of the Workshop Advances in Modal Logic AiML 2000, 2000. Final version appeared in Advances in Modal Logic, Volume 3, 2001.
Jan Hladik: Implementing the n-ary Description Logic GF1-. In Proceedings of the International Workshop in Description Logics 2000 (DL2000), 2000.
I. Horrocks, U. Sattler, and S. Tobies: Practical Reasoning for Very Expressive Description Logics. Logic Journal of the IGPL, 8(3):239–264, 2000.
I. Horrocks, U. Sattler, and S. Tobies: Reasoning with Individuals for the Description Logic SHIQ. In David MacAllester, editor, Proceedings of the 17th International Conference on Automated Deduction (CADE-17), number 1831 in Lecture Notes in Computer Science. Germany, Springer Verlag, 2000.
I. Horrocks and S. Tobies: Reasoning with Axioms: Theory and Practice. In A. G. Cohn, F. Giunchiglia, and B. Selman, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Seventh International Conference (KR2000). San Francisco, CA, Morgan Kaufmann Publishers, 2000.
I. Horrocks, U. Sattler, S. Tessaris, and S. Tobies: How to decide Query Containment under Constraints using a Description Logic. In Andrei Voronkov, editor, Proceedings of the 7th International Conference on Logic for Programming and Automated Reasoning (LPAR'2000), number 1955 in Lecture Notes in Artificial Intelligence. Springer Verlag, 2000.
U. Sattler: Description Logics for the Representation of Aggregated Objects. In W. Horn, editor, Proceedings of the 14th European Conference on Artificial Intelligence. IOS Press, Amsterdam, 2000.
Stephan Tobies: The Complexity of Reasoning with Cardinality Restrictions and Nominals in Expressive Description Logics. Journal of Artificial Intelligence Research, 12:199–217, 2000.
F. Baader, R. Molitor, and S. Tobies: Tractable and Decidable Fragments of Conceptual Graphs. In W. Cyre and W. Tepfenhart, editors, Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99), number 1640 in Lecture Notes in Computer Science, pages 480–493. Springer Verlag, 1999.
I. Horrocks and U. Sattler: A Description Logic with Transitive and Inverse Roles and Role Hierarchies. Journal of Logic and Computation, 9(3):385–410, 1999.
Ian Horrocks, Ulrike Sattler, and Stephan Tobies: Practical Reasoning for Expressive Description Logics. In Harald Ganzinger, David McAllester, and Andrei Voronkov, editors, Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning (LPAR'99), number 1705 in Lecture Notes in Artificial Intelligence, pages 161–180. Springer-Verlag, September 1999.
C. Lutz, U. Sattler, and S. Tobies: A Suggestion for an n-ary Description Logic. In Patrick Lambrix, Alex Borgida, Maurizio Lenzerini, Ralf Möller, and Peter Patel-Schneider, editors, Proceedings of the International Workshop on Description Logics, number 22 in CEUR-WS, pages 81–85. Linköping, Sweden, Linköping University, July 30 – August 1, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/
S. Tobies: A NExpTime-complete Description Logic Strictly Contained in C2. In J. Flum and M. Rodríguez-Artalejo, editors, Proceedings of the Annual Conference of the European Association for Computer Science Logic (CSL-99), in LNCS 1683, pages 292–306. Springer-Verlag, 1999.
S. Tobies: A PSpace Algorithm for Graded Modal Logic. In H. Ganzinger, editor, Automated Deduction – CADE-16, 16th International Conference on Automated Deduction, in LNAI 1632, pages 52–66. Trento, Italy, Springer-Verlag, July 7–10, 1999.
DFG Project: Novel Inference Services in Description Logics
Principal Investigator: F. Baader
Involved persons: S. Brandt, A.-Y. Turhan
Funded by: Deutsche Forschungsgemeinschaft (DFG)
Over the past 15 years the area of Description Logics has seen extensive research on both theoretical and practical aspects of standard inference problems, such as subsumption and the instance problem. When DL systems were employed for practical KR applications, however, additional inference services facilitating the build-up and maintenance of large knowledge bases proved indispensable. This led to the development of non-standard inferences such as: least common subsumer, most specific concept, approximation, and matching.
In the first project phase, non-standard inference problems have been examined w.r.t. their formal aspects, e.g. computational complexity, and have been evaluated in practice in one specific prototypical application scenario in the domain of chemical process engineering.
Building on the lessons learned during the first project phase, the second phase aims to examine how non-standard inference services can be employed in a more general application area: the build-up, maintenance, and deployment of ontologies, e.g. for the Semantic Web. To this end, existing algorithms have to be extended considerably and new ones have to be found. The more general application area moreover gives rise to additional non-standard inferences not examined during the first project phase.
The ultimate goal of this project is to gain comprehensive knowledge about the formal properties of non-standard inference problems in Description Logics and to demonstrate their utility for build-up and maintenance of knowledge bases. An additional practical goal of the second project phase is to develop a prototypical support system for a specific ontology editor, e.g. OilEd.
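To make the flavour of these services concrete, the following toy sketch shows the classical product-based computation of the least common subsumer (lcs) for EL concept descriptions without a TBox. It is only an illustration of the inference problem studied here, not the project's implementation, and all names in the example are invented.

```python
# Toy sketch of the least common subsumer (lcs) in EL, computed via the
# well-known product construction; illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ELConcept:
    names: frozenset          # concept names in the top-level conjunction
    exists: tuple = ()        # pairs (role, ELConcept) for conjuncts "∃role.C"

def lcs(c: ELConcept, d: ELConcept) -> ELConcept:
    """Keep the shared concept names; for each pair of existential
    restrictions with the same role, recurse on the fillers. The result
    can grow large, one reason small representations are studied
    (cf. the entries on small representations in the literature below)."""
    shared = c.names & d.names
    ex = tuple((r, lcs(e, f))
               for (r, e) in c.exists
               for (s, f) in d.exists if r == s)
    return ELConcept(shared, ex)

valve = ELConcept(frozenset({"Valve"}))
a = ELConcept(frozenset({"Device"}), (("part", valve),))
b = ELConcept(frozenset({"Device", "Reactor"}), (("part", valve),))
print(lcs(a, b))  # denotes: Device ⊓ ∃part.Valve
```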
Literature:
Franz Baader, Barış Sertkaya, and Anni-Yasmin Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. Journal of Applied Logic, 5(3):392–420, 2007.
F. Baader and A. Okhotin: Complexity of Language Equations With One-Sided Concatenation and All Boolean Operations. In Jordi Levy, editor, Proceedings of the 20th International Workshop on Unification, UNIF'06, pages 59–73, 2006.
F. Baader and R. Küsters: Nonstandard Inferences in Description Logics: The Story So Far. In D.M. Gabbay, S.S. Goncharov, and M. Zakharyaschev, editors, Mathematical Problems from Applied Logic I, volume 4 of International Mathematical Series, pages 1–75. Springer-Verlag, 2006.
Carsten Lutz, Franz Baader, Enrico Franconi, Domenico Lembo, Ralf Möller, Riccardo Rosati, Ulrike Sattler, Boontawee Suntisrivaraporn, and Sergio Tessaris: Reasoning Support for Ontology Design. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, Proceedings of the Second International Workshop OWL: Experiences and Directions, November 2006. To appear.
S. Brandt: Standard and Non-standard reasoning in Description Logics. Ph.D. Dissertation, Institute for Theoretical Computer Science, TU Dresden, Germany, 2006.
Barış Sertkaya: Computing the hierarchy of conjunctions of concept names and their negations in a Description Logic knowledge base using Formal Concept Analysis (ICFCA 2006). In Bernhard Ganter and Leonard Kwuida, editors, Contributions to ICFCA 2006, pages 73–86. Dresden, Germany, Verlag Allgemeine Wissenschaft, 2006.
Anni-Yasmin Turhan, Sean Bechhofer, Alissa Kaplunova, Thorsten Liebig, Marko Luther, Ralf Möller, Olaf Noppens, Peter Patel-Schneider, Boontawee Suntisrivaraporn, and Timo Weithöner: DIG 2.0 – Towards a Flexible Interface for Description Logic Reasoners. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, Proceedings of the Second International Workshop OWL: Experiences and Directions, November 2006.
F. Baader, S. Brandt, and C. Lutz: Pushing the EL Envelope. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence IJCAI-05. Edinburgh, UK, Morgan Kaufmann Publishers, 2005.
Sebastian Brandt and Jörg Model: Subsumption in EL w.r.t. hybrid TBoxes. In Proceedings of the 28th Annual German Conference on Artificial Intelligence, KI 2005, in Lecture Notes in Artificial Intelligence. Springer-Verlag, 2005.
Hongkai Liu: Matching in Description Logics with Existential Restrictions and Terminological Cycles. Master's thesis, TU Dresden, Germany, 2005.
J. Model: Subsumtion in EL bezüglich hybrider TBoxen. Diploma thesis, TU Dresden, Germany, 2005.
Anni-Yasmin Turhan: Pushing the SONIC border — SONIC 1.0. In Reinhold Letz, editor, FTP 2005 — Fifth International Workshop on First-Order Theorem Proving. Technical Report University of Koblenz, 2005. http://www.uni-koblenz.de/fb4/publikationen/gelbereihe/RR-13-2005.pdf
F. Baader: A Graph-Theoretic Generalization of the Least Common Subsumer and the Most Specific Concept in the Description Logic EL. In J. Hromkovic and M. Nagl, editors, Proceedings of the 30th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2004), volume 3353 of Lecture Notes in Computer Science, pages 177–188. Bad Honnef, Germany, Springer-Verlag, 2004.
F. Baader and B. Sertkaya: Applying Formal Concept Analysis to Description Logics. In P. Eklund, editor, Proceedings of the 2nd International Conference on Formal Concept Analysis (ICFCA 2004), volume 2961 of Lecture Notes in Artificial Intelligence, pages 261–286. Springer, 2004.
F. Baader, B. Sertkaya, and A.-Y. Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. In José Júlio Alferes and João Alexandre Leite, editors, Proceedings of the 9th European Conference on Logics in Artificial Intelligence (JELIA 2004), volume 3229 of Lecture Notes in Computer Science, pages 400–412. Lisbon, Portugal, Springer-Verlag, 2004.
Franz Baader, Baris Sertkaya, and Anni-Yasmin Turhan: Computing the Least Common Subsumer w.r.t. a Background Terminology. In Proceedings of the 2004 International Workshop on Description Logics (DL2004), in CEUR-WS, 2004.
Sebastian Brandt: On Subsumption and Instance Problem in ELH w.r.t. General TBoxes. In Proceedings of the 2004 International Workshop on Description Logics (DL2004), in CEUR-WS, 2004.
Sebastian Brandt: Polynomial Time Reasoning in a Description Logic with Existential Restrictions, GCI Axioms, and—What Else? In R. López de Mántaras and L. Saitta, editors, Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-2004), pages 298–302. IOS Press, 2004.
Sebastian Brandt and Hongkai Liu: Implementing Matching in ALN. In Proceedings of the KI-2004 Workshop on Applications of Description Logics (KI-ADL'04), in CEUR-WS, September 2004.
Anni-Yasmin Turhan and Christian Kissig: Sonic—Non-standard Inferences go OilEd. In D. Basin and M. Rusinowitch, editors, Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04), volume 3097 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 2004.
Anni-Yasmin Turhan and Christian Kissig: Sonic—System Description. In Proceedings of the 2004 International Workshop on Description Logics (DL2004), in CEUR-WS, 2004.
Franz Baader: Computing the least common subsumer in the description logic EL w.r.t. terminological cycles with descriptive semantics. In Proceedings of the 11th International Conference on Conceptual Structures, ICCS 2003, volume 2746 of Lecture Notes in Artificial Intelligence, pages 117–130. Springer-Verlag, 2003.
Franz Baader: Least Common Subsumers and Most Specific Concepts in a Description Logic with Existential Restrictions and Terminological Cycles. In Georg Gottlob and Toby Walsh, editors, Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 319–324. Morgan Kaufmann, 2003.
Franz Baader: Terminological Cycles in a Description Logic with Existential Restrictions. In Georg Gottlob and Toby Walsh, editors, Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 325–330. Morgan Kaufmann, 2003.
Franz Baader: The instance problem and the most specific concept in the description logic EL w.r.t. terminological cycles with descriptive semantics. In Proceedings of the 26th Annual German Conference on Artificial Intelligence, KI 2003, volume 2821 of Lecture Notes in Artificial Intelligence, pages 64–78. Hamburg, Germany, Springer-Verlag, 2003.
Sebastian Brandt: Implementing Matching in ALE—First Results. In Proceedings of the 2003 International Workshop on Description Logics (DL2003), in CEUR-WS, 2003.
Sebastian Brandt and Anni-Yasmin Turhan: Computing least common subsumers for FLE+. In Proceedings of the 2003 International Workshop on Description Logics, in CEUR-WS, 2003.
Sebastian Brandt, Anni-Yasmin Turhan, and Ralf Küsters: Extensions of Non-standard Inferences to Description Logics with transitive Roles. In Moshe Vardi and Andrei Voronkov, editors, Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003), in Lecture Notes in Computer Science. Springer, 2003.
F. Baader and R. Küsters: Unification in a Description Logic with Inconsistency and Transitive Closure of Roles. In I. Horrocks and S. Tessaris, editors, Proceedings of the 2002 International Workshop on Description Logics, 2002. See http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-53/
F. Baader and A.-Y. Turhan: On the problem of computing small representations of least common subsumers. In Proceedings of the 25th German Conference on Artificial Intelligence (KI 2002), in Lecture Notes in Artificial Intelligence. Aachen, Germany, Springer-Verlag, 2002.
S. Brandt, R. Küsters, and A.-Y. Turhan: Approximating ALCN-Concept Descriptions. In Proceedings of the 2002 International Workshop on Description Logics, 2002.
S. Brandt, R. Küsters, and A.-Y. Turhan: Approximation and Difference in Description Logics. In D. Fensel, F. Giunchiglia, D. McGuinness, and M.-A. Williams, editors, Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002), pages 203–214. San Francisco, CA, Morgan Kaufmann, 2002.
S. Brandt and A.-Y. Turhan: An Approach for Optimized Approximation. In Proceedings of the KI-2002 Workshop on Applications of Description Logics (KIDLWS'01), in CEUR-WS. Aachen, Germany, RWTH Aachen, September 2002. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/
F. Baader, S. Brandt, and R. Küsters: Matching under Side Conditions in Description Logics. In B. Nebel, editor, Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI'01, pages 213–218. Seattle, Washington, Morgan Kaufmann, 2001.
F. Baader and P. Narendran: Unification of Concept Terms in Description Logics. J. Symbolic Computation, 31(3):277–305, 2001.
F. Baader and R. Küsters: Unification in a Description Logic with Transitive Closure of Roles. In R. Nieuwenhuis and A. Voronkov, editors, Proceedings of the 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2001), volume 2250 of Lecture Notes in Computer Science, pages 217–232. Havana, Cuba, Springer-Verlag, 2001.
F. Baader and A.-Y. Turhan: TBoxes do not yield a compact representation of least common subsumers. In Proceedings of the International Workshop in Description Logics 2001 (DL2001), August 2001.
R. Küsters and R. Molitor: Approximating most specific concepts in description logics with existential restrictions. In F. Baader, G. Brewka, and T. Eiter, editors, KI 2001: Advances in Artificial Intelligence, Proceedings of the Joint German/Austrian Conference on AI (KI 2001), volume 2174 of Lecture Notes in Artificial Intelligence, pages 217–232. Vienna, Austria, Springer-Verlag, 2001.
R. Küsters and R. Molitor: Computing least common subsumers in ALEN. In B. Nebel, editor, Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI'01, pages 219–224, Seattle, Washington, Morgan Kaufmann, 2001.
A.-Y. Turhan and R. Molitor: Using lazy unfolding for the computation of least common subsumers. In Proceedings of the International Workshop in Description Logics 2001 (DL2001), August 2001.
S. Brandt: Matching under Side Conditions in Description Logics. Diploma thesis, RWTH Aachen, Germany, 2000.
DFG Project: Combination of Modal and Description Logics
Principal Investigator: F. Baader
Involved person: C. Lutz
Funded by: Deutsche Forschungsgemeinschaft (DFG)
The goal of this project is to establish a direct cooperation between researchers in Modal Logic and researchers in Description Logic in order to achieve a mutual exchange of knowledge and advanced techniques between these two areas. In one direction, we aim at adapting the strong techniques and meta-results obtained in the area of Modal Logics to Description Logics. In the other direction, we want to use the algorithmic techniques developed in the area of Description Logics to devise implementable algorithms for Modal Logics. Additionally, we investigate the combination of Modal and Description Logics. From the viewpoint of Description Logics, such combinations allow for the representation of intensional knowledge (e.g. about knowledge and belief of intelligent agents) and of dynamic knowledge (e.g. temporal or spatial knowledge). From the viewpoint of Modal Logics, such combinations are fragments of first (or higher) order modal logics which have attractive computational and model-theoretic properties, since their first order part is restricted to Description Logics.
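As a schematic illustration (our own, not a formula from the project), a modal operator indexed by an agent's belief can be applied to a description logic concept, so that

\[ \Box_{\mathit{believes}}\,\mathsf{Rich} \;\sqcap\; \exists \mathsf{owns}.\mathsf{Company} \]

describes those individuals that the agent believes to be rich and that own a company; the first order part of such a formula stays within a decidable description logic.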
This project is a cooperation with Frank Wolter from the University of Leipzig.
Literature:
Franz Baader, Silvio Ghilardi, and Cesare Tinelli: A new combination procedure for the word problem that generalizes fusion decidability results in modal logics. Information and Computation, 204(10):1413–1452, 2006.
F. Baader and S. Ghilardi: Connecting Many-Sorted Structures and Theories through Adjoint Functions. In Proceedings of the 5th International Workshop on Frontiers of Combining Systems (FroCoS'05), volume 3717 of Lecture Notes in Artificial Intelligence. Vienna (Austria), Springer-Verlag, 2005.
F. Baader and S. Ghilardi: Connecting Many-Sorted Theories. In Proceedings of the 20th International Conference on Automated Deduction (CADE-05), volume 3632 of Lecture Notes in Artificial Intelligence, pages 278–294. Tallinn (Estonia), Springer-Verlag, 2005.
Franz Baader and Silvio Ghilardi: Connecting Many-Sorted Theories. LTCS-Report LTCS-05-04, Chair for Automata Theory, Institute for Theoretical Computer Science, Dresden University of Technology, Germany, 2005. See http://lat.inf.tu-dresden.de/research/reports.html.
F. Baader, S. Ghilardi, and C. Tinelli: A New Combination Procedure for the Word Problem that Generalizes Fusion Decidability Results in Modal Logics. In D. Basin and M. Rusinowitch, editors, Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04), volume 3097 of Lecture Notes in Artificial Intelligence, pages 183–197. Springer-Verlag, 2004.
R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: Temporal Tableaux. Studia Logica, 76(1):91–134, 2004.
O. Kutz, C. Lutz, F. Wolter, and M. Zakharyaschev: E-Connections of Abstract Description Systems. Artificial Intelligence, 156(1):1–73, 2004.
F. Baader, R. Küsters, and F. Wolter: Extensions to Description Logics. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, The Description Logic Handbook: Theory, Implementation, and Applications, pages 219–261. Cambridge University Press, 2003.
Franz Baader, Jan Hladik, Carsten Lutz, and Frank Wolter: From Tableaux to Automata for Description Logics. Fundamenta Informaticae, 57:1–33, 2003.
C. Lutz, F. Wolter, and M. Zakharyaschev: Reasoning about concepts and similarity. In Proceedings of the 2003 International Workshop on Description Logics (DL2003), in CEUR-WS, 2003.
F. Baader, C. Lutz, H. Sturm, and F. Wolter: Fusions of Description Logics and Abstract Description Systems. Journal of Artificial Intelligence Research (JAIR), 16:1–58, 2002.
C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: A Tableau Decision Algorithm for Modalized ALC with Constant Domains. Studia Logica, 72(2):199–232, 2002.
C. Lutz, U. Sattler, and F. Wolter: Description Logics and the Two-Variable Fragment. In D.L. McGuinness, P.F. Patel-Schneider, C. Goble, and R. Möller, editors, Proceedings of the 2001 International Workshop in Description Logics (DL-2001), pages 66–75, 2001. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/
C. Lutz, U. Sattler, and F. Wolter: Modal Logics and the two-variable fragment. In Annual Conference of the European Association for Computer Science Logic CSL'01, in LNCS. Paris, France, Springer Verlag, 2001.
C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: Tableaux for Temporal Description Logic with Constant Domain. In Rajeev Goré, Alexander Leitsch, and Tobias Nipkow, editors, Proceedings of the International Joint Conference on Automated Reasoning, number 2083 in Lecture Notes in Artificial Intelligence, pages 121–136. Siena, Italy, Springer Verlag, 2001.
F. Baader, C. Lutz, H. Sturm, and F. Wolter: Fusions of Description Logics. In F. Baader and U. Sattler, editors, Proceedings of the International Workshop in Description Logics 2000 (DL2000), number 33 in CEUR-WS, pages 21–30. Aachen, Germany, RWTH Aachen, August 2000. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/
C. Lutz and U. Sattler: The Complexity of Reasoning with Boolean Modal Logic. In Advances in Modal Logic 2000 (AiML 2000), 2000. Final version appeared in Advances in Modal Logic, Volume 3, 2001.
PhD Project: Non-standard Inference Problems in Description Logics
Principal Investigator: F. Baader
Involved person: R. Küsters
Funded by: Studienstiftung des deutschen Volkes
Literature:
F. Baader, A. Borgida, and D.L. McGuinness: Matching in Description Logics: Preliminary Results. In M.-L. Mugnier and M. Chein, editors, Proceedings of the Sixth International Conference on Conceptual Structures (ICCS-98), volume 1453 of Lecture Notes in Computer Science, pages 15–34. Montpellier (France), Springer–Verlag, 1998.
F. Baader and P. Narendran: Unification of Concept Terms in Description Logics. In H. Prade, editor, Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98), pages 331–335. John Wiley & Sons Ltd, 1998.
F. Baader and R. Küsters: Computing the least common subsumer and the most specific concept in the presence of cyclic ALN-concept descriptions. In O. Herzog and A. Günter, editors, Proceedings of the 22nd Annual German Conference on Artificial Intelligence, KI-98, volume 1504 of Lecture Notes in Computer Science, pages 129–140. Bremen, Germany, Springer–Verlag, 1998.
F. Baader, R. Küsters, and R. Molitor: Structural Subsumption Considered from an Automata Theoretic Point of View. In Proceedings of the 1998 International Workshop on Description Logics DL'98, 1998.
R. Küsters: Characterizing the Semantics of Terminological Cycles in ALN using Finite Automata. In Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98), pages 499–510. Morgan Kaufmann, 1998.
PhD Project: Using Novel Inference Services to support the development of models of chemical processes
Principal Investigator: F. Baader
Involved person: R. Molitor
Funded by: DFG Graduiertenkolleg "Informatik und Technik"
In the PhD project "Terminological knowledge representation systems in a process engineering application", it has been shown that the development of models of chemical processes can be supported by terminological knowledge representation systems. These systems are based on description logics, a highly expressive formalism with well-defined semantics, and provide powerful inference services. The main focus of that project was the investigation of traditional inference services like computing the subsumption hierarchy and instance checking. The new project aims at the formal investigation of novel inference services that allow for more comprehensive support of the process engineers.
The development of models of chemical processes as carried out at the Department of Process Engineering is based on the use of so-called building blocks. Using a DL system, these building blocks can be stored in a class hierarchy. So far, the integration of new (classes of) building blocks into the existing hierarchy is not sufficiently supported. The given system services only allow for checking the consistency of extended classes, but they cannot be used to define new classes. Hence, the existing classes become larger and larger, the hierarchy degenerates, and the retrieval and re-use of building blocks becomes harder.
The approach considered in this PhD project can be described as follows: instead of directly defining a new class, the knowledge engineer introduces several typical examples (building blocks) of instances of the new class. These examples (resp. their descriptions) are then translated into individuals (resp. concept descriptions) in the DL knowledge base. For the resulting set of concept descriptions, a concept definition describing the examples as specifically as possible is automatically computed by computing so-called most specific concepts (msc) and least common subsumers (lcs) (see [1], [2] for details). Unfortunately, it turned out that, due to the nature of the algorithms for computing the msc and the lcs and the inherent complexity of these operations, the resulting concept description does not contain defined concepts and is quite large. Thus, in order to obtain a concept description that is easy to read and comprehend, a rewriting of the result is computed [3]. This rewriting is then offered to the knowledge engineer as a possible candidate for the new class definition.
The inference problems underlying the notions most specific concept (msc) and least common subsumer (lcs) were introduced for the DL used in the DL system Classic developed at AT&T. For the DLs relevant in the process engineering application, the novel inference services employed in this approach have been investigated only recently.
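The following sketch illustrates the overall workflow on a deliberately tiny fragment: concept descriptions that are plain conjunctions of concept names. The TBox, the example building blocks, and the greedy rewriting are all invented for illustration and stand in for the far richer DLs and rewriting techniques investigated in the project.

```python
# Illustrative pipeline: examples -> lcs -> rewriting against a TBox.
# Restricted to conjunctions of concept names; everything here is a toy.
from functools import reduce

TBOX = {  # defined concept name -> definition as a set of atomic names
    "HeatedVessel": {"Vessel", "HeatSource"},
    "StirredVessel": {"Vessel", "Stirrer"},
}

def lcs(*descriptions):
    # For plain conjunctions of names, the lcs is just the intersection.
    return set(reduce(lambda c, d: c & d, descriptions))

def rewrite(description, tbox):
    # Greedy folding: replace a definition's expansion by the defined
    # name whenever the expansion is contained in the description.
    result = set(description)
    for name, definition in tbox.items():
        if definition <= result:
            result = (result - definition) | {name}
    return result

ex1 = {"Vessel", "HeatSource", "Stirrer", "Glass"}
ex2 = {"Vessel", "HeatSource", "Stirrer", "Steel"}
print(rewrite(lcs(ex1, ex2), TBOX))  # {'HeatedVessel', 'Stirrer'}
```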
Literature:
F. Baader, R. Küsters, and R. Molitor: Computing Least Common Subsumers in Description Logics with Existential Restrictions. In T. Dean, editor, Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99), pages 96–101. Morgan Kaufmann, 1999.
F. Baader and R. Molitor: Rewriting Concepts Using Terminologies. In P. Lambrix, A. Borgida, M. Lenzerini, R. Möller, and P. Patel-Schneider, editors, Proceedings of the International Workshop on Description Logics 1999 (DL'99), number 22 in CEUR-WS. Sweden, Linköping University, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/
F. Baader, R. Molitor, and S. Tobies: Tractable and Decidable Fragments of Conceptual Graphs. In W. Cyre and W. Tepfenhart, editors, Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99), number 1640 in Lecture Notes in Computer Science, pages 480–493. Springer Verlag, 1999.
F. Baader, R. Küsters, and R. Molitor: Structural Subsumption Considered from an Automata Theoretic Point of View. In Proceedings of the 1998 International Workshop on Description Logics DL'98, 1998.
Postgraduate Programme: Specification of discrete processes and systems of processes by operational models and logics
The concept of reactive, technical systems forms the motivation for this research project. Such systems can be considered as configurations of resources on which processes, i.e., sequences of actions, are running; such sequences depend on the inner state of the system and on external events. Examples of reactive systems are operating systems, communication systems, control systems of technical installations, and systems for medical diagnosis.
In our research project we aim at describing the processes of such reactive, technical systems by means of formal methods. From these formal descriptions we try to derive properties of all the processes which run on the system. Because of the complexity of such systems, only formal methods can provide a sufficient basis for the verification and reliability of such properties.
There is a huge diversity of formal models for the description of processes. Roughly, they can be partitioned into operational models and logics. The aim of the research programme is, on the one hand, to continue the investigation of how to describe processes in a formal way and the comparison of different formalizations. On the other hand, we aim at enriching these formal concepts with verification techniques.
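A minimal sketch of this operational view (the vending-machine system is our own toy example, not one from the programme): a reactive process as a run of a labelled transition system driven by external events.

```python
# Toy labelled transition system: the next state depends on the inner
# state and on the external event, as described above.
TRANSITIONS = {
    ("idle", "coin"):   "paid",
    ("paid", "choose"): "busy",
    ("busy", "done"):   "idle",
}

def run(state, events):
    """Execute a sequence of external events, returning the visited states."""
    trace = [state]
    for e in events:
        nxt = TRANSITIONS.get((state, e))
        if nxt is None:
            raise ValueError(f"event {e!r} not enabled in state {state!r}")
        state = nxt
        trace.append(state)
    return trace

print(run("idle", ["coin", "choose", "done"]))  # ['idle', 'paid', 'busy', 'idle']
```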
EU-LTR Project: Data Warehouse Quality (DWQ)
Principal Investigator: F. Baader
Involved person: U. Sattler
Funded by: EU
Due to the increasing speed and memory capacities of information systems, the amount of data processed by these systems is growing steadily, and so is the amount of data available in electronic form. As a consequence, enterprises can access a huge amount of data concerning their business. Unfortunately, this data is mostly distributed among different systems, has different formats, and thus cannot be consolidated as a whole. Data warehouses are designed to process these huge amounts of data in such a way that decisions can be based on them. Data warehouses are software tools closely related to database systems and have the following features: they allow users (1) to extract and integrate data from different sources into a central schema, (2) to combine and aggregate this integrated data, and (3) to ask ad-hoc queries. Furthermore, they are closely related to "on-line analytical processing" systems (OLAP) and "decision support" systems (DSS).
Within the DWQ project, we are mainly concerned with the investigation of multidimensional aggregation. This comprises aggregation and part-whole relations per se as well as the properties of multiply, hierarchically structured domains such as time, space, organizations, etc. The understanding of these properties should lead to a formalism that supports the aggregation of information along multiple dimensions. The degree of support will be evaluated with respect to the quality criteria developed within this project.
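A minimal sketch of aggregation along one hierarchically structured dimension (time), with an invented fact table; the formalism investigated in the project is of course far more general.

```python
# Roll up a toy fact table from days to months along the time hierarchy.
from collections import defaultdict

FACTS = [  # (day, region, sales)
    ("1998-01-03", "north", 120),
    ("1998-01-17", "north",  80),
    ("1998-02-05", "south", 200),
]

def month_of(day):
    return day[:7]  # '1998-01-03' -> '1998-01': one level up the hierarchy

def roll_up(rows, level, measure=sum):
    """Aggregate the measure after coarsening the time dimension."""
    groups = defaultdict(list)
    for day, region, sales in rows:
        groups[(level(day), region)].append(sales)
    return {key: measure(vals) for key, vals in groups.items()}

print(roll_up(FACTS, month_of))
# {('1998-01', 'north'): 200, ('1998-02', 'south'): 200}
```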
Literature:
I. Horrocks and U. Sattler: A Description Logic with Transitive and Inverse Roles and Role Hierarchies. Journal of Logic and Computation, 9(3):385–410, 1999.
F. Baader and U. Sattler: Description Logics with Concrete Domains and Aggregation. In H. Prade, editor, Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98), pages 336–340. John Wiley & Sons Ltd, 1998.
I. Horrocks and U. Sattler: A Description Logic with Transitive and Converse Roles and Role Hierarchies. In Proceedings of the International Workshop on Description Logics. Povo - Trento, Italy, IRST, 1998.
F. Baader and U. Sattler: Description Logics with Aggregates and Concrete Domains. In Proceedings of the International Workshop on Description Logics, 1997.
Project: Design of a Medical Information System with Vague Knowledge
Principal Investigator: F. Baader
Involved person: C. Tresp
Funded by: DFG Graduiertenkolleg "Informatik und Technik"
Literature:
R. Molitor and C.B. Tresp: Extending Description Logics to Vague Knowledge in Medicine. In P. Szczepaniak, P.J.G. Lisboa, and S. Tsumoto, editors, Fuzzy Systems in Medicine, volume 41 of Studies in Fuzziness and Soft Computing, pages 617–635. Springer Verlag, 2000.
C.B. Tresp and R. Molitor: A Description Logic for Vague Knowledge. In Proceedings of the 13th biennial European Conference on Artificial Intelligence (ECAI'98), pages 361–365. Brighton, UK, J. Wiley and Sons, 1998.
C. Tresp and U. Tüben: Medical Terminology Processing for a Tutoring System. In International Conference on Computational Intelligence and Multimedia Applications (ICCIMA98), February 1998.
J. Weidemann, H.-P. Hohn, J. Hiltner, K. Tochtermann, C. Tresp, D. Bozinov, K. Venjakob, A. Freund, B. Reusch, and H.-W. Denker: A Hypermedia Tutorial for Cross-Sectional Anatomy: HyperMed. Acta Anatomica, 158, 1997.
Project: On the expressive power of loop constructs in imperative programming languages
Involved persons: K. Indermark, C. A. Albayrak
Literature:
Can Adam Albayrak and Thomas Noll: The WHILE Hierarchy of Program Schemes is Infinite. In Maurice Nivat, editor, Proceedings of Foundations of Software Science and Computation Structures, volume 1378 of Lecture Notes in Computer Science, pages 35–47. Springer, 1998.
C.A. Albayrak: Die WHILE-Hierarchie für Programmschemata. RWTH Aachen, 1998.
Project: Using Novel Inference Services to support the development of models of chemical processes
Principal Investigator: F. Baader
Involved person: U. Sattler
Funded by: DFG Graduiertenkolleg "Informatik und Technik"
In this project, the suitability of different terminological KR languages for representing relevant concepts in different engineering domains will be investigated. In cooperation with Prof. Dr. Marquardt, Aachen, we are trying to design a terminological KR language that is expressive enough to support modeling in process engineering, without having inference problems of unacceptably high complexity. The goal of representing this knowledge is to support the modeling of large chemical plants for planning and optimization purposes. The complex structure of such plants demands a highly expressive terminological language, in particular because the support of top-down modeling requires the appropriate treatment of part-whole relations. The formally well-founded and algorithmically manageable integration of such relations is one of our main research goals here.
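To illustrate why part-whole relations call for transitive roles, here is a toy sketch (the plant structure is invented): direct has-part assertions, plus the transitive closure a reasoner needs in order to answer indirect part-of queries.

```python
# Toy part-whole structure and its transitive closure.
HAS_PART = {
    "plant":   {"reactor", "column"},
    "reactor": {"jacket", "stirrer"},
    "stirrer": {"motor"},
}

def all_parts(whole, rel=HAS_PART):
    """All direct and indirect parts of `whole` (depth-first closure)."""
    stack, seen = list(rel.get(whole, ())), set()
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(rel.get(p, ()))
    return seen

print(sorted(all_parts("plant")))
# ['column', 'jacket', 'motor', 'reactor', 'stirrer']
```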
Literature:
U. Sattler: Terminological knowledge representation systems in a process engineering application. LuFG Theoretical Computer Science, RWTH-Aachen, 1998.
F. Baader and U. Sattler: Description Logics with Symbolic Number Restrictions. In W. Wahlster, editor, Proceedings of the Twelfth European Conference on Artificial Intelligence (ECAI-96), pages 283–287. John Wiley & Sons Ltd, 1996. An extended version has appeared as Technical Report LTCS-96-03
F. Baader and U. Sattler: Knowledge Representation in Process Engineering. In Proceedings of the International Workshop on Description Logics. Cambridge (Boston), MA, U.S.A., AAAI Press/The MIT Press, 1996.
F. Baader and U. Sattler: Number Restrictions on Complex Roles in Description Logics. In Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR-96). Morgan Kaufmann, Los Altos, 1996. An extended version has appeared as Technical Report LTCS-96-02
U. Sattler: A Concept Language Extended with Different Kinds of Transitive Roles. In G. Görz and S. Hölldobler, editors, 20. Deutsche Jahrestagung für Künstliche Intelligenz, number 1137 in Lecture Notes in Artificial Intelligence. Springer Verlag, 1996.
Project: Integration of modal operators into terminological knowledge representation languages
Involved persons: F. Baader, in cooperation with Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) and Max-Planck-Institut für Informatik (MPI)
Traditional terminological knowledge representation systems can be used to represent the objective, time-independent knowledge of an application domain. Representing subjective knowledge (e.g., belief and knowledge of intelligent agents) and time-dependent knowledge is only possible in a very restricted way. In systems modeling aspects of intelligent agents, however, intentions, beliefs, and time-dependent facts play an important role.
Modal logics with Kripke-style possible-worlds semantics yield a formally well-founded and well-investigated framework for the representation of such notions. However, most modal logics have been defined on top of first order predicate logic as the underlying formalism, which renders these logics strongly undecidable. By substituting terminological languages for first order predicate logic as the underlying formalism, one might substantially raise the expressive power of the terminological language while preserving the decidability of the inference problems.
In collaboration with researchers at DFKI and MPII (Saarbrücken), we are investigating different possibilities for integrating modal operators into terminological KR systems. Our main goal is to design a combined formalism for which all the important terminological inference problems (such as the subsumption problem) are still decidable. Otherwise, it would not be possible to employ such a formalism in an implemented system.
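As a sketch of what the syntax of such a combined formalism could look like (our own illustrative rendering, not the project's definition), the DL concept constructors can be extended by an indexed modal operator:

```python
# Toy abstract syntax for a terminological language with modal operators.
from dataclasses import dataclass

@dataclass(frozen=True)
class Name:                 # concept name
    n: str

@dataclass(frozen=True)
class And:                  # conjunction  C ⊓ D
    left: "Concept"
    right: "Concept"

@dataclass(frozen=True)
class Exists:               # existential restriction  ∃r.C
    role: str
    filler: "Concept"

@dataclass(frozen=True)
class Box:                  # modal operator  □_m C  (belief, time, ...)
    modality: str
    arg: "Concept"

Concept = Name | And | Exists | Box   # union alias (Python 3.10+)

# "believed to be rich, and owns a company":
c = And(Box("belief", Name("Rich")), Exists("owns", Name("Company")))
print(c)
```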
Literature:
F. Baader and A. Laux: Terminological Logics with Modal Operators. In C. Mellish, editor, Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 808–814. Montréal, Canada, Morgan Kaufmann, 1995.
F. Baader and H.-J. Ohlbach: A Multi-Dimensional Terminological Knowledge Representation Language. J. Applied Non-Classical Logics, 5:153–197, 1995.
H.-J. Ohlbach and F. Baader: A Multi-Dimensional Terminological Knowledge Representation Language. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI-93, pages 690–695, 1993.
DFG Project: Combination of special deduction procedures
Principal Investigator: F. Baader
Involved person: J. Richts
Funded by: DFG, Schwerpunkt "Deduktion"
Since September 1994, this research project has been funded within the Schwerpunkt "Deduktion" by the Deutsche Forschungsgemeinschaft (DFG) for two years, possibly with a prolongation for two more years. It is joint work with the research group of Prof. Schulz at the CIS, University of Munich.
This research is mainly concerned with combining decision procedures for unification problems. Such a procedure can be used to decide whether a given set of equations is solvable with respect to an equational theory or a combination of equational theories. For unification procedures that return complete sets of unifiers, the combination problem has been investigated in great detail. In contrast to these procedures, a decision procedure only returns a Boolean value indicating whether a solution exists or not.
One aim of this research project is to develop optimizations of the known combination method for decision procedures that apply to restricted classes of equational theories. The general combination algorithm is nondeterministic, which means that in the worst case exponentially many possibilities must be investigated. If the equational theories under consideration satisfy some simple syntactic restrictions, large parts of the search tree can be pruned. We will investigate to what extent such optimizations are possible.
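The following counting sketch (not the decision procedure itself, and only our own illustration of the branching structure) shows where the exponential blow-up comes from: for n shared variables, the combination method guesses a variable identification (a set partition), a theory label per equivalence class, and a linear order on the classes.

```python
# Count the nondeterministic branches of the general combination method.
from math import factorial

def partitions(xs):
    """Generate all set partitions of the list xs."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for p in partitions(rest):
        for i in range(len(p)):          # put `first` into an existing class
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p              # or into a class of its own

def branches(n, theories=2):
    """Partitions x theory labellings x linear orderings of the classes."""
    total = 0
    for p in partitions(list(range(n))):
        total += theories ** len(p) * factorial(len(p))
    return total

for n in range(1, 7):
    print(n, branches(n))   # prints 2, 10, 74, 730, 9002, 133210: rapid growth
```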
Another aim is to generalize the existing combination algorithms, for instance to the case of theories with non-disjoint signatures, or to more general problems than unification problems. The long-term objective of this research is to reach a better understanding of the basic algorithmic problems that occur when combining special deduction procedures, and to develop a rather general combination framework.
Since many optimized combination algorithms depend on special procedures that satisfy additional requirements, we will also investigate how well-known special inference procedures can be extended in this direction.
In order to be able to assess the effects of our optimizations and to determine their relevance in practice, we will implement the investigated unification algorithms: the combination algorithm as well as selected special algorithms. For the implementation of special unification algorithms, we have chosen the equational theories A, AC, and ACI, which contain a binary function symbol that is associative, associative-commutative, or associative-commutative-idempotent, respectively.
Literature:
Stephan Kepser and Jörn Richts: Optimisation Techniques for Combining Constraint Solvers. In Dov Gabbay and Maarten de Rijke, editors, Frontiers of Combining Systems 2, Papers presented at FroCoS'98, pages 193–210. Amsterdam, Research Studies Press/Wiley, 1999.
Stephan Kepser and Jörn Richts: UniMoK: A System for Combining Equational Unification Algorithms. In Rewriting Techniques and Applications, Proceedings RTA-99, volume 1631 of Lecture Notes in Computer Science, pages 248–251. Springer-Verlag, 1999.
Jörn Richts: Effiziente Entscheidungsverfahren zur E-Unifikation. PhD Thesis, RWTH Aachen. Shaker Verlag, Germany, 1999.
F. Baader and K. Schulz: Combination of Constraint Solvers for Free and Quasi-Free Structures. Theoretical Computer Science, 192:107–161, 1998.
F. Baader and C. Tinelli: Deciding the Word Problem in the Union of Equational Theories. UIUCDCS-Report UIUCDCS-R-98-2073, Department of Computer Science, University of Illinois at Urbana-Champaign, 1998.
F. Baader: Combination of Compatible Reduction Orderings that are Total on Ground Terms. In G. Winskel, editor, Proceedings of the Twelfth Annual IEEE Symposium on Logic in Computer Science (LICS-97), pages 2–13. Warsaw, Poland, IEEE Computer Society Press, 1997.
F. Baader and C. Tinelli: A New Approach for Combining Decision Procedures for the Word Problem, and Its Connection to the Nelson-Oppen Combination Method. In W. McCune, editor, Proceedings of the 14th International Conference on Automated Deduction (CADE-97), volume 1249 of Lecture Notes in Artificial Intelligence, pages 19–33. Springer-Verlag, 1997.
Franz Baader and Klaus U. Schulz, editors: Frontiers of Combining Systems. Kluwer Academic Publishers, 1996.
F. Baader and K. U. Schulz: Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures. J. Symbolic Computation, 21:211–243, 1996.
Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: Die Beweisentwicklungsumgebung Omega-MKRP. Informatik – Forschung und Entwicklung, 11(1):20–26, 1996. In German
F. Baader and K.U. Schulz: Combination Techniques and Decision Problems for Disunification. Theoretical Computer Science B, 142:229–255, 1995.
F. Baader and K.U. Schulz: Combination of Constraint Solving Techniques: An Algebraic Point of View. In Proceedings of the 6th International Conference on Rewriting Techniques and Applications, volume 914 of Lecture Notes in Artificial Intelligence, pages 352–366. Kaiserslautern, Germany, Springer Verlag, 1995.
F. Baader and K.U. Schulz: On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers. In Proceedings of the International Conference on Principles and Practice of Constraint Programming, CP95, volume 976 of Lecture Notes in Artificial Intelligence, pages 380–397. Cassis, France, Springer Verlag, 1995.
Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: KEIM: A Toolkit for Automated Deduction. In Alan Bundy, editor, Automated Deduction — CADE-12, Proceedings of the 12th International Conference on Automated Deduction, volume 814 of Lecture Notes in Artificial Intelligence, pages 807–810. Nancy, France, Springer-Verlag, 1994.
F. Baader and K. Schulz: Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures. In Proceedings of the 11th International Conference on Automated Deduction, CADE-92, volume 607 of Lecture Notes in Computer Science, pages 50–65. Saratoga Springs (USA), Springer–Verlag, 1992.