Dissertations

The list of theses is also available as a PDF document.


2017

İsmail İlkan Ceylan: Query Answering in Probabilistic Data and Knowledge Bases. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2017.
Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences, but also put such systems on weak footing for important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logics, probabilistic inference, and database theory.
@thesis{ Ceylan-Diss-2017,
author = {{\.I}smail {\.I}lkan {Ceylan}},
school = {Technische Universit\"{a}t Dresden},
title = {Query Answering in Probabilistic Data and Knowledge Bases},
type = {Doctoral Thesis},
year = {2017},
}


Veronika Thost: Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2017.
Ontology-based data access (OBDA) augments classical query answering in databases with domain knowledge provided by an ontology. An ontology captures the terminology of an application domain and describes domain knowledge in a machine-processable way. Formal ontology languages additionally provide semantics for these specifications. Systems for OBDA may thus apply logical reasoning to answer queries; they use the ontological knowledge to infer new information that is only implicitly given in the data. Moreover, they usually employ the open-world assumption, which means that knowledge not stated explicitly in the data or inferred from it is assumed to be neither true nor false. Classical OBDA, however, regards the knowledge only with respect to a single moment, which means that information about time is not used for reasoning and hence lost; in particular, the queries generally cannot express temporal aspects. We investigate temporal query languages that allow access to temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering with respect to ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways toward an implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing, such as context recognition.
@thesis{ Thost-Diss-2017,
author = {Veronika {Thost}},
school = {Technische Universit\"{a}t Dresden},
title = {Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information},
type = {Doctoral Thesis},
year = {2017},
}


2016

Andreas Ecke: Quantitative Methods for Similarity in Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2016.
Description Logics (DLs) are a family of logic-based knowledge representation languages used to describe the knowledge of an application domain and reason about it in a formally well-defined way. They allow users to describe the important notions and classes of the knowledge domain as concepts, which formalize the necessary and sufficient conditions for individual objects to belong to that concept. A variety of different DLs exist, differing in the set of properties one can use to express concepts, the so-called concept constructors, as well as in the set of axioms available to describe the relations between concepts or individuals. However, all classical DLs have in common that they can only express exact knowledge and, correspondingly, only allow exact inferences. Either we can infer that some individual belongs to a concept, or we cannot; there is no in-between. In practice, though, knowledge is rarely exact. Many definitions have their exceptions or are vaguely formulated in the first place, and people might not only be interested in exact answers, but also in alternatives that are "close enough".

This thesis is aimed at tackling how to express that something is "close enough", and how to integrate this notion into the formalism of Description Logics. To this end, we will use similarity and dissimilarity measures as a way to quantify how close exactly two concepts are. We will look at how useful measures can be defined in the context of DLs, and how they can be incorporated into the formal framework in order to generalize it. In particular, we will look more closely at two applications of such measures to DLs: relaxed instance queries will incorporate a similarity measure in order to give not just the exact answers to some query, but all answers that are reasonably similar. Prototypical definitions, on the other hand, use a measure of dissimilarity or distance between concepts in order to allow the definition of, and reasoning with, concepts that capture not just those individuals that satisfy exactly the stated properties, but also those that are "close enough".
@thesis{ Ecke-PhD-2016,
author = {Andreas {Ecke}},
school = {Technische Universit\"{a}t Dresden},
title = {Quantitative Methods for Similarity in Description Logics},
type = {Doctoral Thesis},
year = {2016},
}


2014

Stefan Borgwardt: Fuzzy Description Logics with General Concept Inclusions. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2014.
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
@thesis{ Borgwardt-Diss-2014,
author = {Stefan {Borgwardt}},
school = {Technische Universit\"{a}t Dresden},
title = {Fuzzy Description Logics with General Concept Inclusions},
type = {Doctoral Thesis},
year = {2014},
}


Marcel Lippmann: Temporalised Description Logics for Monitoring Partially Observable Events. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2014.
Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We use the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas Model Checking assumes complete knowledge of the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system through sensors. Such an abstract sensor could measure, e.g., the blood pressure of a patient or the air traffic observed by radar.

Then the observed data are preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user detect certain situations that require intervention by an expert. Such a situation could be that the heart rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen.

Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to assume that information that is not present in the fact base is unknown rather than false. Moreover, very often one has some knowledge about the functioning of the system. This background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
@thesis{ Lippmann-Diss-2014,
author = {Marcel {Lippmann}},
school = {Technische Universit\"{a}t Dresden},
title = {Temporalised Description Logics for Monitoring Partially Observable Events},
type = {Doctoral Thesis},
year = {2014},
}


2011

Felix Distel: Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2011.
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The other two methods use expert interaction, where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
@thesis{ dis-diss-2011,
author = {Felix {Distel}},
school = {Technische Universit\"{a}t Dresden},
title = {Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis},
type = {Doctoral Thesis},
year = {2011},
}


2010

Martin Knechtel: Access Restrictions to and with Description Logic Web Ontologies. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2010.
Access restrictions are essential in standard information systems and have become an issue for ontologies in the following two respects. First, ontologies can represent explicit and implicit knowledge about an access policy; for this aspect, we provide a methodology to represent and systematically complete role-based access control policies. Orthogonally, an ontology might be available for limited reading access only. Independently of a specific ontology language or reasoner, we provide a lattice-based framework for assigning labels to an ontology's axioms and consequences. We examine the problems of computing and repairing one or multiple consequence labels and of assigning query-based access restrictions. An empirical evaluation has shown that the algorithms perform well in practical scenarios with large-scale ontologies.
@thesis{ kne-diss-2010,
author = {Martin {Knechtel}},
school = {Technische Universit\"{a}t Dresden},
title = {Access Restrictions to and with Description Logic Web Ontologies},
type = {Doctoral Thesis},
year = {2010},
}


Hongkai Liu: Computing Updates in Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2010.
Description Logics (DLs) form a family of knowledge representation formalisms which can be used to represent and reason with conceptual knowledge about a domain of interest. The knowledge represented by DLs is mainly static. In many applications, however, the domain knowledge is dynamic. This observation motivates research on how to update the knowledge when changes in the application domain take place. This thesis is dedicated to the study of updating knowledge, more precisely, assertional knowledge represented in DLs. We explore whether the updated knowledge can be expressed in several standard DLs and, if so, whether it is computable and what its size is.
@thesis{ Liu-PhD-10,
author = {Hongkai {Liu}},
school = {Technische Universit\"{a}t Dresden},
title = {Computing Updates in Description Logics},
type = {Doctoral Thesis},
year = {2010},
}


2009

Rafael Peñaloza Nyssen: Axiom-Pinpointing in Description Logics and Beyond. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2009.
Building and maintaining large-scale ontologies is an error-prone task. It is thus not uncommon to find unwanted or unexpected consequences that follow implicitly from the restrictions in the ontology. To understand and correct these consequences, it is helpful to find the specific portions of the ontology that are responsible for them. Axiom-pinpointing is the task of finding minimal subontologies that entail a given consequence, also called MinAs. In this work we look at the task of computing all the MinAs by means of modified decision procedures. We first show that tableaux- and automata-based decision procedures can be transformed into pinpointing algorithms that output a (compact) representation of the set of all MinAs. We then explore the complexity of the problem.
@thesis{ Pen-PhD-09,
author = {Rafael Pe\~{n}aloza {Nyssen}},
school = {Technische Universit\"{a}t Dresden},
title = {Axiom-Pinpointing in Description Logics and Beyond},
type = {Doctoral Thesis},
year = {2009},
}


Boontawee Suntisrivaraporn: Polynomial-Time Reasoning Support for Design and Maintenance of Large-Scale Biomedical Ontologies. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2009.
Description Logics (DLs) belong to a successful family of knowledge representation formalisms with two key assets: a formally well-defined semantics, which allows knowledge to be represented unambiguously, and automated reasoning, which allows implicit knowledge to be inferred from what is given explicitly. This thesis investigates various reasoning techniques for tractable DLs in the EL family which have been implemented in the CEL system. It suggests that the use of lightweight DLs, in which reasoning is tractable, is beneficial for ontology design and maintenance both in terms of expressivity and scalability. The claim is supported by a case study on the renowned medical ontology SNOMED CT and extensive empirical evaluation on several large-scale biomedical ontologies.
@thesis{ Sun-PhD-09,
author = {Boontawee {Suntisrivaraporn}},
school = {Technische Universit\"{a}t Dresden},
title = {Polynomial-Time Reasoning Support for Design and Maintenance of Large-Scale Biomedical Ontologies},
type = {Doctoral Thesis},
year = {2009},
}


2008

Maja Miličić: Action, Time and Space in Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2008.
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. On the other hand, standard action formalisms are KR formalisms based on classical logic designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms with an expressiveness significantly beyond propositional. To this end, we offer DL-tailored solutions to the frame and ramification problems. The main technical result is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. Moreover, reasoning about DL actions is reducible to standard DL reasoning problems, with the nice consequence that already available DL reasoners can be employed to provide automated support for reasoning about a dynamically changing world. A smaller part of the work is related to the design of a tableau algorithm for decidable extensions of Description Logics with concrete datatypes, most importantly those that allow reference to the notions of space and time, and with powerful domain constraints (general concept inclusions).
@thesis{ MajaMilicicDiss,
author = {Maja {Mili\v{c}i\'{c}}},
school = {Technische Universit\"{a}t Dresden},
title = {Action, Time and Space in Description Logics},
type = {Doctoral Thesis},
year = {2008},
}


2007

Jan Hladik: To and Fro Between Tableaus and Automata for Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2007.
Description Logics (DLs) are a family of knowledge representation languages with well-defined logic-based semantics and decidable inference problems, e.g. satisfiability. Two of the most widely used decision procedures for the satisfiability problem are tableau- and automata-based algorithms. Due to their different modes of operation, these two classes have complementary properties: tableau algorithms are well suited for implementation and for showing PSPACE and NEXPTIME complexity results, whereas automata algorithms are particularly useful for showing EXPTIME results. Additionally, automata algorithms allow for an elegant handling of infinite structures, but they are not suited for implementation. The aim of this thesis is to analyse the reasons for these differences and to find ways of transferring properties between the two approaches in order to reconcile the positive properties of both.

In a first approach, we describe a method to translate automata into DL knowledge bases, which allows us to use tableau reasoners to decide the automata emptiness problem. Since empirical results show that this does not lead to acceptable performance in practice, we develop a way of transferring PSPACE complexity results from tableaus to automata by modifying the automata emptiness test with optimisation techniques known from tableaus, in particular by using a blocking condition. Finally, in order to transfer EXPTIME complexity results from the automata to the tableau paradigm, we develop Tableau Systems, a framework for tableau algorithms. From an algorithm formalised within this framework, it is possible to derive both an EXPTIME automata algorithm and a tableau algorithm that is usable in practice.
@thesis{ Hladik-Diss-2007,
author = {Jan {Hladik}},
school = {Technische Universit\"{a}t Dresden},
title = {To and Fro Between Tableaus and Automata for Description Logics},
type = {Doctoral Thesis},
year = {2007},
}


Barış Sertkaya: Formal Concept Analysis Methods for Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2007.
Description Logics (DLs) are a class of logic-based knowledge representation formalisms that are used to represent terminological knowledge of an application domain in a structured way. Formal Concept Analysis (FCA), on the other hand, is a field of applied mathematics that aims to formalize the notions of a concept and a conceptual hierarchy by means of mathematical tools. Although the notion of a concept as a collection of objects sharing certain properties, and the notion of a conceptual hierarchy are fundamental to both DLs and FCA, the ways concepts are described and obtained differ significantly between these two research areas.

In the present work we show that, despite these differences, DL research can benefit from FCA research for solving problems encountered in knowledge representation by employing FCA methods. Our contributions in this direction fall mainly under two topics: 1) supporting bottom-up construction of DL knowledge bases, 2) completing DL knowledge bases. In the first one we show how the attribute exploration method can be used for computing least common subsumers w.r.t. a background knowledge base. In the second one, we develop an extension of FCA for open-world semantics of DL knowledge bases, and show how the extended attribute exploration method can be used to detect missing axioms and individuals in a DL knowledge base. Moreover we also contribute to FCA research by investigating the computational complexity of several decision and counting problems related to minimal generators of closed sets.
@thesis{ Sertkaya-Diss-2007,
author = {Bar\i{}\c{s} {Sertkaya}},
school = {Technische Universit\"{a}t Dresden},
title = {Formal Concept Analysis Methods for Description Logics},
type = {Doctoral Thesis},
year = {2007},
}


Anni-Yasmin Turhan: On the Computation of Common Subsumers in Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2007.
Description logic (DL) knowledge bases are often built by users with expertise in the application domain, but little expertise in logic. To support such users when building their knowledge bases, a number of extension methods have been proposed that provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions.

If disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem, we propose two approaches for obtaining "meaningful" common subsumers in the presence of disjunction, tailored to two different methods of extending DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and for the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
@thesis{ Turhan-PhD,
author = {Anni-Yasmin {Turhan}},
school = {Technische Universit\"{a}t Dresden},
title = {On the Computation of Common Subsumers in Description Logics},
type = {Doctoral Thesis},
year = {2007},
}


2006

Sebastian-Philipp Brandt: Standard and Non-standard Reasoning in Description Logics. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2006.
The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions. (1) Tractable reasoning revisited: in the 1990s, DL research largely answered in the negative the question of whether practically relevant yet tractable DL formalisms exist. Due to novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we have revisited this question, this time looking in a new direction: general terminologies (TBoxes) and extensions thereof, defined over the DL EL and extensions thereof. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that every additional standard DL constructor, every extension of the TBox formalism, and every more powerful concrete domain makes reasoning intractable. (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work extends the state of the art of non-standard inferences both in theory and in implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial-time algorithms for matching under acyclic side conditions in ALN and its sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes. (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally raise the question of a tractable DL formalism in which both can be provided. As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
@thesis{ Brandt-PhD-2006,
author = {Sebastian-Philipp {Brandt}},
school = {Technische Universit\"{a}t Dresden},
title = {Standard and Non-standard Reasoning in Description Logics},
type = {Doctoral Thesis},
year = {2006},
}


2002

T. Hinze: Universelle Modelle und ausgewählte Algorithmen des DNA-Computing. Doctoral Thesis, Technische Universität Dresden, Dresden, Germany, 2002.
This thesis examines the research field of DNA computing primarily from the perspective of computability theory. Universal as well as space-bounded universal models of DNA computing, whose DNA-based data rest on the representation of linear DNA, are investigated, classified, and applied as description systems for algorithms. With the TT6-EH system and the simulation system Sisyphus, two universal DNA-computing models are introduced whose properties are oriented towards laboratory practice. The TT6-EH system is a distributed splicing system with finitely many components, characterized by a static system structure, a minimization of the resources involved in processing, and a low level of abstraction of the model operations. The simulation system Sisyphus takes into account side effects of the molecular-biological processes underlying the model operations. In addition, the model has the properties "restrictive" and "multiset-based". The approach was verified in laboratory practice on a problem instance of the NP-complete knapsack problem.
@thesis{ Hinze-TUD-02,
author = {T. {Hinze}},
school = {Technische Universit\"{a}t Dresden},
title = {Universelle Modelle und ausgew\"{a}hlte Algorithmen des DNA-Computing},
type = {Doctoral Thesis},
year = {2002},
}


Carsten Lutz: The Complexity of Description Logics with Concrete Domains. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 2002.
Concrete domains are an extension of Description Logics (DLs) that allows the integration of reasoning about conceptual knowledge with reasoning about "concrete qualities" of real-world entities, such as their age, weight, shape, and temporal extension. In this thesis, we perform an in-depth analysis of the computational complexity of reasoning with DLs that are equipped with concrete domains.

The main results are that (i) reasoning with ALC(D), the basic DL ALC extended with a concrete domain D, is PSpace-complete if reasoning with D is in PSpace; (ii) for many seemingly "harmless" extensions of ALC(D), such as the extension with acyclic TBoxes, role conjunction, and inverse roles, the complexity of reasoning leaps from PSpace-completeness to NExpTime-completeness—at least for a large class of concrete domains; and (iii) although, in general, the combination of concrete domains and general TBoxes leads to undecidability of reasoning, there exists an interesting temporal concrete domain that can be combined with general TBoxes without sacrificing decidability. This concrete domain is used to devise a rather powerful interval-based temporal Description Logic.
As a side-track, we illuminate the connection between Description Logics equipped with concrete domains and DLs providing for so-called feature agreement and disagreement constructors. Indeed, this connection turns out to be quite close: in most cases, the complexity of reasoning with a DL providing for concrete domains is identical to the complexity of reasoning with the corresponding DL that offers feature (dis)agreements.
@thesis{ Lutz-PhD-2002,
author = {Carsten {Lutz}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {The Complexity of Description Logics with Concrete Domains},
type = {Doctoral Thesis},
year = {2002},
}


2001

Stephan Tobies: Complexity Results and Practical Algorithms for Logics in Knowledge Representation. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 2001.
Description Logics (DLs) are used in knowledge-based systems to represent and reason about terminological knowledge of the application domain in a semantically well-defined manner. In this thesis, we establish a number of novel complexity results and give practical algorithms for expressive DLs that provide different forms of counting quantifiers.
We show that, in many cases, adding local counting in the form of qualifying number restrictions to DLs does not increase the complexity of the inference problems, even if binary coding of numbers in the input is assumed. On the other hand, we show that adding different forms of global counting restrictions to a logic may increase the complexity of the inference problems dramatically.

We provide exact complexity results and a practical, tableau-based algorithm for the DL SHIQ, which forms the basis of the highly optimized DL system iFaCT.

Finally, we describe a tableau algorithm for the clique guarded fragment (CGF), which we hope will serve as the basis for an efficient implementation of a CGF reasoner.
@thesis{ Tobies-PhD-2001,
author = {Stephan {Tobies}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Complexity Results and Practical Algorithms for Logics in Knowledge Representation},
type = {Doctoral Thesis},
year = {2001},
}


2000

Ralf Küsters: Non-Standard Inferences in Description Logics. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 2000.
Description Logics denote a family of knowledge representation formalisms that can be used to represent the terminological knowledge of an application domain in a structured and well-defined way. The standard inferences in these logics, like subsumption and instance checking, have been investigated in detail during the last fifteen years, both from a theoretical and a practical point of view. In some applications it has turned out, however, that the optimal support of building and maintaining large knowledge bases requires additional (non-standard) inferences, such as least common subsumer, most specific concept, and matching of concept descriptions. Some systems contain ad hoc implementations of such inferences, mostly without having defined them formally or having investigated their computational complexity. In this work, a formal basis for these inferences is established by providing precise definitions, complete algorithms, and first complexity results.
@thesis{ Kuesters-Diss-2000,
author = {Ralf {K\"{u}sters}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Non-Standard Inferences in Description Logics},
type = {Doctoral Thesis},
year = {2000},
}


Ralf Molitor: Unterstützung der Modellierung verfahrenstechnischer Prozesse durch Nicht-Standardinferenzen in Beschreibungslogiken. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 2000.
Abstract  BibTeX Entry  PDF File  PS File  Publication
In process engineering, as in many other fields, a structured representation and storage of application-specific knowledge is desirable. Within a cooperation between the Chair of Process Systems Engineering and the Research Group for Theoretical Computer Science, it has already been demonstrated that Description Logics, owing to their high expressive power and powerful inference algorithms, are very well suited to this task. For example, the inference services provided by Description Logic systems as standard enable the automatic computation of the specialization hierarchy of a knowledge base. It has turned out, however, that these services do not suffice for comprehensive support of building and maintaining the knowledge base.
This thesis therefore investigates the non-standard inferences least common subsumer, most specific concept, and rewriting, which in combination support the definition of new concepts and thus the extension and maintenance of the knowledge base. The results on existence, computability, and complexity, as well as the development of complete algorithms for solving these inference problems, focus on Description Logics that are already successfully employed in process engineering.
@thesis{ Molitor-diss,
author = {Ralf {Molitor}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Unterst\"{u}tzung der Modellierung verfahrenstechnischer Prozesse durch Nicht-Standardinferenzen in Beschreibungslogiken},
type = {Doctoral Thesis},
year = {2000},
}


1999

Jörn Richts: Effiziente Entscheidungsverfahren zur E-Unifikation. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 1999.
In automated theorem proving, E-unification is a technique for complementing generally applicable proof procedures, such as resolution or completion, with specialized procedures. While the latter can solve only very specific problems, they are particularly efficient at doing so. By employing E-unification, the treatment of an equational theory E is shifted from the general proof procedure into a dedicated unification procedure.
If several equational theories E0, ..., En occur in a proof problem, the task arises of combining the individual unification procedures for these theories into a unification procedure for the union of the theories E0, ..., En. Several combination algorithms have been presented in the literature for this purpose. These, however, deal almost exclusively with the combination of E-unification procedures that compute so-called complete sets of unifiers, i.e., sets of solutions. In recent years, interest has also grown in E-unification procedures that work as pure decision procedures. Franz Baader and Klaus Schulz have presented a combination algorithm that can also combine decision procedures for E-unification. This algorithm, however, is mainly of theoretical interest, since a direct implementation is unsuitable for practical problems because of its enormously large search space.

Building on the algorithm of Baader and Schulz, this thesis presents an optimized combination method for decision procedures for E-unification. The basic idea of this optimization is to restrict the search space by exchanging information between the component procedures of the theories E0, ..., En involved. In order to specify this information exchange and prove its correctness, a dedicated formalism, so-called constraints, is introduced. In addition, it is shown for three selected theories how known decision procedures for these theories can be extended so that they can participate in this information exchange. Runtime tests with the implemented algorithms show that these optimizations indeed yield unification algorithms that are usable for practical problems.
@thesis{ Richts-Diss-1999,
author = {J\"{o}rn {Richts}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Effiziente Entscheidungsverfahren zur E-Unifikation},
type = {Doctoral Thesis},
year = {1999},
}


Christopher B. Tresp: Beschreibungslogiken zur Behandlung von unscharfem Wissen im Kontext eines medizinischen Anwendungsszenarios. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 1999.
In the medical domain, the use of traditional knowledge representation systems is complicated by the fact that statements and conclusions are frequently not exact. Rather, they are often available only with a certain degree of vagueness. Nevertheless, there is often a need to formalize the relevant knowledge appropriately in order to develop application systems that support physicians in their training or everyday work.
To build a medical tutoring system in the field of radiology, this thesis therefore develops an infinitely-valued (fuzzy) Description Logic that allows the vague knowledge arising in this setting to be represented.

One focus of the thesis is the theoretical foundation of the Description Logic and the reasoning procedure built on it, which provides the starting point for a computer system. The properties established here allow statements that are important for the later application, for example concerning the soundness of inferences in this logic.

On the other hand, the thesis also addresses the practical issues that arise in the design of a medical application system. An important point here is the processing of medical terminology, which, in the context of the tutoring system, must be translated fully automatically into constructs of the knowledge representation language.
@thesis{ Tresp-Diss-1999,
author = {Christopher B. {Tresp}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Beschreibungslogiken zur Behandlung von unscharfem Wissen im Kontext eines medizinischen Anwendungsszenarios},
type = {Doctoral Thesis},
year = {1999},
}


1998

Can Adam Albayrak: Die WHILE-Hierarchie für Programmschemata. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 1998.
This thesis considers several classes of monadic program schemata known from the literature. Monadic program schemata are uninterpreted programs and thus represent programs over potentially arbitrary domains of computation with arbitrary unary base functions. In contrast to the notion of equivalence for programs, under which two programs are considered equivalent if they compute the same function, the notion of equivalence for program schemata requires that, for all interpretations of the base functions, the resulting programs compute the same function. For the class of Ianov schemata, a characterization of equivalent program schemata is known. Here, such a characterization is given for the class of Dijkstra schemata and the class of Kosaraju schemata. The characterization proceeds by assigning regular languages, or vectors of such languages, and, notably, the characterizing languages follow the inductive structure of the respective classes. Using these characterizations, the equivalence problem for program schemata of these classes can, for example, be decided very efficiently.
Furthermore, it is shown that the nesting depth of WHILE loops in Dijkstra schemata gives rise to a strict semantic hierarchy of functions, the so-called WHILE hierarchy. This hierarchy result shows that, when arbitrary domains of computation and arbitrary base functions are considered, the classical theorem of recursion theory due to S. C. Kleene no longer holds, which states that a single WHILE loop suffices to program arbitrary computable functions on the natural numbers.
@thesis{ Albayrak-Diss-1998,
author = {Can Adam {Albayrak}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Die WHILE-Hierarchie f\"{u}r Programmschemata},
type = {Doctoral Thesis},
year = {1998},
}


Ulrike Sattler: Terminological Knowledge Representation Systems in a Chemical Engineering Application. Doctoral Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 1998.
Abstract  BibTeX Entry  PS File  ©Verlag Mainz
This work is concerned with the question of how far terminological knowledge representation systems can support the development of mathematical models of chemical processes. Terminological knowledge representation systems are based on Description Logics, a highly expressive formalism with well-defined semantics, and provide powerful inference services. These system services can be used to support the structuring of the application domain, namely the organised storage of parts of process models. However, the process systems engineering application asks for the extension of the expressive power of already existing Description Logics. These extensions are introduced and investigated with respect to the computational complexity of the corresponding inference problems.
@thesis{ Sattler-diss,
author = {Ulrike {Sattler}},
school = {Rheinisch-Westf\"{a}lische Technische Hochschule Aachen},
title = {Terminological Knowledge Representation Systems in a Chemical Engineering Application},
type = {Doctoral Thesis},
year = {1998},
}


Generated 23 September 2018, 19:13:26.

About this page

Francesco Kriegel
Last modified: 27.02.2018