Francesco Kriegel
Research Associate
Name: Mr Dr.-Ing. Francesco Kriegel
Send encrypted mail via the SecureMail portal (for TUD external users only).
Visitor Address:
Andreas-Pfitzmann-Bau, Room 3032, Nöthnitzer Straße 46
01187 Dresden
Concept Explorer FX (conexpfx)
Table of contents
Projects
 Current Project: Repairing Description Logic Ontologies Further Information
 Completed Project: Construction and Extension of Description Logic Knowledge Bases with Methods of Formal Concept Analysis Further Information
Awards

Best Paper Award: International Conference on Concept Lattices and Their Applications (CLA) 2015
Talk Slides
 Computing Optimal Repairs of Quantified ABoxes w.r.t. Static EL TBoxes Accompanying Poster Publication (KR 2021)
 Computing Optimal Repairs of Quantified ABoxes w.r.t. Static EL TBoxes Publication (CADE-28)
 Repairing EL TBoxes by Means of Countermodels Obtained by Model Transformation Publication (DL 2021)
 Navigating the EL Subsumption Hierarchy Publication (DL 2021)
Publications
2022
Abstract BibTeX Entry PDF File Extended Technical Report
Errors in Description Logic (DL) ontologies are often detected when a reasoner computes unwanted consequences. The question is then how to repair the ontology such that the unwanted consequences no longer follow, but as many of the other consequences as possible are preserved. The problem of computing such optimal repairs was addressed in our previous work in the setting where the data (expressed by an ABox) may contain errors, but the schema (expressed by an \(\mathcal{EL}\) TBox) is assumed to be correct. Actually, we consider a generalization of ABoxes called quantified ABoxes (qABoxes) both as input for and as result of the repair process. Using qABoxes for repair allows us to retain more information, but the disadvantage is that standard DL systems do not accept qABoxes as input. This raises the question, investigated in the present paper, whether and how one can obtain optimal repairs if one restricts the output of the repair process to being ABoxes. In general, such optimal ABox repairs need not exist. Our main contribution is that we show how to decide the existence of optimal ABox repairs in exponential time, and how to compute all such repairs in case they exist.
@inproceedings{ BaKoKrNuESWC2022, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {The Semantic Web -- 19th International Conference, {ESWC} 2022, Hersonissos, Greece, May 29 -- June 2, 2022, Proceedings}, editor = {Paul {Groth} and Maria-Esther {Vidal} and Fabian {Suchanek}}, note = {To appear.}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {Optimal ABox Repair w.r.t.\ Static {$\mathcal{EL}$} TBoxes: from Quantified ABoxes back to ABoxes}, year = {2022}, }
Abstract BibTeX Entry Extended Technical Report
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an \(\mathcal{EL}\) instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static \(\mathcal{EL}\) TBox.
Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to \(\mathcal{EL}\), which yields a fragment of the well-known DL Horn-\(\mathcal{SROIQ}\). The ideas underlying our repair approach still apply to this DL, though several nontrivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-\(\mathcal{ALCOI}\) TBoxes with regular role inclusions.
@inproceedings{ BaKrKR2022, author = {Franz {Baader} and Francesco {Kriegel}}, booktitle = {Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning, {KR} 2022, Haifa, Israel, July 31 -- August 5, 2022}, note = {To appear.}, title = {Pushing Optimal ABox Repair from {$\mathcal{EL}$} Towards More Expressive Horn-DLs}, year = {2022}, }
2021
Abstract BibTeX Entry PDF File DOI Extended Abstract Extended Technical Report
The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the lightweight DL \(\mathcal{E\!L}\). In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an \(\mathcal{E\!L}\) TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best-case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
@inproceedings{ BaKoKrNuCADE2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 28th International Conference on Automated Deduction (CADE-28), July 11--16, 2021, Virtual Event, United States}, doi = {https://doi.org/10.1007/978-3-030-79876-5_18}, editor = {Andr{\'e} {Platzer} and Geoff {Sutcliffe}}, pages = {309--326}, series = {Lecture Notes in Computer Science}, title = {{Computing Optimal Repairs of Quantified ABoxes w.r.t. Static $\mathcal{E\!L}$ TBoxes}}, volume = {12699}, year = {2021}, }
BibTeX Entry PDF File PDF File (Poster) PDF File (rwthaachen.de) Full Conference Paper
@inproceedings{ BaKoKrNuKR2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Recent Published Research (RPR) track of the 18th International Conference on Principles of Knowledge Representation and Reasoning, {KR} 2021, Virtual Event, November 3--12, 2021}, title = {{Computing Optimal Repairs of Quantified ABoxes w.r.t.\ Static $\mathcal{EL}$ TBoxes (Extended Abstract)}}, year = {2021}, }
Abstract BibTeX Entry PDF File PDF File (ceurws.org) Extended Technical Report
We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information.
@inproceedings{ BaKoKrNuPeDL2021, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t.\ a Static Cycle-Restricted $\mathcal{EL}$ TBox}}, volume = {2954}, year = {2021}, }
Abstract BibTeX Entry PDF File DOI Extended Technical Report
In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.
@inproceedings{ BaKrNuPeSAC2021, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 36th Annual ACM Symposium on Applied Computing (SAC '21), March 22--26, 2021, Virtual Event, Republic of Korea}, doi = {https://doi.org/10.1145/3412841.3441961}, pages = {863--872}, publisher = {Association for Computing Machinery}, title = {{Safety of Quantified ABoxes w.r.t.\ Singleton $\mathcal{E\!L}$ Policies}}, year = {2021}, }
Abstract BibTeX Entry PDF File PDF File (ceurws.org)
Knowledge engineers might face situations in which an unwanted consequence is derivable from an ontology. It is then desired to revise the ontology such that it no longer entails the consequence. For this purpose, we introduce a novel technique for repairing TBoxes formulated in the description logic \(\mathcal{EL}\). Specifically, we first compute a canonical model of the TBox and then transform it into a countermodel to the unwanted consequence. As formalism for the model transformation we employ transductions. We then obtain a TBox repair as the axiomatization of the logical intersection of the original TBox and the theory of the countermodel. In fact, we construct a set of countermodels, each of which induces a TBox repair. For the actual computation of the repairs we use results from Formal Concept Analysis.
@inproceedings{ HiKrNuDL2021, author = {Willi {Hieke} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Repairing $\mathcal{EL}$ TBoxes by Means of Countermodels Obtained by Model Transformation}}, volume = {2954}, year = {2021}, }
Abstract BibTeX Entry PDF File PDF File (ceurws.org)
The \(\mathcal{EL}\) subsumption hierarchy consists of all \(\mathcal{EL}\) concept descriptions and is partially ordered by subsumption. In order to navigate within this hierarchy, one can go up to subsumers and down to subsumees. We analyze how smallest steps can be made. Specifically, we show how all upper neighbors as well as all lower neighbors of a given \(\mathcal{EL}\) concept description can be efficiently computed, where two concepts are neighbors if one subsumes the other and there is no third concept in between. We further show that the hierarchy contains very long chains: there is a sequence of concepts \(C_n\) with size linear in \(n\) such that each chain of neighbors from \(\top\) to \(C_n\) has at least \(n\)-fold exponential length. As applications, we provide a template for determining upper complexity bounds for deciding whether a concept is maximally general or maximally specific w.r.t. a property, we construct a metric on the set of all \(\mathcal{EL}\) concept descriptions, we introduce a similarity measure that fulfills the triangle inequality, and we conclude that an uninformed search for a target concept by subsequently computing neighbors or, equivalently, along an ideal refinement operator is not feasible in practical applications.
@inproceedings{ KrDL2021, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 34th International Workshop on Description Logics ({DL} 2021), Hybrid Event, Bratislava, Slovakia, September 19--22, 2021}, editor = {Martin {Homola} and Vladislav {Ryzhikov} and Renate A. {Schmidt}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Navigating the $\mathcal{EL}$ Subsumption Hierarchy}}, volume = {2954}, year = {2021}, }
2020
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterization of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
@inproceedings{ BaKrNuPeISWC2020, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 19th International Semantic Web Conference (ISWC 2020), Part {I}, Athens, Greece, November 2--6, 2020}, doi = {https://doi.org/10.1007/978-3-030-62419-4_1}, editor = {Jeff Z. {Pan} and Valentina A. M. {Tamma} and Claudia {d'Amato} and Krzysztof {Janowicz} and Bo {Fu} and Axel {Polleres} and Oshani {Seneviratne} and Lalana {Kagal}}, pages = {3--20}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies}}, volume = {12506}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI Doctoral Thesis
My thesis describes how methods from Formal Concept Analysis can be used for constructing and extending description logic ontologies. In particular, it is shown how concept inclusions can be axiomatized from data in the description logics \(\mathcal{E\mkern1.618mu L}\), \(\mathcal{M}\), \(\textsf{Horn-}\mathcal{M}\), and \(\textsf{Prob-}\mathcal{E\mkern1.618mu L}\). All proposed methods are not only sound but also complete, i.e., the result not only consists of valid concept inclusions but also entails each valid concept inclusion. Moreover, a lattice-theoretic view on the description logic \(\mathcal{E\mkern1.618mu L}\) is provided. For instance, it is shown how upper and lower neighbors of \(\mathcal{E\mkern1.618mu L}\) concept descriptions can be computed and further it is proven that the set of \(\mathcal{E\mkern1.618mu L}\) concept descriptions forms a graded lattice with a non-elementary rank function.
@article{ KrKI20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1007/s13218-020-00673-8}, journal = {KI -- K{\"{u}}nstliche Intelligenz}, note = {This is a post-peer-review, pre-copyedit version of an article published in KI -- K{\"{u}}nstliche Intelligenz. The final authenticated version is available online.}, pages = {399--403}, title = {{Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis: A Dissertation Summary}}, volume = {34}, year = {2020}, }
Abstract BibTeX Entry PDF File DOI ©Elsevier
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.
@article{ KrDAM20, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1016/j.dam.2019.01.029}, journal = {Discrete Applied Mathematics}, pages = {172--204}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern1.618mu L}$}}, volume = {273}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@inproceedings{ BaKrNuJELIA19, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7--11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_21}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {323--338}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores}}, volume = {11468}, year = {2019}, }
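In the TBox-free instance-store setting of this paper, the compliance check rests on deciding subsumption between \(\mathcal{EL}\) concept descriptions, which can be done by structural recursion. The following is only an illustrative sketch; the concept representation and all names in it are my own assumptions, not taken from the paper.

```python
# Illustrative sketch: an EL concept description without a TBox is encoded
# as a pair (set of concept names, list of (role, subconcept) pairs for the
# existential restrictions). Names here are assumptions for this example.

def subsumed(c, d):
    """Decide C ⊑ D structurally: every concept name of D must occur in C,
    and every existential restriction of D must be matched by one of C."""
    c_names, c_succ = c
    d_names, d_succ = d
    if not d_names <= c_names:
        return False
    return all(any(r == s and subsumed(c2, d2) for s, c2 in c_succ)
               for r, d2 in d_succ)

def complies(concept, policy):
    """A concept complies with a policy concept iff it is not subsumed by it."""
    return not subsumed(concept, policy)

# ∃related.(Cancer ⊓ Treated) is subsumed by the policy ∃related.Cancer,
# hence it does not comply and would have to be generalized.
record = (set(), [("related", ({"Cancer", "Treated"}, []))])
policy = (set(), [("related", ({"Cancer"}, []))])
assert subsumed(record, policy) and not complies(record, policy)
```

Without a TBox this check is polynomial; the paper's contribution concerns the optimal compliant and safe generalizations built on top of such checks.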
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@inproceedings{ KrICFCA19, author = {Francesco {Kriegel}}, booktitle = {15th International Conference on Formal Concept Analysis, {ICFCA} 2019, Frankfurt, Germany, June 25--28, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-21462-3_9}, editor = {Diana {Cristea} and Florence {Le Ber} and Bar\i{}\c{s} {Sertkaya}}, pages = {110--129}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic}}, volume = {11511}, year = {2019}, }
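The notion of validity underlying this paper is the standard one from Formal Concept Analysis: an implication A → B holds in a formal context iff every object whose intent contains A also contains B, and a joining implication additionally restricts which attributes may occur in premise and conclusion. A small sketch, assuming the context is given as a mapping from objects to attribute sets (all names are illustrative, not from the paper):

```python
# Illustrative sketch: a formal context as a mapping from objects to their
# attribute sets (object intents); names are assumptions for this example.

def valid(context, premise, conclusion):
    """A → B is valid iff every object intent containing A also contains B."""
    return all(conclusion <= intent
               for intent in context.values()
               if premise <= intent)

def valid_joining(context, premise, conclusion, prem_attrs, concl_attrs):
    """A joining implication must additionally draw its premise and its
    conclusion from the designated attribute sets."""
    return (premise <= prem_attrs and conclusion <= concl_attrs
            and valid(context, premise, conclusion))

context = {"g1": {"a", "b", "c"}, "g2": {"a", "b"}, "g3": {"b", "c"}}
assert valid(context, {"a"}, {"b"})      # every object with a also has b
assert not valid(context, {"b"}, {"a"})  # g3 has b but lacks a
```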
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@inproceedings{ KrJELIA19, author = {Francesco {Kriegel}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7--11, 2019, Proceedings}, doi = {https://doi.org/10.1007/978-3-030-19570-0_26}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {399--417}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs}}, volume = {11468}, year = {2019}, }
2018
BibTeX Entry PDF File PDF File (ceurws.org) Full Conference Paper
@inproceedings{ BaKrNuPeDL2018, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27--29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Making Repairs in Description Logics More Gentle (Extended Abstract)}}, volume = {2211}, year = {2018}, }
Abstract BibTeX Entry PDF File ©AAAI Extended Abstract Extended Technical Report
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle notion of repair in which axioms are not deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern1.618mu L}\).
@inproceedings{ BaKrNuPeKR2018, address = {USA}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, {KR} 2018, Tempe, Arizona, 30 October -- 2 November 2018}, editor = {Frank {Wolter} and Michael {Thielscher} and Francesca {Toni}}, pages = {319--328}, publisher = {{AAAI} Press}, title = {{Making Repairs in Description Logics More Gentle}}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
For a probabilistic extension of the description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness is justified.
@inproceedings{ KrKI18, address = {Berlin, Germany}, author = {Francesco {Kriegel}}, booktitle = {{{KI} 2018: Advances in Artificial Intelligence -- 41st German Conference on AI, Berlin, Germany, September 24--28, 2018, Proceedings}}, doi = {https://doi.org/10.1007/978-3-030-00111-7_5}, editor = {Frank {Trollmann} and Anni-Yasmin {Turhan}}, pages = {46--53}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Acquisition of Terminological Knowledge in Probabilistic Description Logic}}, volume = {11117}, year = {2018}, }
Abstract BibTeX Entry PDF File PDF File (ceurws.org)
For the description logic \(\mathcal{E\mkern1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@inproceedings{ KrCLA18, address = {Olomouc, Czech Republic}, author = {Francesco {Kriegel}}, booktitle = {{Proceedings of the 14th International Conference on Concept Lattices and Their Applications ({CLA 2018})}}, editor = {Dmitry I. {Ignatov} and Lhouari {Nourine}}, pages = {267--278}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern1.618mu L}$ Concept Descriptions and its Neighborhood Relation}}, volume = {2123}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
The Web Ontology Language (OWL) has gained serious attention since its introduction in 2004, and it is heavily used in applications requiring representation of as well as reasoning with knowledge. It is the language of the Semantic Web, and it has a strong logical underpinning by means of so-called Description Logics (DLs). DLs are a family of conceptual languages suitable for knowledge representation and reasoning due to their strong logical foundation, and for which the decidability and complexity of common reasoning problems are widely explored. In particular, the reasoning tasks allow for the deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms were developed, optimized, and implemented, e.g., tableaux algorithms and completion algorithms. In this document, we present a technique for the acquisition of terminological knowledge from social networks. More specifically, we show how OWL axioms, i.e., concept inclusions and role inclusions in DLs, can be obtained from social graphs in a sound and complete manner. A social graph is simply a directed graph, the vertices of which describe the entities, e.g., persons, events, messages, etc.; and the edges of which describe the relationships between the entities, e.g., friendship between persons, attendance of a person to an event, a person liking a message, etc. Furthermore, the vertices of social graphs are labeled, e.g., to describe properties of the entities, and also the edges are labeled to specify the concrete relationships. As an exemplary social network we consider Facebook, and show that it fits our use case.
@incollection{ KrFCAoSN17, address = {Cham}, author = {Francesco {Kriegel}}, booktitle = {Formal Concept Analysis of Social Networks}, doi = {https://doi.org/10.1007/978-3-319-64167-6_5}, editor = {Rokia {Missaoui} and Sergei O. {Kuznetsov} and Sergei {Obiedkov}}, pages = {97--142}, publisher = {Springer International Publishing}, series = {Lecture Notes in Social Networks ({LNSN})}, title = {Acquisition of Terminological Knowledge from Social Networks in Description Logic}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
Entropy is a measure for the uninformativeness or randomness of a data set, i.e., the higher the entropy is, the lower is the amount of information. In the field of propositional logic it has proven to constitute a suitable measure to be maximized when dealing with models of probabilistic propositional theories. More specifically, it was shown that the model of a probabilistic propositional theory with maximal entropy allows for the deduction of other formulae which are somehow expected by humans, i.e., allows for some kind of common sense reasoning. In order to pull the technique of maximum entropy entailment to the field of Formal Concept Analysis, we define the notion of entropy of a formal context with respect to the frequency of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. Furthermore, this entailment technique is then utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such an ME-base is provided.
@inproceedings{ KrICFCA17b, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_10}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {155--167}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {First Notes on Maximum Entropy Entailment for Quantified Implications}, volume = {10308}, year = {2017}, }
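The entropy of a formal context described in the abstract is computed from the frequencies of its object intents. A minimal sketch, assuming the context is given as a mapping from objects to attribute sets (the representation and the function name are mine, not from the paper):

```python
import math
from collections import Counter

# Illustrative sketch: entropy of a formal context w.r.t. the frequency of
# its object intents; the data layout here is an assumption.

def context_entropy(context):
    """Shannon entropy (in bits) of the empirical distribution of intents."""
    counts = Counter(frozenset(intent) for intent in context.values())
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Intents {a}, {a}, {a,b}, {b} occur with probabilities 1/2, 1/4, 1/4,
# giving entropy 0.5*1 + 0.25*2 + 0.25*2 = 1.5 bits.
context = {"g1": {"a"}, "g2": {"a"}, "g3": {"a", "b"}, "g4": {"b"}}
assert abs(context_entropy(context) - 1.5) < 1e-9
```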
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
We consider the task of acquisition of terminological knowledge from given assertional data. However, when evaluating data of real-world applications we often encounter situations where it is impractical to deduce only crisp knowledge, due to the presence of exceptions or errors. It is rather appropriate to allow for degrees of uncertainty within the derived knowledge. Consequently, suitable methods for knowledge acquisition in a probabilistic framework should be developed. In particular, we consider data which is given as a probabilistic formal context, i.e., as a triadic incidence relation between objects, attributes, and worlds, which is furthermore equipped with a probability measure on the set of worlds. We define the notion of a probabilistic attribute as a probabilistically quantified set of attributes, and define the notion of validity of implications over probabilistic attributes in a probabilistic formal context. Finally, a technique for the axiomatization of such implications from probabilistic formal contexts is developed. This is done in a sound and complete manner, i.e., all derived implications are valid, and all valid implications are deducible from the derived implications. In case of finiteness of the input data to be analyzed, the constructed axiomatization is finite, too, and can be computed in finite time.
@inproceedings{ KrICFCA17a, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_11}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {168--183}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {Implications over Probabilistic Attributes}, volume = {10308}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
A probabilistic formal context is a triadic context the third dimension of which is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications with respect to probabilistic formal contexts, and provides a construction for a base of implications the probabilities of which exceed a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields the fact that both measures do not coincide. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions the probabilities of which are greater than a predefined lower bound is proposed. Additionally, we consider so-called probabilistic attributes over probabilistic formal contexts, and provide a method for the axiomatization of implications over probabilistic attributes.
@article{ KrIJGS17, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1080/03081079.2017.1349575}, journal = {International Journal of General Systems}, number = {5}, pages = {511--546}, title = {Probabilistic Implication Bases in {FCA} and Probabilistic Bases of GCIs in $\mathcal{E\mkern1.618mu L}^{\bot}$}, volume = {46}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
The canonical base of a formal context plays a distinguished role in Formal Concept Analysis, as it is the only minimal implicational base known so far that can be described explicitly. Consequently, several algorithms for the computation of this base have been proposed. However, all those algorithms work sequentially by computing only one pseudo-intent at a time – a fact that heavily impairs the practicability in real-world applications. In this paper, we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner with respect to arbitrary implicational background knowledge. First experimental evaluations show that for sufficiently large data sets the speedup is proportional to the number of available CPU cores.
@article{ KrBoIJGS17, author = {Francesco {Kriegel} and Daniel {Borchmann}}, doi = {https://doi.org/10.1080/03081079.2017.1349570}, journal = {International Journal of General Systems}, number = {5}, pages = {490--510}, title = {NextClosures: Parallel Computation of the Canonical Base with Background Knowledge}, volume = {46}, year = {2017}, }
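The level-wise idea behind the NextClosures articles above can be illustrated on a toy formal context. The following is a minimal Python sketch (the context, attribute names, and function names are hypothetical, not taken from the papers): closures are enumerated by candidate cardinality, and since candidates of the same size are independent of one another, each level could be processed in parallel.

```python
from itertools import combinations

# Hypothetical toy formal context: objects mapped to their attribute sets.
CONTEXT = {
    "g1": {"a", "b"},
    "g2": {"a", "c"},
    "g3": {"b", "c"},
}
ATTRIBUTES = {"a", "b", "c"}

def closure(attrs):
    """The derivation A'': attributes common to all objects that have all of A."""
    extent = [g for g in CONTEXT if attrs <= CONTEXT[g]]
    if not extent:
        return set(ATTRIBUTES)  # an empty extent closes to the full attribute set
    result = set(ATTRIBUTES)
    for g in extent:
        result &= CONTEXT[g]
    return result

def all_intents():
    """Enumerate all closed attribute sets level-wise by candidate size.

    Candidates of the same cardinality are independent of each other, which is
    what makes the level-wise scheme amenable to parallelization (computed
    sequentially here for brevity).
    """
    intents = set()
    for k in range(len(ATTRIBUTES) + 1):
        for cand in combinations(sorted(ATTRIBUTES), k):
            intents.add(frozenset(closure(set(cand))))
    return intents
```

On this toy context every one of the eight attribute subsets happens to be closed, so `all_intents()` returns all of them; the point of the sketch is only the level-wise candidate structure, not the data.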
2016
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases, often performed by human experts, is itself difficult, time-consuming and expensive. In particular the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
@article{ BoDiKrJANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
We propose applications that utilize the infimum and the supremum of closure operators that are induced by structures occurring in the field of Description Logics. More specifically, we consider the closure operators induced by interpretations as well as closure operators induced by TBoxes, and show how we can learn GCIs from streams of interpretations, and how an error-tolerant axiomatization of GCIs from an interpretation guided by a hand-crafted TBox can be achieved.
@inproceedings{ KrFCA4AI16, address = {The Hague, The Netherlands}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 5th International Workshop "What can {FCA} do for Artificial Intelligence?" ({FCA4AI} 2016) co-located with the European Conference on Artificial Intelligence ({ECAI} 2016)}, editor = {Sergei {Kuznetsov} and Amedeo {Napoli} and Sebastian {Rudolph}}, pages = {9--16}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Axiomatization of General Concept Inclusions from Streams of Interpretations with Optional Error Tolerance}, volume = {1703}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
In a former paper, the algorithm NextClosures for computing the set of all formal concepts as well as the canonical base for a given formal context has been introduced. Here, this algorithm shall be generalized to a setting where the dataset is described by means of a closure operator in a complete lattice, and furthermore it shall be extended with the possibility to handle constraints that are given in the form of a second closure operator. As a special case, constraints may be predefined as implicational background knowledge. Additionally, we show how the algorithm can be modified in order to do parallel Attribute Exploration for unconstrained closure operators, as well as give a reason for the impossibility of (parallel) Attribute Exploration for constrained closure operators if the constraint is not compatible with the dataset.
@inproceedings{ KrCLA16, address = {Moscow, Russia}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 13th International Conference on Concept Lattices and Their Applications ({CLA 2016})}, editor = {Marianne {Huchard} and Sergei {Kuznetsov}}, pages = {231--243}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {NextClosures with Constraints}, volume = {1624}, year = {2016}, }
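The constrained setting of the paper above can be pictured as collecting the fixed points common to two closure operators: one induced by the dataset and one expressing the constraint. The following brute-force Python sketch is hypothetical (toy operators and names are mine, not the paper's) and only illustrates that reading:

```python
from itertools import chain, combinations

def constrained_closures(closure_op, constraint_op, universe):
    """Enumerate the sets that are closed under both a data-induced closure
    operator and a constraint closure operator, i.e. their common fixed
    points. Brute force over all subsets, exponential in |universe|."""
    candidates = chain.from_iterable(
        combinations(sorted(universe), k) for k in range(len(universe) + 1))
    return [frozenset(c) for c in candidates
            if closure_op(set(c)) == set(c) and constraint_op(set(c)) == set(c)]

# Hypothetical toy closure operators on the universe {1, 2, 3}:
# c_data adds 1 whenever 2 is present; c_constraint adds 3 whenever 1 is present.
def c_data(s):
    return s | {1} if 2 in s else set(s)

def c_constraint(s):
    return s | {3} if 1 in s else set(s)
```

With these toy operators, `constrained_closures(c_data, c_constraint, {1, 2, 3})` yields exactly the four sets satisfying both conditions: the empty set, `{3}`, `{1, 3}`, and `{1, 2, 3}`.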
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
The canonical base of a formal context is a minimal set of implications that is sound and complete. A recent paper has provided a new algorithm for the parallel computation of canonical bases. An important extension is the integration of expert interaction for Attribute Exploration in order to explore implicational bases of inaccessible formal contexts. This paper presents and analyzes an algorithm that allows for Parallel Attribute Exploration.
@inproceedings{ KrICCS16, address = {Annecy, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 22nd International Conference on Conceptual Structures ({ICCS 2016})}, doi = {http://dx.doi.org/10.1007/978-3-319-40985-6_8}, editor = {Ollivier {Haemmerl{\'{e}}} and Gem {Stapleton} and Catherine {Faron{-}Zucker}}, pages = {91--106}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Parallel Attribute Exploration}, volume = {9717}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced, and then utilized to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatization of a base of GCIs is wellknown.
@inproceedings{ KrKI15, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 38th German Conference on Artificial Intelligence ({KI 2015})}, doi = {http://dx.doi.org/10.1007/978-3-319-24489-1_10}, editor = {Steffen {H{\"o}lldobler} and Sebastian {Rudolph} and Markus {Kr{\"o}tzsch}}, pages = {124--136}, publisher = {Springer-Verlag}, series = {Lecture Notes in Artificial Intelligence ({LNAI})}, title = {Axiomatization of General Concept Inclusions in Probabilistic Description Logics}, volume = {9324}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
A description graph is a directed graph that has labeled vertices and edges. This document proposes a method for extracting a knowledge base from a description graph. The technique is presented for the description logic \(\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q\mkern1.618mu R}(\mathsf{Self})\) which allows for conjunctions, primitive negation, existential restrictions, value restrictions, qualified number restrictions, existential self restrictions, and complex role inclusion axioms, but also sublogics may be chosen to express the axioms in the knowledge base. The extracted knowledge base entails all statements that can be expressed in the chosen description logic and are encoded in the input graph.
@inproceedings{ KrSNAFCA15, address = {Nerja, Spain}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis ({SNAFCA 2015}) in conjunction with the 13th International Conference on Formal Concept Analysis ({ICFCA} 2015)}, editor = {Sergei O. {Kuznetsov} and Rokia {Missaoui} and Sergei A. {Obiedkov}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Extracting $\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q\mkern1.618mu R}(\mathsf{Self})$ Knowledge Bases from Graphs}, volume = {1534}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal \(\mathcal{E\mkern1.618mu L}\) TBoxes from models, so-called bases of GCIs. However, no technique for an adjustment of an existing \(\mathcal{E\mkern1.618mu L}\) TBox w.r.t. a new model is available, i.e., on a model change the complete TBox has to be recomputed. This document proposes a method for the computation of a minimal extension of a TBox w.r.t. a new model. The method is then utilised to formulate an incremental learning algorithm that requires as input a stream of interpretations and an expert to guide the learning process.
@inproceedings{ KrDL15, address = {Athens, Greece}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL 2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {452--464}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
A probabilistic formal context is a triadic context whose third dimension is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications, and provides a construction for a base of implications whose probabilities satisfy a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields that the two measures do not coincide and cannot be compared. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions whose probabilities fulfill a certain lower bound is proposed.
@inproceedings{ KrCLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, pages = {193--204}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in $\mathcal{E\mkern1.618mu L}^{\bot}$}, volume = {1466}, year = {2015}, }
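The probability of an implication in a probabilistic formal context, as described in the abstract above, can be sketched concretely: it is the total measure of the worlds in whose (crisp) context the implication is valid. The toy context, probabilities, and function names below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical probabilistic formal context: each world carries a crisp
# incidence (objects -> attribute sets) together with the world's probability.
WORLDS = {
    "w1": ({"g": {"a", "b"}}, 0.7),
    "w2": ({"g": {"a"}}, 0.3),
}

def holds_in(incidence, premise, conclusion):
    """An implication premise -> conclusion is valid in a crisp context iff
    every object having all premise attributes also has all conclusion ones."""
    return all(conclusion <= row for row in incidence.values() if premise <= row)

def probability(premise, conclusion):
    """Probability of the implication: total measure of worlds where it is valid."""
    return sum(p for incidence, p in WORLDS.values()
               if holds_in(incidence, premise, conclusion))
```

Here `probability({"a"}, {"b"})` is 0.7, since the implication fails only in world w2; this also shows why probability and confidence need not coincide, as confidence would be computed from object counts rather than from the measure on worlds.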
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
The canonical base of a formal context plays a distinguished role in formal concept analysis. This is because it is the only minimal base so far that can be described explicitly. For the computation of this base several algorithms have been proposed. However, all those algorithms work sequentially, by computing only one pseudo-intent at a time – a fact which heavily impairs the practicability of using the canonical base in real-world applications. In this paper we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner. First experimental evaluations show that for sufficiently large datasets the speedup is proportional to the number of available CPUs.
@inproceedings{ KrBoCLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel} and Daniel {Borchmann}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, note = {Best Paper Award.}, pages = {182--192}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {NextClosures: Parallel Computation of the Canonical Base}, volume = {1466}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File PDF File (ubbcluj.ro)
Suppose a formal context \(\mathbb{K}=(G,M,I)\) is given, whose concept lattice \(\mathfrak{B}(\mathbb{K})\) with an attribute-additive concept diagram is already known, and an attribute column \(\mathbb{C}=(G,\{n\},J)\) shall be inserted into or removed from it. This paper introduces and proves an incremental update algorithm for both tasks.
@article{ KrICFCA14, address = {Cluj-Napoca, Romania}, author = {Francesco {Kriegel}}, journal = {Studia Universitatis Babe{\c{s}}-Bolyai Informatica}, note = {Supplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA 2014), Cluj-Napoca, Romania}, pages = {45--61}, title = {Incremental Computation of Concept Diagrams}, volume = {59}, year = {2014}, }
Generated 24 May 2022, 13:31:30.
Technical Reports
2022
Abstract BibTeX Entry PDF File DOI
Errors in Description Logic (DL) ontologies are often detected when a reasoner computes unwanted consequences. The question is then how to repair the ontology such that the unwanted consequences no longer follow, but as many of the other consequences as possible are preserved. The problem of computing such optimal repairs was addressed in our previous work in the setting where the data (expressed by an ABox) may contain errors, but the schema (expressed by an \(\mathcal{EL}\) TBox) is assumed to be correct. Actually, we consider a generalization of ABoxes called quantified ABoxes (qABoxes) both as input for and as result of the repair process. Using qABoxes for repair allows us to retain more information, but the disadvantage is that standard DL systems do not accept qABoxes as input. This raises the question, investigated in the present paper, whether and how one can obtain optimal repairs if one restricts the output of the repair process to being ABoxes. In general, such optimal ABox repairs need not exist. Our main contribution is that we show how to decide the existence of optimal ABox repairs in exponential time, and how to compute all such repairs in case they exist.
@techreport{ BaKoKrNuLTCS2201, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2022.65}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {22-01}, title = {Optimal ABox Repair w.r.t.\ Static {$\mathcal{EL}$} TBoxes: from Quantified ABoxes back to ABoxes (Extended Version)}, type = {LTCS-Report}, year = {2022}, }
Abstract BibTeX Entry PDF File DOI
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an \(\mathcal{EL}\) instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static \(\mathcal{EL}\) TBox.
Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to \(\mathcal{EL}\), which yields a fragment of the well-known DL Horn-\(\mathcal{SROIQ}\). The ideas underlying our repair approach still apply to this DL, though several non-trivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-\(\mathcal{ALCOI}\) TBoxes with regular role inclusions.
@techreport{ BaKrLTCS2202, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel}}, doi = {https://doi.org/10.25368/2022.131}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {22-02}, title = {Pushing Optimal ABox Repair from {$\mathcal{EL}$} Towards More Expressive Horn-DLs (Extended Version)}, type = {LTCS-Report}, year = {2022}, }
2021
Abstract BibTeX Entry PDF File DOI
The application of automated reasoning approaches to Description Logic (DL) ontologies may produce certain consequences that either are deemed to be wrong or should be hidden for privacy reasons. The question is then how to repair the ontology such that the unwanted consequences can no longer be deduced. An optimal repair is one where the least amount of other consequences is removed. Most of the previous approaches to ontology repair are of a syntactic nature in that they remove or weaken the axioms explicitly present in the ontology, and thus cannot achieve semantic optimality. In previous work, we have addressed the problem of computing optimal repairs of (quantified) ABoxes, where the unwanted consequences are described by concept assertions of the lightweight DL \(\mathcal{EL}\). In the present paper, we improve on the results achieved so far in two ways. First, we allow for the presence of terminological knowledge in the form of an \(\mathcal{EL}\) TBox. This TBox is assumed to be static in the sense that it cannot be changed in the repair process. Second, the construction of optimal repairs described in our previous work is best-case exponential. We introduce an optimized construction that is exponential only in the worst case. First experimental results indicate that this reduces the size of the computed optimal repairs considerably.
@techreport{ BaKoKrNuLTCS2101, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, doi = {https://doi.org/10.25368/2022.64}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {21-01}, title = {Computing Optimal Repairs of Quantified ABoxes w.r.t. Static {$\mathcal{EL}$} TBoxes (Extended Version)}, type = {LTCS-Report}, year = {2021}, }
Abstract BibTeX Entry PDF File
We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information.
@techreport{ BaKoKrNuPeLTCS2104, address = {Dresden, Germany}, author = {Franz {Baader} and Patrick {Koopmann} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {21-04}, title = {{Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t.\ a Static Cycle-Restricted $\mathcal{EL}$ TBox (Extended Version)}}, type = {LTCS-Report}, year = {2021}, }
2020
Abstract BibTeX Entry PDF File
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL \(\mathcal{E\!L}\). We provide a characterisation of compliance of such ABoxes w.r.t. \(\mathcal{E\!L}\) policies, and show how optimal compliant anonymisations of ABoxes that are non-compliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
@techreport{ BaKrNuPeLTCS2008, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-08}, title = {Computing Compliant Anonymisations of Quantified ABoxes w.r.t. $\mathcal{E\!L}$ Policies (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
Abstract BibTeX Entry PDF File
In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. \(\mathcal{E\!L}\) policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) \(\mathcal{E\!L}\), and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single \(\mathcal{E\!L}\) concept.
@techreport{ BaKrNuPeLTCS2009, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, number = {20-09}, title = {Computing Safe Anonymisations of Quantified ABoxes w.r.t.\ $\mathcal{E\!L}$ Policies (Extended Version)}, type = {LTCS-Report}, year = {2020}, }
2019
Abstract BibTeX Entry PDF File
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@techreport{ BaKrNuLTCS1901, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNuLTCS1901}}, number = {19-01}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@techreport{ KrLTCS1902, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1902}}, number = {19-02}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }
2018
Abstract BibTeX Entry PDF File
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern1.618mu L}\).
@techreport{ BaKrNuPeLTCS1801, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNuPeLTCS1801}}, number = {18-01}, title = {{Repairing Description Logic Ontologies by Weakening Axioms}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File
For a probabilistic extension of the description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness is justified.
@techreport{ KrLTCS1803, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1803}}, number = {18-03}, title = {{Terminological Knowledge Acquisition in Probabilistic Description Logic}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File
For the description logic \(\mathcal{E\mkern1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@techreport{ KrLTCS1810, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1810}}, number = {18-10}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern1.618mu L}$ Concept Descriptions and its Neighborhood Relation (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.
@techreport{ KrLTCS1811, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1811}, accepted for publication in {Discrete Applied Mathematics}}, number = {18-11}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern1.618mu L}$}}, type = {LTCS-Report}, year = {2018}, }
Abstract BibTeX Entry PDF File
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@techreport{ KrLTCS1812, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1812}}, number = {18-12}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs (Extended Version)}}, type = {LTCS-Report}, year = {2018}, }
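A probabilistic interpretation as described in the abstract above can be pictured as finitely many "worlds" over a shared domain, each carrying a probability and its own extensions of concept names. The following is a minimal Python sketch of that idea; all names and data are hypothetical illustrations, not from the report:

```python
# A toy probabilistic interpretation: finitely many worlds over a shared
# domain, each with a probability and its own extensions of concept names.
# All names and data below are made up for illustration.
worlds = [
    (0.5, {"Healthy": {"p1", "p2"}}),  # world 1, probability 0.5
    (0.5, {"Healthy": {"p1"}}),        # world 2, probability 0.5
]

def prob_instance(individual: str, concept: str) -> float:
    """Probability that `individual` belongs to `concept`: the total
    probability of the worlds whose extension of `concept` contains it."""
    return sum(p for p, ext in worlds if individual in ext.get(concept, set()))
```

Here `prob_instance("p1", "Healthy")` is 1.0 (the individual occurs in the extension in both worlds), while `prob_instance("p2", "Healthy")` is 0.5.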
2015
Abstract BibTeX Entry
It is well-known that the canonical implicational base of all implications valid w.r.t. a closure operator can be obtained from the set of all pseudo-closures. NextClosures is a parallel algorithm to compute all closures and pseudo-closures of a closure operator in a graded lattice, e.g., in a powerset. Furthermore, the closures and pseudo-closures can be constrained, and partially known closure operators can be explored.
@techreport{ KrLTCS1501, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1501}}, number = {15-01}, title = {{NextClosures -- Parallel Exploration of Constrained Closure Operators}}, type = {LTCS-Report}, year = {2015}, }
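The enumeration underlying NextClosures builds on Ganter's classical NextClosure procedure, which lists all closed sets of a closure operator in the lectic order. A minimal sequential sketch in Python follows; the parallelization and the pseudo-closure handling of the report are omitted, and the closure operators used in the usage note are illustrative:

```python
def next_closure(A, attrs, c):
    """Return the lectically next closed set after A, or None if A is the last.
    `attrs` is the attribute list in a fixed linear order, `c` a closure operator."""
    A = set(A)
    for i in reversed(range(len(attrs))):
        m = attrs[i]
        if m in A:
            A.discard(m)
        else:
            B = c(A | {m})
            # B is the lectic successor iff it adds no attribute smaller than m
            if all(attrs.index(x) >= i for x in B - A):
                return B
    return None

def all_closures(attrs, c):
    """Enumerate all closed sets of c in lectic order."""
    A, out = c(set()), []
    while A is not None:
        out.append(A)
        A = next_closure(A, attrs, c)
    return out
```

For instance, with the identity operator on three attributes, `all_closures(['a', 'b', 'c'], lambda X: set(X))` enumerates all 8 subsets; with an operator that adds `b` whenever `a` is present, exactly the 6 subsets satisfying that implication are returned.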
Abstract BibTeX Entry
Model-based most specific concept descriptions are a useful means to compactly represent all knowledge about a certain individual of an interpretation that is expressible in the underlying description logic. Existing approaches only cover their construction in the case of \(\mathcal{E\mkern1.618mu L}\) and \(\mathcal{F\mkern1.618mu L\mkern1.618mu E}\) w.r.t. greatest fixpoint semantics, and in the case of \(\mathcal{E\mkern1.618mu L}\) w.r.t. a role-depth bound, respectively. This document extends the results towards the more expressive description logic \(\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q}^{\geq}\mkern1.618mu\mathcal{N}^{\leq}\mkern1.618mu(\mathsf{Self})\) w.r.t. role-depth bounds and also gives a method for the computation of least common subsumers.
@techreport{ KrLTCS1502, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1502}}, number = {15-02}, title = {{Model-Based Most Specific Concept Descriptions and Least Common Subsumers in $\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q}^{\geq}\mkern1.618mu\mathcal{N}^{\leq}\mkern1.618mu(\mathsf{Self})$}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We shall also present first experimental results showing that our approach has the potential to be useful for practical applications.
@techreport{ BoDiKrLTCS1513, address = {Dresden, Germany}, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKrLTCS1513}}, number = {15-13}, title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}}, type = {LTCS-Report}, year = {2015}, }
Abstract BibTeX Entry PDF File
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case, for which a method for the axiomatisation of a base of GCIs is well-known.
@techreport{ KrLTCS1514, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#KrLTCS1514}}, number = {15-14}, title = {{Learning General Concept Inclusions in Probabilistic Description Logics}}, type = {LTCS-Report}, year = {2015}, }
Generated 24 May 2022, 13:31:16.
Theses
2019
Abstract BibTeX Entry PDF File Publication Summary
Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large parts: there is always a trade-off between expressiveness and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that they constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web.
Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that makes it possible to analyze datasets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a dataset in the form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness.
In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
@thesis{ KriegelDiss2019, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, school = {Technische Universit\"{a}t Dresden}, title = {Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis}, type = {Doctoral Thesis}, year = {2019}, }
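The two FCA derivation operators and the induced closure \(X \mapsto X''\), whose fixed points are exactly the concept intents mentioned in the thesis abstract, can be sketched in a few lines of Python over a toy formal context (the context data here is made up purely for illustration):

```python
# Toy formal context: which objects have which attributes (made-up data).
context = {
    "duck":  {"can_fly", "lays_eggs"},
    "eagle": {"can_fly", "lays_eggs", "predator"},
    "cat":   {"predator"},
}
ALL_ATTRS = set().union(*context.values())

def intent(objects):
    """Attributes shared by all given objects (derivation on the object side)."""
    return set.intersection(*(context[g] for g in objects)) if objects else set(ALL_ATTRS)

def extent(attributes):
    """Objects possessing all given attributes (derivation on the attribute side)."""
    return {g for g, attrs in context.items() if attributes <= attrs}

def closure(attributes):
    """The closure X -> X''; its fixed points are the concept intents."""
    return intent(extent(attributes))
```

For example, `closure({"can_fly"})` yields `{"can_fly", "lays_eggs"}`, witnessing that the implication can_fly → lays_eggs is valid in this context; the canonical base axiomatizes exactly such valid implications with a minimal number of rules.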
2013
Abstract BibTeX Entry Publication
Draft and proof of an algorithm that computes incremental changes within a labeled, laid-out concept lattice upon insertion or removal of an attribute column in the underlying formal context. Furthermore, some implementation details and mathematical background are presented.
@thesis{ KriegelDiploma2013, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, school = {Technische Universit\"{a}t Dresden}, title = {Visualization of Conceptual Data with Methods of Formal Concept Analysis}, type = {Diploma Thesis}, year = {2013}, }
Membership in Program Committees
 International Joint Conference on Artificial Intelligence (IJCAI) 2022
 International Workshop on Description Logics (DL) 2022
 International Conference on Concept Lattices and Their Applications (CLA) 2022
 International Joint Conference on Artificial Intelligence (IJCAI) 2021
 AAAI Conference on Artificial Intelligence 2021
 Workshop What can FCA do for Artificial Intelligence? (FCA4AI) 2021
 International Workshop on Description Logics (DL) 2020
 International Joint Conference on Artificial Intelligence (IJCAI) 2020
 Workshop What can FCA do for Artificial Intelligence? (FCA4AI) 2020
 International Conference on Concept Lattices and Their Applications (CLA) 2020
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2020
 European Conference on Artificial Intelligence (ECAI) 2020
 AAAI Conference on Artificial Intelligence 2020
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2016
 International Workshop on Concept Discovery in Unstructured Data (CDUD) 2016
 International Workshop on Soft Computing Applications and Knowledge Discovery (SCAKD) 2016
 International Conference on Concept Lattices and Their Applications (CLA) 2015
Reviewing Activities
 International Joint Conference on Artificial Intelligence (IJCAI) 2022
 International Workshop on Description Logics (DL) 2022
 International Journal Information Sciences (INS) 2022
 International Conference on Concept Lattices and Their Applications (CLA) 2022
 International Joint Conference on Artificial Intelligence (IJCAI) 2021
 AAAI Conference on Artificial Intelligence (AAAI) 2021
 International Journal Discrete Applied Mathematics (DAM) 2021
 International Journal Information Sciences (INS) 2021
 Workshop What can FCA do for Artificial Intelligence? (FCA4AI) 2021
 International Workshop on Description Logics (DL) 2020
 International Joint Conference on Artificial Intelligence (IJCAI) 2020
 AAAI Conference on Artificial Intelligence (AAAI) 2020
 Workshop What can FCA do for Artificial Intelligence? (FCA4AI) 2020
 International Conference on Concept Lattices and Their Applications (CLA) 2020
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2020
 International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR) 2020
 European Conference on Artificial Intelligence (ECAI) 2020
 International Workshop on Description Logics (DL) 2019
 International Journal Information Sciences (INS) 2019
 International Journal of Approximate Reasoning (IJAR) 2019
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2018
 International Journal Discrete Applied Mathematics (DAM) 2018
 International Journal Information Sciences (INS) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2018
 International Journal Information Sciences (INS) 2017
 Book Series Lecture Notes in Social Networks (LNSN) 2017
 International Journal on General Systems (IJGS) 2016
 International Journal Information Sciences (INS) 2016
 International Conference on Concept Lattices and Their Applications (CLA) 2016
 International Workshop on Concept Discovery in Unstructured Data (CDUD) 2016
 International Workshop on Soft Computing Applications and Knowledge Discovery (SCAKD) 2016
 International Journal Information Sciences (INS) 2015
 International Conference on Concept Lattices and Their Applications (CLA) 2015
 Multi-Disciplinary International Workshop on Artificial Intelligence (MIWAI) 2015
 International Conference on Artificial Intelligence: Methodology, Systems, Applications (AIMSA) 2014
Teaching Experience
 Tutorial & course assistance for Description Logic (Prof. Franz Baader) SS2019
 Tutorial & course assistance for Automata and Logic (PD Anni-Yasmin Turhan) WS2018/2019
 Seminar Learning in Description Logics (Prof. Franz Baader, PD Anni-Yasmin Turhan) WS2018/2019
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Franz Baader, Monika Sturm) WS2018/2019
 Tutorial & course assistance for Term Rewriting Systems (Prof. Franz Baader) SS2018
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Franz Baader, Monika Sturm) SS2018
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Franz Baader, Monika Sturm) WS2017/2018
 Tutorial for Theoretische Informatik und Logik (Prof. Markus Krötzsch, Daniel Borchmann) SS2017
 Tutorial for Formale Systeme (Prof. Markus Krötzsch, Daniel Borchmann) WS2016/2017
 Tutorial & course assistance for Database Theory (Prof. Markus Krötzsch) SS2016
 Tutorial & course assistance for Introduction to Automatic Structures (PD Anni-Yasmin Turhan) SS2016
 Tutorial & course assistance for Description Logic (PD Anni-Yasmin Turhan) WS2015/2016
 Tutorial & course assistance for Automata and Logic (Daniel Borchmann) SS2015
 Proseminar Die Unvollständigkeitssätze von Kurt Gödel (Prof. Franz Baader, Monika Sturm) SS2015
 Tutorial & course assistance for Description Logic (Anni-Yasmin Turhan) WS2014/2015
 Tutorial for Theoretische Informatik und Logik (Prof. Franz Baader, Monika Sturm) SS2014
 Tutorial for Formale Systeme (Prof. Franz Baader, Monika Sturm) WS2013/2014
 Programming exercises for Informatik für Biologen (Monika Sturm) WS2013/2014
 Tutorial & grading for Elemente der Algebra und Zahlentheorie (Sebastian Kerkhoff, Daniel Borchmann) SS2012
 Tutorial & grading for Einführung in die Mathematik für Informatiker, Lineare Algebra (Prof. Ulrike Baumann, Ilse Ilsche) WS2011/2012
 Tutorial & grading for Elemente der Algebra und Zahlentheorie (Prof. Bernhard Ganter, Daniel Borchmann) SS2011
 Grading for Analysis I (Prof. Friedemann Schuricht, Zoja Milbers) WS2010/2011
 Tutorial & grading for Einführung in die Mathematik für Informatiker, Lineare Algebra (Jürgen Brunner, Ilse Ilsche) WS2010/2011
 Tutorial & grading for Mathematische Methoden für Informatiker (Prof. Ulrike Baumann, Ilse Ilsche) SS2010
 Tutorial & grading for Einführung in die Mathematik für Informatiker, Diskrete Strukturen (Prof. Bernhard Ganter, Ilse Ilsche) WS2009/2010
 Tutorial & grading for Funktionentheorie (Prof. Friedemann Schuricht, Zoja Milbers) SS2009
 Tutorial & grading for Mathematik für Informatiker IV (Prof. Ulrike Baumann, Ilse Ilsche) SS2009
 Grading for Analysis III (Prof. Friedemann Schuricht, Karin Weigel) WS2008/2009
 Tutorial & grading for Mathematik für Informatiker III (Prof. Ulrike Baumann, Ilse Ilsche) WS2008/2009
 Grading for Analysis II (Prof. Friedemann Schuricht, Karin Weigel) SS2008
 Tutorial & grading for Mathematik für Informatiker II (Prof. Ulrike Baumann, Ilse Ilsche) SS2008
 Tutorial for Mathematik für Informatiker I (Prof. Ulrike Baumann, Ilse Ilsche) WS2007/2008
 Tutorial for Brückenkurs Mathematik WS2007/2008
 Private tutoring for Mathematik für Ingenieure II SS2007
 Tutorial for Brückenkurs Mathematik WS2006/2007
 Private tutoring for Mathematik für Ingenieure I WS2006/2007
Practical Experience
 Own software: Concept Explorer FX (interactive, iterative, and parallel algorithms in Formal Concept Analysis with description logic extensions), conexp-fx on GitHub
 Working student and then diploma student, SAP Research Dresden, CUBIST project (task: design, development, and testing of a system for the interactive presentation of knowledge as lattice diagrams using methods of Formal Concept Analysis), 2011-2012
 Intern and then working student, SAP Research Dresden, Aletheia project (task: design, development, and testing of a system for information extraction from unstructured data sources), 2009-2011
Courses Attended during my Studies
 Assmann: Softwaretechnologie (WS2006/2007)
 Bär: Räumliche Kinematik & Robotik (SS2008)
 Baumann: Algebraische Graphentheorie (WS2008/2009)
 Baumann: Algebraische Methoden der Kryptologie (WS2008/2009)
 Baumann: Diskrete Strukturen (WS2007/2008)
 Baumann: Graphentheorie (SS2007)
 Baumann: Proseminar Kryptografie (WS2006/2007)
 Colson: Englisch L1.1 (WS2005/2006)
 Colson: Englisch L2.2 (SS2006)
 Finger, Hiller: Elektronische Medien & Digitaler Rundfunk (WS2007/2008)
 Fischer, Vanselow: Optimierung (SS2007)
 Ganter: Seminar Algebra (WS2008/2009)
 Ganter, Hereth Correia: Strukturtheorie vollständiger Verbände (WS2007/2008)
 Ganter, Liebscher: Ordnungs- & Verbandstheorie (SS2008)
 Hansen, Wolf: Praktikum Kernreaktortechnik (SS2006)
 Karl: Übersetzungstechnik (WS2008/2009)
 Linß, Matthes: Numerik (WS2006/2007)
 Petersohn: Künstliche Intelligenz (WS2007/2008)
 Picard, Freymond: Funktionalanalysis I (WS2007/2008)
 Pönisch: Grundpraktikum (WS2008/2009)
 Pöschel: Funktionen- & Relationenalgebren (SS2009)
 Pöschel: Lineare Algebra & Analytische Geometrie I (WS2005/2006)
 Pöschel: Permutationsgruppen (SS2009)
 Pöschel, Kirsten: Lineare Algebra & Analytische Geometrie II (SS2006)
 Pöschel, Reppe: Kategorientheorie (SS2008)
 Pöschel, Zschalig: Universelle Algebra (WS2007/2008)
 Rhodius, Kayser: Analysis I (WS2005/2006)
 Rhodius, Kayser: Analysis II (SS2006)
 Rhodius, Kayser: Analysis III (WS2006/2007)
 Rüdiger: Praktikum Programmierung I (WS2005/2006)
 Rüdiger: Praktikum Programmierung II (SS2006)
 Schill, Bellmann: Rechnernetze (SS2007)
 Schilling: Wahrscheinlichkeitstheorie (WS2008/2009)
 Schmidt K.D., Hess: Maßtheorie & Stochastik (SS2007)
 Schmidt S.E.: Algebraische Datenapproximation (SS2010)
 Schmidt S.E.: Algebraische Strukturen (SS2009)
 Schmidt S.E.: Dempster-Shafer-Theorie (SS2009)
 Schmidt S.E.: Seminar Algebra (Schreiben mathematischer Texte) (SS2009)
 Schmidt S.E.: Tropische Geometrie & Algebraische Statistik (SS2008)
 Schmidt S.E., Hereth Correia: Algebra (WS2006/2007)
 Schmidt S.E., Hereth Correia: Qualitative Datenanalyse (SS2007)
 Sturm: Funktionale Programmierung & Typtheorie (WS2008/2009)
 Timmermann: Funktionalanalysis I (WS2008/2009)
 Timmermann: Funktionalanalysis II (SS2008)
 Voigt, Vogt: Funktionentheorie (SS2008)
 Vogler, Vater: Algorithmen & Datenstrukturen (WS2005/2006)
 Vogler, Vater: Programmierung (SS2006)
 Walter: Objektorientiertes Programmieren mit Java (SS2011)
 Walter: Programmieren I (WS2005/2006)
 Walter: Programmieren II (SS2006)
 Weiß, Lehmann: Geometrie (SS2007)
Scripts and Reports from my Studies
 Ordnungen, Verbände und Kontexte (lecture notes for lectures by Prof. Ganter)
 Funktionalanalysis I+II (lecture notes for the lectures by Prof. Timmermann in SS2008 and WS2008/2009; digitized notes with additions, unfortunately incomplete; caution: not checked by Prof. Timmermann)
 Funktionale Programmierung und Typtheorie (lecture notes for the lecture by Prof. Sturm in WS2008/2009)
 Rough Sets - Unexakte Mengen (seminar paper for Prof. Ganter in WS2008/2009)
 Tropische Geometrie & Algebraische Statistik (lecture notes for the lecture by Prof. Schmidt S.E. in SS2008; restructured notes in digital form)
 Sets for Mathematics - Die Kategorie der Mengen (seminar paper for Prof. Schmidt in SS2009, with Maik Luther, Georg Schollmeyer, and Jeremias Epperlein)
 Maß, Integral und Martingal (incomplete lecture notes for the lecture by Prof. Schilling)
 2D-Guillotine-Zuschnitt und Wang-Algorithmus, presentation, implementation jWang (written report, presentation, and implementation of a topic within a university-internal practical course in WS2008/2009, with Matthias Lange)
 Analysis I+II+III (lecture notes for the lectures by Prof. Rhodius in WS2005/2006, SS2006, and WS2006/2007; digitized notes with additions, unfortunately incomplete)
 RSA (seminar paper for Prof. Baumann in WS2006/2007)
Quotations (in German)
Die schönsten Momente im Leben eines Mathematikers sind die ersten Momente, nachdem er einen Beweis vollendet hat, jedoch bevor er den Fehler findet. (Unbekannt)
Bedenke, wage, beharre! Und du wirst vieles im Leben erringen. (Deutsche Spruchweisheit)
Es gibt Leute, die meinen, alles wäre vernünftig, was man mit einem ernsthaften Gesicht tut. (Georg Christoph Lichtenberg)
Ein wahrhaft großer Mann wird weder einen Wurm zertreten noch vor einem Kaiser kriechen. (Benjamin Franklin)
Wer aufhört, besser sein zu wollen, hat aufgehört gut zu sein. (Oliver Cromwell)
Such nicht andere, sondern dich selbst zu übertreffen. (Marcus Tullius Cicero)
Alles auf einmal tun zu wollen zerstört alles auf einmal. (Georg Christoph Lichtenberg)
Wenn die Fäuste geballt sind, kann man sich nicht die Hände reichen. (Indira Gandhi)
Bevor du weißt, was Leben heißt, ist die Hälfte weg zumeist. (Deutsches Sprichwort)
Ein reizbarer Mensch ist wie ein verkehrt eingerollter Igel, der sich mit seinen Stacheln peinigt. (Lebensweisheit)
Der Freunde Fehler soll man kennen, aber nicht nennen. (Lebensweisheit)
Ich habe stets beobachtet, dass man, um Erfolg zu haben in der Welt, närrisch scheinen oder weise sein muss. (Charles-Louis de Secondat)
Unsere Mängel sind unsere besten Lehrer, aber gegen die besten Lehrer ist man immer undankbar. (Friedrich Nietzsche)
Alle ungeschickten Arbeiter schimpfen auf ihr Werkzeug. (Russisches Sprichwort)
Erst im Kopf bedenken, dann das Mundwerk lenken. (Sorbische Redensart)
Wer sich auf die Zehen stellt, steht nicht fest. (Laotse)
Bildungshunger und Wissensdurst sind keine Dickmacher. (Lothar Schmidt)
Gebildet ist der, der weiß, wo er findet, was er nicht weiß. (Chinesisches Sprichwort)
Wer nicht richtig faulenzen kann, der kann auch nicht richtig arbeiten. (Sizilianisches Sprichwort)
Die Freiheit ist nicht die Willkür, beliebig zu handeln, sondern die Fähigkeit, vernünftig zu handeln. (Rudolf Virchow)
Der Verstand und die Fähigkeit, ihn zu gebrauchen, sind zwei verschiedene Gaben. (Franz Grillparzer)
Enttäuschungen sollte man verbrennen und nicht einbalsamieren. (Mark Twain)
Es ist ein Beweis hoher Bildung, die größten Dinge auf die einfachste Art zu sagen. (Ralph Waldo Emerson)
Um ein tadelloses Mitglied einer Schafherde sein zu können, muss man vor allem ein Schaf sein. (Albert Einstein)
Man kann keinen Eierkuchen backen, ohne ein paar Eier zu zerschlagen. (Napoleon I. Bonaparte)
Manche Hähne glauben, dass die Sonne ihretwegen aufgeht. (Theodor Fontane)
Erziehung ist nicht das Anfüllen eines Eimers, sondern das Entfachen eines Feuers. (William Butler Yeats)
Seelenleiden zu heilen vermag der Verstand wenig, die Zeit viel, entschlossene Tätigkeit alle. (Johann Wolfgang von Goethe)
Einen Edelstein kann man nicht blank machen, ohne ihn zu reiben. (Konfuzius)
Morgen ist immer der Tag, an dem der Faule am meisten zu tun hat. (Norwegisches Sprichwort)
Ein Ziel von Bildung ist zu merken, wenn jemand Unsinn redet. (Harold Macmillan)
Ein Fehler, der geleugnet wird, verdoppelt sich. (Französisches Sprichwort)
Faulheit ist der Hang zur Ruhe ohne vorhergehende Arbeit. (Immanuel Kant)
Wer absolute Klarheit will, bevor er einen Entschluss fasst, wird sich nie entscheiden. (Henri-Frédéric Amiel)
Wer an der Küste bleibt, kann keine neuen Ozeane entdecken. (Magellan)
Ein Experte ist jemand, der hinterher genau sagen kann, warum seine Prognose nicht gestimmt hat. (Sir Winston Churchill)
Erfahrungen vererben sich nicht – jeder muss sie allein machen. (Kurt Tucholsky)
Bildung ist die Fähigkeit, Wesentliches von Unwesentlichem zu unterscheiden und jenes ernst zu nehmen. (Paul Anton de Lagarde)
Auch eine Enttäuschung, wenn sie nur gründlich und endgültig ist, bedeutet einen Schritt vorwärts. (Max Planck)
Die Dummheit drängt sich vor, um gesehen zu werden; die Klugheit steht zurück, um zu sehen. (Carmen Sylva)
Das Café ist der rechte Ort für Leute, die allein sein wollen, dafür aber Gesellschaft brauchen. (Alfred Polgar)
Besser ist es, hinkend auf dem rechten Weg zu gehen, als mit einem festen Schritt abseits. (Augustinus Aurelius)
Im Hinblick auf seine eigenen Ansichten ist jedermann konservativ. (Lothar Schmidt)
Manchmal führt ein Rechenfehler zur richtigen Lösung. (Stanislaw Jerzy Lec)
Hast du einen jungen Menschen davor bewahrt, Fehler zu machen, dann hast du ihn auch davor bewahrt, Entschlüsse zu fassen. (John Erskine)
Wer wagt, selbst zu denken, der wird auch selbst handeln. (Bettina von Arnim)
Faule Leute wollen alles auf einmal schaffen. (Chinesisches Sprichwort)
Es ist nicht von Bedeutung, wie langsam du gehst, solange du nicht stehen bleibst. (Konfuzius)
Leute, die wenig wissen, sind oft Schwätzer. (Jean-Jacques Rousseau)
Bei der Eroberung des Weltraums sind zwei Probleme zu lösen: die Schwerkraft und der Papierkrieg. Mit der Schwerkraft wären wir fertig geworden. (Wernher von Braun)