Francesco Kriegel
Research and Teaching Associate
Name: Mr Dipl.-Math. Francesco Kriegel
Send encrypted mail via the SecureMail portal (for TUD external users only).
Visitor Address:
Andreas-Pfitzmann-Bau, Room 3032, Nöthnitzer Straße 46
01187 Dresden
Tel.: +49 351 46338253
Fax: +49 351 46337959
Concept Explorer FX (conexp-fx)
Project
PhD project: Construction and Extension of Description Logic Knowledge Bases with Methods of Formal Concept Analysis
Awards

Best Paper Award: International Conference on Concept Lattices and Their Applications (CLA) 2015
Publications
2019
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@inproceedings{ BaKrNuJELIA19, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://dx.doi.org/10.1007/978-3-030-19570-0_21}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {323--338}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores}}, volume = {11468}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in the form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@inproceedings{ KrICFCA19, author = {Francesco {Kriegel}}, booktitle = {15th International Conference on Formal Concept Analysis, {ICFCA} 2019, Frankfurt, Germany, June 25-28, 2019, Proceedings}, doi = {https://dx.doi.org/10.1007/978-3-030-21462-3_9}, editor = {Diana {Cristea} and Florence {Le Ber} and Bar\i{}\c{s} {Sertkaya}}, pages = {110--129}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic}}, volume = {11511}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@inproceedings{ KrJELIA19, author = {Francesco {Kriegel}}, booktitle = {16th European Conference on Logics in Artificial Intelligence, {JELIA} 2019, Rende, Italy, May 7-11, 2019, Proceedings}, doi = {https://dx.doi.org/10.1007/978-3-030-19570-0_26}, editor = {Francesco {Calimeri} and Nicola {Leone} and Marco {Manna}}, pages = {399--417}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs}}, volume = {11468}, year = {2019}, }
Abstract BibTeX Entry PDF File DOI ©Elsevier
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for an incremental learning of concept inclusions from sequences of interpretations is developed.
@article{ KrDAM19, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1016/j.dam.2019.01.029}, journal = {Discrete Applied Mathematics}, note = {To appear.}, title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern1.618mu L}$}}, year = {2019}, }
2018
BibTeX Entry PDF File PDF File (ceur-ws.org) Full Conference Paper
@inproceedings{ BaKrNuPeDL2018, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, October 27-29, 2018}, editor = {Magdalena {Ortiz} and Thomas {Schneider}}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{Making Repairs in Description Logics More Gentle (Extended Abstract)}}, volume = {2211}, year = {2018}, }
Abstract BibTeX Entry PDF File ©AAAI Extended Abstract Extended Technical Report
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle notion of repair in which axioms are not deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern1.618mu L}\).
@inproceedings{ BaKrNuPeKR2018, address = {USA}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe{\~n}aloza}}, booktitle = {Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, {KR} 2018, Tempe, Arizona, 30 October - 2 November 2018}, editor = {Frank {Wolter} and Michael {Thielscher} and Francesca {Toni}}, pages = {319--328}, publisher = {{AAAI} Press}, title = {{Making Repairs in Description Logics More Gentle}}, year = {2018}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag Extended Technical Report
For a probabilistic extension of the description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness is justified.
@inproceedings{ KrKI18, address = {Berlin, Germany}, author = {Francesco {Kriegel}}, booktitle = {{{KI} 2018: Advances in Artificial Intelligence - 41st German Conference on AI, Berlin, Germany, September 24-28, 2018, Proceedings}}, doi = {https://doi.org/10.1007/978-3-030-00111-7_5}, editor = {Frank {Trollmann} and Anni-Yasmin {Turhan}}, pages = {46--53}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, title = {{Acquisition of Terminological Knowledge in Probabilistic Description Logic}}, volume = {11117}, year = {2018}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
For the description logic \(\mathcal{E\mkern1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@inproceedings{ KrCLA18, address = {Olomouc, Czech Republic}, author = {Francesco {Kriegel}}, booktitle = {{Proceedings of the 14th International Conference on Concept Lattices and Their Applications ({CLA 2018})}}, editor = {Dmitry I. {Ignatov} and Lhouari {Nourine}}, pages = {267--278}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern1.618mu L}$ Concept Descriptions and its Neighborhood Relation}}, volume = {2123}, year = {2018}, }
2017
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
The Web Ontology Language (OWL) has attracted serious attention since its standardization in 2004, and it is heavily used in applications requiring representation of as well as reasoning with knowledge. It is the language of the Semantic Web, and it has a strong logical underpinning by means of so-called Description Logics (DLs). DLs are a family of conceptual languages suitable for knowledge representation and reasoning due to their strong logical foundation, and for which the decidability and complexity of common reasoning problems are widely explored. In particular, the reasoning tasks allow for the deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms were developed, optimized, and implemented, e.g., tableaux algorithms and completion algorithms. In this document, we present a technique for the acquisition of terminological knowledge from social networks. More specifically, we show how OWL axioms, i.e., concept inclusions and role inclusions in DLs, can be obtained from social graphs in a sound and complete manner. A social graph is simply a directed graph, the vertices of which describe the entities, e.g., persons, events, messages, etc.; and the edges of which describe the relationships between the entities, e.g., friendship between persons, attendance of a person to an event, a person liking a message, etc. Furthermore, the vertices of social graphs are labeled, e.g., to describe properties of the entities, and also the edges are labeled to specify the concrete relationships. As an exemplary social network we consider Facebook, and show that it fits our use case.
@incollection{ KrFCAoSN17, address = {Cham}, author = {Francesco {Kriegel}}, booktitle = {Formal Concept Analysis of Social Networks}, doi = {https://dx.doi.org/10.1007/978-3-319-64167-6_5}, editor = {Rokia {Missaoui} and Sergei O. {Kuznetsov} and Sergei {Obiedkov}}, pages = {97--142}, publisher = {Springer International Publishing}, series = {Lecture Notes in Social Networks ({LNSN})}, title = {Acquisition of Terminological Knowledge from Social Networks in Description Logic}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
Entropy is a measure for the uninformativeness or randomness of a data set, i.e., the higher the entropy is, the lower is the amount of information. In the field of propositional logic it has proven to constitute a suitable measure to be maximized when dealing with models of probabilistic propositional theories. More specifically, it was shown that the model of a probabilistic propositional theory with maximal entropy allows for the deduction of other formulae which are somehow expected by humans, i.e., allows for some kind of common sense reasoning. In order to transfer the technique of maximum entropy entailment to the field of Formal Concept Analysis, we define the notion of entropy of a formal context with respect to the frequency of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. Furthermore, this entailment technique is utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such an ME-base is provided.
@inproceedings{ KrICFCA17b, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_10}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {155--167}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {First Notes on Maximum Entropy Entailment for Quantified Implications}, volume = {10308}, year = {2017}, }
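The entropy notion in the abstract above lends itself to a small worked example. The following Python sketch (function and variable names are illustrative, not code from the paper) computes the entropy of a toy formal context with respect to the frequencies of its object intents:

```python
import math
from collections import Counter

def context_entropy(objects):
    """Shannon entropy of a formal context w.r.t. the frequencies of its
    object intents: identical intents are pooled, and the entropy of the
    resulting frequency distribution is returned (in bits)."""
    counts = Counter(frozenset(intent) for intent in objects.values())
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# toy context, given as a map from objects to their intents;
# g1 and g2 share the same intent, so the frequencies are 2/4, 1/4, 1/4
K = {"g1": {"a", "b"}, "g2": {"a", "b"}, "g3": {"a"}, "g4": {"b", "c"}}
print(context_entropy(K))  # 1.5
```

A context in which every object has a distinct intent maximizes this value, matching the reading of entropy as a measure of uninformativeness.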
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
We consider the task of acquisition of terminological knowledge from given assertional data. However, when evaluating data of real-world applications we often encounter situations where it is impractical to deduce only crisp knowledge, due to the presence of exceptions or errors. It is rather appropriate to allow for degrees of uncertainty within the derived knowledge. Consequently, suitable methods for knowledge acquisition in a probabilistic framework should be developed. In particular, we consider data which is given as a probabilistic formal context, i.e., as a triadic incidence relation between objects, attributes, and worlds, which is furthermore equipped with a probability measure on the set of worlds. We define the notion of a probabilistic attribute as a probabilistically quantified set of attributes, and define the notion of validity of implications over probabilistic attributes in a probabilistic formal context. Finally, a technique for the axiomatization of such implications from probabilistic formal contexts is developed. This is done in a sound and complete manner, i.e., all derived implications are valid, and all valid implications are deducible from the derived implications. In case of finiteness of the input data to be analyzed, the constructed axiomatization is finite, too, and can be computed in finite time.
@inproceedings{ KrICFCA17a, address = {Rennes, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 14th International Conference on Formal Concept Analysis ({ICFCA} 2017)}, doi = {https://doi.org/10.1007/978-3-319-59271-8_11}, editor = {Karell {Bertet} and Daniel {Borchmann} and Peggy {Cellier} and S\'{e}bastien {Ferr\'{e}}}, pages = {168--183}, publisher = {Springer Verlag}, series = {Lecture Notes in Computer Science ({LNCS})}, title = {Implications over Probabilistic Attributes}, volume = {10308}, year = {2017}, }
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
A probabilistic formal context is a triadic context the third dimension of which is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications with respect to probabilistic formal contexts, and provides a construction for a base of implications the probabilities of which exceed a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields the fact that both measures do not coincide. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions the probabilities of which are greater than a predefined lower bound is proposed. Additionally, we consider so-called probabilistic attributes over probabilistic formal contexts, and provide a method for the axiomatization of implications over probabilistic attributes.
@article{ KrIJGS17, author = {Francesco {Kriegel}}, doi = {https://doi.org/10.1080/03081079.2017.1349575}, journal = {International Journal of General Systems}, number = {5}, pages = {511--546}, title = {Probabilistic Implication Bases in {FCA} and Probabilistic Bases of GCIs in $\mathcal{E\mkern1.618mu L}^{\bot}$}, volume = {46}, year = {2017}, }
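As a concrete reading of the "probability of an implication" described above, the Python sketch below models a probabilistic formal context as a family of crisp contexts indexed by worlds together with a probability measure, and sums the measure of the worlds in which the implication holds. All names are illustrative; this is a sketch of the notion, not the paper's exact formalisation:

```python
def holds(context, premise, conclusion):
    """An implication premise -> conclusion holds in a crisp context iff
    every object having all premise attributes has all conclusion attributes."""
    return all(conclusion <= attrs
               for attrs in context.values()
               if premise <= attrs)

def implication_probability(worlds, measure, premise, conclusion):
    """Total measure of the worlds whose context satisfies the implication."""
    return sum(measure[w] for w, ctx in worlds.items()
               if holds(ctx, premise, conclusion))

# two worlds over a single object g; the implication {a} -> {b}
# holds only in w1, which carries measure 0.7
worlds = {"w1": {"g": {"a", "b"}}, "w2": {"g": {"a"}}}
measure = {"w1": 0.7, "w2": 0.3}
print(implication_probability(worlds, measure, {"a"}, {"b"}))  # 0.7
```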
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
The canonical base of a formal context plays a distinguished role in Formal Concept Analysis, as it is the only minimal implicational base known so far that can be described explicitly. Consequently, several algorithms for the computation of this base have been proposed. However, all those algorithms work sequentially by computing only one pseudo-intent at a time – a fact that heavily impairs the practicability in real-world applications. In this paper, we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner with respect to arbitrary implicational background knowledge. First experimental evaluations show that for sufficiently large data sets the speedup is proportional to the number of available CPU cores.
@article{ KrBoIJGS17, author = {Francesco {Kriegel} and Daniel {Borchmann}}, doi = {https://doi.org/10.1080/03081079.2017.1349570}, journal = {International Journal of General Systems}, number = {5}, pages = {490--510}, title = {NextClosures: Parallel Computation of the Canonical Base with Background Knowledge}, volume = {46}, year = {2017}, }
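The sequential procedure that NextClosures parallelises is Ganter's NextClosure algorithm, which enumerates all closed sets of a closure operator in lectic order; the parallel variant instead processes candidate sets level-wise by cardinality. Below is a minimal sequential Python sketch (illustrative names; no parallelism or background knowledge):

```python
def closure(context, attrs, attributes):
    """Double derivation A -> A'' in a formal context given as a map
    from objects to their attribute sets."""
    extent = [g for g, row in context.items() if attrs <= row]
    return {m for m in attributes if all(m in context[g] for g in extent)}

def next_closure(A, M, clo):
    """Smallest closed set lectically greater than A, or None.
    M is a list fixing the linear order on the attributes."""
    for i in reversed(range(len(M))):
        m = M[i]
        if m in A:
            A = A - {m}
        else:
            B = clo(A | {m})
            # accept B only if it adds no attribute smaller than m
            if not any(x in B and x not in A for x in M[:i]):
                return B
    return None

def all_intents(context, M):
    """Enumerate all intents (closed attribute sets) in lectic order."""
    attributes = set(M)
    def clo(A):
        return closure(context, A, attributes)
    intents = [clo(set())]
    while (nxt := next_closure(intents[-1], M, clo)) is not None:
        intents.append(nxt)
    return intents

# toy context: three objects over attributes a, b, c
K = {"g1": {"a", "b"}, "g2": {"a", "c"}, "g3": {"b"}}
print(len(all_intents(K, ["a", "b", "c"])))  # 6 closed sets
```

The canonical base is obtained by an analogous enumeration over pseudo-intents instead of intents; the papers above show how that enumeration can be distributed over CPU cores.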
2016
Abstract BibTeX Entry PDF File DOI ©Taylor and Francis
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases, often performed by human experts, is itself difficult, time-consuming and expensive. In particular the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex and edge labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
@article{ BoDiKrJANCL16, author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}}, doi = {https://dx.doi.org/10.1080/11663081.2016.1168230}, journal = {Journal of Applied Non-Classical Logics}, number = {1}, pages = {1--46}, title = {Axiomatisation of General Concept Inclusions from Finite Interpretations}, volume = {26}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
We propose applications that utilize the infimum and the supremum of closure operators that are induced by structures occurring in the field of Description Logics. More specifically, we consider the closure operators induced by interpretations as well as closure operators induced by TBoxes, and show how we can learn GCIs from streams of interpretations, and how an error-tolerant axiomatization of GCIs from an interpretation guided by a hand-crafted TBox can be achieved.
@inproceedings{ KrFCA4AI16, address = {The Hague, The Netherlands}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 5th International Workshop "What can {FCA} do for Artificial Intelligence?" ({FCA4AI} 2016) co-located with the European Conference on Artificial Intelligence ({ECAI} 2016)}, editor = {Sergei {Kuznetsov} and Amedeo {Napoli} and Sebastian {Rudolph}}, pages = {9--16}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {Axiomatization of General Concept Inclusions from Streams of Interpretations with Optional Error Tolerance}, volume = {1703}, year = {2016}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
In a former paper, the algorithm NextClosures for computing the set of all formal concepts as well as the canonical base for a given formal context has been introduced. Here, this algorithm shall be generalized to a setting where the dataset is described by means of a closure operator in a complete lattice, and furthermore it shall be extended with the possibility to handle constraints that are given in the form of a second closure operator. As a special case, constraints may be predefined as implicational background knowledge. Additionally, we show how the algorithm can be modified in order to do parallel Attribute Exploration for unconstrained closure operators, as well as give a reason for the impossibility of (parallel) Attribute Exploration for constrained closure operators if the constraint is not compatible with the dataset.
@inproceedings{ KrCLA16, address = {Moscow, Russia}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 13th International Conference on Concept Lattices and Their Applications ({CLA 2016})}, editor = {Marianne {Huchard} and Sergei {Kuznetsov}}, pages = {231--243}, publisher = {CEUR-WS.org}, series = {{CEUR} Workshop Proceedings}, title = {NextClosures with Constraints}, volume = {1624}, year = {2016}, }
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
The canonical base of a formal context is a minimal set of implications that is sound and complete. A recent paper has provided a new algorithm for the parallel computation of canonical bases. An important extension is the integration of expert interaction for Attribute Exploration in order to explore implicational bases of inaccessible formal contexts. This paper presents and analyzes an algorithm that allows for Parallel Attribute Exploration.
@inproceedings{ KrICCS16, address = {Annecy, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 22nd International Conference on Conceptual Structures ({ICCS 2016})}, doi = {http://dx.doi.org/10.1007/978-3-319-40985-6_8}, editor = {Ollivier {Haemmerl{\'{e}}} and Gem {Stapleton} and Catherine {Faron-Zucker}}, pages = {91--106}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, title = {Parallel Attribute Exploration}, volume = {9717}, year = {2016}, }
2015
Abstract BibTeX Entry PDF File DOI ©Springer-Verlag
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced, and then utilized to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatization of a base of GCIs is well-known.
@inproceedings{ KrKI15, address = {Dresden, Germany}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 38th German Conference on Artificial Intelligence ({KI 2015})}, doi = {http://dx.doi.org/10.1007/978-3-319-24489-1_10}, editor = {Steffen {H{\"o}lldobler} and Sebastian {Rudolph} and Markus {Kr{\"o}tzsch}}, pages = {124--136}, publisher = {Springer Verlag}, series = {Lecture Notes in Artificial Intelligence ({LNAI})}, title = {Axiomatization of General Concept Inclusions in Probabilistic Description Logics}, volume = {9324}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
A description graph is a directed graph that has labeled vertices and edges. This document proposes a method for extracting a knowledge base from a description graph. The technique is presented for the description logic \(\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q\mkern1.618mu R}(\mathsf{Self})\) which allows for conjunctions, primitive negation, existential restrictions, value restrictions, qualified number restrictions, existential self restrictions, and complex role inclusion axioms, but sublogics may also be chosen to express the axioms in the knowledge base. The extracted knowledge base entails all statements that can be expressed in the chosen description logic and are encoded in the input graph.
@inproceedings{ KrSNAFCA15, address = {Nerja, Spain}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis ({SNAFCA 2015}) in conjunction with the 13th International Conference on Formal Concept Analysis ({ICFCA} 2015)}, editor = {Sergei O. {Kuznetsov} and Rokia {Missaoui} and Sergei A. {Obiedkov}}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Extracting $\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q\mkern1.618mu R}(\mathsf{Self})$Knowledge Bases from Graphs}, volume = {1534}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal \(\mathcal{E\mkern1.618mu L}\) TBoxes from models, so-called bases of GCIs. However, no technique for an adjustment of an existing \(\mathcal{E\mkern1.618mu L}\) TBox w.r.t. a new model is available, i.e., on a model change the complete TBox has to be recomputed. This document proposes a method for the computation of a minimal extension of a TBox w.r.t. a new model. The method is then utilised to formulate an incremental learning algorithm that requires a stream of interpretations, and an expert to guide the learning process, respectively, as input.
@inproceedings{ KrDL15, address = {Athens, Greece}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 28th International Workshop on Description Logics ({DL 2015})}, editor = {Diego {Calvanese} and Boris {Konev}}, pages = {452--464}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis}, volume = {1350}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
A probabilistic formal context is a triadic context whose third dimension is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces probability of implications, and provides a construction for a base of implications whose probabilities satisfy a given lower threshold. A comparison between confidence and probability of implications is drawn, which yields the fact that both measures do not coincide, and cannot be compared. Furthermore, the results are extended towards the lightweight description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\) with probabilistic interpretations, and a method for computing a base of general concept inclusions whose probabilities fulfill a certain lower bound is proposed.
@inproceedings{ KrCLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, pages = {193--204}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in $\mathcal{E\mkern1.618mu L}^{\bot}$}, volume = {1466}, year = {2015}, }
Abstract BibTeX Entry PDF File PDF File (ceur-ws.org)
The canonical base of a formal context plays a distinguished role in formal concept analysis. This is because it is the only minimal base so far that can be described explicitly. For the computation of this base several algorithms have been proposed. However, all those algorithms work sequentially, by computing only one pseudo-intent at a time – a fact which heavily impairs the practicability of using the canonical base in real-world applications. In this paper we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner. First experimental evaluations show that for sufficiently large datasets the speedup is proportional to the number of available CPUs.
@inproceedings{ KrBoCLA15, address = {Clermont-Ferrand, France}, author = {Francesco {Kriegel} and Daniel {Borchmann}}, booktitle = {Proceedings of the 12th International Conference on Concept Lattices and their Applications ({CLA 2015})}, editor = {Sadok {Ben Yahia} and Jan {Konecny}}, note = {Best Paper Award.}, pages = {182--192}, publisher = {CEUR-WS.org}, series = {CEUR Workshop Proceedings}, title = {NextClosures: Parallel Computation of the Canonical Base}, volume = {1466}, year = {2015}, }
2014
Abstract BibTeX Entry PDF File PDF File (ubbcluj.ro)
Suppose a formal context \(\mathbb{K}=(G,M,I)\) is given, whose concept lattice \(\mathfrak{B}(\mathbb{K})\) with an attribute-additive concept diagram is already known, and an attribute column \(\mathbb{C}=(G,\{n\},J)\) shall be inserted to or removed from it. This paper introduces and proves an incremental update algorithm for both tasks.
@article{ KrICFCA14, address = {Cluj-Napoca, Romania}, author = {Francesco {Kriegel}}, journal = {Studia Universitatis Babe{\c{s}}-Bolyai Informatica}, note = {Supplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA 2014), Cluj-Napoca, Romania}, pages = {45--61}, title = {Incremental Computation of Concept Diagrams}, volume = {59}, year = {2014}, }
Generated 13 September 2019, 13:14:31.
Technical Reports
2019
Abstract BibTeX Entry PDF File
We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL \(\mathcal{EL}\), which corresponds to the setting where the ontology is an \(\mathcal{EL}\) instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given \(\mathcal{EL}\) concept can be computed. In addition, we investigate the complexity of the optimality problem.
@techreport{ BaKrNuLTCS1901, address = {Dresden, Germany}, author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah}}, institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden}, note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNu-LTCS-19-01}}, number = {19-01}, title = {{Privacy-Preserving Ontology Publishing for $\mathcal{EL}$ Instance Stores (Extended Version)}}, type = {LTCS-Report}, year = {2019}, }
Abstract BibTeX Entry PDF File
A joining implication is a restricted form of an implication where it is explicitly specified which attributes may occur in the premise and in the conclusion, respectively. A technique for sound and complete axiomatization of joining implications valid in a given formal context is provided. In particular, a canonical base for the joining implications valid in a given formal context is proposed, which enjoys the property of being of minimal cardinality among all such bases. Background knowledge in form of a set of valid joining implications can be incorporated. Furthermore, an application to inductive learning in a Horn description logic is proposed, that is, a procedure for sound and complete axiomatization of \(\textsf{Horn-}\mathcal{M}\) concept inclusions from a given interpretation is developed. A complexity analysis shows that this procedure runs in deterministic exponential time.
@techreport{ Kr-LTCS-19-02,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-19-02}},
  number = {19-02},
  title = {{Joining Implications in Formal Contexts and Inductive Learning in a Horn Description Logic (Extended Version)}},
  type = {LTCS-Report},
  year = {2019},
}
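Implications, joining or otherwise, are applied to an attribute set by closing it under every implication whose premise it contains; canonical bases are defined relative to such closures. A naive forward-chaining sketch of this closure operator (all names illustrative; efficient variants such as LinClosure exist but are not shown):

```python
def implication_closure(attrs, implications):
    """Smallest superset of attrs respecting every (premise, conclusion) pair."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # fire the rule if its premise holds but its conclusion does not yet
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

# a -> b and {b, c} -> d
imps = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
print(sorted(implication_closure({"a", "c"}, imps)))  # ['a', 'b', 'c', 'd']
```

The outer loop re-scans the rule set until a fixpoint is reached, which suffices for correctness on finite attribute sets.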
2018
Abstract BibTeX Entry PDF File
The classical approach for repairing a Description Logic ontology \(\mathfrak{O}\) in the sense of removing an unwanted consequence \(\alpha\) is to delete a minimal number of axioms from \(\mathfrak{O}\) such that the resulting ontology \(\mathfrak{O}'\) does not have the consequence \(\alpha\). However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic \(\mathcal{E\mkern1.618mu L}\).
@techreport{ BaKrNuPe-LTCS-18-01,
  address = {Dresden, Germany},
  author = {Franz {Baader} and Francesco {Kriegel} and Adrian {Nuradiansyah} and Rafael {Pe\~{n}aloza}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#BaKrNuPe-LTCS-18-01}},
  number = {18-01},
  title = {{Repairing Description Logic Ontologies by Weakening Axioms}},
  type = {LTCS-Report},
  year = {2018},
}
Abstract BibTeX Entry PDF File
For a probabilistic extension of the description logic \(\mathcal{E\mkern1.618mu L}^{\bot}\), we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness is justified.
@techreport{ Kr-LTCS-18-03,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-03}},
  number = {18-03},
  title = {{Terminological Knowledge Acquisition in Probabilistic Description Logic}},
  type = {LTCS-Report},
  year = {2018},
}
Abstract BibTeX Entry PDF File
For the description logic \(\mathcal{E\mkern1.618mu L}\), we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of \(\mathcal{E\mkern1.618mu L}\) concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
@techreport{ Kr-LTCS-18-10,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-10}},
  number = {18-10},
  title = {{The Distributive, Graded Lattice of $\mathcal{E\mkern1.618mu L}$ Concept Descriptions and its Neighborhood Relation (Extended Version)}},
  type = {LTCS-Report},
  year = {2018},
}
Abstract BibTeX Entry PDF File
The notion of a most specific consequence with respect to some terminological box is introduced, conditions for its existence in the description logic \(\mathcal{E\mkern1.618mu L}\) and its variants are provided, and means for its computation are developed. Algebraic properties of most specific consequences are explored. Furthermore, several applications that make use of this new notion are proposed and, in particular, it is shown how given terminological knowledge can be incorporated in existing approaches for the axiomatization of observations. For instance, a procedure for incremental learning of concept inclusions from sequences of interpretations is developed.
@techreport{ Kr-LTCS-18-11,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-11}, accepted for publication in {Discrete Applied Mathematics}},
  number = {18-11},
  title = {{Most Specific Consequences in the Description Logic $\mathcal{E\mkern1.618mu L}$}},
  type = {LTCS-Report},
  year = {2018},
}
Abstract BibTeX Entry PDF File
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. For resolving this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs the vertices and edges of which are labeled and for which there exists a probability measure on this graph family.
Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we shall develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic \(\mathcal{E\mkern1.618mu L}^{\!\bot}\), and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
@techreport{ Kr-LTCS-18-12,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair of Automata Theory, Institute of Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-18-12}},
  number = {18-12},
  title = {{Learning Description Logic Axioms from Discrete Probability Distributions over Description Graphs (Extended Version)}},
  type = {LTCS-Report},
  year = {2018},
}
2015
Abstract BibTeX Entry
It is well-known that the canonical implicational base of all implications valid w.r.t. a closure operator can be obtained from the set of all pseudo-closures. NextClosures is a parallel algorithm to compute all closures and pseudo-closures of a closure operator in a graded lattice, e.g., in a powerset. Furthermore, the closures and pseudo-closures can be constrained, and partially known closure operators can be explored.
@techreport{ Kr-LTCS-15-01,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-01}},
  number = {15-01},
  title = {{NextClosures -- Parallel Exploration of Constrained Closure Operators}},
  type = {LTCS-Report},
  year = {2015},
}
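NextClosures parallelizes the enumeration performed by Ganter's classical sequential Next-Closure algorithm. For reference, that sequential baseline can be sketched as follows, here instantiated with the closure operator \(A \mapsto A''\) of a toy formal context; the context and all names are illustrative:

```python
def closure(ctx, attrs, M):
    """The closure A'' of attrs in the formal context ctx (object -> attribute set)."""
    extent = [g for g, row in ctx.items() if attrs <= row]
    if not extent:
        return frozenset(M)
    result = set(M)
    for g in extent:
        result &= ctx[g]
    return frozenset(result)

def next_closure(A, n, c):
    """Lectically next closed set after A under closure operator c, or None."""
    A = set(A)
    for i in reversed(range(n)):
        if i in A:
            A.discard(i)
        else:
            B = c(A | {i})
            if all(x >= i for x in B - A):  # no new element below i
                return B
    return None

def all_intents(ctx, n):
    """Enumerate all intents (closed attribute sets) of ctx in lectic order."""
    c = lambda attrs: closure(ctx, attrs, range(n))
    intents, A = [], c(frozenset())
    while A is not None:
        intents.append(A)
        A = next_closure(A, n, c)
    return intents

ctx = {"g1": {0, 1}, "g2": {0, 2}}
print(len(all_intents(ctx, 3)))  # 4
```

Roughly, NextClosures replaces this one-closure-at-a-time enumeration with a level-wise processing of candidate sets by cardinality, so that each level can be computed in parallel.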
Abstract BibTeX Entry
Model-based most specific concept descriptions are a useful means to compactly represent all knowledge about a certain individual of an interpretation that is expressible in the underlying description logic. Existing approaches only cover their construction in the case of \(\mathcal{E\mkern1.618mu L}\) and \(\mathcal{F\mkern1.618mu L\mkern1.618mu E}\) w.r.t. greatest fixpoint semantics, and the case of \(\mathcal{E\mkern1.618mu L}\) w.r.t. a role-depth bound, respectively. This document extends the results towards the more expressive description logic \(\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q}^{\geq}\mkern1.618mu\mathcal{N}^{\leq}\mkern1.618mu(\mathsf{Self})\) w.r.t. role-depth bounds and also gives a method for the computation of least common subsumers.
@techreport{ Kr-LTCS-15-02,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-02}},
  number = {15-02},
  title = {{Model-Based Most Specific Concept Descriptions and Least Common Subsumers in $\mathcal{A\mkern1.618mu L\mkern1.618mu E\mkern1.618mu Q}^{\geq}\mkern1.618mu\mathcal{N}^{\leq}\mkern1.618mu(\mathsf{Self})$}},
  type = {LTCS-Report},
  year = {2015},
}
Abstract BibTeX Entry PDF File
Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the semantic web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming, and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which shall allow us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex and edge labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up the possibility to extract terminological knowledge from the Linked Open Data Cloud. We shall also present first experimental results showing that our approach has the potential to be useful for practical applications.
@techreport{ BoDiKr-LTCS-15-13,
  address = {Dresden, Germany},
  author = {Daniel {Borchmann} and Felix {Distel} and Francesco {Kriegel}},
  institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#BoDiKr-LTCS-15-13}},
  number = {15-13},
  title = {{Axiomatization of General Concept Inclusions from Finite Interpretations}},
  type = {LTCS-Report},
  year = {2015},
}
Abstract BibTeX Entry PDF File
Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatisation of a base of GCIs is wellknown.
@techreport{ Kr-LTCS-15-14,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  institution = {Chair for Automata Theory, Institute for Theoretical Computer Science, Technische Universit{\"a}t Dresden},
  note = {\url{https://tu-dresden.de/inf/lat/reports#Kr-LTCS-15-14}},
  number = {15-14},
  title = {{Learning General Concept Inclusions in Probabilistic Description Logics}},
  type = {LTCS-Report},
  year = {2015},
}
Theses
2013
Abstract BibTeX Entry Publication
Draft and proof of an algorithm that computes the incremental changes to a labeled, layouted concept lattice upon insertion or removal of an attribute column in the underlying formal context. Furthermore, some implementation details and the mathematical background are presented.
@thesis{ KriegelDiploma2013,
  address = {Dresden, Germany},
  author = {Francesco {Kriegel}},
  school = {Technische Universit\"{a}t Dresden},
  title = {Visualization of Conceptual Data with Methods of Formal Concept Analysis},
  type = {Diploma Thesis},
  year = {2013},
}
Program Committee Memberships
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2016
 International Workshop on Concept Discovery in Unstructured Data (CDUD) 2016
 International Workshop on Soft Computing Applications and Knowledge Discovery (SCAKD) 2016
 International Conference on Concept Lattices and Their Applications (CLA) 2015
Reviewing Activities
 International Workshop on Description Logics (DL) 2019
 International Journal Information Sciences (INS) 2019
 International Journal of Approximate Reasoning (IJAR) 2019
 International Conference on Knowledge Engineering and Knowledge Management (EKAW) 2018
 International Journal Discrete Applied Mathematics (DAM) 2018
 International Journal Information Sciences (INS) 2018
 International Conference on Concept Lattices and Their Applications (CLA) 2018
 International Journal Information Sciences (INS) 2017
 Book Series Lecture Notes in Social Networks (LNSN) 2017
 International Journal of General Systems (IJGS) 2016
 International Journal Information Sciences (INS) 2016
 International Conference on Concept Lattices and Their Applications (CLA) 2016
 International Workshop on Concept Discovery in Unstructured Data (CDUD) 2016
 International Workshop on Soft Computing Applications and Knowledge Discovery (SCAKD) 2016
 International Journal Information Sciences (INS) 2015
 International Conference on Concept Lattices and Their Applications (CLA) 2015
 Multi-Disciplinary International Workshop on Artificial Intelligence (MIWAI) 2015
 International Conference on Artificial Intelligence: Methodology, Systems, Applications (AIMSA) 2014
Teaching Experience
 Tutorial & course assistance Description Logic (Prof. Dr.-Ing. Franz Baader) SS 2019
 Tutorial & course assistance Automata and Logic (PD Dr.-Ing. habil. Anni-Yasmin Turhan) WS 2018/2019
 Seminar Learning in Description Logics (Prof. Dr.-Ing. Franz Baader, PD Dr.-Ing. habil. Anni-Yasmin Turhan) WS 2018/2019
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) WS 2018/2019
 Tutorial & course assistance Term Rewriting Systems (Prof. Dr.-Ing. Franz Baader) SS 2018
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) SS 2018
 Proseminar Ausgewählte Themen der Theoretischen Informatik (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) WS 2017/2018
 Tutorial Theoretische Informatik und Logik (Prof. Dr. rer. pol. Markus Krötzsch, Dr. rer. nat. Daniel Borchmann) SS 2017
 Tutorial Formale Systeme (Prof. Dr. rer. pol. Markus Krötzsch, Dr. rer. nat. Daniel Borchmann) WS 2016/2017
 Tutorial & course assistance Database Theory (Prof. Dr. rer. pol. Markus Krötzsch) SS 2016
 Tutorial & course assistance Introduction to Automatic Structures (PD Dr.-Ing. habil. Anni-Yasmin Turhan) SS 2016
 Tutorial & course assistance Description Logic (PD Dr.-Ing. habil. Anni-Yasmin Turhan) WS 2015/2016
 Tutorial & course assistance Automata and Logic (Dr. rer. nat. Daniel Borchmann) SS 2015
 Proseminar Die Unvollständigkeitssätze von Kurt Gödel (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) SS 2015
 Tutorial & course assistance Description Logic (Dr.-Ing. habil. Anni-Yasmin Turhan) WS 2014/2015
 Tutorial Theoretische Informatik und Logik (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) SS 2014
 Tutorial Formale Systeme (Prof. Dr.-Ing. Franz Baader, Dr.-Ing. Monika Sturm) WS 2013/2014
 Programming exercises Informatik für Biologen (Dr.-Ing. Monika Sturm) WS 2013/2014
 Tutorial & grading Elemente der Algebra und Zahlentheorie (Dr. Sebastian Kerkhoff, Dipl.-Math. Daniel Borchmann) SS 2012
 Tutorial & grading Einführung in die Mathematik für Informatiker, Lineare Algebra (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) WS 2011/2012
 Tutorial & grading Elemente der Algebra und Zahlentheorie (Prof. Dr. Bernhard Ganter, Dipl.-Math. Daniel Borchmann) SS 2011
 Grading Analysis I (Prof. Dr. Friedemann Schuricht, Dr. Zoja Milbers) WS 2010/2011
 Tutorial & grading Einführung in die Mathematik für Informatiker, Lineare Algebra (Dr. Jürgen Brunner, Dipl.-Math. Ilse Ilsche) WS 2010/2011
 Tutorial & grading Mathematische Methoden für Informatiker (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) SS 2010
 Tutorial & grading Einführung in die Mathematik für Informatiker, Diskrete Strukturen (Prof. Dr. Bernhard Ganter, Dipl.-Math. Ilse Ilsche) WS 2009/2010
 Tutorial & grading Funktionentheorie (Prof. Dr. Friedemann Schuricht, Dr. Zoja Milbers) SS 2009
 Tutorial & grading Mathematik für Informatiker IV (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) SS 2009
 Grading Analysis III (Prof. Dr. Friedemann Schuricht, Dr. Karin Weigel) WS 2008/2009
 Tutorial & grading Mathematik für Informatiker III (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) WS 2008/2009
 Grading Analysis II (Prof. Dr. Friedemann Schuricht, Dr. Karin Weigel) SS 2008
 Tutorial & grading Mathematik für Informatiker II (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) SS 2008
 Tutorial Mathematik für Informatiker I (Prof. Dr. Ulrike Baumann, Dipl.-Math. Ilse Ilsche) WS 2007/2008
 Tutorial Brückenkurs Mathematik WS 2007/2008
 Private tutoring Mathematik für Ingenieure II SS 2007
 Tutorial Brückenkurs Mathematik WS 2006/2007
 Private tutoring Mathematik für Ingenieure I WS 2006/2007
Practical Experience
 Own software: Concept Explorer FX (interactive, iterative, and parallel algorithms in Formal Concept Analysis, with description logic extensions), conexp-fx on GitHub
 Working student and then diploma student, SAP Research Dresden, CUBIST project (task: design, development, and testing of a system for the interactive visualization of knowledge as lattice diagrams using methods of Formal Concept Analysis), 2011–2012
 Intern and then working student, SAP Research Dresden, Aletheia project (task: design, development, and testing of a system for information extraction from unstructured data sources), 2009–2011