Knowledge Representation and Reasoning Research Publications

2019

Britz K, Casini G, Meyer T, Varzinczak I. A KLM Perspective on Defeasible Reasoning for Description Logics. In: Description Logic, Theory Combination, and All That. Switzerland: Springer; 2019. doi:10.1007/978-3-030-22102-7_7.

In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.
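
The rational closure algorithm mentioned in the abstract reduces defeasible reasoning to a series of classical entailment checks over a ranking of the defeasible statements. The Python sketch below illustrates that ranking step in the simpler propositional KLM setting; it is an illustrative reconstruction rather than the authors' implementation, and the tuple-based formula encoding and brute-force entailment checker are assumptions made for the example.

from itertools import product

def holds(formula, valuation):
    # Evaluate a propositional formula given as nested tuples, e.g.
    # ('implies', ('atom', 'b'), ('atom', 'f')), under a valuation dict.
    op = formula[0]
    if op == 'atom':
        return valuation[formula[1]]
    if op == 'not':
        return not holds(formula[1], valuation)
    if op == 'and':
        return holds(formula[1], valuation) and holds(formula[2], valuation)
    if op == 'or':
        return holds(formula[1], valuation) or holds(formula[2], valuation)
    if op == 'implies':
        return (not holds(formula[1], valuation)) or holds(formula[2], valuation)
    raise ValueError(op)

def entails(premises, conclusion, atoms):
    # Classical entailment by brute force over all valuations of the atoms.
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(holds(p, v) for p in premises) and not holds(conclusion, v):
            return False
    return True

def rank_conditionals(conditionals, atoms):
    # Rational closure ranking: rank i collects the conditionals (a, b),
    # read "a typically implies b", that are still non-exceptional once
    # ranks 0..i-1 have been removed.  Exceptionality is decided purely by
    # classical entailment over the materialisations a -> b.
    remaining = list(conditionals)
    ranks = []
    while remaining:
        materialised = [('implies', a, b) for a, b in remaining]
        exceptional = [(a, b) for a, b in remaining
                       if entails(materialised, ('not', a), atoms)]
        if len(exceptional) == len(remaining):
            ranks.append(remaining)        # the "infinite" rank
            break
        ranks.append([c for c in remaining if c not in exceptional])
        remaining = exceptional
    return ranks

# Birds typically fly, penguins are typically birds, penguins typically don't fly.
b, p, f = ('atom', 'b'), ('atom', 'p'), ('atom', 'f')
kb = [(b, f), (p, b), (p, ('not', f))]
for i, layer in enumerate(rank_conditionals(kb, ['b', 'p', 'f'])):
    print('rank', i, layer)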

@inbook{240,
  author = {Katarina Britz and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {A KLM Perspective on Defeasible Reasoning for Description Logics},
  abstract = {In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.},
  year = {2019},
  journal = {Description Logic, Theory Combination, and All That},
  pages = {147–173},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-030-22101-0},
  url = {https://link.springer.com/book/10.1007%2F978-3-030-22102-7},
  doi = {10.1007/978-3-030-22102-7_7},
}
Casini G, Meyer T, Varzinczak I. Taking Defeasible Entailment Beyond Rational Closure. European Conference on Logics in Artificial Intelligence. 2019. doi:10.1007/978-3-030-19570-0_12.

We present a systematic approach for extending the KLM framework for defeasible entailment. We first present a class of basic defeasible entailment relations, characterise it in three distinct ways and provide a high-level algorithm for computing it. This framework is then refined, with the refined version being characterised in a similar manner. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our refined framework, that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.

@proceedings{238,
  author = {Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Taking Defeasible Entailment Beyond Rational Closure},
  abstract = {We present a systematic approach for extending the KLM framework for defeasible entailment. We first present a class of basic defeasible entailment relations, characterise it in three distinct ways and provide a high-level algorithm for computing it. This framework is then refined, with the refined version being characterised in a similar manner. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our refined framework, that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.},
  year = {2019},
  journal = {European Conference on Logics in Artificial Intelligence},
  pages = {182 - 197},
  month = {07/05 - 11/05},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-030-19569-4},
  url = {https://link.springer.com/chapter/10.1007%2F978-3-030-19570-0_12},
  doi = {10.1007/978-3-030-19570-0_12},
}
Botha L, Meyer T, Peñaloza R. A Bayesian Extension of the Description Logic ALC. European Conference on Logics in Artificial Intelligence. 2019. doi:10.1007/978-3-030-19570-0_22.

Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the DL ALC. We present a tableau based procedure for deciding consistency, and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.

@proceedings{237,
  author = {Leonard Botha and Thomas Meyer and Rafael Peñaloza},
  title = {A Bayesian Extension of the Description Logic ALC},
  abstract = {Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the DL ALC. We present a tableau based procedure for deciding consistency, and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.},
  year = {2019},
  journal = {European Conference on Logics in Artificial Intelligence},
  pages = {339 - 354},
  month = {07/05 - 11/05},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-030-19569-4},
  url = {https://link.springer.com/chapter/10.1007%2F978-3-030-19570-0_22},
  doi = {10.1007/978-3-030-19570-0_22},
}
Casini G, Meyer T, Varzinczak I. Simple Conditionals with Constrained Right Weakening. International Joint Conference on Artificial Intelligence. 2019. doi:10.24963/ijcai.2019/226.

In this paper we introduce and investigate a very basic semantics for conditionals that can be used to define a broad class of conditional reasoning systems. We show that it encompasses the most popular kinds of conditional reasoning developed in logic-based KR. It turns out that the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the property of Right Weakening. We show that it can be used for the further development of an analysis of the notion of relevance in conditional reasoning.

@proceedings{226,
  author = {Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Simple Conditionals with Constrained Right Weakening},
  abstract = {In this paper we introduce and investigate a very basic semantics for conditionals that can be used to define a broad class of conditional reasoning systems. We show that it encompasses the most popular kinds of conditional reasoning developed in logic-based KR. It turns out that the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the property of Right Weakening. We show that it can be used for the further development of an analysis of the notion of relevance in conditional reasoning.},
  year = {2019},
  journal = {International Joint Conference on Artificial Intelligence},
  pages = {1632-1638},
  month = {10/08-16/08},
  publisher = {International Joint Conferences on Artificial Intelligence},
  isbn = {978-0-9992411-4-1},
  url = {https://www.ijcai.org/Proceedings/2019/0226.pdf},
  doi = {10.24963/ijcai.2019/226},
}
Morris M, Ross T, Meyer T. Defeasible disjunctive datalog. Forum for Artificial Intelligence Research. 2019. http://ceur-ws.org/Vol-2540/FAIR2019_paper_38.pdf.

Datalog is a declarative logic programming language that uses classical logical reasoning as its basic form of reasoning. Defeasible reasoning is a form of non-classical reasoning that is able to deal with exceptions to general assertions in a formal manner. The KLM approach to defeasible reasoning is an axiomatic approach based on the concept of plausible inference. Since Datalog uses classical reasoning, it is currently not able to handle defeasible implications and exceptions. We aim to extend the expressivity of Datalog by incorporating KLM-style defeasible reasoning into classical Datalog. We present a systematic approach to extending the KLM properties and a well-known form of defeasible entailment: Rational Closure. We conclude by exploring Datalog extensions of less conservative forms of defeasible entailment: Relevant and Lexicographic Closure.
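
For background on the starting point of the abstract, the sketch below shows the classical bottom-up (naive fixpoint) evaluation on which plain Datalog reasoning rests; the defeasible and disjunctive extensions described in the paper are not reproduced, and the predicates, rules and helper names are invented for the example.

def unify(atom, fact, binding):
    # Match one body atom against a ground fact, extending the binding;
    # variables are strings starting with '?'.
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    binding = dict(binding)
    for term, value in zip(atom[1:], fact[1:]):
        if term.startswith('?'):
            if term in binding and binding[term] != value:
                return None
            binding[term] = value
        elif term != value:
            return None
    return binding

def match_body(body, db, binding):
    # Enumerate all bindings that satisfy the whole rule body against db.
    if not body:
        yield binding
        return
    for fact in db:
        extended = unify(body[0], fact, binding)
        if extended is not None:
            yield from match_body(body[1:], db, extended)

def substitute(atom, binding):
    return tuple(binding.get(term, term) for term in atom)

def evaluate(facts, rules):
    # Naive fixpoint: apply every rule until no new facts are derived.
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            derived = [substitute(head, b) for b in match_body(body, db, {})]
            for fact in derived:
                if fact not in db:
                    db.add(fact)
                    changed = True
    return db

facts = {('parent', 'alice', 'bob'), ('parent', 'bob', 'carol')}
rules = [(('ancestor', '?x', '?y'), [('parent', '?x', '?y')]),
         (('ancestor', '?x', '?z'), [('parent', '?x', '?y'), ('ancestor', '?y', '?z')])]
print(sorted(evaluate(facts, rules)))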

@proceedings{225,
  author = {Matthew Morris and Tala Ross and Thomas Meyer},
  title = {Defeasible disjunctive datalog},
  abstract = {Datalog is a declarative logic programming language that uses classical logical reasoning as its basic form of reasoning. Defeasible reasoning is a form of non-classical reasoning that is able to deal with exceptions to general assertions in a formal manner. The KLM approach to defeasible reasoning is an axiomatic approach based on the concept of plausible inference. Since Datalog uses classical reasoning, it is currently not able to handle defeasible implications and exceptions. We aim to extend the expressivity of Datalog by incorporating KLM-style defeasible reasoning into classical Datalog. We present a systematic approach to extending the KLM properties and a well-known form of defeasible entailment: Rational Closure. We conclude by exploring Datalog extensions of less conservative forms of defeasible entailment: Relevant and Lexicographic Closure.},
  year = {2019},
  journal = {Forum for Artificial Intelligence Research},
  pages = {208-219},
  month = {03/12-06/12},
  publisher = {CEUR},
  issn = {1613-0073},
  url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_38.pdf},
}
Harrison M, Meyer T. Rational preferential reasoning for datalog. Forum for Artificial Intelligence Research. 2019. http://ceur-ws.org/Vol-2540/FAIR2019_paper_67.pdf.

Datalog is a powerful language that can be used to represent explicit knowledge and compute inferences in knowledge bases. Datalog cannot represent or reason about contradictory rules, though. This is a limitation as contradictions are often present in domains that contain exceptions. In this paper, we extend datalog to represent contradictory and defeasible information. We define an approach to efficiently reason about contradictory information in datalog and show that it satisfies the KLM requirements for a rational consequence relation. Finally, we introduce an implementation of this approach in the form of a defeasible datalog reasoning tool and evaluate the performance of this tool.

@proceedings{224,
  author = {Michael Harrison and Thomas Meyer},
  title = {Rational preferential reasoning for datalog},
  abstract = {Datalog is a powerful language that can be used to represent explicit knowledge and compute inferences in knowledge bases. Datalog cannot represent or reason about contradictory rules, though. This is a limitation as contradictions are often present in domains that contain exceptions. In this paper, we extend datalog to represent contradictory and defeasible information. We define an approach to efficiently reason about contradictory information in datalog and show that it satisfies the KLM requirements for a rational consequence relation. Finally, we introduce an implementation of this approach in the form of a defeasible datalog reasoning tool and evaluate the performance of this tool.},
  year = {2019},
  journal = {Forum for Artificial Intelligence Research},
  pages = {232-243},
  month = {03/12-06/12},
  publisher = {CEUR},
  issn = {1613-0073},
  url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_67.pdf},
}
Chingoma J, Meyer T. Forrester’s paradox using typicality. Forum for Artificial Intelligence Research. 2019. http://ceur-ws.org/Vol-2540/FAIR2019_paper_54.pdf.

Deontic logic is a logic often used to formalise scenarios in the legal domain. Within the legal domain there are many exceptions and conflicting obligations. This motivates the enrichment of deontic logic with a notion of typicality which is based on defeasibility, with defeasibility allowing for reasoning about exceptions. Propositional Typicality Logic (PTL) is a logic that employs typicality. Deontic paradoxes are often used to examine logic systems as they provide undesirable results even if the scenarios seem intuitive. Forrester’s paradox is one of the most famous of these paradoxes. This paper shows that PTL can be used to represent and reason with Forrester’s paradox in such a way as to block undesirable conclusions without sacrificing desirable deontic properties.

@proceedings{223,
  author = {Julian Chingoma and Thomas Meyer},
  title = {Forrester’s paradox using typicality},
  abstract = {Deontic logic is a logic often used to formalise scenarios in the legal domain. Within the legal domain there are many exceptions and conflicting obligations. This motivates the enrichment of deontic logic with a notion of typicality which is based on defeasibility, with defeasibility allowing for reasoning about exceptions. Propositional Typicality Logic (PTL) is a logic that employs typicality. Deontic paradoxes are often used to examine logic systems as they provide undesirable results even if the scenarios seem intuitive. Forrester’s paradox is one of the most famous of these paradoxes. This paper shows that PTL can be used to represent and reason with Forrester’s paradox in such a way as to block undesirable conclusions without sacrificing desirable deontic properties.},
  year = {2019},
  journal = {Forum for Artificial Intelligence Research},
  pages = {220-231},
  month = {03/12-06/12},
  publisher = {CEUR},
  issn = {1613-0073},
  url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_54.pdf},
}
Casini G, Harrison M, Meyer T, Swan R. Arbitrary Ranking of Defeasible Subsumption. 32nd International Workshop on Description Logics. 2019. http://ceur-ws.org/Vol-2373/paper-9.pdf.

In this paper we propose an algorithm that generalises existing procedures for the implementation of defeasible reasoning in the framework of Description Logics (DLs). One of the well-known approaches to defeasible reasoning, the so-called KLM approach, is based on constructing specific rankings of defeasible information, and using these rankings to determine priorities in case of conflicting information. Here we propose a procedure that allows us to input any possible ranking of the defeasible concept inclusions contained in the knowledge base. We analyse and investigate the forms of defeasible reasoning obtained when conclusions are drawn using these rankings.

@misc{222,
  author = {Giovanni Casini and Michael Harrison and Thomas Meyer and Reid Swan},
  title = {Arbitrary Ranking of Defeasible Subsumption},
  abstract = {In this paper we propose an algorithm that generalises existing procedures for the implementation of defeasible reasoning in the framework of Description Logics (DLs). One of the well-known approaches to defeasible reasoning, the so-called KLM approach, is based on constructing specific rankings of defeasible information, and using these rankings to determine priorities in case of conflicting information. Here we propose a procedure that allows us to input any possible ranking of the defeasible concept inclusions contained in the knowledge base. We analyse and investigate the forms of defeasible reasoning obtained when conclusions are drawn using these rankings.},
  year = {2019},
  journal = {32nd International Workshop on Description Logics},
  month = {06/2019},
  url = {http://ceur-ws.org/Vol-2373/paper-9.pdf},
}
Casini G, Straccia U, Meyer T. A Polynomial Time Subsumption Algorithm for Nominal Safe ELO⊥ under Rational Closure. Information Sciences. 2019;501. doi:10.1016/j.ins.2018.09.037.

Description Logics (DLs) under Rational Closure (RC) is a well-known framework for non-monotonic reasoning in DLs. In this paper, we address the concept subsumption decision problem under RC for nominal safe ELO⊥, a notable and practically important DL representative of the OWL 2 profile OWL 2 EL. Our contribution here is to define a polynomial time subsumption procedure for nominal safe ELO⊥ under RC that relies entirely on a series of classical, monotonic EL⊥ subsumption tests. Therefore, any existing classical monotonic EL⊥ reasoner can be used as a black box to implement our method. We then also adapt the method to one of the known extensions of RC for DLs, namely Defeasible Inheritance-based DLs without losing the computational tractability.
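
A key design point in the abstract is that the procedure only ever calls a classical EL⊥ subsumption test, so any classical reasoner can be used as a black box. The sketch below illustrates that interface pattern with a rational-closure-style query loop; it is a simplified approximation under hypothetical names, not the paper's algorithm.

from typing import List, Protocol, Tuple

class ClassicalReasoner(Protocol):
    # Any classical, monotonic subsumption checker can play this role.
    def subsumes(self, tbox: List[str], sub: str, sup: str) -> bool:
        ...

def defeasibly_subsumes(reasoner: ClassicalReasoner,
                        strict_tbox: List[str],
                        ranked_defeasible: List[List[Tuple[str, str]]],
                        c: str, d: str) -> bool:
    # Rational-closure-style query loop: discard the lowest ranks until the
    # antecedent c is classically satisfiable together with the remaining
    # materialised inclusions, then answer the query with one more classical
    # call.  Only reasoner.subsumes() is ever invoked.
    for i in range(len(ranked_defeasible) + 1):
        remaining = [incl for rank in ranked_defeasible[i:] for incl in rank]
        materialised = strict_tbox + ["{} ⊑ {}".format(a, b) for a, b in remaining]
        if not reasoner.subsumes(materialised, c, "⊥"):  # c is satisfiable at this level
            return reasoner.subsumes(materialised, c, d)
    return True  # c is exceptional at every level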

@article{221,
  author = {Giovanni Casini and Umberto Straccia and Thomas Meyer},
  title = {A Polynomial Time Subsumption Algorithm for Nominal Safe ELO⊥ under Rational Closure},
  abstract = {Description Logics (DLs) under Rational Closure (RC) is a well-known framework for non-monotonic reasoning in DLs. In this paper, we address the concept subsumption decision problem under RC for nominal safe ELO⊥, a notable and practically important DL representative of the OWL 2 profile OWL 2 EL. Our contribution here is to define a polynomial time subsumption procedure for nominal safe ELO⊥ under RC that relies entirely on a series of classical, monotonic EL⊥ subsumption tests. Therefore, any existing classical monotonic EL⊥ reasoner can be used as a black box to implement our method. We then also adapt the method to one of the known extensions of RC for DLs, namely Defeasible Inheritance-based DLs without losing the computational tractability.},
  year = {2019},
  journal = {Information Sciences},
  volume = {501},
  pages = {588 - 620},
  publisher = {Elsevier},
  issn = {0020-0255},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025518307436},
  doi = {10.1016/j.ins.2018.09.037},
}
Leenen L, Meyer T. Artificial Intelligence and Big Data Analytics in Support of Cyber Defense. In: Developments in Information Security and Cybernetic Wars. United States of America: Information Science Reference, IGI Global; 2019. doi:10.4018/978-1-5225-8304-2.ch002.

Cybersecurity analysts rely on vast volumes of security event data to predict, identify, characterize, and deal with security threats. These analysts must understand and make sense of these huge datasets in order to discover patterns which lead to intelligent decision making and advance warnings of possible threats, and this ability requires automation. Big data analytics and artificial intelligence can improve cyber defense. Big data analytics methods are applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends, and other useful information. Artificial intelligence provides algorithms that can reason or learn and improve their behavior, and includes semantic technologies. A large number of automated systems are currently based on syntactic rules which are generally not sophisticated enough to deal with the level of complexity in this domain. An overview of artificial intelligence and big data technologies in cyber defense is provided, and important areas for future research are identified and discussed.

@inbook{220,
  author = {Louise Leenen and Thomas Meyer},
  title = {Artificial Intelligence and Big Data Analytics in Support of Cyber Defense},
  abstract = {Cybersecurity analysts rely on vast volumes of security event data to predict, identify, characterize, and deal with security threats. These analysts must understand and make sense of these huge datasets in order to discover patterns which lead to intelligent decision making and advance warnings of possible threats, and this ability requires automation. Big data analytics and artificial intelligence can improve cyber defense. Big data analytics methods are applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends, and other useful information. Artificial intelligence provides algorithms that can reason or learn and improve their behavior, and includes semantic technologies. A large number of automated systems are currently based on syntactic rules which are generally not sophisticated enough to deal with the level of complexity in this domain. An overview of artificial intelligence and big data technologies in cyber defense is provided, and important areas for future research are identified and discussed.},
  year = {2019},
  journal = {Developments in Information Security and Cybernetic Wars},
  pages = {42 - 63},
  publisher = {Information Science Reference, IGI Global},
  address = {United States of America},
  isbn = {9781522583042},
  doi = {10.4018/978-1-5225-8304-2.ch002},
}
Booth R, Casini G, Meyer T, Varzinczak I. On rational entailment for Propositional Typicality Logic. Artificial Intelligence. 2019;227. doi:10.1016/j.artint.2019.103178.

Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator capturing the most typical (alias normal or conventional) situations in which a given sentence holds. The semantics of PTL is in terms of ranked models as studied in the well-known KLM approach to preferential reasoning and therefore KLM-style rational consequence relations can be embedded in PTL. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate in many contexts. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we investigate three different (semantic) versions of entailment for PTL, each one based on the definition of rational closure as introduced by Lehmann and Magidor for KLM-style conditionals, and constructed using different notions of minimality.

@article{219,
  author = {Richard Booth and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {On rational entailment for Propositional Typicality Logic},
  abstract = {Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator capturing the most typical (alias normal or conventional) situations in which a given sentence holds. The semantics of PTL is in terms of ranked models as studied in the well-known KLM approach to preferential reasoning and therefore KLM-style rational consequence relations can be embedded in PTL. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate in many contexts. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we investigate three different (semantic) versions of entailment for PTL, each one based on the definition of rational closure as introduced by Lehmann and Magidor for KLM-style conditionals, and constructed using different notions of minimality.},
  year = {2019},
  journal = {Artificial Intelligence},
  volume = {227},
  pages = {103178},
  publisher = {Elsevier},
  issn = {0004-3702},
  url = {https://www.sciencedirect.com/science/article/abs/pii/S000437021830506X?via%3Dihub},
  doi = {10.1016/j.artint.2019.103178},
}

2018

Meyer T, Leenen L. Semantic Technologies and Big Data Analytics for Cyber Defence. In: Information Retrieval and Management: Concepts, Methodologies, Tools, and Applications. IGI Global; 2018. https://researchspace.csir.co.za/dspace/bitstream/handle/10204/8932/Leenen_2016.pdf?sequence=1.

Governments, military forces, and other organisations responsible for cybersecurity deal with vast amounts of data that has to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and being able to understand the significance of detected patterns, are essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technologies are a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide counter measures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.

@inbook{206,
  author = {Thomas Meyer and Louise Leenen},
  title = {Semantic Technologies and Big Data Analytics for Cyber Defence},
  abstract = {Governments, military forces, and other organisations responsible for cybersecurity deal with vast amounts of data that has to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and being able to understand the significance of detected patterns, are essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment.
The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technologies are a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems.
This paper argues that both big data analytics and semantic technologies are necessary to provide counter measures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.},
  year = {2018},
  journal = {Information Retrieval and Management: Concepts, Methodologies, Tools, and Applications},
  pages = {1375-1388},
  publisher = {IGI Global},
  isbn = {9781522551911},
  url = {https://researchspace.csir.co.za/dspace/bitstream/handle/10204/8932/Leenen_2016.pdf?sequence=1},
}
Botha L, Meyer T, Peñaloza R. The Bayesian Description Logic BALC. International Workshop on Description Logics. 2018. http://ceur-ws.org/Vol-2211/.

Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives, thereby limiting their use in real world domains. The Bayesian DL BEL and its extensions have been introduced to deal with uncertain knowledge without assuming (probabilistic) independence between axioms. In this paper we combine the classical DL ALC with Bayesian Networks. Our new DL includes a solution to the consistency checking problem and changes to the tableaux algorithm that are not a part of BEL. Furthermore, BALC also supports probabilistic assertional information which was not studied for BEL. We present algorithms for four categories of reasoning problems for our logic; two versions of concept satisfiability (referred to as total concept satisfiability and partial concept satisfiability respectively), knowledge base consistency, subsumption, and instance checking. We show that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian Network is included in the size of the knowledge base.

@proceedings{205,
  author = {Leonard Botha and Thomas Meyer and Rafael Peñaloza},
  title = {The Bayesian Description Logic BALC},
  abstract = {Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives, thereby limiting their use in real world domains. The Bayesian DL BEL and its extensions have been introduced to deal with uncertain knowledge without assuming (probabilistic) independence between axioms. In this paper we combine the classical DL ALC with Bayesian Networks. Our new DL includes a solution to the consistency checking problem and changes to the tableaux algorithm that are not a part of BEL. Furthermore, BALC also supports probabilistic assertional information which was not studied for BEL. We present algorithms for four categories of reasoning problems for our logic; two versions of concept satisfiability (referred to as total concept satisfiability and partial concept satisfiability respectively), knowledge base consistency, subsumption, and instance checking. We show that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian Network is included in the size of the knowledge base.},
  year = {2018},
  journal = {International Workshop on Description Logics},
  month = {27/10-29/10},
  url = {http://ceur-ws.org/Vol-2211/},
}
Casini G, Meyer T, Varzinczak I. Defeasible Entailment: from Rational Closure to Lexicographic Closure and Beyond. 7th International Workshop on Non-Monotonic Reasoning (NMR 2018). 2018. http://orbilu.uni.lu/bitstream/10993/37393/1/NMR2018Paper.pdf.

In this paper we present what we believe to be the first systematic approach for extending the framework for defeasible entailment first presented by Kraus, Lehmann, and Magidor—the so-called KLM approach. Drawing on the properties for KLM, we first propose a class of basic defeasible entailment relations. We characterise this basic framework in three ways: (i) semantically, (ii) in terms of a class of properties, and (iii) in terms of ranks on statements in a knowledge base. We also provide an algorithm for computing the basic framework. These results are proved through various representation results. We then refine this framework by defining the class of rational defeasible entailment relations. This refined framework is also characterised in three ways: semantically, in terms of a class of properties, and in terms of ranks on statements. We also provide an algorithm for computing the refined framework. Again, these results are proved through various representation results. We argue that the class of rational defeasible entailment relations—a strengthening of basic defeasible entailment which is itself a strengthening of the original KLM proposal—is worthy of the term rational in the sense that all of them can be viewed as appropriate forms of defeasible entailment. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our rational defeasible framework. We show that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.

@proceedings{200,
  author = {Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Defeasible Entailment: from Rational Closure to Lexicographic Closure and Beyond},
  abstract = {In this paper we present what we believe to be the first systematic approach for extending the framework for defeasible entailment first presented by Kraus, Lehmann, and Magidor—the so-called KLM approach. Drawing on the properties for KLM, we first propose a class of basic defeasible entailment relations. We characterise this basic framework in three ways: (i) semantically, (ii) in terms of a class of properties, and (iii) in terms of ranks on statements in a knowledge base. We also provide an algorithm for computing the basic framework. These results are proved through various representation results. We then refine this framework by defining the class of rational defeasible entailment relations. This refined framework is also characterised in three ways: semantically, in terms of a class of properties, and in terms of ranks on statements. We also provide an algorithm for computing the refined framework. Again, these results are proved through various representation results.
We argue that the class of rational defeasible entailment relations—a strengthening of basic defeasible entailment which is itself a strengthening of the original KLM proposal—is worthy of the term rational in the sense that all of them can be viewed as appropriate forms of defeasible entailment. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our rational defeasible framework. We show that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.},
  year = {2018},
  journal = {7th International Workshop on Non-Monotonic Reasoning (NMR 2018)},
  pages = {109-118},
  month = {27/10-29/10},
  url = {http://orbilu.uni.lu/bitstream/10993/37393/1/NMR2018Paper.pdf},
}
Rens G, Meyer T, Nayak A. Maximizing Expected Impact in an Agent Reputation Network. 41st German Conference on AI, Berlin, Germany, September 24–28, 2018. 2018. https://www.springer.com/us/book/9783030001100.

We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.
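
The framework builds on the standard POMDP model. As background, the sketch below shows the textbook POMDP belief-state update rather than the reputation-specific machinery of the paper; the states, action, observation signals and probabilities are invented for the example.

def belief_update(belief, action, observation, T, O):
    # Textbook POMDP belief update:
    # b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b(s), then normalised.
    states = list(belief)
    new_belief = {}
    for s_next in states:
        mass = sum(T[action][s][s_next] * belief[s] for s in states)
        new_belief[s_next] = O[action][s_next][observation] * mass
    norm = sum(new_belief.values())
    if norm == 0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / norm for s, p in new_belief.items()}

# Two reputation levels, one action, two observation signals (all invented).
T = {'act': {'good': {'good': 0.8, 'bad': 0.2},
             'bad':  {'good': 0.3, 'bad': 0.7}}}
O = {'act': {'good': {'praise': 0.9, 'criticism': 0.1},
             'bad':  {'praise': 0.2, 'criticism': 0.8}}}
b0 = {'good': 0.5, 'bad': 0.5}
print(belief_update(b0, 'act', 'praise', T, O))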

@proceedings{198,
  author = {Gavin Rens and Thomas Meyer and A. Nayak},
  title = {Maximizing Expected Impact in an Agent Reputation Network},
  abstract = {We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.},
  year = {2018},
  journal = {41st German Conference on AI, Berlin, Germany, September 24–28, 2018},
  pages = {99-106},
  month = {24/09-28/09},
  publisher = {Springer},
  isbn = {978-3-030-00110-0},
  url = {https://www.springer.com/us/book/9783030001100},
}
Casini G, Fermé E, Meyer T, Varzinczak I. A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework. 16th International Conference on Principles of Knowledge Representation and Reasoning. 2018. https://dblp.org/db/conf/kr/kr2018.html.

Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.

@proceedings{197,
  author = {Giovanni Casini and Eduardo Fermé and Thomas Meyer and Ivan Varzinczak},
  title = {A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework},
  abstract = {Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.},
  year = {2018},
  journal = {16th International Conference on Principles of Knowledge Representation and Reasoning},
  pages = {220-229},
  month = {27/10-02/11},
  publisher = {AAAI Press},
  address = {United States of America},
  isbn = {978-1-57735-803-9},
  url = {https://dblp.org/db/conf/kr/kr2018.html},
}

2017

Casini G, Meyer T. Belief Change in a Preferential Non-Monotonic Framework. International Joint Conference on Artificial Intelligence (IJCAI-17). 2017.

Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we show that we can also integrate the two formalisms by studying belief change within a (preferential) non-monotonic framework. This integration relies heavily on the identification of the monotonic core of a non-monotonic framework. We consider belief change operators in a non-monotonic propositional setting with a view towards preserving consistency. These results can also be applied to the preservation of coherence—an important notion within the field of logic-based ontologies. We show that the standard AGM approach to belief change can be adapted to a preferential non-monotonic framework, with the definition of expansion, contraction, and revision operators, and corresponding representation results. Surprisingly, preferential AGM belief change, as defined here, can be obtained in terms of classical AGM belief change.

@proceedings{167,
  author = {Giovanni Casini and Thomas Meyer},
  title = {Belief Change in a Preferential Non-Monotonic Framework},
  abstract = {Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we show that we can also integrate the two formalisms by studying belief change within a (preferential) non-monotonic framework. This integration relies heavily on the identification of the monotonic core of a non-monotonic framework. We consider belief change operators in a non-monotonic propositional setting with a view towards preserving consistency. These results can also be applied to the preservation of coherence—an important notion within the field of logic-based ontologies. We show that the standard AGM approach to belief change can be adapted to a preferential non-monotonic framework, with the definition of expansion, contraction, and revision operators, and corresponding representation results. Surprisingly, preferential AGM belief change, as defined here, can be obtained in terms of classical AGM belief change.},
  year = {2017},
  journal = {International Joint Conference on Artificial Intelligence (IJCAI-17)},
  pages = {929-935},
  month = {19/08-25/08},
  isbn = {978-0-9992411-0-3},
}
Mouton F, Teixeira M, Meyer T. Benchmarking a Mobile Implementation of the Social Engineering Prevention Training Tool. Information Security for South Africa (ISSA). 2017.

As the nature of information stored digitally becomes more important and confidential, the security of the systems put in place to protect this information needs to be increased. The human element, however, remains a vulnerability of the system and it is this vulnerability that social engineers attempt to exploit. The Social Engineering Attack Detection Model version 2 (SEADMv2) has been proposed to help people identify malicious social engineering attacks. Prior to this study, the SEADMv2 had not been implemented as a user-friendly application or tested with real subjects. This paper describes how the SEADMv2 was implemented as an Android application. This Android application was tested on 20 subjects, to determine whether it reduces the probability of a subject falling victim to a social engineering attack or not. The results indicated that the Android implementation of the SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks. The Android application also significantly reduced the number of subjects that fell victim to malicious social engineering attacks, bidirectional communication social engineering attacks and indirect communication social engineering attacks. The Android application did not have a statistically significant effect on harmless scenarios and unidirectional communication social engineering attacks.

@proceedings{166,
  author = {F. Mouton and M. Teixeira and Thomas Meyer},
  title = {Benchmarking a Mobile Implementation of the Social Engineering Prevention Training Tool},
  abstract = {As the nature of information stored digitally becomes more important and confidential, the security of the systems put in place to protect this information needs to be increased. The human element, however, remains a vulnerability of the system and it is this vulnerability that social engineers attempt to exploit. The Social Engineering Attack Detection Model version 2 (SEADMv2) has been proposed to help people identify malicious social engineering attacks. Prior to this study, the SEADMv2 had not been implemented as a user-friendly application or tested with real subjects. This paper describes how the SEADMv2 was implemented as an Android application. This Android application was tested on 20 subjects, to determine whether it reduces the probability of a subject falling victim to a social engineering attack or not. The results indicated that the Android implementation of the SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks. The Android application also significantly reduced the number of subjects that fell victim to malicious social engineering attacks, bidirectional communication social engineering attacks and indirect communication social engineering attacks. The Android application did not have a statistically significant effect on harmless scenarios and unidirectional communication social engineering attacks.},
  year = {2017},
  journal = {Information Security for South Africa (ISSA)},
  pages = {106-116},
  month = {16/08-17/08},
  isbn = {978-1-5386-0545-5},
}
Booth R, Casini G, Meyer T, Varzinczak I. Extending Typicality for Description Logics. 2017. http://orbilu.uni.lu/bitstream/10993/32165/1/TforDL-Technical_report.pdf.

Recent extensions of description logics for dealing with different forms of non-monotonic reasoning don’t take us beyond the case of defeasible subsumption. In this paper we enrich the DL EL⊥ with a (constrained version of a) typicality operator •, the intuition of which is to capture the most typical members of a class, providing us with the DL EL•⊥. We argue that EL•⊥ is the smallest step one can take to increase the expressivity beyond the case of defeasible subsumption for DLs, while still retaining all the rationality properties an appropriate notion of defeasible subsumption is required to satisfy, and investigate what an appropriate notion of non-monotonic entailment for EL•⊥ should look like.

@misc{165,
  author = {Richard Booth and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Extending Typicality for Description Logics},
  abstract = {Recent extensions of description logics for dealing with different forms of non-monotonic reasoning don’t take us beyond the case of defeasible subsumption. In this paper we enrich the DL EL⊥ with a (constrained version of a) typicality operator •, the intuition of which is to capture the most typical members of a class, providing us with the DL EL•⊥. We argue that EL•⊥ is the smallest step one can take to increase the expressivity beyond the case of defeasible subsumption for DLs, while still retaining all the rationality properties an appropriate notion of defeasible subsumption is required to satisfy, and investigate what an appropriate notion of non-monotonic entailment for EL•⊥ should look like.},
  year = {2017},
  url = {http://orbilu.uni.lu/bitstream/10993/32165/1/TforDL-Technical_report.pdf},
}
Rens G, Meyer T. Imagining Probabilistic Belief Change as Imaging. 2017. https://arxiv.org/pdf/1705.01172.pdf.

Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI’s definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.
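
EDI generalises Lewis-style imaging. As a reference point, the sketch below implements plain generalised imaging over propositional worlds using Hamming distance as the notion of closeness; it is an illustration only, and does not capture EDI's weight functions or the proposed postulates.

from itertools import product

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def image(belief, a_worlds):
    # Generalised imaging: each world transfers its probability mass to its
    # closest A-worlds, splitting the mass evenly among ties.
    new_belief = {w: 0.0 for w in belief}
    for w, mass in belief.items():
        dmin = min(hamming(w, v) for v in a_worlds)
        closest = [v for v in a_worlds if hamming(w, v) == dmin]
        for v in closest:
            new_belief[v] += mass / len(closest)
    return new_belief

# Uniform prior over the worlds for atoms (p, q); image on the formula "p".
worlds = list(product([0, 1], repeat=2))
prior = {w: 0.25 for w in worlds}
p_worlds = [w for w in worlds if w[0] == 1]
print(image(prior, p_worlds))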

@misc{164,
  author = {Gavin Rens and Thomas Meyer},
  title = {Imagining Probabilistic Belief Change as Imaging},
  abstract = {Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI’s definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.},
  year = {2017},
  url = {https://arxiv.org/pdf/1705.01172.pdf},
}
Gerber A, Morar N, Meyer T, Eardley C. Ontology-based support for taxonomic functions. Ecological Informatics. 2017;41. https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f.

This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions). Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy. Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.

@article{163,
  author = {Aurona Gerber and Nishal Morar and Thomas Meyer and C. Eardley},
  title = {Ontology-based support for taxonomic functions},
  abstract = {This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions).
Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy.
Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.},
  year = {2017},
  journal = {Ecological Informatics},
  volume = {41},
  pages = {11-23},
  publisher = {Elsevier},
  issn = {1574-9541},
  url = {https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f},
}
Rens G, Meyer T, Moodley D. A Stochastic Belief Management Architecture for Agent Control. 2017. http://pubs.cs.uct.ac.za/archive/00001201/01/AGA_2017_Rens_et_al.pdf.

We propose an architecture for agent control, where the agent stores its beliefs and environment models as logical sentences. Given successive observations, the agent’s current state (of beliefs) is maintained by a combination of probability, POMDP and belief change theory. Two existing logics are employed for knowledge representation and reasoning: the stochastic decision logic of Rens et al. (2015) and p-logic of Zhuang et al. (2017) (a restricted version of a logic designed by Fagin et al. (1990)). The proposed architecture assumes two streams of observations: active, which correspond to agent intentions and passive, which is received without the agent’s direct involvement. Stochastic uncertainty, and ignorance due to lack of information are both dealt with in the architecture. Planning, and learning of environment models are assumed present but are not covered in this proposal.

@misc{155,
  author = {Gavin Rens and Thomas Meyer and Deshen Moodley},
  title = {A Stochastic Belief Management Architecture for Agent Control},
  abstract = {We propose an architecture for agent control, where the agent stores its beliefs and environment models as logical sentences. Given successive observations, the agent’s current state (of beliefs) is maintained by a combination of probability, POMDP and belief change theory. Two existing logics are employed for knowledge representation and reasoning: the stochastic decision logic of Rens et al. (2015) and p-logic of Zhuang et al. (2017) (a restricted version of a logic designed by Fagin et al. (1990)). The proposed architecture assumes two streams of observations: active, which correspond to agent intentions and passive, which is received without the agent’s direct involvement. Stochastic uncertainty, and ignorance due to lack of information are both dealt with in the architecture. Planning, and learning of environment models are assumed present but are not covered in this proposal.},
  year = {2017},
  url = {http://pubs.cs.uct.ac.za/archive/00001201/01/AGA_2017_Rens_et_al.pdf},
}

2016

Rens G, Meyer T, Casini G. Revising Incompletely Specified Convex Probabilistic Belief Bases. 2016.

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but the approach still has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.
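
The revision operation named here, Lewis-style imaging, can be pictured on a finite distribution over possible worlds: the probability mass of every world that falsifies the new information is shifted to its closest world satisfying it. The Python sketch below shows only this shifting step; how closeness is defined, and how the boundary distributions of a belief base are obtained, are left abstract, and the helper names satisfies and closest are hypothetical.

# Illustrative sketch of imaging on a finite distribution over possible worlds.
def image(distribution, satisfies, closest):
    """distribution: {world: prob}; satisfies(w): w models the new information;
    closest(w): the nearest world that models it. Returns the imaged distribution."""
    revised = {}
    for w, p in distribution.items():
        target = w if satisfies(w) else closest(w)
        # each world's mass moves to its closest world satisfying the new information
        revised[target] = revised.get(target, 0.0) + p
    return revised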

@misc{131,
  author = {Gavin Rens and Thomas Meyer and Giovanni Casini},
  title = {Revising Incompletely Specified Convex Probabilistic Belief Bases},
  abstract = {We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but the approach still has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.},
  year = {2016},
  issn = {0933-6192},
}

2015

Britz K, Casini G, Meyer T, Moodley K, Sattler U, Varzinczak I. Rational Defeasible Reasoning for Expressive Description Logics. 2015.

In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. These representation results pave the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real-world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.
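
The reduction to classical entailment rests on the exceptionality ranking that underlies rational closure. The Python sketch below illustrates that ranking in the propositional case only; entails, implies and neg are assumed callables standing in for a classical reasoner and formula constructors, and the sketch is not the paper's ALC algorithm.

# Rank defeasible statements "A typically implies B", given as pairs (A, B), by repeated
# exceptionality checks: a pair is exceptional w.r.t. a set of pairs if the classical
# materialisation of that set (the implications A -> B) entails that its antecedent A cannot hold.
def rank_defeasible(defeasible, entails, implies, neg):
    ranks, remaining = [], list(defeasible)
    while remaining:
        materialised = [implies(a, b) for a, b in remaining]
        exceptional = [(a, b) for a, b in remaining if entails(materialised, neg(a))]
        current = [pair for pair in remaining if pair not in exceptional]
        if not current:            # nothing can be separated further; give the rest top rank
            ranks.append(remaining)
            break
        ranks.append(current)      # rank i: the statements not exceptional at this stage
        remaining = exceptional
    return ranks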

@misc{130,
  author = {Katarina Britz and Giovanni Casini and Thomas Meyer and Kody Moodley and U. Sattler and Ivan Varzinczak},
  title = {Rational Defeasible Reasoning for Expressive Description Logics},
  abstract = {In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. These representation results pave the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real-world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.},
  year = {2015},
}
Casini G, Straccia U, Meyer T. A Polynomial Time Subsumption Algorithm for EL⊥ under Rational Closure. 2015.

No Abstract

@misc{117,
  author = {Giovanni Casini and Umberto Straccia and Thomas Meyer},
  title = {A Polynomial Time Subsumption Algorithm for EL⊥ under Rational Closure},
  abstract = {No Abstract},
  year = {2015},
}
Casini G, Meyer T, Moodley K, Varzinczak I, Sattler U. Introducing Defeasibility into OWL Ontologies. The International Semantic Web Conference. 2015.

In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to the absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottlenecks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottlenecks in our evaluation.

@inproceedings{113,
  author = {Giovanni Casini and Thomas Meyer and Kody Moodley and Ivan Varzinczak and U. Sattler},
  title = {Introducing Defeasibility into OWL Ontologies},
  abstract = {In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to the absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottlenecks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottlenecks in our evaluation.},
  year = {2015},
  journal = {The International Semantic Web Conference},
  month = {11/10-15/10},
}

2014

Ongoma N, Keet M, Meyer T. Transition Constraints for Temporal Attributes. 2014. http://ceur-ws.org/Vol-1193/paper_25.pdf.

Representing temporal data in conceptual data models and ontologies is required by various application domains. For it to be useful for modellers to represent the information precisely and reason over it, it is essential to have a language that is expressive enough to capture the required operational semantics of the time-varying information. Temporal modelling languages have little support for temporal attributes, if at all, yet attributes are a standard element in widely used conceptual modelling languages such as EER and UML. This hiatus prevents one from utilising a complete temporal conceptual data model and keeping track of evolving data values and their interaction with temporal classes. A rich axiomatisation of fully temporised attributes is possible with a minor extension to the already very expressive description logic language DLRUS. We formalise the notion of transition of attributes and their interaction with the transition of classes. The transitions specified for attributes are extension, evolution, and arbitrary quantitative extension.

@misc{91,
  author = {Nasubo Ongoma and Maria Keet and Thomas Meyer},
  title = {Transition Constraints for Temporal Attributes},
  abstract = {Representing temporal data in conceptual data models and ontologies is required by various application domains. For it to be useful for modellers to represent the information precisely and reason over it, it is essential to have a language that is expressive enough to capture the required operational semantics of the time-varying information. Temporal modelling languages have little support for temporal attributes, if at all, yet attributes are a standard element in widely used conceptual modelling languages such as EER and UML. This hiatus prevents one from utilising a complete temporal conceptual data model and keeping track of evolving data values and their interaction with temporal classes. A rich axiomatisation of fully temporised attributes is possible with a minor extension to the already very expressive description logic language DLRUS. We formalise the notion of transition of attributes and their interaction with the transition of classes. The transitions specified for attributes are extension, evolution, and arbitrary quantitative extension.},
  year = {2014},
  url = {http://ceur-ws.org/Vol-1193/paper_25.pdf},
}

2013

Casini G, Meyer T, Moodley K, Varzinczak I. Nonmonotonic reasoning in Description Logics: Rational Closure for the ABox. 2013.

The introduction of defeasible reasoning in Description Logics has been a main research topic in the field in recent years. Although various interesting formalizations of nonmonotonic reasoning for the TBox have been proposed, applying this kind of reasoning to ABoxes is more problematic. In what follows we present an adaptation to the ABox of a classical nonmonotonic form of reasoning, Lehmann and Magidor’s Rational Closure. We give both a procedural and a semantic characterization, and we conclude the paper with a comparison between our proposal and other analogous ones, suggesting some possible heuristics useful for the ABox querying procedure.
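
Procedurally, the ABox side can be pictured as assigning to each individual the most normal (lowest) level of the rational-closure ranking whose defeasible information remains classically consistent with what is asserted about that individual. The Python sketch below is a simplified propositional rendering of that idea under assumed helpers (consistent, implies); it is an illustration, not the paper's exact procedure.

# Given defeasible statements already partitioned into ranks (most normal first) and the
# classical assertions about one individual, find the lowest level from which all remaining
# defeasible statements can be applied without contradicting those assertions.
def lowest_consistent_level(ranks, assertions, consistent, implies):
    for level in range(len(ranks) + 1):
        kept = [implies(a, b) for layer in ranks[level:] for (a, b) in layer]
        if consistent(kept + assertions):
            return level           # defeasible statements of rank >= level apply to the individual
    return None                    # the assertions are classically inconsistent on their own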

@misc{114,
  author = {Giovanni Casini and Thomas Meyer and Kody Moodley and Ivan Varzinczak},
  title = {Nonmonotonic reasoning in Description Logics: Rational Closure for the ABox},
  abstract = {The introduction of defeasible reasoning in Description Logics has been a main research topic in the field in recent years. Although various interesting formalizations of nonmonotonic reasoning for the TBox have been proposed, applying this kind of reasoning to ABoxes is more problematic. In what follows we present an adaptation to the ABox of a classical nonmonotonic form of reasoning, Lehmann and Magidor’s Rational Closure. We give both a procedural and a semantic characterization, and we conclude the paper with a comparison between our proposal and other analogous ones, suggesting some possible heuristics useful for the ABox querying procedure.},
  year = {2013},
}