Research Publications

2015

Crichton R, Pillay A, Moodley D. The Open Health Information Mediator: an Architecture for Enabling Interoperability in Low to Middle Income Countries. 2015;MSc.

Interoperability and system integration are central problems that limit the effective use of health information systems to improve the efficiency and effectiveness of health service delivery. There is currently no proven technology that provides a general solution in low and middle income countries, where the challenges are especially acute. Engineering health information systems in low resource environments presents several challenges, including poor infrastructure, skills shortages, fragmented and piecemeal applications deployed and managed by multiple organisations, and low levels of resourcing. An important element of modern solutions to these problems is a health information exchange that enables disparate systems to share health information. It is a challenging task to develop systems as complex as health information exchanges that will have wide applicability in low and middle income countries. This work takes a case study approach and uses the development of a health information exchange in Rwanda as the case study. This research reports on the design, implementation and analysis of an architecture, the Health Information Mediator, that is a central component of a health information exchange. While such architectures have been used successfully in high income countries, their efficacy has not been demonstrated in low and middle income countries. The Rwandan case study was used to understand and identify the challenges and requirements for health information exchange in low and middle income countries. These requirements were used to derive a set of key concerns for the architecture that were then used to drive its design. Novel features of the architecture include: the ability to mediate messages at both the service provider and service consumer interfaces; support for multiple internal representations of messages to facilitate the adoption of new and evolving standards; and the provision of a general method for mediating health information exchange transactions that is agnostic of the type of transaction. The architecture is shown to satisfy the key concerns and was validated by implementing and deploying a reference application, the OpenHIM, within the Rwandan health information exchange. The architecture is also analysed using the Architecture Trade-off Analysis Method. It has also been successfully implemented in other low and middle income countries with relatively minor configuration changes, which demonstrates the architecture's generalizability.

@phdthesis{110,
  author = {Ryan Crichton and Anban Pillay and Deshen Moodley},
  title = {The Open Health Information Mediator: an Architecture for Enabling Interoperability in Low to Middle Income Countries},
  abstract = {Interoperability and system integration are central problems that limit the effective use of health information systems to improve the efficiency and effectiveness of health service delivery. There is currently no proven technology that provides a general solution in low and middle income countries, where the challenges are especially acute. Engineering health information systems in low resource environments presents several challenges, including poor infrastructure, skills shortages, fragmented and piecemeal applications deployed and managed by multiple organisations, and low levels of resourcing. An important element of modern solutions to these problems is a health information exchange that enables disparate systems to share health information.
It is a challenging task to develop systems as complex as health information exchanges that will have wide applicability in low and middle income countries. This work takes a case study approach and uses the development of a health information exchange in Rwanda as the case study. This research reports on the design, implementation and analysis of an architecture, the Health Information Mediator, that is a central component of a health information exchange. While such architectures have been used successfully in high income countries, their efficacy has not been demonstrated in low and middle income countries. The Rwandan case study was used to understand and identify the challenges and requirements for health information exchange in low and middle income countries. These requirements were used to derive a set of key concerns for the architecture that were then used to drive its design. Novel features of the architecture include: the ability to mediate messages at both the service provider and service consumer interfaces; support for multiple internal representations of messages to facilitate the adoption of new and evolving standards; and the provision of a general method for mediating health information exchange transactions that is agnostic of the type of transaction.
The architecture is shown to satisfy the key concerns and was validated by implementing and deploying a reference application, the OpenHIM, within the Rwandan health information exchange. The architecture is also analysed using the Architecture Trade-off Analysis Method. It has also been successfully implemented in other low and middle income countries with relatively minor configuration changes, which demonstrates the architecture's generalizability.},
  year = {2015},
  volume = {MSc},
}
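The mediation pattern this abstract describes can be pictured with a minimal sketch. The class and adapter names below are hypothetical and chosen for illustration only; the actual OpenHIM is a substantially richer system.

# Toy interface-level mediator (hypothetical API, not OpenHIM's): a message is
# transformed at the service-consumer interface into a canonical form, then
# transformed again at the service-provider interface, so each side keeps its
# own message representation.

class Mediator:
    def __init__(self):
        self.routes = {}             # transaction type -> provider callable
        self.inbound_adapters = {}   # consumer format -> canonical form
        self.outbound_adapters = {}  # canonical form -> provider format

    def register(self, tx_type, provider, inbound, outbound):
        self.routes[tx_type] = provider
        self.inbound_adapters[tx_type] = inbound
        self.outbound_adapters[tx_type] = outbound

    def handle(self, tx_type, message):
        canonical = self.inbound_adapters[tx_type](message)       # consumer-side mediation
        provider_msg = self.outbound_adapters[tx_type](canonical)  # provider-side mediation
        return self.routes[tx_type](provider_msg)

mediator = Mediator()
mediator.register(
    "save-encounter",
    provider=lambda m: {"status": "stored", "record": m},
    inbound=lambda m: {"patient_id": m["pid"], "data": m["body"]},
    outbound=lambda m: {"id": m["patient_id"], "payload": m["data"]},
)
print(mediator.handle("save-encounter", {"pid": "123", "body": "visit notes"}))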
Ruttkamp-Bloem E, Casini G, Meyer T. A Non-Classical Logical Foundation for Naturalised Realism. In: Logica Yearbook 2014. Unknown; 2015.

In this paper, by suggesting a formal representation of science based on recent advances in logic-based Artificial Intelligence (AI), we show how three serious concerns around the realisation of traditional scientific realism (the theory/observation distinction, over-determination of theories by data, and theory revision) can be overcome such that traditional realism is given a new guise as ‘naturalised’. We contend that such issues can be dealt with (in the context of scientific realism) by developing a formal representation of science based on the application of the following tools from Knowledge Representation: the family of Description Logics, an enrichment of classical logics via defeasible statements, and an application of the preferential interpretation of the approach to Belief Revision.

@inbook{109,
  author = {Emma Ruttkamp-Bloem and Giovanni Casini and Thomas Meyer},
  title = {A Non-Classical Logical Foundation for Naturalised Realism},
  abstract = {In this paper, by suggesting a formal representation of science based on recent advances in logic-based Artificial Intelligence (AI), we show how three serious concerns around the realisation of traditional scientific realism (the theory/observation distinction, over-determination of theories by data, and theory revision) can be overcome such that traditional realism is given a new guise as ‘naturalised’. We contend that such issues can be dealt with (in the context of scientific realism) by developing a formal representation of science based on the application of the following tools from Knowledge Representation: the family of Description Logics, an enrichment of classical logics via defeasible statements, and an application of the preferential interpretation of the approach to Belief Revision.},
  year = {2015},
  journal = {Logica Yearbook 2014},
  publisher = {Unknown},
}
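A small LaTeX fragment illustrating the kind of defeasible statement the paper leans on; the birds example and the \defsub macro are ours, not the paper's.

\documentclass{article}
\usepackage{amsmath,amssymb}
% \defsub renders defeasible subsumption ("is typically a"); our macro, not the paper's.
\newcommand{\defsub}{\mathrel{\stackrel{\sim}{\sqsubseteq}}}
\begin{document}
The classical axioms
\[ \mathsf{Bird} \sqsubseteq \mathsf{Flier}, \qquad
   \mathsf{Penguin} \sqsubseteq \mathsf{Bird}, \qquad
   \mathsf{Penguin} \sqsubseteq \neg\mathsf{Flier} \]
make $\mathsf{Penguin}$ unsatisfiable, so the theory must be abandoned or
patched when an exception is observed. Weakening the first axiom to the
defeasible statement
\[ \mathsf{Bird} \defsub \mathsf{Flier} \]
(``birds typically fly'') keeps the theory consistent, and a preferential
semantics plus belief revision governs how it adapts to new evidence.
\end{document}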
Booth R, Casini G, Meyer T, Varzinczak I. What Does Entailment for PTL Mean? Commonsense 2015. 2015.

We continue recent investigations into the problem of reasoning about typicality. We do so in the framework of Propositional Typicality Logic (PTL), which is obtained by enriching classical propositional logic with a typicality operator and characterized by a preferential semantics à la KLM. In this paper we study different notions of entailment for PTL. We take as a starting point the notion of Rational Closure defined for KLM-style conditionals. We show that the additional expressivity of PTL results in different versions of Rational Closure for PTL — versions that are equivalent with respect to the conditional language originally proposed by KLM.

@proceedings{108,
  author = {Richard Booth and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {What Does Entailment for PTL Mean?},
  abstract = {We continue recent investigations into the problem of reasoning about typicality. We do so in the framework of Propositional Typicality Logic (PTL), which is obtained by enriching classical propositional logic with a typicality operator and characterized by a preferential semantics à la KLM. In this paper we study different notions of entailment for PTL. We take as a starting point the notion of Rational Closure defined for KLM-style conditionals. We show that the additional expressivity of PTL results in different versions of Rational Closure for PTL — versions that are equivalent with respect to the conditional language originally proposed by KLM.},
  year = {2015},
  journal = {Commonsense 2015},
  month = {23/03-25/03},
}
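For readers unfamiliar with PTL, the following LaTeX fragment sketches how the typicality operator subsumes KLM conditionals; the concrete sentences are our own illustration.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
PTL adds a typicality operator $\bullet$ to propositional logic: $\bullet\alpha$
holds exactly in the most typical (preferred) $\alpha$-worlds. A KLM-style
conditional $\alpha \mathrel{\mid\sim} \beta$ is then expressible as the
material implication $\bullet\alpha \rightarrow \beta$, e.g.
\[ \bullet\mathsf{bird} \rightarrow \mathsf{flies}, \qquad
   \bullet(\mathsf{bird} \wedge \mathsf{penguin}) \rightarrow \neg\mathsf{flies}. \]
Because $\bullet$ may be nested and combined freely, PTL is strictly more
expressive than the conditional language, which is why several distinct
liftings of Rational Closure become available.
\end{document}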
Ongoma N, Keet M. Temporal Attributes: Status and Subsumption. Asia-Pacific Conference on Conceptual Modelling. 2015.

Representing data that changes over time in conceptual data models is required by various application domains, and requires a language that is expressive enough to fully capture the operational semantics of the time-varying information. Temporal modelling languages typically focus on representing and reasoning over temporal classes and relationships, but have scant support for temporal attributes, if at all. This prevents one from fully utilising a temporal conceptual data model, which, however, is needed to model not only evolving objects (e.g., an employee’s role), but also their attributes, such as changes in salary and bonus payouts. To characterise temporal attributes precisely, we use the DLRUS Description Logic language to provide their model-theoretic semantics, therewith essentially completing the temporal ER language ERVT. The new notion of a status attribute is introduced to capture the possible changes, and the logical implications it entails are examined, including its interaction with temporal classes to ensure correct behaviour in subsumption hierarchies, paving the way to verifying automatically whether a temporal conceptual data model is consistent.

@proceedings{105,
  author = {Nasubo Ongoma and Maria Keet},
  title = {Temporal Attributes: Status and Subsumption},
  abstract = {Representing data that changes over time in conceptual data models is required by various application domains, and requires a language that is expressive enough to fully capture the operational semantics of the time-varying information. Temporal modelling languages typically focus on representing and reasoning over temporal classes and relationships, but have scant support for temporal attributes, if at all. This prevents one from fully utilising a temporal conceptual data model, which, however, is needed to model not only evolving objects (e.g., an employee’s role), but also their attributes, such as changes in salary and bonus payouts. To characterise temporal attributes precisely, we use the DLRUS Description Logic language to provide their model-theoretic semantics, therewith essentially completing the temporal ER language ERVT. The new notion of a status attribute is introduced to capture the possible changes, and the logical implications it entails are examined, including its interaction with temporal classes to ensure correct behaviour in subsumption hierarchies, paving the way to verifying automatically whether a temporal conceptual data model is consistent.},
  year = {2015},
  journal = {Asia-Pacific Conference on Conceptual Modelling},
  pages = {61-70},
  month = {27/01-30/01},
  address = {Sydney, Australia},
  isbn = {978-1-921770-47-0},
}
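As a rough illustration of what a status-attribute axiom might look like, the LaTeX sketch below paraphrases the status-class pattern in DLRUS style; it is our illustration, and the paper's actual axioms differ in detail.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustration only: one plausible status-attribute axiom, not quoted from the paper.
Reading $\Diamond^{+}$ as ``at some future time'', a scheduled salary
attribute is not yet active but will become so:
\[ \mathit{ScheduledSalary} \;\sqsubseteq\;
   \neg\mathit{ActiveSalary} \sqcap \Diamond^{+}\mathit{ActiveSalary}. \]
Implications of this shape, together with their interaction with temporal
classes, are what make automatic consistency checking of the model possible.
\end{document}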
Rens G. Speeding up Online POMDP Planning: Unification of Observation Branches by Belief-state Compression via Expected Feature Values. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2. 2015.

A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.

@proceedings{104,
  author = {Gavin Rens},
  title = {Speeding up Online POMDP Planning: Unification of Observation Branches by Belief-state Compression via Expected Feature Values},
  abstract = {A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2},
  pages = {241-246},
  month = {10/01-12/01},
  isbn = {978-989-758-074-1},
}
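The core of the compression step can be sketched in a few lines of Python; the representation (states as feature tuples, beliefs as dicts) is our reconstruction, not the paper's code.

# States are tuples of numeric feature values; a belief state maps states to
# probabilities. Instead of keeping one belief per observation branch, the
# branches are pooled and summarised by expected feature values.

def expected_feature_values(belief):
    n = len(next(iter(belief)))
    expected = [0.0] * n
    for state, prob in belief.items():
        for i, value in enumerate(state):
            expected[i] += prob * value
    return tuple(expected)

# Two observation branches after the same action ...
belief_obs1 = {(1, 0): 0.7, (0, 1): 0.3}
belief_obs2 = {(1, 0): 0.2, (0, 1): 0.8}
p_obs1, p_obs2 = 0.6, 0.4    # probabilities of receiving each observation

# ... are unified into a single belief, weighting each branch by the
# likelihood of its observation, so the tree branches only on actions.
merged = {}
for branch, p_obs in ((belief_obs1, p_obs1), (belief_obs2, p_obs2)):
    for state, prob in branch.items():
        merged[state] = merged.get(state, 0.0) + p_obs * prob

print(expected_feature_values(merged))   # summary of the unified node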
Rens G, Meyer T. Hybrid POMDP-BDI: An Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 1. 2015.

Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. We propose an agent architecture which combines these two powerful approaches to capitalize on their strengths. Our architecture introduces the notion of intensity of the desire for a goal’s achievement. We also define an update rule for goals’ desire levels. When to select a new goal to focus on is also defined. To verify that the proposed architecture works, experiments were run with an agent based on the architecture, in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.

@proceedings{103,
  author = {Gavin Rens and Thomas Meyer},
  title = {Hybrid POMDP-BDI: An Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels},
  abstract = {Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. We propose an agent architecture which combines these two powerful approaches to capitalize on their strengths. Our architecture introduces the notion of intensity of the desire for a goal’s achievement. We also define an update rule for goals’ desire levels. When to select a new goal to focus on is also defined. To verify that the proposed architecture works, experiments were run with an agent based on the architecture, in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 1},
  pages = {5-14},
  month = {10/01-12/01},
  isbn = {978-989-758-073-4},
}
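A sketch of how desire intensities might be maintained; the growth constant and reset rule here are our assumptions, and the paper defines its own update rule.

# Hypothetical intensity dynamics: an unachieved goal's intensity grows each
# deliberation step; achieving it resets the intensity, and the agent focuses
# on the currently most intense goal.

def update_intensities(intensities, achieved, growth=0.1):
    for goal in intensities:
        if goal in achieved:
            intensities[goal] = 0.0
        else:
            intensities[goal] = min(1.0, intensities[goal] + growth)
    return intensities

def select_focus(intensities):
    return max(intensities, key=intensities.get)

intensities = {"maintain_power": 0.2, "collect_samples": 0.5}
for step in range(3):
    achieved = {"collect_samples"} if step == 1 else set()
    intensities = update_intensities(intensities, achieved)
    print(step, select_focus(intensities), intensities)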
Rens G, Meyer T, Lakemeyer G. A Modal Logic for the Decision-Theoretic Projection Problem. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2. 2015.

We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete with no restriction on the lack of information specified for the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.

@proceedings{102,
  author = {Gavin Rens and Thomas Meyer and Gerhard Lakemeyer},
  title = {A Modal Logic for the Decision-Theoretic Projection Problem},
  abstract = {We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete with no restriction on the lack of information specified for the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2},
  pages = {5-16},
  month = {10/01-12/01},
  isbn = {978-989-758-074-1},
}
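The two query families can be pictured as follows; this is illustrative notation, not the paper's exact syntax.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative query shapes (our notation, not the paper's syntax).
(i) Does the degree of belief in $\varphi$, after performing action $a$ and
then perceiving observation $o$, reach a threshold?
\[ B_{\geq 0.9}\,[a;o]\,\varphi \]
(ii) Does the expected utility of the action sequence $a_1; a_2$ from the
current belief exceed $r$?
\[ U(a_1; a_2) \geq r \]
Entailment asks whether every POMDP model consistent with a possibly partial
specification satisfies the query.
\end{document}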
Wissing D, Pienaar W, van Niekerk D. Palatalisation of /s/ in Afrikaans. Stellenbosch Papers in Linguistics. 2015;48(3). http://spilplus.journals.ac.za/pub/article/view/688/631.

No Abstract

@article{134,
  author = {D. Wissing and W. Pienaar and D. van Niekerk},
  title = {Palatalisation of /s/ in Afrikaans},
  abstract = {No Abstract},
  year = {2015},
  journal = {Stellenbosch Papers in Linguistics},
  volume = {48},
  pages = {137-158},
  issue = {3},
  publisher = {University of Stellenbosch},
  address = {South Africa},
  isbn = {2224-3380},
  url = {http://spilplus.journals.ac.za/pub/article/view/688/631},
}

2014

Klarman S, Meyer T. Complexity of Temporal Query Abduction in DL-Lite. 27th International Workshop on Description Logics (DL 2014). 2014. http://ceur-ws.org/Vol-1193/paper_45.pdf.

Temporal query abduction is the problem of hypothesizing a minimal set of temporal data which, given some fixed background knowledge, warrants the entailment of the query. This problem formally underlies a variety of forms of explanatory and diagnostic reasoning in the context of time series data, data streams, or otherwise temporally annotated structured information. In this paper, we consider (temporally ordered) data represented in Description Logics from the popular DL-Lite family and a temporal query language based on the combination of LTL with conjunctive queries. In this defined setting, we study the complexity of temporal query abduction, assuming different restrictions on the problem and minimality criteria for abductive solutions. As a result, we draw several revealing demarcation lines between NP-, DP- and PSpace-complete variants of the problem.

@misc{365,
  author = {Szymon Klarman and Thomas Meyer},
  title = {Complexity of Temporal Query Abduction in DL-Lite},
  abstract = {Temporal query abduction is the problem of hypothesizing a minimal set of temporal data which, given some fixed background knowledge, warrants the entailment of the query. This problem formally underlies a variety of forms of explanatory and diagnostic reasoning in the context of time series data, data streams, or otherwise temporally annotated structured information. In this paper, we consider (temporally ordered) data represented in Description Logics from the popular DL-Lite family and a temporal query language based on the combination of LTL with conjunctive queries. In this defined setting, we study the complexity of temporal query abduction, assuming different restrictions on the problem and minimality criteria for abductive solutions. As a result, we draw several revealing demarcation lines between NP-, DP- and PSpace-complete variants of the problem.},
  year = {2014},
  journal = {27th International Workshop on Description Logics (DL 2014)},
  month = {17/07 - 20/07},
  url = {http://ceur-ws.org/Vol-1193/paper_45.pdf},
}
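The shape of the problem, in the standard logic-based formulation of abduction the paper builds on:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
Given background knowledge $\mathcal{K}$ and a temporal query $q$, an
abductive solution is a set of temporal assertions $\mathcal{A}$ such that
\[ \mathcal{K} \cup \mathcal{A} \models q
   \qquad\text{and}\qquad
   \mathcal{K} \cup \mathcal{A} \not\models \bot, \]
with $\mathcal{A}$ minimal under a chosen criterion (e.g.\ subset or
cardinality minimality). The complexity results above track how the choice of
minimality criterion and DL-Lite dialect moves the problem between the NP-,
DP- and PSpace-complete variants.
\end{document}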
Ogundele O, Moodley D, Seebregts C, Pillay A. Building Semantic Causal Models to Predict Treatment Adherence for Tuberculosis Patients in Sub-Saharan Africa. 4th International Symposium (FHIES 2014) and 6th International Workshop (SEHC 2014). 2014.

Poor adherence to prescribed treatment is a major factor contributing to tuberculosis patients developing drug resistance and failing treatment. Treatment adherence behaviour is influenced by diverse personal, cultural and socio-economic factors that vary between regions and communities. Decision network models can potentially be used to predict treatment adherence behaviour. However, determining the network structure (identifying the factors and their causal relations) and the conditional probabilities is a challenging task. To resolve the former, we developed an ontology, supported by current scientific literature, to categorise and clarify the similarity and granularity of factors.

@proceedings{158,
  author = {Olukunle Ogundele and Deshen Moodley and Chris Seebregts and Anban Pillay},
  title = {Building Semantic Causal Models to Predict Treatment Adherence for Tuberculosis Patients in Sub-Saharan Africa},
  abstract = {Poor adherence to prescribed treatment is a major factor contributing to tuberculosis patients developing drug resistance and failing treatment. Treatment adherence behaviour is influenced by diverse personal, cultural and socio-economic factors that vary between regions and communities. Decision network models can potentially be used to predict treatment adherence behaviour. However, determining the network structure (identifying the factors and their causal relations) and the conditional probabilities is a challenging task. To resolve the former, we developed an ontology, supported by current scientific literature, to categorise and clarify the similarity and granularity of factors.},
  year = {2014},
  journal = {4th International Symposium (FHIES 2014) and 6th International Workshop (SEHC 2014)},
  pages = {81-95},
  month = {17/07-18/07},
}
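A toy fragment of the kind of network such an ontology would parameterise; the factors and probabilities below are invented for illustration.

# Hypothetical conditional probability table for one node of an adherence
# network: P(adheres | social_support, distance_to_clinic). The real model's
# structure comes from the literature-derived ontology.
cpt = {
    (True,  "near"): 0.90,
    (True,  "far"):  0.70,
    (False, "near"): 0.60,
    (False, "far"):  0.35,
}

def p_adheres(social_support, distance):
    return cpt[(social_support, distance)]

# Marginalise over an uncertain factor: suppose 40% of patients live far
# from a clinic.
p_far = 0.4
for support in (True, False):
    p = (1 - p_far) * p_adheres(support, "near") + p_far * p_adheres(support, "far")
    print(f"social support={support}: P(adheres) = {p:.2f}")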
Ongoma N. Formalising Temporal Attributes in Temporal Conceptual Data Models. 2014;MSc.

This work formalises temporal attributes in temporal conceptual data models, specifically the temporal EER model, using a temporal description logic language (DLRus). This ensures the full formalisation of the temporal conceptual model ERvt, which permits full reasoning on the model. These results permit the development of consistent temporal databases.

@phdthesis{112,
  author = {Nasubo Ongoma},
  title = {Formalising Temporal Attributes in Temporal Conceptual Data Models},
  abstract = {This work formalises temporal attributes in temporal conceptual data models, specifically the temporal EER model, using a temporal description logic language (DLRus). This ensures the full formalisation of the temporal conceptual model ERvt, which permits full reasoning on the model. These results permit the development of consistent temporal databases.},
  year = {2014},
  volume = {MSc},
}
Casini G, Meyer T, Moodley K, Nortjé R. Relevant Closure: A New Form of Defeasible Reasoning for Description Logics. JELIA 2014. 2014.

Among the various proposals for defeasible reasoning for description logics, Rational Closure, a procedure originally defined for propositional logic, turns out to have a number of desirable properties. Not only is it computationally feasible, but it can also be implemented using existing classical reasoners. One of its drawbacks is that it can be seen as too weak from the inferential point of view. To overcome this limitation we introduce in this paper two extensions of Rational Closure: Basic Relevant Closure and Minimal Relevant Closure. As the names suggest, both rely on defining a version of relevance. Our formalisation of relevance in this context is based on the notion of a justification (a minimal subset of sentences implying a given sentence). This is, to our knowledge, the first proposal for defining defeasibility in terms of justifications—a notion that is well-established in the area of ontology debugging. Both Basic and Minimal Relevant Closure increase the inferential power of Rational Closure, giving back intuitive conclusions that cannot be obtained from Rational Closure. We analyse the properties of and present algorithms for both Basic and Minimal Relevant Closure, and provide experimental results for both, comparing them with Rational Closure.

@proceedings{107,
  author = {Giovanni Casini and Thomas Meyer and Kody Moodley and Riku Nortjé},
  title = {Relevant Closure: A New Form of Defeasible Reasoning for Description Logics},
  abstract = {Among the various proposals for defeasible reasoning for description logics, Rational Closure, a procedure originally defined for propositional logic, turns out to have a number of desirable properties. Not only is it computationally feasible, but it can also be implemented using existing classical reasoners. One of its drawbacks is that it can be seen as too weak from the inferential point of view. To overcome this limitation we introduce in this paper two extensions of Rational Closure: Basic Relevant Closure and Minimal Relevant Closure. As the names suggest, both rely on defining a version of relevance. Our formalisation of relevance in this context is based on the notion of a justification (a minimal subset of sentences implying a given sentence). This is, to our knowledge, the first proposal for defining defeasibility in terms of justifications—a notion that is well-established in the area of ontology debugging. Both Basic and Minimal Relevant Closure increase the inferential power of Rational Closure, giving back intuitive conclusions that cannot be obtained from Rational Closure. We analyse the properties of and present algorithms for both Basic and Minimal Relevant Closure, and provide experimental results for both, comparing them with Rational Closure.},
  year = {2014},
  journal = {JELIA 2014},
  pages = {92-106},
  month = {24/09-26/09},
  isbn = {978-3-319-11557-3},
}
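The notion of a justification that both closures build on is easy to state operationally. The sketch below computes justifications by brute force over a toy propositional knowledge base; the entailment test here is a stand-in for the classical DL reasoner used in the paper.

from itertools import combinations

def justifications(kb, goal, entails):
    """All minimal subsets J of kb such that entails(J, goal)."""
    found = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            candidate = set(subset)
            if any(j <= candidate for j in found):
                continue                      # a proper subset already suffices
            if entails(candidate, goal):
                found.append(candidate)
    return found

def entails(subset, goal):
    """Toy forward chaining: facts are strings, rules are (premise, conclusion)."""
    derived = {f for f in subset if isinstance(f, str)}
    rules = [f for f in subset if isinstance(f, tuple)]
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

kb = ["bird", ("bird", "flies"), ("bird", "has_wings"), ("penguin", "bird")]
print(justifications(kb, "flies", entails))
# -> [{'bird', ('bird', 'flies')}]  (element order may vary)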
Harmse H, Britz K, Gerber A, Moodley D. Scenario Testing using Formal Ontologies. 2014. http://ceur-ws.org/Vol-1301/ontocomodise2014_10.pdf.

One of the challenges in the Software Development Life Cycle (SDLC) is to ensure that the requirements that drive the development of a software system are correct. However, establishing unambiguous and error-free requirements is not a trivial problem. As part of the requirements phase of the SDLC, a conceptual model can be created which describes the objects, relationships and operations that are of importance to business. Such a conceptual model is often expressed as a UML class diagram. Recent research concerned with the formal validation of such UML class diagrams has focused on transforming UML class diagrams to various formalisms such as description logics. Description logics are desirable since they have reasoning support which can be used to show that a UML class diagram is consistent/inconsistent. Yet, even when a UML class diagram is consistent, it still does not address the problem of ensuring that a UML class diagram represents business requirements accurately. To validate such diagrams business analysts use a technique called scenario testing. In this paper we present an approach for the formal validation of UML class diagrams based on scenario testing. We additionally provide preliminary feedback on the experiences gained from using our scenario testing approach on a real-world software project.

@misc{106,
  author = {Henriette Harmse and Katarina Britz and Aurona Gerber and Deshen Moodley},
  title = {Scenario Testing using Formal Ontologies},
  abstract = {One of the challenges in the Software Development Life Cycle (SDLC) is to ensure that the requirements that drive the development of a software system are correct. However, establishing unambiguous and error-free requirements is not a trivial problem. As part of the requirements phase of the SDLC, a conceptual model can be created which describes the objects, relationships and operations that are of importance to business. Such a conceptual model is often expressed as a UML class diagram. Recent research concerned with the formal validation of such UML class diagrams has focused on transforming UML class diagrams to various formalisms such as description logics. Description logics are desirable since they have reasoning support which can be used to show that a UML class diagram is consistent/inconsistent. Yet, even when a UML class diagram is consistent, it still does not address the problem of ensuring that a UML class diagram represents business requirements accurately. To validate such diagrams business analysts use a technique called scenario testing. In this paper we present an approach for the formal validation of UML class diagrams based on scenario testing. We additionally provide preliminary feedback on the experiences gained from using our scenario testing approach on a real-world software project.},
  year = {2014},
  isbn = {urn:nbn:de:0074-1301-3},
  url = {http://ceur-ws.org/Vol-1301/ontocomodise2014_10.pdf},
}
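A flavour of the UML-to-description-logic translation such approaches rely on, using the standard encoding; the paper's exact mapping may differ in detail.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A class \textsf{Order} with a mandatory association \textsf{placedBy} to
exactly one \textsf{Customer} becomes
\[ \mathsf{Order} \sqsubseteq \exists\mathsf{placedBy}.\mathsf{Customer}
   \sqcap\; {\leq}1\,\mathsf{placedBy}, \]
and a scenario is a set of assertions, e.g.\ $\mathsf{Order}(o_1)$,
$\mathsf{placedBy}(o_1, c_1)$, $\mathsf{Customer}(c_1)$, whose consistency
against the translated diagram can be checked by a standard DL reasoner.
\end{document}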
Gerber A, Eardley C, Morar N. An Ontology-based Key for Afrotropical Bees. In: Frontiers In Artificial Intelligence And Applications. Rio de Janeiro, Brazil: IOS Press; 2014. http://ebooks.iospress.nl/volume/formal-ontology-in-information-systems-proceedings-of-the-eighth-international-conference-fois-2014.

The goal of this paper is to report on the development of an ontology-based taxonomic key application that is a first deliverable of a larger project whose goal is the development of ontology-driven computing solutions for problems experienced in taxonomy. The ontology-based taxonomic key was developed from a complex taxonomic data set, namely the Catalogue of Afrotropical Bees. The key is used to identify the genera of African bees, and for this paper we developed an ontology-based application that demonstrates that morphological key data can be captured effectively in a standardised format as an ontology. Furthermore, even though the ontology-based key provides the same identification results as the traditional key, this approach allows for several additional advantages that could support taxonomy in the biological sciences. The morphology ontology for Afrotropical bees, as well as the key application, form the basis of a suite of tools that we intend to develop to support the taxonomic processes in this domain.

@inbook{101,
  author = {Aurona Gerber and C. Eardley and Nishal Morar},
  title = {An Ontology-based Key for Afrotropical Bees},
  abstract = {The goal of this paper is to report on the development of an ontology-based taxonomic key application that is a first deliverable of a larger project whose goal is the development of ontology-driven computing solutions for problems experienced in taxonomy. The ontology-based taxonomic key was developed from a complex taxonomic data set, namely the Catalogue of Afrotropical Bees. The key is used to identify the genera of African bees, and for this paper we developed an ontology-based application that demonstrates that morphological key data can be captured effectively in a standardised format as an ontology. Furthermore, even though the ontology-based key provides the same identification results as the traditional key, this approach allows for several additional advantages that could support taxonomy in the biological sciences. The morphology ontology for Afrotropical bees, as well as the key application, form the basis of a suite of tools that we intend to develop to support the taxonomic processes in this domain.},
  year = {2014},
  journal = {Frontiers in Artificial Intelligence and Applications},
  pages = {277-288},
  publisher = {IOS Press},
  address = {Rio de Janeiro, Brazil},
  isbn = {978-1-61499-437-4 (print) | 978-1-61499-438-1 (online)},
  url = {http://ebooks.iospress.nl/volume/formal-ontology-in-information-systems-proceedings-of-the-eighth-international-conference-fois-2014},
}
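At its core a taxonomic key is a decision structure over morphological characters, which is what makes it amenable to an ontology encoding. A toy rendering as data follows; the characters and genus assignments here are placeholders, and the real key is represented as an OWL ontology over the Catalogue of Afrotropical Bees, not a Python dict.

# A dichotomous key as nested data rather than prose. Leaf nodes name a genus;
# internal nodes ask about one morphological character.
key = {
    "question": "Pollen basket (corbicula) on hind leg?",
    "yes": {"genus": "Apis"},
    "no": {
        "question": "Two submarginal cells in the forewing?",
        "yes": {"genus": "Megachile"},
        "no": {"genus": "Xylocopa"},
    },
}

def identify(node, answer):
    """Walk the key; `answer` maps a question to True/False for the specimen."""
    while "genus" not in node:
        node = node["yes"] if answer(node["question"]) else node["no"]
    return node["genus"]

observed = {
    "Pollen basket (corbicula) on hind leg?": False,
    "Two submarginal cells in the forewing?": True,
}
print(identify(key, lambda q: observed[q]))   # -> Megachile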
Rens G. Formalisms for Agents Reasoning with Stochastic Actions and Perceptions. 2014;PhD.

The thesis reports on the development of a sequence of logics (formal languages based on mathematical logic) to deal with a class of uncertainty that agents may encounter. More accurately, the logics are meant to be used for allowing robots or software agents to reason about the uncertainty they have about the effects of their actions and the noisiness of their observations. The approach is to take the well-established formalism called the partially observable Markov decision process (POMDP) as an underlying formalism and then design a modal logic based on POMDP theory to allow an agent to reason with a knowledge-base (including knowledge about the uncertainties). First, three logics are designed, each one adding one or more important features for reasoning in the class of domains of interest (i.e., domains where stochastic action and sensing are considered). The final logic, called the Stochastic Decision Logic (SDL) combines the three logics into a coherent formalism, adding three important notions for reasoning about stochastic decision-theoretic domains: (i) representation of and reasoning about degrees of belief in a statement, given stochastic knowledge, (ii) representation of and reasoning about the expected future rewards of a sequence of actions and (iii) the progression or update of an agent’s epistemic, stochastic knowledge. For all the logics developed in this thesis, entailment is defined, that is, whether a sentence logically follows from a knowledge-base. Decision procedures for determining entailment are developed, and they are all proved sound, complete and terminating. The decision procedures all employ tableau calculi to deal with the traditional logical aspects, and systems of equations and inequalities to deal with the probabilistic aspects. Besides promoting the compact representation of POMDP models, and the power that logic brings to the automation of reasoning, the Stochastic Decision Logic is novel and significant in that it allows the agent to determine whether or not a set of sentences is entailed by an arbitrarily precise specification of a POMDP model, where this is not possible with standard POMDPs. The research conducted for this thesis has resulted in several publications and has been presented at several workshops, symposia and conferences.

@phdthesis{100,
  author = {Gavin Rens},
  title = {Formalisms for Agents Reasoning with Stochastic Actions and Perceptions},
  abstract = {The thesis reports on the development of a sequence of logics (formal languages based on mathematical logic) to deal with a class of uncertainty that agents may encounter. More accurately, the logics are meant to be used for allowing robots or software agents to reason about the uncertainty they have about the effects of their actions and the noisiness of their observations. The approach is to take the well-established formalism called the 
partially observable Markov decision process (POMDP) as an underlying formalism and then design a modal logic based on POMDP theory to allow an agent to reason with a knowledge-base (including knowledge about the uncertainties).
First, three logics are designed, each one adding one or more important features for reasoning in the class of domains of interest (i.e., domains where stochastic action and sensing are considered). The final logic, called the Stochastic Decision Logic (SDL) combines the three logics into a coherent formalism, adding three important notions for reasoning about stochastic decision-theoretic domains: (i) representation of and reasoning about degrees of belief in a statement, given stochastic knowledge, (ii) representation of and reasoning about the expected future rewards of a sequence of actions and (iii) the progression or update of an agent’s epistemic, stochastic knowledge.
For all the logics developed in this thesis, entailment is defined, that is, whether a sentence logically follows from a knowledge-base. Decision procedures for determining entailment are developed, and they are all proved sound, complete and terminating. The decision procedures all employ tableau calculi to deal with the traditional logical aspects, and systems of equations and inequalities to deal with the probabilistic aspects.
Besides promoting the compact representation of POMDP models, and the power that logic brings to the automation of reasoning, the Stochastic Decision Logic is novel and significant in that it allows the agent to determine whether or not a set of sentences is entailed by an arbitrarily precise specification of a POMDP model, where this is not possible with standard POMDPs.
The research conducted for this thesis has resulted in several publications and has been presented at several workshops, symposia and conferences.},
  year = {2014},
  volume = {PhD},
}
Lombard N, Gerber A, van der Merwe A. The Construction and Use of an Ontology to Support a Simulation Environment Performing Countermeasure Evaluation for Military Aircraft. 2014;MTech.

No Abstract

@phdthesis{99,
  author = {Nelia Lombard and Aurona Gerber and Alta van der Merwe},
  title = {The Construction and Use of an Ontology to Support a Simulation Environment Performing Countermeasure Evaluation for Military Aircraft},
  abstract = {No Abstract},
  year = {2014},
  volume = {MTech},
}
de Vries M, Gerber A, van der Merwe A. The Nature of the Enterprise Engineering Discipline. In: LNBIP Springer; 2014. http://ciaonetwork.org/events/past-events/4th-enterprise-engineering-working-conference.

No Abstract

@inbook{98,
  author = {Marne de Vries and Aurona Gerber and Alta van der Merwe},
  title = {The Nature of the Enterprise Engineering Discipline},
  abstract = {No Abstract},
  year = {2014},
  publisher = {LNBIP Springer},
  url = {http://ciaonetwork.org/events/past-events/4th-enterprise-engineering-working-conference},
}
Brandt P, Moodley D, Pillay A, Seebregts C, de Oliveira T. An Investigation of Classification Algorithms for Predicting HIV Drug Resistance Without Genotype Resistance Testing. Third International Symposium on Foundations of Health Information Engineering and Systems. 2014. http://link.springer.com/chapter/10.1007/978-3-642-53956-5_16.

The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment of South Africa’s HIV infected population. While genotype resistance testing is the standard method to determine resistance, access to these tests is limited in low-resource settings. In this paper we investigate machine learning techniques for drug resistance prediction from routine treatment and laboratory data to help clinicians select patients for confirmatory genotype testing. The techniques, including binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains were tested on a dataset of 252 medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The PCT method performed best with a discriminant power of 1.56 for two drugs, above 1.0 for three others and a mean true positive rate of 0.68. These methods show potential for application where access to genotyping is limited.

@proceedings{97,
  author = {Pascal Brandt and Deshen Moodley and Anban Pillay and Chris Seebregts and Tulio de Oliveira},
  title = {An Investigation of Classification Algorithms for Predicting HIV Drug Resistance Without Genotype Resistance Testing},
  abstract = {The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment of South Africa’s HIV infected population. While genotype resistance testing is the standard method to determine resistance, access to these tests is limited in low-resource settings. In this paper we investigate machine learning techniques for drug resistance prediction from routine treatment and laboratory data to help clinicians select patients for confirmatory genotype testing. The techniques, including binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains were tested on a dataset of 252 medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The PCT method performed best with a discriminant power of 1.56 for two drugs, above 1.0 for three others and a mean true positive rate of 0.68. These methods show potential for application where access to genotyping is limited.},
  year = {2014},
  journal = {Third International Symposium on Foundations of Health Information Engineering and Systems},
  pages = {236-253},
  month = {21/08-23/08},
  isbn = {978-3-642-53955-8},
  url = {http://link.springer.com/chapter/10.1007/978-3-642-53956-5_16},
}
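Of the techniques compared, binary relevance is the simplest: one independent classifier per drug. Below is a minimal sketch on synthetic data; the study itself used 252 real patient records, and scikit-learn's MultiOutputClassifier is our stand-in for the implementations the authors used.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 10))                       # routine treatment/lab features
# One binary resistance label per drug, loosely tied to the first features.
Y = (X[:, :3] + rng.normal(scale=0.5, size=(252, 3)) > 0).astype(int)

# Binary relevance: MultiOutputClassifier fits one forest per label.
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=50, random_state=0))
clf.fit(X[:200], Y[:200])
print((clf.predict(X[200:]) == Y[200:]).mean(axis=0))  # per-drug accuracy on held-out rows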
Rens G, Meyer T, Lakemeyer G. A Logic for Specifying Stochastic Actions and Observations. Intl. Symposium on Foundations of Information and Knowledge Systems (FoIKS). 2014.

We present a logic inspired by partially observable Markov decision process (POMDP) theory for specifying agent domains where the agent's actuators and sensors are noisy (causing uncertainty). The language features modalities for actions and predicates for observations. It includes a notion of probability to represent the uncertainties, and the expression of rewards and costs are also catered for. One of the main contributions of the paper is the formulation of a sound and complete decision procedure for checking validity of sentences: a tableau method which appeals to solving systems of equations. The tableau rules eliminate propositional connectives, then, for all open branches of the tableau tree, systems of equations are generated and checked for feasibility. This paper presents progress made on previously published work.

@proceedings{95,
  author = {Gavin Rens and Thomas Meyer and Gerhard Lakemeyer},
  title = {A Logic for Specifying Stochastic Actions and Observations},
  abstract = {We present a logic inspired by partially observable Markov decision process (POMDP) theory for specifying agent domains where the agent's actuators and sensors are noisy (causing uncertainty). The language features modalities for actions and predicates for observations. It includes a notion of probability to represent the uncertainties, and the expression of rewards and costs are also catered for. One of the main contributions of the paper is the formulation of a sound and complete decision procedure for checking validity of sentences: a tableau method which appeals to solving systems of equations. The tableau rules eliminate propositional connectives, then, for all open branches of the tableau tree, systems of equations are generated and checked for feasibility. This paper presents progress made on previously published work.},
  year = {2014},
  journal = {Intl. Symposium on Foundations of Information and Knowledge Systems (FoIKS)},
  pages = {305-323},
  month = {03/03-07/03},
  isbn = {978-3-319-04938-0},
}
Price CS, Moodley D, Bezuidenhout CN. Using agent-based simulation to explore sugarcane supply chain transport complexities at a mill scale. 43rd Annual Operations Research Society of South Africa Conference. 2014.

The sugarcane supply chain (from sugarcane grower to mill) has particular challenges. One of these is that the growers have to deliver their cane to the mill before its quality degrades. The sugarcane supply chain typically consists of many growers and a mill. Growers deliver their cane daily during the milling season; the amount of cane they deliver depends on their farm size. Growers make decisions about when to harvest the cane, and the number and type of trucks needed to deliver their cane. The mill wants a consistent cane supply over the milling season. Growers are sometimes affected by long queue lengths at the mill when they offload their cane. A preliminary agent-based simulation model was developed to understand this complex system. The model inputs a number of growers, and the amount of cane they are to deliver over the milling season. The number of trucks needed by each grower is determined by the trip, loading and unloading times and the anticipated waiting time at the mill. The anticipated waiting time was varied to determine how many trucks would be needed in the system to deliver the week’s cane allocation. As the anticipated waiting time increased, the number of trucks needed also increased, which in turn delayed the trucks when queuing at the mill. The growers’ anticipated waiting times never matched the actual waiting times. The research shows the promise of agent-based models as a sense-making approach to understanding systems where there are many individuals who have autonomous behaviour, and whose actions and interactions can result in unexpected system-level behaviour.

@proceedings{94,
  author = {C. Sue Price and Deshen Moodley and C.N. Bezuidenhout},
  title = {Using agent-based simulation to explore sugarcane supply chain transport complexities at a mill scale},
  abstract = {The sugarcane supply chain (from sugarcane grower to mill) has particular challenges. One of these is that the growers have to deliver their cane to the mill before its quality degrades. The sugarcane supply chain typically consists of many growers and a mill. Growers deliver their cane daily during the milling season; the amount of cane they deliver depends on their farm size. Growers make decisions about when to harvest the cane, and the number and type of trucks needed to deliver their cane. The mill wants a consistent cane supply over the milling season. Growers are sometimes affected by long queue lengths at the mill when they offload their cane.
A preliminary agent-based simulation model was developed to understand this complex system. The model inputs a number of growers, and the amount of cane they are to deliver over the milling season. The number of trucks needed by each grower is determined by the trip, loading and unloading times and the anticipated waiting time at the mill. The anticipated waiting time was varied to determine how many trucks would be needed in the system to deliver the week’s cane allocation. As the anticipated waiting time increased, the number of trucks needed also increased, which in turn delayed the trucks when queuing at the mill. The growers’ anticipated waiting times never matched the actual waiting times. The research shows the promise of agent-based models as a sense-making approach to understanding systems where there are many individuals who have autonomous behaviour, and whose actions and interactions can result in unexpected system-level behaviour.},
  year = {2014},
  journal = {43rd Annual Operations Research Society of South Africa Conference},
  pages = {88-96},
  month = {14/09-17/09},
  isbn = {978-1-86822-656-6},
}
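The feedback loop the abstract describes (anticipated waiting time determines fleet size, which in turn determines actual waiting time) can be reproduced in a few lines; the parameters below are invented for illustration, and the paper's model is considerably richer.

import random
random.seed(1)

def simulate(num_growers, anticipated_wait, loads_each=6,
             trip=60, load=30, unload=15, day=720):
    """Times in minutes. Each grower sizes its fleet so one day's loads fit
    into the working day, given the waiting time it anticipates at the mill."""
    cycle = trip + load + unload + anticipated_wait + trip
    trucks_each = max(1, -(-loads_each * cycle // day))   # ceiling division

    # Trucks arrive at the mill spread across the day; one offloading bay.
    arrivals = sorted(random.uniform(0, day) for _ in range(num_growers * trucks_each))
    bay_free_at, waits = 0.0, []
    for t in arrivals:
        start = max(t, bay_free_at)
        waits.append(start - t)
        bay_free_at = start + unload
    return trucks_each, sum(waits) / len(waits)

for anticipated in (0, 30, 60, 120):
    trucks, actual = simulate(num_growers=20, anticipated_wait=anticipated)
    print(f"anticipated wait {anticipated:>3} min -> {trucks} trucks/grower, "
          f"actual wait {actual:5.1f} min")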
Brandt P, Moodley D, Pillay A. An Investigation of Multi-label Classification Techniques for Predicting HIV Drug Resistance in Resource-limited Settings. 2014;MSc.

South Africa has one of the highest HIV infection rates in the world with more than 5.6 million infected people and consequently has the largest antiretroviral treatment program with more than 1.5 million people on treatment. The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment. While genotype resistance testing (GRT) is the standard method to determine resistance, access to these tests is limited in resource-limited settings. This research investigates the efficacy of multi-label machine learning techniques at predicting HIV drug resistance from routine treatment and laboratory data. Six techniques, namely, binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains (ECC) have been tested and evaluated on data from medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The performance is measured using five scalar evaluation measures and receiver operating characteristic (ROC) curves. The techniques were found to provide useful predictive information in most cases. The PCT and ECC techniques perform best and have true positive prediction rates of 97% and 98% respectively for specific drugs. The ECC method also achieved an AUC value of 0.83, which is comparable to the current state of the art. All models have been validated using 10-fold cross-validation and show increased performance when additional data is added. In order to make use of these techniques in the field, a tool is presented that may, with small modifications, be integrated into public HIV treatment programs in South Africa and could assist clinicians to identify patients with a high probability of drug resistance.

@phdthesis{93,
  author = {Pascal Brandt and Deshen Moodley and Anban Pillay},
  title = {An Investigation of Multi-label Classification Techniques for Predicting HIV Drug Resistance in Resource-limited Settings},
  abstract = {South Africa has one of the highest HIV infection rates in the world with more than 5.6 million infected people and consequently has the largest antiretroviral treatment program with more than 1.5 million people on treatment. The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment. While genotype resistance testing (GRT) is the standard method to determine resistance, access to these tests is limited in resource-limited settings. This research investigates the efficacy of multi-label machine learning techniques at predicting HIV drug resistance from routine treatment and laboratory data. Six techniques, namely, binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains (ECC) have been tested and evaluated on data from medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The performance is measured using five scalar evaluation measures and receiver operating characteristic (ROC) curves. The techniques were found to provide useful predictive information in most cases. The PCT and ECC techniques perform best and have true positive prediction rates of 97% and 98% respectively for specific drugs. The ECC method also achieved an AUC value of 0.83, which is comparable to the current state of the art. All models have been validated using 10-fold cross-validation and show increased performance when additional data is added. In order to make use of these techniques in the field, a tool is presented that may, with small modifications, be integrated into public HIV treatment programs in South Africa and could assist clinicians to identify patients with a high probability of drug resistance.},
  year = {2014},
  volume = {MSc},
}