CAIR@UKZN Research Publications

2019

Mbonye V, Price CS. A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus. 2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019). 2019.

Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality. Brady & Cronin Jr’s service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality. Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature. 373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students. Factor analysis was used to confirm the sub-constructs. Multiple regression analysis was used to test the model’s ability to predict Wi-Fi service quality. Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52). All the constructs were rated at above the neutral score of 4. In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality. Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R2 of 59.5%. However, when models for individual most-used locations (the library and lecture venues) were conducted, the R2 improved. The model can be used to understand users’ perceptions of outcome quality, physical environment quality and interaction quality which influence the quality of Wi-Fi performance, and evaluate the Wi-Fi performance quality of different locations.
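
For illustration, the regression step described above can be sketched in a few lines of Python: a multiple linear regression of service quality on the three independent constructs, reporting R2. The data file and column names are assumptions for illustration and are not taken from the paper.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical survey data: one row per respondent, with the mean Likert
# score per construct (file name and column names are illustrative only).
df = pd.read_csv("wifi_survey.csv")
X = df[["outcome_quality", "physical_environment_quality", "interaction_quality"]]
y = df["service_quality"]

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))              # coefficient of determination
print(dict(zip(X.columns, model.coef_)))      # weight of each construct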

@proceedings{217,
  author = {V. Mbonye and C. Sue Price},
  title = {A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus},
  abstract = {Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality.  Brady & Cronin Jr’s service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality.  Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature.  373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students.  Factor analysis was used to confirm the sub-constructs.  Multiple regression analysis was used to test the model’s ability to predict Wi-Fi service quality.
Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52).  All the constructs were rated at above the neutral score of 4.  In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality.  Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R2 of 59.5%.  However, when models for individual most-used locations (the library and lecture venues) were conducted, the R2 improved.  The model can be used to understand users’ perceptions of outcome quality, physical environment quality and interaction quality which influence the quality of Wi-Fi performance, and evaluate the Wi-Fi performance quality of different locations.},
  year = {2019},
  journal = {2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019)},
  pages = {291-297},
  month = {05/08 - 06/08},
  publisher = {IEEE},
  address = {Danvers MA},
  isbn = {978-1-5386-9235-6},
}

2018

Ndaba M, Pillay A, Ezugwu A. An Improved Generalized Regression Neural Network for Type II Diabetes Classification. In: ICCSA 2018, LNCS 10963. Springer International Publishing AG; 2018.

This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time, obtaining a classification accuracy of 86% with 83% sensitivity and 87% specificity. An Area Under the Receiver Operating Characteristic (ROC) Curve of 87% was obtained.
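
A rough Python sketch of the idea as described in the abstract follows: class-wise K-Means centroids stand in for the full training set of a standard GRNN. The cluster-validity enhancements of CVE-K-Means and the paper's parameter choices are not reproduced; values such as sigma and the number of clusters per class are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def grnn_predict(centroids, targets, X, sigma=0.5):
    # Standard GRNN output: kernel-weighted average of the stored targets.
    preds = []
    for x in X:
        d2 = np.sum((centroids - x) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        preds.append(np.dot(w, targets) / (w.sum() + 1e-12))
    return np.array(preds)

def kgrnn_fit(X, y, clusters_per_class=5):
    # Cluster each class separately; the centroids (rather than all raw
    # patterns) become the GRNN's stored exemplars, reducing computation.
    centroids, targets = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=clusters_per_class, n_init=10).fit(X[y == c])
        centroids.append(km.cluster_centers_)
        targets.extend([float(c)] * clusters_per_class)
    return np.vstack(centroids), np.array(targets)

# Binary diagnosis: threshold the regression output at 0.5, e.g.
# centroids, targets = kgrnn_fit(X_train, y_train)
# y_pred = (grnn_predict(centroids, targets, X_test) >= 0.5).astype(int)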

@inbook{195,
  author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu},
  title = {An Improved Generalized Regression Neural Network for Type II Diabetes Classification},
  abstract = {This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for Diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time and obtained a classification accuracy of 86% with 83% sensitivity and 87% specificity. The Area Under the Receiver Operating Characteristic Curve (ROC) of 87% was obtained.},
  year = {2018},
  journal = {ICCSA 2018, LNCS 10963},
  edition = {10963},
  pages = {659-671},
  publisher = {Springer International Publishing AG},
  isbn = {3319951718},
}
Jembere E, Rawatlal R, Pillay A. Matrix Factorisation for Predicting Student Performance. 2017 7th World Engineering Education Forum (WEEF). 2018.

Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring, and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student’s mark for a module, given the student’s performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.
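
A toy numpy sketch of the SVD-based prediction step follows. The marks, the rank k and the mean-imputation of unattempted modules are illustrative assumptions; the paper's dataset and exact factorisation procedure are not reproduced.

import numpy as np

# Toy student-by-module mark matrix (0-100); np.nan marks modules not yet taken.
# The values are invented, not from the UKZN dataset.
M = np.array([[65., 72., np.nan],
              [55., np.nan, 60.],
              [80., 85., 78.],
              [np.nan, 50., 45.]])

mask = ~np.isnan(M)
filled = np.where(mask, M, np.nanmean(M))         # mean-impute missing marks
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2                                             # number of latent factors kept
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # low-rank reconstruction
print(approx[0, 2])                               # predicted mark: student 0, module 2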

@proceedings{194,
  author = {Edgar Jembere and Randhir Rawatlal and Anban Pillay},
  title = {Matrix Factorisation for Predicting Student Performance},
  abstract = {Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student’s mark for a module, given the student’s performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.},
  year = {2018},
  journal = {2017 7th World Engineering Education Forum (WEEF)},
  pages = {513-518},
  month = {13/11-16/11},
  publisher = {IEEE},
  isbn = {978-1-5386-1523-2},
}
Ndaba M, Pillay A, Ezugwu A. A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus. 2018;MSc.

Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost-effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches, viz. Artificial Neural Networks (ANNs), Instance-based classification, and Statistical Regression, to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; and improved variants of Logistic Regression (LR), the K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity was achieved using an ensemble of the Boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of the CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) achieved the best accuracy of 81% and 89% sensitivity. The CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity. The B-KGRNN, KGRNN, LR-n, and CVE-K-Means techniques outperformed similar techniques in the literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. The CKNN and KGRNN techniques proved to have lower computational complexity than the standard KNN and GRNN algorithms. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.

@phdthesis{192,
  author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu},
  title = {A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus},
  abstract = {Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches; viz. Artificial Neural Networks (ANNs), Instance-based classification technique, and Statistical Regression to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; improved variants of Logistic Regression (LR), K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity were achieved using an ensemble of the Boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) technique achieved the best accuracy of 81% and 89% sensitivity. CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity. The B-KGRNN, KGRNN, LR-n, and CVE-K-Means technique outperformed similar techniques in literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. CKNN and KGRNN technique proved to have less computational complexity compared to the standard KNN and GRNN algorithm. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.},
  year = {2018},
  volume = {MSc},
}
Waltham M, Moodley D, Pillay A. Q-Cog: A Q-Learning Based Cognitive Agent Architecture for Complex 3D Virtual Worlds. 2018;MSc.

Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.
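
For readers unfamiliar with the underlying algorithm, a minimal tabular Q-Learning update is sketched below in Python. It shows only the generic value update and epsilon-greedy action selection; Q-Cog's policy library and policy selection mechanism, and any virtual-world interface, are not part of this sketch.

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount, exploration rate
Q = defaultdict(float)                       # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    # Epsilon-greedy selection over the agent's available actions.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    # Standard one-step Q-Learning backup.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])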

@phdthesis{190,
  author = {Michael Waltham and Deshen Moodley and Anban Pillay},
  title = {Q-Cog: A Q-Learning Based Cognitive Agent  Architecture for Complex 3D Virtual Worlds},
  abstract = {Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicates that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.},
  year = {2018},
  volume = {MSc},
  publisher = {Durban University},
}
Dzitiro J, Jembere E, Pillay A. A DeepQA Based Real-Time Document Recommender System. Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018. 2018.

Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users’ interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user’s interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user’s interest and make only high confidence recommendations.

@proceedings{189,
  author = {Joshua Dzitiro and Edgar Jembere and Anban Pillay},
  title = {A DeepQA Based Real-Time Document Recommender System},
  abstract = {Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users’ interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user’s interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user’s interest and make only high confidence recommendations.},
  year = {2018},
  journal = {Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018},
  pages = {304-309},
  month = {02/09-05/09},
  publisher = {SATNAC},
  address = {South Africa},
}
Price CS, Moodley D, Pillay A. Dynamic Bayesian decision network to represent growers’ adaptive pre-harvest burning decisions in a sugarcane supply chain. Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '18). 2018. https://dl.acm.org/citation.cfm?id=3278681.

Sugarcane growers usually burn their cane to facilitate its harvesting and transportation. Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing. This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn. A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers’ adaptive pre-harvest burning decisions. It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions. The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses. The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts. The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.
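
To give a flavour of the kind of calculation a decision network performs, the toy Python snippet below computes the expected utility of a single burn/wait decision under an uncertain rain forecast. The probabilities and utilities are invented for illustration; the paper's DBDN is far richer, spanning multiple time slices, delivery quotas, delivery delays and wind forecasts.

import numpy as np

# One-step expected-utility calculation for a "burn tonight?" decision.
# Columns are forecast outcomes: rain, no rain (probabilities are assumed).
p_forecast = np.array([0.3, 0.7])
utility = np.array([[-50.0, 80.0],    # burn: cane spoils if rain, good harvest otherwise
                    [ 10.0, 20.0]])   # wait: small, safer payoff either way
eu = utility @ p_forecast
decision = ["burn", "wait"][int(np.argmax(eu))]
print(dict(zip(["burn", "wait"], eu)), "->", decision)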

@proceedings{181,
  author = {C. Sue Price and Deshen Moodley and Anban Pillay},
  title = {Dynamic Bayesian decision network to represent growers’ adaptive pre-harvest burning decisions in a sugarcane supply chain},
  abstract = {Sugarcane growers usually burn their cane to facilitate its harvesting and transportation.  Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing.  This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn.  A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers’ adaptive pre-harvest burning decisions.  It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions.  The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses.  The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts.   The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.},
  year = {2018},
  journal = {Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '18)},
  pages = {89-98},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {New York NY},
  isbn = {978-1-4503-6647-2},
  url = {https://dl.acm.org/citation.cfm?id=3278681},
}

2017

Gerber A, Morar N, Meyer T, Eardley C. Ontology-based support for taxonomic functions. Ecological Informatics. 2017;41. https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f.

This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions). Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy. Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology, and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.

@article{163,
  author = {Aurona Gerber and Nishal Morar and Thomas Meyer and C. Eardley},
  title = {Ontology-based support for taxonomic functions},
  abstract = {This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions).
Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy.
Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology, and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.},
  year = {2017},
  journal = {Ecological Informatics},
  volume = {41},
  pages = {11-23},
  publisher = {Elsevier},
  isbn = {1574-9541},
  url = {https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f},
}
Seebregts C, Pillay A, Crichton R, Singh S, Moodley D. Enterprise Architectures for Digital Health. Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care. 2017. https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp.

• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory.
• The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.

@article{162,
  author = {Chris Seebregts and Anban Pillay and Ryan Crichton and S. Singh and Deshen Moodley},
  title = {Enterprise Architectures for Digital Health},
  abstract = {• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory.
• The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.},
  year = {2017},
  journal = {Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care},
  pages = {173-182},
  publisher = {MIT Press},
  isbn = {978-0262533201},
  url = {https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp},
}
Adeleke JA, Moodley D, Rens G, Adewumi AO. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control. Sensors. 2017;17(4). http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf.

Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.
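
The sliding-window formulation can be illustrated with a few lines of Python using scikit-learn: the model predicts whether PM2.5 will exceed a threshold some steps ahead from the most recent readings. The stand-in series, window length, horizon and threshold below are assumptions for illustration, not values from the study.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in PM2.5 series (ug/m3); in practice these would be sensor readings.
readings = np.random.rand(500) * 60
window, horizon, threshold = 6, 6, 25.0

# Each sample: the last `window` readings; label: does the reading
# `window + horizon` steps later exceed the pollution threshold?
X = np.array([readings[i:i + window] for i in range(len(readings) - window - horizon)])
y = (readings[window + horizon:] > threshold).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))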

@article{160,
  author = {Jude Adeleke and Deshen Moodley and Gavin Rens and A.O. Adewumi},
  title = {Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control},
  abstract = {Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.},
  year = {2017},
  journal = {Sensors},
  volume = {17},
  pages = {1-23},
  issue = {4},
  publisher = {MDPI},
  isbn = {1424-8220},
  url = {http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf},
}
Coetzer W, Moodley D. A knowledge-based system for generating interaction networks from ecological data. Data & Knowledge Engineering. 2017;112. doi:http://dx.doi.org/10.1016/j.datak.2017.09.005.

Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.

@article{154,
  author = {Willem Coetzer and Deshen Moodley},
  title = {A knowledge-based system for generating interaction networks from ecological data},
  abstract = {Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.},
  year = {2017},
  journal = {Data & Knowledge Engineering},
  volume = {112},
  pages = {55-78},
  publisher = {Elsevier},
  isbn = {0169-023X},
  url = {http://pubs.cs.uct.ac.za/archive/00001220/01/coetzer-et-al-DKE-2017.pdf},
  doi = {http://dx.doi.org/10.1016/j.datak.2017.09.005},
}
Rens G, Moodley D. A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching. Cognitive Systems Research. 2017;43. doi:http://dx.doi.org/10.1016/j.cogsys.2016.12.002.

This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.
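
As background to the POMDP side of the architecture, the snippet below shows a minimal discrete belief-state update (a Bayes filter) in numpy. The two-state transition and observation matrices are invented; the BDI-style goal management, online planning and plan caching of the hybrid architecture are not represented here.

import numpy as np

# T[a][s, s'] = P(s' | s, a); O[a][s', o] = P(o | s', a). Values are illustrative.
T = {"move": np.array([[0.8, 0.2], [0.1, 0.9]])}
O = {"move": np.array([[0.7, 0.3], [0.2, 0.8]])}

def update_belief(b, a, o):
    predicted = b @ T[a]                 # prediction step over successor states
    updated = predicted * O[a][:, o]     # correction with the observation likelihood
    return updated / updated.sum()       # renormalise to a probability distribution

b = np.array([0.5, 0.5])                 # initial uncertainty over the two states
print(update_belief(b, "move", o=1))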

@article{147,
  author = {Gavin Rens and Deshen Moodley},
  title = {A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching},
  abstract = {This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.},
  year = {2017},
  journal = {Cognitive Systems Research},
  volume = {43},
  pages = {1-20},
  publisher = {Elsevier B.V.},
  isbn = {1389-0417},
  doi = {http://dx.doi.org/10.1016/j.cogsys.2016.12.002},
}

2016

Kala JR, Viriri S, Moodley D. Leaf Classification Using Convexity Moments of Polygons. International Symposium on Visual Computing. 2016.

Research has shown that shape features can be used in the process of object recognition with promising results. However, due to a wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature: Convexity Moments of Polygons. The Convexity Moments of Polygons are derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity Measure of Polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons, and 80% with the Convexity Measure of Polygons, using the Radial Basis Function (RBF) neural network classifier.
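
For orientation, the baseline feature that the paper extends, the area-based convexity measure of a polygon, can be computed in a couple of lines. The sketch below uses shapely with an invented toy outline; the paper's Convexity Moments themselves are not reproduced here.

from shapely.geometry import Polygon

# Convexity measure: area of the shape divided by the area of its convex hull
# (1.0 for a convex shape, smaller for concave outlines such as lobed leaves).
leaf = Polygon([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)])   # toy leaf outline
convexity = leaf.area / leaf.convex_hull.area
print(round(convexity, 3))                                  # -> 0.667 for this outline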

@proceedings{161,
  author = {J.R. Kala and S. Viriri and Deshen Moodley},
  title = {Leaf Classification Using Convexity Moments of Polygons},
  abstract = {Research has shown that shape features can be used in the process of object recognition with promising results. However, due to a wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature: Convexity Moment of Polygons. The Convexity Moments of Polygons is derived from the Convexity measure of polygons. A series of experimentations based on FLAVIA images dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity measure of polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moment of Polygons, 80% with the convexity Measure of Polygons using the Radial Basis function neural networks classifier (RBF).},
  year = {2016},
  journal = {International Symposium on Visual Computing},
  pages = {300-339},
  month = {14/12-16/12},
  isbn = {978-3-319-50832-0},
}
Coetzer W, Moodley D, Gerber A. Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data. PLoS ONE. 2016;11(11). http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf.

Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.

@article{159,
  author = {Willem Coetzer and Deshen Moodley and Aurona Gerber},
  title = {Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data},
  abstract = {Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.},
  year = {2016},
  journal = {PLoS ONE},
  volume = {11},
  pages = {1-15},
  issue = {11},
  url = {http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf},
}
Waltham M, Moodley D. An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments. Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). 2016. doi: http://dx.doi.org/10.1145/2987491.2987513.

The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.

@proceedings{157,
  author = {Michael Waltham and Deshen Moodley},
  title = {An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments},
  abstract = {The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.},
  year = {2016},
  journal = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {45},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {http://dx.doi.org/10.1145/2987491.2987513},
}
Clark A, Moodley D. A System for a Hand Gesture-Manipulated Virtual Reality Environment. Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). 2016. doi:http://dx.doi.org/10.1145/2987491.2987511.

Extensive research has been done using machine learning techniques for hand gesture recognition (HGR) using camera-based devices, such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in the Unreal Engine 4. An average gesture classification time of 0.057ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of HGR machine learning techniques applied to VR, which were previously applied to non-VR scenarios such as Sign Language recognition.
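
A minimal sketch of the classification stage in Python: each sample is a five-element vector of normalised tip-to-palm distances and a k-nearest-neighbour classifier assigns the gesture label. The feature values and gesture names below are invented; capturing the distances from the Leap Motion Controller is not shown.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# One row per training sample: normalised tip-to-palm distance for
# thumb, index, middle, ring and pinky (values are illustrative only).
X_train = np.array([[0.9, 1.0, 1.0, 1.0, 0.95],   # open hand
                    [0.3, 0.3, 0.3, 0.3, 0.30],   # fist
                    [0.3, 1.0, 1.0, 0.3, 0.30],   # "peace" sign
                    [0.9, 1.0, 0.3, 0.3, 0.30]])  # pointing-style pose
y_train = ["open", "fist", "peace", "point"]

knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(knn.predict([[0.32, 0.95, 0.98, 0.31, 0.28]]))   # -> ['peace']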

@proceedings{156,
  author = {A. Clark and Deshen Moodley},
  title = {A System for a Hand Gesture-Manipulated Virtual Reality Environment},
  abstract = {Extensive research has been done using machine learning techniques for hand gesture recognition (HGR) using camera-based devices; such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality applications (VR). This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in the Unreal Engine 4. An average gesture classification time of 0.057ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of HGR machine learning techniques applied to VR, which were previously applied to non-VR scenarios such as Sign Language recognition.},
  year = {2016},
  journal = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {10},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {http://dx.doi.org/10.1145/2987491.2987511},
}
Rens G, Meyer T, Casini G. On Revision of Partially Specified Convex Probabilistic Belief Bases. European Conference on Artificial Intelligence (ECAI). 2016.

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.
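
The imaging step at the core of the method can be illustrated on a single distribution over propositional worlds: when a formula is observed, each world's probability mass moves to its closest world satisfying that formula. The two-atom Python example below is invented and revises only one distribution, whereas the paper revises the boundary distributions of a whole belief base.

import itertools

worlds = list(itertools.product([0, 1], repeat=2))       # worlds as (p, q) truth pairs
belief = dict(zip(worlds, [0.1, 0.2, 0.3, 0.4]))          # assumed prior distribution

def image_on_p(belief):
    # Lewis-style imaging on the observation p: the mass of each non-p world
    # is shifted to its closest p-world (closeness here is Hamming distance).
    revised = {w: 0.0 for w in worlds}
    for w, mass in belief.items():
        if w[0] == 1:
            revised[w] += mass
        else:
            target = min((v for v in worlds if v[0] == 1),
                         key=lambda v: sum(a != b for a, b in zip(w, v)))
            revised[target] += mass
    return revised

print(image_on_p(belief))    # all probability now lies on worlds where p holds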

@proceedings{144,
  author = {Gavin Rens and Thomas Meyer and Giovanni Casini},
  title = {On Revision of Partially Specified Convex Probabilistic Belief Bases},
  abstract = {We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.},
  year = {2016},
  journal = {European Conference on Artificial Intelligence (ECAI)},
  pages = {921-929},
  month = {31/08-02/09},
}
Rens G. A Stochastic Belief Change Framework with an Observation Stream and Defaults as Expired Observations. 2016.

A framework for an agent to change its probabilistic beliefs after a stream of noisy observations is received is proposed. Observations which are no longer relevant become default assumptions until overridden by newer, more prevalent observations. A distinction is made between background and foreground beliefs. Agent actions and environment events are distinguishable and form part of the agent model. It is left up to the agent designer to provide an environment model, a submodel of the agent model. An example of an environment model is provided in the paper, and an example scenario is based on it. Given the particular form of the agent model, several ‘patterns of cognition’ can be identified. An argument is made for four particular patterns.

@misc{140,
  author = {Gavin Rens},
  title = {A Stochastic Belief Change Framework with an Observation Stream and Defaults as Expired Observations},
  abstract = {A framework for an agent to change its probabilistic beliefs after a stream of noisy observations is received is proposed. Observations which are no longer relevant, become default assumptions until overridden by newer, more prevalent observations. A distinction is made between background and foreground beliefs. Agent actions and environment events are distinguishable and form part of the agent model. It is left up to the agent designer to provide an environment model; a submodel of the agent model. An example of an environment model is provided in the paper, and an example scenario is based on it. Given the particular form of the agent model, several ‘patterns of cognition’ can be identified. An argument is made for four particular patterns.},
  year = {2016},
}
Rens G, Kern-Isberner G. An Approach to Qualitative Belief Change Modulo Ontic Strength. 2016.

Sometimes, strictly choosing between belief revision and belief update is inadequate in a dynamical, uncertain environment. Boutilier combined the two notions to allow updates in response to external changes to inform an agent about its prior beliefs. His approach is based on ranking functions. Rens proposed a new method to trade off probabilistic revision and update, in proportion to the agent’s confidence for whether to revise or update. In this paper, we translate Rens’s approach from a probabilistic setting to a setting with ranking functions. Given the translation, we are able to compare Boutilier’s and Rens’s approaches. We found that Rens’s approach is an extension of Boutilier’s.

@misc{139,
  author = {Gavin Rens and Gabriele Kern-Isberner},
  title = {An Approach to Qualitative Belief Change Modulo Ontic Strength},
  abstract = {Sometimes, strictly choosing between belief revision and belief update is inadequate in a dynamical, uncertain environment. Boutilier combined the two notions to allow updates in response to external changes to inform an agent about its prior beliefs. His approach is based on ranking functions. Rens proposed a new method to trade off probabilistic revision and update, in proportion to the agent’s confidence for whether to revise or update. In this paper, we translate Rens’s approach from a probabilistic setting to a setting with ranking functions. Given the translation, we are able to compare Boutilier’s and Rens’s approaches. We found that Rens’s approach is an extension of Boutilier’s.},
  year = {2016},
}
Rens G. On Stochastic Belief Revision and Update and their Combination. 2016.

I propose a framework for an agent to change its probabilistic beliefs when a new piece of propositional information alpha is observed. Traditionally, belief change occurs by either a revision process or by an update process, depending on whether the agent is informed with alpha in a static world or, respectively, whether alpha is a ‘signal’ from the environment due to an event occurring. Boutilier suggested a unified model of qualitative belief change, which “combines aspects of revision and update, providing a more realistic characterization of belief change.” In this paper, I propose a unified model of quantitative belief change, where an agent’s beliefs are represented as a probability distribution over possible worlds. As does Boutilier, I take a dynamical systems perspective. The proposed approach is evaluated against several rationality postulates, and some properties of the approach are worked out.

@misc{132,
  author = {Gavin Rens},
  title = {On Stochastic Belief Revision and Update and their Combination},
  abstract = {I propose a framework for an agent to change its probabilistic beliefs when a new piece of propositional information alpha is observed. Traditionally, belief change occurs by either a revision process or by an update process, depending on whether the agent is informed with alpha in a static world or, respectively, whether alpha is a ‘signal’ from the environment due to an event occurring. Boutilier suggested a unified model of qualitative belief change, which “combines aspects of revision and update, providing a more realistic characterization of belief change.” In this paper, I propose a unified model of quantitative belief change, where an agent’s beliefs are represented as a probability distribution over possible worlds. As does Boutilier, I take a dynamical systems perspective. The proposed approach is evaluated against several rationality postulates, and some properties of the approach are worked out.},
  year = {2016},
  isbn = {ISSN 0933-6192},
}
Rens G, Meyer T, Casini G. Revising Incompletely Specified Convex Probabilistic Belief Bases. 2016.

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.

@misc{131,
  author = {Gavin Rens and Thomas Meyer and Giovanni Casini},
  title = {Revising Incompletely Specified Convex Probabilistic Belief Bases},
  abstract = {We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but it has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.},
  year = {2016},
  isbn = {ISSN 0933-6192},
}

2015

Britz K, Casini G, Meyer T, Moodley K, Sattler U, Varzinczak I. Rational Defeasible Reasoning for Expressive Description Logics. 2015.

In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. This representation result paves the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real-world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.

@misc{130,
  author = {Katarina Britz and Giovanni Casini and Thomas Meyer and Kody Moodley and U. Sattler and Ivan Varzinczak},
  title = {Rational Defeasible Reasoning for Expressive Description Logics},
  abstract = {In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. This representation result paves the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real-world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.},
  year = {2015},
}
Adeleke JA, Moodley D. An Ontology for Proactive Indoor Environmental Quality Monitoring and Control. The 2015 Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '15). 2015.

Proactive monitoring and control of indoor air quality in homes where there are pregnant mothers and infants is essential for healthy development and well-being of children. This is especially true in low income households where cooking practices and exposure to harmful pollutants produced by nearby industries can negatively impact on a healthy home environment. Interdisciplinary expert knowledge is required to make sense of dynamic and complex environmental phenomena from multivariate low level sensor observations and high level human activities to detect health risks and enact decisions about control. We have developed an ontology for indoor environmental quality monitoring and control based on an ongoing real world case study in Durban, South Africa. We implemented an Indoor Air Quality Index and a thermal comfort index which can be automatically determined by reasoning on the ontology. We evaluated the ontology by populating it with test sensor data and showing how it can be queried to analyze health risk situations and determine control actions. Our evaluation shows that the ontology can be used for real world indoor monitoring and control applications in resource constrained settings.

@proceedings{127,
  author = {Jude Adeleke and Deshen Moodley},
  title = {An Ontology for Proactive Indoor Environmental Quality Monitoring and Control},
  abstract = {Proactive monitoring and control of indoor air quality in homes where there are pregnant mothers and infants is essential for healthy development and well-being of children. This is especially true in low income households where cooking practices and exposure to harmful pollutants produced by nearby industries can negatively impact on a healthy home environment. Interdisciplinary expert knowledge is required to make sense of dynamic and complex environmental phenomena from multivariate low level sensor observations and high level human activities to detect health risks and enact decisions about control. We have developed an ontology for indoor environmental quality monitoring and control based on an ongoing real world case study in Durban, South Africa. We implemented an Indoor Air Quality Index and a thermal comfort index which can be automatically determined by reasoning on the ontology. We evaluated the ontology by populating it with test sensor data and showing how it can be queried to analyze health risk situations and determine control actions. Our evaluation shows that the ontology can be used for real world indoor monitoring and control applications in resource constrained settings.},
  year = {2015},
  journal = {The 2015 Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '15)},
  month = {28/09-30/09},
  address = {New York, NY, USA},
  isbn = {978-1-4503-3683-3},
}
Booth R. On the Entailment Problem for a Logic of Typicality. IJCAI 2015. 2015.

Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate. We investigate different (semantic) versions of entailment for PTL, based on the notion of Rational Closure as defined by Lehmann and Magidor for KLM-style conditionals, and constructed using minimality. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we define two primary forms of entailment for PTL and discuss their advantages and disadvantages.

@proceedings{115,
  author = {Richard Booth},
  title = {On the Entailment Problem for a Logic of Typicality},
  abstract = {Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate. We investigate different (semantic) versions of entailment for PTL, based on the notion of Rational Closure as defined by Lehmann and Magidor for KLM-style conditionals, and constructed using minimality. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we define two primary forms of entailment for PTL and discuss their advantages and disadvantages.},
  year = {2015},
  journal = {IJCAI 2015},
  month = {25/07-31/07},
}
Casini G, Meyer T, Moodley K, Varzinczak I, Sattler U. Introducing Defeasibility into OWL Ontologies. The International Semantic Web Conference. 2015.

In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottle-necks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottle-necks in our evaluation.

@proceedings{113,
  author = {Giovanni Casini and Thomas Meyer and Kody Moodley and Ivan Varzinczak and U. Sattler},
  title = {Introducing Defeasibility into OWL Ontologies},
  abstract = {In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottle-necks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottle-necks in our evaluation.},
  year = {2015},
  journal = {The International Semantic Web Conference},
  month = {11/10-15/10},
}
Rens G, Meyer T. A New Approach to Probabilistic Belief Change. International Florida AI Research Society Conference. 2015.

One way for an agent to deal with uncertainty about its beliefs is to maintain a probability distribution over the worlds it believes are possible. A belief change operation may recommend some previously believed worlds to become impossible and some previously disbelieved worlds to become possible. This work investigates how to redistribute probabilities due to worlds being added to and removed from an agent’s belief-state. Two related approaches are proposed and analyzed.

@proceedings{111,
  author = {Gavin Rens and Thomas Meyer},
  title = {A New Approach to Probabilistic Belief Change},
  abstract = {One way for an agent to deal with uncertainty about its beliefs is to maintain a probability distribution over the worlds it believes are possible. A belief change operation may recommend some previously believed worlds to become impossible and some previously disbelieved worlds to become possible. This work investigates how to redistribute probabilities due to worlds being added to and removed from an agent’s belief-state. Two related approaches are proposed and analyzed.},
  year = {2015},
  journal = {International Florida AI Research Society Conference},
  pages = {582-587},
  month = {18/05-20/05},
  isbn = {978-1-57735-730-8},
}
Crichton R, Pillay A, Moodley D. The Open Health Information Mediator: an Architecture for Enabling Interoperability in Low to Middle Income Countries. 2015;MSc.

Interoperability and system integration are central problems that limit the effective use of health information systems to improve efficiency and effectiveness of health service delivery. There is currently no proven technology that provides a general solution in low and middle income countries where the challenges are especially acute. Engineering health information systems in low resource environments presents several challenges, including poor infrastructure, skills shortages, fragmented and piecemeal applications deployed and managed by multiple organisations, as well as low levels of resourcing. An important element of modern solutions to these problems is a health information exchange that enables disparate systems to share health information. It is a challenging task to develop systems as complex as health information exchanges that will have wide applicability in low and middle income countries. This work takes a case study approach and uses the development of a health information exchange in Rwanda as the case study. This research reports on the design, implementation and analysis of an architecture, the Health Information Mediator, that is a central component of a health information exchange. While such architectures have been used successfully in high income countries, their efficacy has not been demonstrated in low and middle income countries. The Rwandan case study was used to understand and identify the challenges and requirements for health information exchange in low and middle income countries. These requirements were used to derive a set of key concerns for the architecture that were then used to drive its design. Novel features of the architecture include: the ability to mediate messages at both the service provider and service consumer interfaces; support for multiple internal representations of messages to facilitate the adoption of new and evolving standards; and the provision of a general method for mediating health information exchange transactions agnostic of the type of transactions. The architecture is shown to satisfy the key concerns and was validated by implementing and deploying a reference application, the OpenHIM, within the Rwandan health information exchange. The architecture is also analysed using the Architecture Trade-off Analysis Method. It has also been successfully implemented in other low and middle income countries with relatively minor configuration changes, which demonstrates the architecture's generalizability.

@phdthesis{110,
  author = {Ryan Crichton and Anban Pillay and Deshen Moodley},
  title = {The Open Health Information Mediator: an Architecture for Enabling Interoperability in Low to Middle Income Countries},
  abstract = {Interoperability and system integration are central problems that limit the effective use of health information systems to improve efficiency and effectiveness of health service delivery. There is currently no proven technology that provides a general solution in low and middle income countries where the challenges are especially acute. Engineering health information systems in low resource environments presents several challenges, including poor infrastructure, skills shortages, fragmented and piecemeal applications deployed and managed by multiple organisations, as well as low levels of resourcing. An important element of modern solutions to these problems is a health information exchange that enables disparate systems to share health information. 
It is a challenging task to develop systems as complex as health information exchanges that will have wide applicability in low and middle income countries. This work takes a case study approach and uses the development of a health information exchange in Rwanda as the case study. This research reports on the design, implementation and analysis of an architecture, the Health Information Mediator, that is a central component of a health information exchange. While such architectures have been used successfully in high income countries, their efficacy has not been demonstrated in low and middle income countries. The Rwandan case study was used to understand and identify the challenges and requirements for health information exchange in low and middle income countries. These requirements were used to derive a set of key concerns for the architecture that were then used to drive its design. Novel features of the architecture include: the ability to mediate messages at both the service provider and service consumer interfaces; support for multiple internal representations of messages to facilitate the adoption of new and evolving standards; and the provision of a general method for mediating health information exchange transactions agnostic of the type of transactions.
The architecture is shown to satisfy the key concerns and was validated by implementing and deploying a reference application, the OpenHIM, within the Rwandan health information exchange. The architecture is also analysed using the Architecture Trade-off Analysis Method. It has also been successfully implemented in other low and middle income countries with relatively minor configuration changes, which demonstrates the architecture's generalizability.},
  year = {2015},
  volume = {MSc},
}
Ongoma N, Keet M. Temporal Attributes: Status and Subsumption. Asia-Pacific Conference on Conceptual Modelling. 2015.

Representing data that changes over time in conceptual data models is required by various application domains, and requires a language that is expressive enough to fully capture the operational semantics of the time-varying information. Temporal modelling languages typically focus on representing and reasoning over temporal classes and relationships, but have scant support for temporal attributes, if at all. This prevents one from fully utilising a temporal conceptual data model, which, however, is needed to model not only evolving objects (e.g., an employee’s role), but also their attributes, such as changes in salary and bonus payouts. To characterise temporal attributes precisely, we use the DLRUS Description Logic language to provide its model-theoretic semantics, therewith essentially completing the temporal ER language ERVT. The new notion of status attribute is introduced to capture the possible changes, which results in several logical implications they entail, including their interaction with temporal classes to ensure correct behaviour in subsumption hierarchies, paving the way to verify automatically whether a temporal conceptual data model is consistent.

@proceedings{105,
  author = {Nasubo Ongoma and Maria Keet},
  title = {Temporal Attributes: Status and Subsumption},
  abstract = {Representing data that changes over time in conceptual data models is required by various application domains, and requires a language that is expressive enough to fully capture the operational semantics of the time-varying information. Temporal modelling languages typically focus on representing and reasoning over temporal classes and relationships, but have scant support for temporal attributes, if at all. This prevents one from fully utilising a temporal conceptual data model, which, however, is needed to model not only evolving objects (e.g., an employee’s role), but also their attributes, such as changes in salary and bonus payouts. To characterise temporal attributes precisely, we use the DLRUS Description Logic language to provide its model-theoretic semantics, therewith essentially completing the temporal ER language ERVT. The new notion of status attribute is introduced to capture the possible changes, which results in several logical implications they entail, including their interaction with temporal classes to ensure correct behaviour in subsumption hierarchies, paving the way to verify automatically whether a temporal conceptual data model is consistent.},
  year = {2015},
  journal = {Asia-Pacific Conference on Conceptual Modelling},
  pages = {61-70},
  month = {27/01-30/01},
  address = {Sydney, Australia},
  isbn = {978-1-921770-47-0},
}
Rens G. Speeding up Online POMDP Planning: Unification of Observation Branches by Belief-state Compression via Expected Feature Values. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2. 2015.

A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.

@proceedings{104,
  author = {Gavin Rens},
  title = {Speeding up Online POMDP Planning: Unification of Observation Branches by Belief-state Compression via Expected Feature Values},
  abstract = {A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2},
  pages = {241-246},
  month = {10/01-12/01},
  isbn = {978-989-758-074-1},
}
Rens G, Meyer T. Hybrid POMDP-BDI: An Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 1. 2015.

Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. We propose an agent architecture which combines these two powerful approaches to capitalize on their strengths. Our architecture introduces the notion of intensity of the desire for a goal’s achievement. We also define an update rule for goals’ desire levels. When to select a new goal to focus on is also defined. To verify that the proposed architecture works, experiments were run with an agent based on the architecture, in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.

@proceedings{103,
  author = {Gavin Rens and Thomas Meyer},
  title = {Hybrid POMDP-BDI: An Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels},
  abstract = {Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. We propose an agent architecture which combines these two powerful approaches to capitalize on their strengths. Our architecture introduces the notion of intensity of the desire for a goal’s achievement. We also define an update rule for goals’ desire levels. When to select a new goal to focus on is also defined. To verify that the proposed architecture works, experiments were run with an agent based on the architecture, in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 1},
  pages = {5-14},
  month = {10/01-12/01},
  isbn = {978-989-758-073-4},
}
Rens G, Meyer T, Lakemeyer G. A Modal Logic for the Decision-Theoretic Projection Problem. International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2. 2015.

We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete with no restriction on the lack of information specified for the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.

@proceedings{102,
  author = {Gavin Rens and Thomas Meyer and G. Lakemeyer},
  title = {A Modal Logic for the Decision-Theoretic Projection Problem},
  abstract = {We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete with no restriction on the lack of information specified for the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.},
  year = {2015},
  journal = {International Conference on Agents and Artificial Intelligence (ICAART) Vol. 2},
  pages = {5-16},
  month = {10/01-12/01},
  isbn = {978-989-758-074-1},
}

2014

Ogundele O, Moodley D, Seebregts C, Pillay A. Building Semantic Causal Models to Predict Treatment Adherence for Tuberculosis Patients in Sub-Saharan Africa. 4th International Symposium (FHIES 2014) and 6th International Workshop (SEHC 2014). 2014.

Poor adherence to prescribed treatment is a major factor contributing to tuberculosis patients developing drug resistance and failing treatment. Treatment adherence behaviour is influenced by diverse personal, cultural and socio-economic factors that vary between regions and communities. Decision network models can potentially be used to predict treatment adherence behaviour. However, determining the network structure (identifying the factors and their causal relations) and the conditional probabilities is a challenging task. To resolve the former we developed an ontology supported by current scientific literature to categorise and clarify the similarity and granularity of factors

@proceedings{158,
  author = {Olukunle Ogundele and Deshen Moodley and Chris Seebregts and Anban Pillay},
  title = {Building Semantic Causal Models to Predict Treatment Adherence for Tuberculosis Patients in Sub-Saharan Africa},
  abstract = {Poor adherence to prescribed treatment is a major factor contributing to tuberculosis patients developing drug resistance and failing treatment. Treatment adherence behaviour is influenced by diverse personal, cultural and socio-economic factors that vary between regions and communities. Decision network models can potentially be used to predict treatment adherence behaviour. However, determining the network structure (identifying the factors and their causal relations) and the conditional probabilities is a challenging task. To resolve the former we developed an ontology supported by current scientific literature to categorise and clarify the similarity and granularity of factors},
  year = {2014},
  journal = {4th International Symposium (FHIES 2014) and 6th International Workshop (SEHC 2014)},
  pages = {81-95},
  month = {17/07-18/07},
}
Ongoma N, Keet M, Meyer T. Formalising Temporal Attributes in Temporal Conceptual Data Models. 2014;MSc.

Temporal attributes in temporal conceptual data models (the temporal EER model) were formalised using the temporal description logic language DLRus. This ensures the full formalisation of the temporal conceptual model ERVT, which permits full reasoning on the model. These results enable the development of consistent temporal databases.

@phdthesis{112,
  author = {Nasubo Ongoma and Maria Keet and Thomas Meyer},
  title = {Formalising Temporal Attributes in Temporal Conceptual Data Models},
  abstract = {Temporal attributes in temporal conceptual data models (the temporal EER model) were formalised using the temporal description logic language DLRus. This ensures the full formalisation of the temporal conceptual model ERVT, which permits full reasoning on the model. These results enable the development of consistent temporal databases.},
  year = {2014},
  volume = {MSc},
}
Harmse H, Britz K, Gerber A, Moodley D. Scenario Testing using Formal Ontologies. 2014. http://ceur-ws.org/Vol-1301/ontocomodise2014_10.pdf.

One of the challenges in the Software Development Life Cycle (SDLC) is to ensure that the requirements that drive the development of a software system are correct. However, establishing unambiguous and error-free requirements is not a trivial problem. As part of the requirements phase of the SDLC, a conceptual model can be created which describes the objects, relationships and operations that are of importance to business. Such a conceptual model is often expressed as a UML class diagram. Recent research concerned with the formal validation of such UML class diagrams has focused on transforming UML class diagrams to various formalisms such as description logics. Description logics are desirable since they have reasoning support which can be used to show that a UML class diagram is consistent/inconsistent. Yet, even when a UML class diagram is consistent, it still does not address the problem of ensuring that a UML class diagram represents business requirements accurately. To validate such diagrams business analysts use a technique called scenario testing. In this paper we present an approach for the formal validation of UML class diagrams based on scenario testing. We additionally provide preliminary feedback on the experiences gained from using our scenario testing approach on a real-world software project.

@misc{106,
  author = {Henriette Harmse and Katarina Britz and Aurona Gerber and Deshen Moodley},
  title = {Scenario Testing using Formal Ontologies},
  abstract = {One of the challenges in the Software Development Life Cycle (SDLC) is to ensure that the requirements that drive the development of a software system are correct. However, establishing unambiguous and error-free requirements is not a trivial problem. As part of the requirements phase of the SDLC, a conceptual model can be created which describes the objects, relationships and operations that are of importance to business. Such a conceptual model is often expressed as a UML class diagram. Recent research concerned with the formal validation of such UML class diagrams has focused on transforming UML class diagrams to various formalisms such as description logics. Description logics are desirable since they have reasoning support which can be used to show that a UML class diagram is consistent/inconsistent. Yet, even when a UML class diagram is consistent, it still does not address the problem of ensuring that a UML class diagram represents business requirements accurately. To validate such diagrams business analysts use a technique called scenario testing. In this paper we present an approach for the formal validation of UML class diagrams based on scenario testing. We additionally provide preliminary feedback on the experiences gained from using our scenario testing approach on a real-world software project.},
  year = {2014},
  isbn = {urn:nbn:de:0074-1301-3},
  url = {http://ceur-ws.org/Vol-1301/ontocomodise2014_10.pdf},
}
Gerber A, Eardley C, Morar N. An Ontology-based Key for Afrotropical Bees. In: Frontiers In Artificial Intelligence And Applications. Rio de Janeiro, Brazil: IOS Press; 2014. http://ebooks.iospress.nl/volume/formal-ontology-in-information-systems-proceedings-of-the-eighth-international-conference-fois-2014.

The goal of this paper is to report on the development of an ontology-based taxonomic key application that is a first deliverable of a larger project that has as its goal the development of ontology-driven computing solutions for problems experienced in taxonomy. The ontology-based taxonomic key was developed from a complex taxonomic data set, namely the Catalogue of Afrotropical Bees. The key is used to identify the genera of African bees and for this paper we developed an ontology-based application that demonstrates that morphological key data can be captured effectively in a standardised format as an ontology, and furthermore, even though the ontology-based key provides the same identification results as the traditional key, this approach allows for several additional advantages that could support taxonomy in the biological sciences. The morphology ontology for Afrotropical bees, as well as the key application, form the basis of a suite of tools that we intend to develop to support the taxonomic processes in this domain.

@inbook{101,
  author = {Aurona Gerber and C. Eardley and Nishal Morar},
  title = {An Ontology-based Key for Afrotropical Bees},
  abstract = {The goal of this paper is to report on the development of an ontology-based taxonomic key application that is a first deliverable of a larger project that has as its goal the development of ontology-driven computing solutions for problems experienced in taxonomy. The ontology-based taxonomic key was developed from a complex taxonomic data set, namely the Catalogue of Afrotropical Bees. The key is used to identify the genera of African bees and for this paper we developed an ontology-based application that demonstrates that morphological key data can be captured effectively in a standardised format as an ontology, and furthermore, even though the ontology-based key provides the same identification results as the traditional key, this approach allows for several additional advantages that could support taxonomy in the biological sciences. The morphology ontology for Afrotropical bees, as well as the key application, form the basis of a suite of tools that we intend to develop to support the taxonomic processes in this domain.},
  year = {2014},
  journal = {Frontiers in Artificial Intelligence and Applications},
  pages = {277-288},
  publisher = {IOS Press},
  address = {Rio de Janeiro, Brazil},
  isbn = {978-1-61499-437-4 (print) | 978-1-61499-438-1 (online)},
  url = {http://ebooks.iospress.nl/volume/formal-ontology-in-information-systems-proceedings-of-the-eighth-international-conference-fois-2014},
}
Rens G, Meyer T, Lakemeyer G. Formalisms for Agents Reasoning with Stochastic Actions and Perceptions. 2014;PhD.

The thesis reports on the development of a sequence of logics (formal languages based on mathematical logic) to deal with a class of uncertainty that agents may encounter. More accurately, the logics are meant to be used for allowing robots or software agents to reason about the uncertainty they have about the effects of their actions and the noisiness of their observations. The approach is to take the well-established formalism called the partially observable Markov decision process (POMDP) as an underlying formalism and then design a modal logic based on POMDP theory to allow an agent to reason with a knowledge-base (including knowledge about the uncertainties). First, three logics are designed, each one adding one or more important features for reasoning in the class of domains of interest (i.e., domains where stochastic action and sensing are considered). The final logic, called the Stochastic Decision Logic (SDL) combines the three logics into a coherent formalism, adding three important notions for reasoning about stochastic decision-theoretic domains: (i) representation of and reasoning about degrees of belief in a statement, given stochastic knowledge, (ii) representation of and reasoning about the expected future rewards of a sequence of actions and (iii) the progression or update of an agent’s epistemic, stochastic knowledge. For all the logics developed in this thesis, entailment is defined, that is, whether a sentence logically follows from a knowledge-base. Decision procedures for determining entailment are developed, and they are all proved sound, complete and terminating. The decision procedures all employ tableau calculi to deal with the traditional logical aspects, and systems of equations and inequalities to deal with the probabilistic aspects. Besides promoting the compact representation of POMDP models, and the power that logic brings to the automation of reasoning, the Stochastic Decision Logic is novel and significant in that it allows the agent to determine whether or not a set of sentences is entailed by an arbitrarily precise specification of a POMDP model, where this is not possible with standard POMDPs. The research conducted for this thesis has resulted in several publications and has been presented at several workshops, symposia and conferences.

@phdthesis{100,
  author = {Gavin Rens and Thomas Meyer and G. Lakemeyer},
  title = {Formalisms for Agents Reasoning with Stochastic Actions and Perceptions},
  abstract = {The thesis reports on the development of a sequence of logics (formal languages based on mathematical logic) to deal with a class of uncertainty that agents may encounter. More accurately, the logics are meant to be used for allowing robots or software agents to reason about the uncertainty they have about the effects of their actions and the noisiness of their observations. The approach is to take the well-established formalism called the partially observable Markov decision process (POMDP) as an underlying formalism and then design a modal logic based on POMDP theory to allow an agent to reason with a knowledge-base (including knowledge about the uncertainties).
First, three logics are designed, each one adding one or more important features for reasoning in the class of domains of interest (i.e., domains where stochastic action and sensing are considered). The final logic, called the Stochastic Decision Logic (SDL) combines the three logics into a coherent formalism, adding three important notions for reasoning about stochastic decision-theoretic domains: (i) representation of and reasoning about degrees of belief in a statement, given stochastic knowledge, (ii) representation of and reasoning about the expected future rewards of a sequence of actions and (iii) the progression or update of an agent’s epistemic, stochastic knowledge.
For all the logics developed in this thesis, entailment is defined, that is, whether a sentence logically follows from a knowledge-base. Decision procedures for determining entailment are developed, and they are all proved sound, complete and terminating. The decision procedures all employ tableau calculi to deal with the traditional logical aspects, and systems of equations and inequalities to deal with the probabilistic aspects.
Besides promoting the compact representation of POMDP models, and the power that logic brings to the automation of reasoning, the Stochastic Decision Logic is novel and significant in that it allows the agent to determine whether or not a set of sentences is entailed by an arbitrarily precise specification of a POMDP model, where this is not possible with standard POMDPs.
The research conducted for this thesis has resulted in several publications and has been presented at several workshops, symposia and conferences.},
  year = {2014},
  volume = {PhD},
}
Ongoma N. Formalising Temporal Attributes in Temporal Conceptual Data Models. 2014.

No Abstract

@misc{96,
  author = {Nasubo Ongoma},
  title = {Formalising Temporal Attributes in Temporal Conceptual Data Models},
  abstract = {No Abstract},
  year = {2014},
}
Price CS, Moodley D, Bezuidenhout CN. Using agent-based simulation to explore sugarcane supply chain transport complexities at a mill scale. 43rd Annual Operations Research Society of South Africa Conference. 2014.

The sugarcane supply chain (from sugarcane grower to mill) has particular challenges. One of these is that the growers have to deliver their cane to the mill before its quality degrades. The sugarcane supply chain typically consists of many growers and a mill. Growers deliver their cane daily during the milling season; the amount of cane they deliver depends on their farm size. Growers make decisions about when to harvest the cane, and the number and type of trucks needed to deliver their cane. The mill wants a consistent cane supply over the milling season. Growers are sometimes affected by long queue lengths at the mill when they offload their cane. A preliminary agent-based simulation model was developed to understand this complex system. The model inputs a number of growers, and the amount of cane they are to deliver over the milling season. The number of trucks needed by each grower is determined by the trip, loading and unloading times and the anticipated waiting time at the mill. The anticipated waiting time was varied to determine how many trucks would be needed in the system to deliver the week’s cane allocation. As the anticipated waiting time increased, the number of trucks needed also increased, which in turn delayed the trucks when queuing at the mill. The growers’ anticipated waiting times never matched the actual waiting times. The research shows the promise of agent-based models as a sense-making approach to understanding systems where there are many individuals who have autonomous behaviour, and whose actions and interactions can result in unexpected system-level behaviour.

@proceedings{94,
  author = {C. Sue Price and Deshen Moodley and C.N. Bezuidenhout},
  title = {Using agent-based simulation to explore sugarcane supply chain transport complexities at a mill scale},
  abstract = {The sugarcane supply chain (from sugarcane grower to mill) has particular challenges.  One of these is that the growers have to deliver their cane to the mill before its quality degrades.  The sugarcane supply chain typically consists of many growers and a mill.  Growers deliver their cane daily during the milling season; the amount of cane they deliver depends on their farm size.  Growers make decisions about when to harvest the cane, and the number and type of trucks needed to deliver their cane.  The mill wants a consistent cane supply over the milling season.  Growers are sometimes affected by long queue lengths at the mill when they offload their cane.
A preliminary agent-based simulation model was developed to understand this complex system.  The model inputs a number of growers, and the amount of cane they are to deliver over the milling season.  The number of trucks needed by each grower is determined by the trip, loading and unloading times and the anticipated waiting time at the mill.  The anticipated waiting time was varied to determine how many trucks would be needed in the system to deliver the week’s cane allocation.  As the anticipated waiting time increased, the number of trucks needed also increased, which in turn delayed the trucks when queuing at the mill.   The growers’ anticipated waiting times never matched the actual waiting times.  The research shows the promise of agent-based models as a sense-making approach to understanding systems where there are many individuals who have autonomous behaviour, and whose actions and interactions can result in unexpected system-level behaviour.},
  year = {2014},
  journal = {43rd Annual Operations Research Society of South Africa Conference},
  pages = {88-96},
  month = {14/09-17/09},
  isbn = {978-1-86822-656-6},
}
Brandt P, Moodley D, Pillay A. An Investigation Of Multi-label Classification Techniques For Predicting Hiv Drug Resistance In Resource-limited Settings. 2014;MSc.

South Africa has one of the highest HIV infection rates in the world with more than 5.6 million infected people and consequently has the largest antiretroviral treatment program with more than 1.5 million people on treatment. The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment. While genotype resistance testing (GRT) is the standard method to determine resistance, access to these tests is limited in resource-limited settings. This research investigates the efficacy of multi-label machine learning techniques at predicting HIV drug resistance from routine treatment and laboratory data. Six techniques, namely, binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains (ECC) have been tested and evaluated on data from medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The performance is measured using five scalar evaluation measures and receiver operating characteristic (ROC) curves. The techniques were found to provide useful predictive information in most cases. The PCT and ECC techniques perform best and have true positive prediction rates of 97% and 98% respectively for specific drugs. The ECC method also achieved an AUC value of 0.83, which is comparable to the current state of the art. All models have been validated using 10 fold cross validation and show increased performance when additional data is added. In order to make use of these techniques in the field, a tool is presented that may, with small modifications, be integrated into public HIV treatment programs in South Africa and could assist clinicians to identify patients with a high probability of drug resistance.

@phdthesis{93,
  author = {Pascal Brandt and Deshen Moodley and Anban Pillay},
  title = {An Investigation Of Multi-label Classification Techniques For Predicting Hiv Drug Resistance In Resource-limited Settings},
  abstract = {South Africa has one of the highest HIV infection rates in the world with more than 5.6 million infected people and consequently has the largest antiretroviral treatment program with more than 1.5 million people on treatment. The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment. While genotype resistance testing (GRT) is the standard method to determine resistance, access to these tests is limited in resource-limited settings. This research investigates the efficacy of multi-label machine learning techniques at predicting HIV drug resistance from routine treatment and laboratory data. Six techniques, namely, binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains (ECC) have been tested and evaluated on data from medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The performance is measured using five scalar evaluation measures and receiver operating characteristic (ROC) curves. The techniques were found to provide useful predictive information in most cases. The PCT and ECC techniques perform best and have true positive prediction rates of 97% and 98% respectively for specific drugs. The ECC method also achieved an AUC value of 0.83, which is comparable to the current state of the art. All models have been validated using 10 fold cross validation and show increased performance when additional data is added. In order to make use of these techniques in the field, a tool is presented that may, with small modifications, be integrated into public HIV treatment programs in South Africa and could assist clinicians to identify patients with a high probability of drug resistance.},
  year = {2014},
  volume = {MSc},
}
Moodley D, Seebregts C, Pillay A, Meyer T. An ontology-driven modeling platform for regulating eHealth interoperability in low resource settings. Foundations of Health Information Engineering and Systems, Revised and Selected Papers, Lecture Notes in Computer Science Volume 7789. 2014.

No Abstract

@proceedings{92,
  author = {Deshen Moodley and Chris Seebregts and Anban Pillay and Thomas Meyer},
  title = {An ontology-driven modeling platform for regulating eHealth interoperability in low resource settings},
  abstract = {No Abstract},
  year = {2014},
  journal = {Foundations of Health Information Engineering and Systems, Revised and Selected Papers, Lecture Notes in Computer Science Volume 7789},
  pages = {107-124},
  month = {15/09},
  isbn = {978-3-642-53955-8},
}
Ongoma N, Keet M, Meyer T. Transition Constraints for Temporal Attributes. 2014. http://ceur-ws.org/Vol-1193/paper_25.pdf.

Representing temporal data in conceptual data models and ontologies is required by various application domains. For it to be useful for modellers to represent the information precisely and reason over it, it is essential to have a language that is expressive enough to capture the required operational semantics of the time-varying information. Temporal modelling languages have little support for temporal attributes, if at all, yet attributes are a standard element in the widely used conceptual modelling languages such as EER and UML. This hiatus prevents one from utilising a complete temporal conceptual data model and keeping track of evolving values of data and its interaction with temporal classes. A rich axiomatisation of fully temporised attributes is possible with a minor extension to the already very expressive description logic language DLRUS. We formalise the notion of transition of attributes, and their interaction with transition of classes. The transitions specified for attributes are extension, evolution, and arbitrary quantitative extension.

@misc{91,
  author = {Nasubo Ongoma and Maria Keet and Thomas Meyer},
  title = {Transition Constraints for Temporal Attributes},
  abstract = {Representing temporal data in conceptual data models and ontologies is required by various application domains. For it to be useful for modellers to represent the information precisely and reason over it, it is essential to have a language that is expressive enough to capture the required operational semantics of the time-varying information. Temporal modelling languages have little support for temporal attributes, if at all, yet attributes are a standard element in the widely used conceptual modelling languages such as EER and UML. This hiatus prevents one from utilising a complete temporal conceptual data model and keeping track of evolving values of data and its interaction with temporal classes. A rich axiomatisation of fully temporised attributes is possible with a minor extension to the already very expressive description logic language DLRUS. We formalise the notion of transition of attributes, and their interaction with transition of classes. The transitions specified for attributes are extension, evolution, and arbitrary quantitative extension.},
  year = {2014},
  url = {http://ceur-ws.org/Vol-1193/paper_25.pdf},
}
Meyer T, Moodley K, Sattler U. Practical Defeasible Reasoning for Description Logics. European Starting AI Researcher Symposium. 2014.

The preferential approach to nonmonotonic reasoning was consolidated in depth by Kraus, Lehmann and Magidor (KLM) for propositional logic in the early 1990s. In recent years, there have been efforts to extend their framework to Description Logics (DLs) and a solid (though preliminary) theoretical foundation has already been established towards this aim. Despite this foundation, the generalisation of the propositional framework to DLs is not yet complete and there are multiple proposals for entailment in this context with no formal system for deciding between these. In addition, there are virtually no existing preferential reasoning implementations to speak of for DL-based ontologies. The goals of this PhD are to provide a complete generalisation of the preferential framework of KLM to the DL ALC, provide a formal understanding of the relationships between the multiple proposals for entailment in this context, and finally, to develop an accompanying defeasible reasoning system for DL-based ontologies with performance that is suitable for use in existing ontology development settings.

@proceedings{89,
  author = {Thomas Meyer and Kody Moodley and U. Sattler},
  title = {Practical Defeasible Reasoning for Description Logics},
  abstract = {The preferential approach to nonmonotonic reasoning was consolidated in depth by Kraus, Lehmann and Magidor (KLM) for propositional logic in the early 1990s. In recent years, there have been efforts to extend their framework to Description Logics (DLs) and a solid (though preliminary) theoretical foundation has already been established towards this aim. Despite this foundation, the generalisation of the propositional framework to DLs is not yet complete and there are multiple proposals for entailment in this context with no formal system for deciding between these. In addition, there are virtually no existing preferential reasoning implementations to speak of for DL-based ontologies. The goals of this PhD are to provide a complete generalisation of the preferential framework of KLM to the DL ALC, provide a formal understanding of the relationships between the multiple proposals for entailment in this context, and finally, to develop an accompanying defeasible reasoning system for DL-based ontologies with performance that is suitable for use in existing ontology development settings.},
  year = {2014},
  journal = {European Starting AI Researcher Symposium},
  pages = {191-200},
  month = {18/08-19/08},
  address = {Prague, Czech Republic},
  isbn = {978-1-61499-421-3},
}