2020
1.
Clark A, Pillay A, Moodley D. A system for pose analysis and selection in virtual reality environments. In: SAICSIT ’20: Conference of the South African Institute of Computer Scientists and Information Technologists 2020. Virtual: ACM Digital Library; 2020. https://dl.acm.org/doi/proceedings/10.1145/3410886.
Depth cameras provide a natural and intuitive user interaction mechanism in virtual reality environments by using hand gestures as the primary user input. However, building robust VR systems that use depth cameras is challenging. Gesture recognition accuracy is affected by occlusion, variation in hand orientation and misclassification of similar hand gestures. This research explores the limits of the Leap Motion depth camera for static hand pose recognition in virtual reality applications. We propose a system for analysing static hand poses and for systematically identifying a pose set that can achieve a near-perfect recognition accuracy. The system consists of a hand pose taxonomy, a pose notation, a machine learning classifier and an algorithm to identify a reliable pose set that can achieve near-perfect accuracy levels. We used this system to construct a benchmark hand pose data set containing 2550 static hand pose instances, and show how the algorithm can be used to systematically derive a set of poses that can produce an accuracy of 99% using a Support Vector Machine classifier.
@inproceedings{379, author = {Andrew Clark and Anban Pillay and Deshen Moodley}, title = {A system for pose analysis and selection in virtual reality environments}, abstract = {Depth cameras provide a natural and intuitive user interaction mechanism in virtual reality environments by using hand gestures as the primary user input. However, building robust VR systems that use depth cameras is challenging. Gesture recognition accuracy is affected by occlusion, variation in hand orientation and misclassification of similar hand gestures. This research explores the limits of the Leap Motion depth camera for static hand pose recognition in virtual reality applications. We propose a system for analysing static hand poses and for systematically identifying a pose set that can achieve a near-perfect recognition accuracy. The system consists of a hand pose taxonomy, a pose notation, a machine learning classifier and an algorithm to identify a reliable pose set that can achieve near-perfect accuracy levels. We used this system to construct a benchmark hand pose data set containing 2550 static hand pose instances, and show how the algorithm can be used to systematically derive a set of poses that can produce an accuracy of 99% using a Support Vector Machine classifier.}, year = {2020}, booktitle = {SAICSIT '20: Conference of the South African Institute of Computer Scientists and Information Technologists 2020}, pages = {210-216}, month = {14/09/2020}, publisher = {ACM Digital Library}, address = {Virtual}, isbn = {978-1-4503-8847-4}, url = {https://dl.acm.org/doi/proceedings/10.1145/3410886}, }
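The paper's final step, classifying static hand poses with a Support Vector Machine, can be sketched as below. The 15-dimensional feature vectors and the three synthetic pose classes are illustrative assumptions, not the authors' Leap Motion benchmark data.

```python
# Hedged sketch: SVM classification of static hand poses on fabricated data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_pose_samples(centre, n=100, noise=0.05):
    """Fabricate noisy pose feature vectors around a class centre."""
    return centre + noise * rng.standard_normal((n, centre.shape[0]))

# Three well-separated pose classes (e.g. fist, open hand, point) in a
# hypothetical 15-dimensional feature space.
centres = rng.uniform(0, 1, size=(3, 15))
X = np.vstack([make_pose_samples(c) for c in centres])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On cleanly separated synthetic clusters like these the held-out accuracy should be near perfect; the paper's contribution is precisely an algorithm for choosing a pose set whose real-world data behaves this way.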
2019
1.
Mbonye V, Price CS. A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus. In: 2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019). Danvers MA: IEEE; 2019.
Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality. Brady & Cronin Jr's service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality. Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature. 373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students. Factor analysis was used to confirm the sub-constructs. Multiple regression analysis was used to test the model's ability to predict Wi-Fi service quality. Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52). All the constructs were rated above the neutral score of 4. In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality. Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R2 of 59.5%. However, when models were built for the individual most-used locations (the library and lecture venues), the R2 improved. The model can be used to understand users' perceptions of outcome quality, physical environment quality and interaction quality which influence the quality of Wi-Fi performance, and to evaluate the Wi-Fi performance quality of different locations.
@inproceedings{217, author = {V. Mbonye and C. Sue Price}, title = {A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus}, abstract = {Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality. Brady & Cronin Jr's service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality. Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature. 373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students. Factor analysis was used to confirm the sub-constructs. Multiple regression analysis was used to test the model's ability to predict Wi-Fi service quality. Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52). All the constructs were rated above the neutral score of 4. In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality. Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R2 of 59.5%. However, when models were built for the individual most-used locations (the library and lecture venues), the R2 improved. The model can be used to understand users' perceptions of outcome quality, physical environment quality and interaction quality which influence the quality of Wi-Fi performance, and to evaluate the Wi-Fi performance quality of different locations.}, year = {2019}, booktitle = {2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019)}, pages = {291-297}, month = {05/08 - 06/08}, publisher = {IEEE}, address = {Danvers MA}, isbn = {978-1-5386-9235-6}, }
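The multiple-regression step described above (three construct scores predicting a service-quality score, summarised by R2) can be sketched with ordinary least squares. The 1-7 Likert-style scale is inferred from the paper's "neutral score of 4"; the data and coefficients below are fabricated for illustration.

```python
# Hedged sketch: OLS regression of service quality on three constructs.
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Fabricated construct scores on an assumed 1-7 Likert-style scale.
outcome = rng.uniform(1, 7, n)
environment = rng.uniform(1, 7, n)
interaction = rng.uniform(1, 7, n)

# Fabricated "true" relationship plus noise, purely for illustration.
quality = (0.5 * outcome + 0.2 * environment + 0.2 * interaction
           + rng.normal(0, 0.5, n))

# Least-squares fit with an intercept column.
X = np.column_stack([np.ones(n), outcome, environment, interaction])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

pred = X @ beta
r2 = 1 - ((quality - pred) ** 2).sum() / ((quality - quality.mean()) ** 2).sum()
```

`r2` here plays the role of the paper's reported 59.5%: the fraction of variance in perceived service quality explained by the three constructs.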
2018
1.
Ndaba M, Pillay A, Ezugwu A. An Improved Generalized Regression Neural Network for Type II Diabetes Classification. In: ICCSA 2018, LNCS vol. 10963. Springer International Publishing AG; 2018.
This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for Diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time and obtained a classification accuracy of 86% with 83% sensitivity and 87% specificity. The Area Under the Receiver Operating Characteristic Curve (ROC) of 87% was obtained.
@inproceedings{195, author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu}, title = {An Improved Generalized Regression Neural Network for Type II Diabetes Classification}, abstract = {This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for Diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time and obtained a classification accuracy of 86% with 83% sensitivity and 87% specificity. The Area Under the Receiver Operating Characteristic Curve (ROC) of 87% was obtained.}, year = {2018}, booktitle = {ICCSA 2018, LNCS 10963}, volume = {10963}, pages = {659-671}, publisher = {Springer International Publishing AG}, isbn = {3319951718}, }
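The core KGRNN idea above, K-Means centroids serving as the pattern layer of a Generalized Regression Neural Network (a Gaussian kernel-weighted average), can be sketched as follows. The two-blob data, `sigma`, and `k` are illustrative assumptions; the authors' CVE-K-Means enhancement and the Pima Indian dataset are not reproduced.

```python
# Hedged sketch: K-Means centroids feeding a GRNN-style kernel regressor.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm; returns centroids and final labels."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

def grnn_predict(x, centroids, targets, sigma=0.5):
    """GRNN output: Gaussian kernel-weighted average of centroid targets."""
    d2 = ((centroids - x) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return float(w @ targets / w.sum())

# Two fabricated classes (0 = negative, 1 = positive diagnosis).
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = np.array([0.0] * 30 + [1.0] * 30)

centroids, labels = kmeans(X, k=2)
targets = np.array([y[labels == j].mean() for j in range(2)])

score = grnn_predict(np.array([2.9, 3.1]), centroids, targets)
prediction = int(score > 0.5)
```

Training on centroids rather than every sample is what gives the K-GRNN combination its lower computational cost relative to a standard GRNN, which keeps all training points in its pattern layer.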
2.
Jembere E, Rawatlal R, Pillay A. Matrix Factorisation for Predicting Student Performance. In: 2017 7th World Engineering Education Forum (WEEF). IEEE; 2018.
Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring, and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student's mark for a module, given the student's performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.
@inproceedings{194, author = {Edgar Jembere and Randhir Rawatlal and Anban Pillay}, title = {Matrix Factorisation for Predicting Student Performance}, abstract = {Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring, and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student's mark for a module, given the student's performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.}, year = {2018}, booktitle = {2017 7th World Engineering Education Forum (WEEF)}, pages = {513-518}, month = {13/11-16/11}, publisher = {IEEE}, isbn = {978-1-5386-1523-2}, }
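The SVD approach described above can be sketched in the recommender-system style: factorise a student-by-module mark matrix, truncate to a few latent factors, and read predicted marks off the low-rank reconstruction. The 4x4 matrix below is fabricated; the authors' data and preprocessing are not reproduced.

```python
# Hedged sketch: truncated-SVD reconstruction of a student x module mark matrix.
import numpy as np

# Rows: students, columns: modules; marks out of 100 (fabricated).
M = np.array([
    [65.0, 70.0, 58.0, 72.0],
    [80.0, 85.0, 74.0, 88.0],
    [50.0, 55.0, 45.0, 57.0],
    [72.0, 78.0, 66.0, 80.0],
])

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 1  # keep only the strongest latent factor ("overall ability x module difficulty")
M_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# M_hat[i, j] is the model's estimate of student i's mark in module j.
```

In a real setting the matrix has missing entries (modules not yet attempted), so the factorisation is fitted only on observed marks and the reconstruction fills in the gaps; this toy version reconstructs a complete matrix to show the mechanics.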
3.
Ndaba M, Pillay A, Ezugwu A. A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus. 2018;MSc.
Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost-effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches, viz. Artificial Neural Networks (ANNs), Instance-based classification technique, and Statistical Regression to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; improved variants of Logistic Regression (LR), K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity was achieved using an ensemble of the Boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) technique achieved the best accuracy of 81% and 89% sensitivity. CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity.
The B-KGRNN, KGRNN, LR-n, and CVE-K-Means techniques outperformed similar techniques in the literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. The CKNN and KGRNN techniques proved to have less computational complexity compared to the standard KNN and GRNN algorithms. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.
@mastersthesis{192, author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu}, title = {A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus}, abstract = {Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost-effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches, viz. Artificial Neural Networks (ANNs), Instance-based classification technique, and Statistical Regression to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; improved variants of Logistic Regression (LR), K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity was achieved using an ensemble of the Boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) technique achieved the best accuracy of 81% and 89% sensitivity. CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity. The B-KGRNN, KGRNN, LR-n, and CVE-K-Means techniques outperformed similar techniques in the literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. The CKNN and KGRNN techniques proved to have less computational complexity compared to the standard KNN and GRNN algorithms. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.}, year = {2018}, volume = {MSc}, }
4.
Waltham M, Moodley D, Pillay A. Q-Cog: A Q-Learning Based Cognitive Agent Architecture for Complex 3D Virtual Worlds. 2018;MSc.
Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.
@mastersthesis{190, author = {Michael Waltham and Deshen Moodley and Anban Pillay}, title = {Q-Cog: A Q-Learning Based Cognitive Agent Architecture for Complex 3D Virtual Worlds}, abstract = {Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.}, year = {2018}, volume = {MSc}, publisher = {Durban University}, }
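The tabular Q-Learning update that Q-Cog builds on is Q(s,a) ← Q(s,a) + α·(r + γ·max over a' of Q(s',a') − Q(s,a)). A minimal sketch on a toy 1-D corridor world follows; the environment, hyperparameters, and episode count are illustrative assumptions, not the dissertation's 3D virtual worlds.

```python
# Hedged sketch: epsilon-greedy tabular Q-Learning on a 5-cell corridor.
import random

N_STATES = 5                      # corridor cells 0..4; cell 4 is the goal
ACTIONS = (0, 1)                  # 0 = step left, 1 = step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move along the corridor; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):              # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPS:                       # explore
            a = random.choice(ACTIONS)
        else:                                           # exploit
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # The Q-Learning temporal-difference update.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

greedy_policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in every non-terminal state; Q-Cog's contribution sits a level above this, selecting among a library of such learned policies.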
5.
Dzitiro J, Jembere E, Pillay A. A DeepQA Based Real-Time Document Recommender System. In: Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018. South Africa: SATNAC; 2018.
Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users' interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user's interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user's interest and make only high confidence recommendations.
@inproceedings{189, author = {Joshua Dzitiro and Edgar Jembere and Anban Pillay}, title = {A DeepQA Based Real-Time Document Recommender System}, abstract = {Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users' interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user's interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user's interest and make only high confidence recommendations.}, year = {2018}, booktitle = {Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018}, pages = {304-309}, month = {02/09-05/09}, publisher = {SATNAC}, address = {South Africa}, }
6.
Price CS, Moodley D, Pillay A. Dynamic Bayesian decision network to represent growers’ adaptive pre-harvest burning decisions in a sugarcane supply chain. In: Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT ’18). New York NY: ACM; 2018. https://dl.acm.org/citation.cfm?id=3278681.
Sugarcane growers usually burn their cane to facilitate its harvesting and transportation. Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing. This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn. A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers’ adaptive pre-harvest burning decisions. It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions. The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses. The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts. The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.
@inproceedings{181, author = {C. Sue Price and Deshen Moodley and Anban Pillay}, title = {Dynamic Bayesian decision network to represent growers' adaptive pre-harvest burning decisions in a sugarcane supply chain}, abstract = {Sugarcane growers usually burn their cane to facilitate its harvesting and transportation. Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing. This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn. A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers' adaptive pre-harvest burning decisions. It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions. The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses. The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts. The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.}, year = {2018}, booktitle = {Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '18)}, pages = {89-98}, month = {26/09-28/09}, publisher = {ACM}, address = {New York NY}, isbn = {978-1-4503-6647-2}, url = {https://dl.acm.org/citation.cfm?id=3278681}, }
2017
1.
Gerber A, Morar N, Meyer T, Eardley C. Ontology-based support for taxonomic functions. Ecological Informatics. 2017;41. https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f.
This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions). Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy. Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology, and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.
@article{163, author = {Aurona Gerber and Nishal Morar and Tommie Meyer and C. Eardley}, title = {Ontology-based support for taxonomic functions}, abstract = {This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions). Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy. Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explores the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology, and implements a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.}, year = {2017}, journal = {Ecological Informatics}, volume = {41}, pages = {11-23}, publisher = {Elsevier}, issn = {1574-9541}, url = {https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f}, }
2.
Seebregts C, Pillay A, Crichton R, Singh S, Moodley D. Enterprise Architectures for Digital Health. In: Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care. MIT Press; 2017. https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp.
• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory.
• The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.
@incollection{162, author = {Chris Seebregts and Anban Pillay and Ryan Crichton and S. Singh and Deshen Moodley}, title = {Enterprise Architectures for Digital Health}, chapter = {14}, abstract = {• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory. • The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.}, year = {2017}, booktitle = {Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care}, pages = {173-182}, publisher = {MIT Press}, isbn = {978-0262533201}, url = {https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp}, }
1.
Adeleke JA, Moodley D, Rens G, Adewumi A. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control. Sensors. 2017;17(4). http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf.
Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning-based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short-term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short-term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.
@article{160, author = {Jude Adeleke and Deshen Moodley and Gavin Rens and A.O. Adewumi}, title = {Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control}, abstract = {Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning-based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short-term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short-term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.}, year = {2017}, journal = {Sensors}, volume = {17}, pages = {1-23}, issue = {4}, publisher = {MDPI}, isbn = {1424-8220}, url = {http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf}, }
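The sliding-window formulation described in this abstract can be sketched as follows. This is a minimal illustration under our own assumptions (the window length and readings are placeholders), not the authors' implementation:

```python
def sliding_windows(series, width):
    """Turn a univariate time series into (window, next_value) training pairs."""
    pairs = []
    for i in range(len(series) - width):
        window = series[i:i + width]   # the past `width` readings as features
        target = series[i + width]     # the reading to predict
        pairs.append((window, target))
    return pairs

# Toy PM2.5 readings (micrograms per cubic metre); values are illustrative only.
readings = [12.0, 14.5, 30.2, 41.8, 38.1, 22.4, 15.0]
pairs = sliding_windows(readings, width=3)
# Each pair would feed a learner such as the Multilayer Perceptron used in the
# paper; a "pollution situation" could then be flagged when the predicted
# target exceeds a chosen threshold.
```

Framing the series this way is what lets a standard classifier or regressor be trained on streaming sensor data, one window at a time.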
1.
Coetzer W, Moodley D. A knowledge-based system for generating interaction networks from ecological data. Data & Knowledge Engineering. 2017;112. doi:10.1016/j.datak.2017.09.005.
Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.
@article{154, author = {Willem Coetzer and Deshen Moodley}, title = {A knowledge-based system for generating interaction networks from ecological data}, abstract = {Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.}, year = {2017}, journal = {Data & Knowledge Engineering}, volume = {112}, pages = {55-78}, publisher = {Elsevier}, isbn = {0169-023X}, url = {http://pubs.cs.uct.ac.za/archive/00001220/01/coetzer-et-al-DKE-2017.pdf}, doi = {10.1016/j.datak.2017.09.005}, }
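The probabilistic reasoning this abstract refers to can be illustrated with a single Bayes-rule update over one uncertain field observation. The variables and probabilities below are invented for illustration and are not taken from the system's actual Bayesian network:

```python
# Toy Bayes-rule update: how likely is it that an insect visit is a pollination
# interaction, given an uncertain observation of contact with the anthers?
prior_pollination = 0.3        # P(pollination) before any observation
p_contact_given_poll = 0.9     # P(anther contact | pollination)
p_contact_given_not = 0.2      # P(anther contact | no pollination)

# Total probability of observing anther contact.
evidence = (prior_pollination * p_contact_given_poll
            + (1 - prior_pollination) * p_contact_given_not)

# Posterior belief after the observation.
posterior = prior_pollination * p_contact_given_poll / evidence
```

A Bayesian network chains many such conditional updates together, which is what lets uncertain observations of individual behaviour be lifted into population-level interaction claims.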
1.
Rens G, Moodley D. A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching. Cognitive Systems Research. 2017;43. doi:10.1016/j.cogsys.2016.12.002.
This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.
@article{147, author = {Gavin Rens and Deshen Moodley}, title = {A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching}, abstract = {This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.}, year = {2017}, journal = {Cognitive Systems Research}, volume = {43}, pages = {1-20}, publisher = {Elsevier B.V.}, isbn = {1389-0417}, doi = {10.1016/j.cogsys.2016.12.002}, }
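The plan-library idea (caching recently generated plans for reuse) can be sketched as a lookup table keyed on a discretised belief state. This is a toy sketch under our own assumptions; rounding the belief is a stand-in for whatever belief-similarity test the architecture actually uses:

```python
class PlanLibrary:
    """Toy plan cache: maps a discretised belief state to a stored plan."""

    def __init__(self):
        self._plans = {}

    @staticmethod
    def _key(belief, precision=1):
        # Round the belief probabilities so "similar" beliefs share an entry.
        return tuple(round(p, precision) for p in belief)

    def lookup(self, belief):
        return self._plans.get(self._key(belief))

    def store(self, belief, plan):
        self._plans[self._key(belief)] = plan

library = PlanLibrary()
library.store([0.72, 0.28], ["move-north", "sense", "grab"])

# A nearby belief state hits the cached plan instead of triggering replanning.
cached = library.lookup([0.74, 0.26])
```

A cache hit skips the expensive online POMDP planning step, which is where the reported speed gains of plan reuse would come from.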
2016
1.
Coetzer W, Moodley D, Gerber A. Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data. PLOS ONE. 2016;11(11). doi:10.1371/journal.pone.0166559.
Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower-visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioural ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.
@article{446, author = {Willem Coetzer and Deshen Moodley and Aurona Gerber}, title = {Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data}, abstract = {Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower-visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioural ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.}, year = {2016}, journal = {PLOS ONE}, volume = {11}, issue = {11}, pages = {e0166559}, url = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0166559}, doi = {10.1371/journal.pone.0166559}, }
2.
Kala J, Viriri S, Moodley D. Leaf Classification Using Convexity Moments of Polygons. In: International Symposium on Visual Computing; 2016.
Research has shown that shape features can be used in the process of object recognition with promising results. However, due to the wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature, the Convexity Moments of Polygons, derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity Measure of Polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons and 80% with the Convexity Measure of Polygons, using the Radial Basis Function (RBF) neural network classifier.
@inproceedings{161, author = {J.R. Kala and S. Viriri and Deshen Moodley}, title = {Leaf Classification Using Convexity Moments of Polygons}, abstract = {Research has shown that shape features can be used in the process of object recognition with promising results. However, due to the wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature, the Convexity Moments of Polygons, derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity Measure of Polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons and 80% with the Convexity Measure of Polygons, using the Radial Basis Function (RBF) neural network classifier.}, year = {2016}, booktitle = {International Symposium on Visual Computing}, pages = {300-339}, month = {14/12-16/12}, isbn = {978-3-319-50832-0}, }
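For context, the classical convexity measure that this feature builds on is commonly defined as the ratio of a polygon's area to the area of its convex hull (1.0 exactly when the polygon is convex). The sketch below implements only that baseline measure, not the paper's Convexity Moments feature:

```python
def shoelace_area(points):
    """Area of a simple polygon given as ordered (x, y) vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity(points):
    """Area-based convexity: polygon area / convex-hull area."""
    return shoelace_area(points) / shoelace_area(convex_hull(points))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
# A dent at (2, 2) makes the shape concave, lowering its convexity below 1.
dented = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
```

A leaf silhouette with serrated or lobed edges scores below 1.0 on this measure, which is the kind of shape cue the classifiers in the paper discriminate on.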