Research Publications

2017

Adeleke JA, Moodley D, Rens G, Adewumi AO. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control. Sensors. 2017;17(4). http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf.

Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short-term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short-term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.

@article{160,
  author = {Jude Adeleke and Deshen Moodley and Gavin Rens and A.O. Adewumi},
  title = {Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control},
  abstract = {Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short-term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short-term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half-hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.},
  year = {2017},
  journal = {Sensors},
  volume = {17},
  pages = {1-23},
  issue = {4},
  publisher = {MDPI},
  issn = {1424-8220},
  url = {http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf},
}
Rens G, Meyer T, Moodley D. A Stochastic Belief Management Architecture for Agent Control. 2017. http://pubs.cs.uct.ac.za/archive/00001201/01/AGA_2017_Rens_et_al.pdf.

We propose an architecture for agent control, where the agent stores its beliefs and environment models as logical sentences. Given successive observations, the agent’s current state (of beliefs) is maintained by a combination of probability, POMDP and belief change theory. Two existing logics are employed for knowledge representation and reasoning: the stochastic decision logic of Rens et al. (2015) and p-logic of Zhuang et al. (2017) (a restricted version of a logic designed by Fagin et al. (1990)). The proposed architecture assumes two streams of observations: active observations, which correspond to agent intentions, and passive observations, which are received without the agent’s direct involvement. Stochastic uncertainty and ignorance due to lack of information are both dealt with in the architecture. Planning and learning of environment models are assumed present but are not covered in this proposal.

@misc{155,
  author = {Gavin Rens and Thomas Meyer and Deshen Moodley},
  title = {A Stochastic Belief Management Architecture for Agent Control},
  abstract = {We propose an architecture for agent control, where the agent stores its beliefs and environment models as logical sentences. Given successive observations, the agent’s current state (of beliefs) is maintained by a combination of probability, POMDP and belief change theory. Two existing logics are employed for knowledge representation and reasoning: the stochastic decision logic of Rens et al. (2015) and p-logic of Zhuang et al. (2017) (a restricted version of a logic designed by Fagin et al. (1990)). The proposed architecture assumes two streams of observations: active observations, which correspond to agent intentions, and passive observations, which are received without the agent’s direct involvement. Stochastic uncertainty and ignorance due to lack of information are both dealt with in the architecture. Planning and learning of environment models are assumed present but are not covered in this proposal.},
  year = {2017},
  url = {http://pubs.cs.uct.ac.za/archive/00001201/01/AGA_2017_Rens_et_al.pdf},
}
Coetzer W, Moodley D. A knowledge-based system for generating interaction networks from ecological data. Data & Knowledge Engineering. 2017;112. doi:http://dx.doi.org/10.1016/j.datak.2017.09.005.

Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.

@article{154,
  author = {Willem Coetzer and Deshen Moodley},
  title = {A knowledge-based system for generating interaction networks from ecological data},
  abstract = {Semantic heterogeneity hampers efforts to find, integrate, analyse and interpret ecological data. An application case-study is described, in which the objective was to automate the integration and interpretation of heterogeneous, flower-visiting ecological data. A prototype knowledge-based system is described and evaluated. The system's semantic architecture uses a combination of ontologies and a Bayesian network to represent and reason with qualitative, uncertain ecological data and knowledge. This allows the high-level context and causal knowledge of behavioural interactions between individual plants and insects, and consequent ecological interactions between plant and insect populations, to be discovered. The system automatically assembles ecological interactions into a semantically consistent interaction network (a new design of a useful, traditional domain model). We discuss the contribution of probabilistic reasoning to knowledge discovery, the limitations of knowledge discovery in the application case-study, the impact of the work and the potential to apply the system design to the study of ecological interaction networks in general.},
  year = {2017},
  journal = {Data & Knowledge Engineering},
  volume = {112},
  pages = {55-78},
  publisher = {Elsevier},
  issn = {0169-023X},
  url = {http://pubs.cs.uct.ac.za/archive/00001220/01/coetzer-et-al-DKE-2017.pdf},
  doi = {10.1016/j.datak.2017.09.005},
}
Kroon S, Heavens A, Fantaye Y, et al. No evidence for extensions to the standard cosmological model. Physical Review Letters. 2017;119. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.101301.

No Abstract

@article{152,
  author = {Steve Kroon and A. Heavens and Y. Fantaye and E. Sellentin and H. Eggers and Z. Hosenie and A. Mootoovaloo},
  title = {No evidence for extensions to the standard cosmological model},
  abstract = {No Abstract},
  year = {2017},
  journal = {Physical Review Letters},
  volume = {119},
  pages = {101301-101305},
  publisher = {American Physical Society},
  url = {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.101301},
}
Kroon S, Yoon M, Bekker J. New reinforcement learning algorithm for robot soccer. Orion. 2017;33(1). http://orion.journals.ac.za/pub/article/view/542.

Reinforcement Learning (RL) is a powerful technique to develop intelligent agents in the field of Artificial Intelligence (AI). This paper proposes a new RL algorithm called the Temporal-Difference value iteration algorithm with state-value functions and presents applications of this algorithm to the decision-making problems encountered in the RoboCup Small Size League (SSL) domain. Six scenarios were defined to develop shooting skills for an SSL soccer robot in various situations using the proposed algorithm. Furthermore, an Artificial Neural Network (ANN) model, namely the Multi-Layer Perceptron (MLP), was used as a function approximator in each application. The experimental results showed that the proposed RL algorithm effectively trained the RL agent to acquire good shooting skills. The RL agent showed good performance under specified experimental conditions.

@article{151,
  author = {Steve Kroon and M. Yoon and J. Bekker},
  title = {New reinforcement learning algorithm for robot soccer},
  abstract = {Reinforcement Learning (RL) is a powerful technique to develop intelligent agents in the field of Artificial Intelligence (AI). This paper proposes a new RL algorithm called the Temporal-Difference value iteration algorithm with state-value functions and presents applications of this algorithm to the decision-making problems encountered in the RoboCup Small Size League (SSL) domain. Six scenarios were defined to develop shooting skills for an SSL soccer robot in various situations using the proposed algorithm. Furthermore, an Artificial Neural Network (ANN) model, namely the Multi-Layer Perceptron (MLP), was used as a function approximator in each application. The experimental results showed that the proposed RL algorithm effectively trained the RL agent to acquire good shooting skills. The RL agent showed good performance under specified experimental conditions.},
  year = {2017},
  journal = {Orion},
  volume = {33},
  pages = {1-20},
  issue = {1},
  publisher = {Operations Research Society of South Africa (ORSSA)},
  address = {South Africa},
  issn = {2224-0004 (online)},
  url = {http://orion.journals.ac.za/pub/article/view/542},
}
Rens G, Moodley D. A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching. Cognitive Systems Research. 2017;43. doi:http://dx.doi.org/10.1016/j.cogsys.2016.12.002.

This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.

@article{147,
  author = {Gavin Rens and Deshen Moodley},
  title = {A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching},
  abstract = {This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue and manage multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture is implemented and is evaluated in a simulated environment. The results of the experiments show that the improved hybrid architecture outperforms the standard POMDP architecture and the previous basic hybrid architecture for both processing speed and effectiveness of the agent in reaching its goals.},
  year = {2017},
  journal = {Cognitive Systems Research},
  volume = {43},
  pages = {1-20},
  publisher = {Elsevier B.V.},
  issn = {1389-0417},
  doi = {10.1016/j.cogsys.2016.12.002},
}

2016

Ojeme B, Mbogho A, Meyer T. Probabilistic Expert Systems for Reasoning in Clinical Depressive Disorders. In: 15th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE; 2016. doi:10.1109/ICMLA.2016.0105.

Like other real-world problems, reasoning in clinical depression presents cognitive challenges for clinicians. This is due to the presence of co-occurring diseases, incomplete data, uncertain knowledge, and the vast amount of data to be analysed. Current approaches rely heavily on the experience, knowledge, and subjective opinions of clinicians, creating scalability issues. Automating this process requires a good knowledge representation technique to capture the knowledge of the domain experts, and multidimensional inferential reasoning approaches that can utilise a few bits and pieces of information for efficient reasoning. This study presents a knowledge-based system with variants of Bayesian network models for efficient inferential reasoning, translating from available fragmented depression data to the desired information in a visually interpretable and transparent manner. Mutual information, a conditional independence test-based method, was used to learn the classifiers.

@inproceedings{361,
  author = {Blessing Ojeme and Audrey Mbogho and Thomas Meyer},
  title = {Probabilistic Expert Systems for Reasoning in Clinical Depressive Disorders},
  abstract = {Like other real-world problems, reasoning in clinical depression presents cognitive challenges for clinicians. This is due to the presence of co-occurring diseases, incomplete data, uncertain knowledge, and the vast amount of data to be analysed. Current approaches rely heavily on the experience, knowledge, and subjective opinions of clinicians, creating scalability issues. Automating this process requires a good knowledge representation technique to capture the knowledge of the domain experts, and multidimensional inferential reasoning approaches that can utilise a few bits and pieces of information for efficient reasoning. This study presents a knowledge-based system with variants of Bayesian network models for efficient inferential reasoning, translating from available fragmented depression data to the desired information in a visually interpretable and transparent manner. Mutual information, a conditional independence test-based method, was used to learn the classifiers.},
  year = {2016},
  booktitle = {15th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  month = {18/12 - 20/12},
  publisher = {IEEE},
  doi = {10.1109/ICMLA.2016.0105},
}
Casini G, Meyer T. Using Defeasible Information to Obtain Coherence. In: Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR). AAAI Press; 2016. https://dl.acm.org/doi/10.5555/3032027.3032097.

We consider the problem of obtaining coherence in a propositional knowledge base using techniques from Belief Change. Our motivation comes from the field of formal ontologies where coherence is interpreted to mean that a concept name has to be satisfiable. In the propositional case we consider here, this translates to a propositional formula being satisfiable. We define belief change operators in a framework of nonmonotonic preferential reasoning. We show how the introduction of defeasible information using contraction operators can be an effective means for obtaining coherence.

@inproceedings{360,
  author = {Giovanni Casini and Thomas Meyer},
  title = {Using Defeasible Information to Obtain Coherence},
  abstract = {We consider the problem of obtaining coherence in a propositional knowledge base using techniques from Belief Change. Our motivation comes from the field of formal ontologies where coherence is interpreted to mean that a concept name has to be satisfiable. In the propositional case we consider here, this translates to a propositional formula being satisfiable. We define belief change operators in a framework of nonmonotonic preferential reasoning. We show how the introduction of defeasible information using contraction operators can be an effective means for obtaining coherence.},
  year = {2016},
  booktitle = {Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR)},
  pages = {537-540},
  month = {25/04 - 29/04},
  publisher = {AAAI Press},
  url = {https://dl.acm.org/doi/10.5555/3032027.3032097},
}
Rens G, Casini G, Meyer T. On Revision of Partially Specified Convex Probabilistic Belief Bases. In: European Conference on Artificial Intelligence (ECAI). IOS Press; 2016. https://www.researchgate.net/publication/307577667_On_Revision_of_Partially_Specified_Convex_Probabilistic_Belief_Bases.

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy methods are reasonable, yet yield different results.

@inproceedings{359,
  author = {Gavin Rens and Giovanni Casini and Thomas Meyer},
  title = {On Revision of Partially Specified Convex Probabilistic Belief Bases},
  abstract = {We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy methods are reasonable, yet yield different results.},
  year = {2016},
  booktitle = {European Conference on Artificial Intelligence (ECAI)},
  pages = {921-929},
  month = {29/08 - 2/09},
  publisher = {IOS Press},
  url = {https://www.researchgate.net/publication/307577667_On_Revision_of_Partially_Specified_Convex_Probabilistic_Belief_Bases},
}
Van Niekerk DR. Syllabification for Afrikaans speech synthesis. In: Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech). Stellenbosch, South Africa; 2016. doi:10.1109/RoboMech.2016.7813143.

This paper describes the continuing development of a pronunciation resource for speech synthesis of Afrikaans by augmenting an existing pronunciation dictionary to include syllable boundaries and stress. Furthermore, different approaches for grapheme to phoneme conversion and syllabification derived from the dictionary are evaluated. Cross-validation experiments suggest that joint sequence models are effective at directly modelling pronunciations including syllable boundaries. Finally, some informal observations and demonstrations are presented regarding the integration of this work into a typical text-to-speech system.

@inproceedings{285,
  author = {Daniel Van Niekerk},
  title = {Syllabification for Afrikaans speech synthesis},
  abstract = {This paper describes the continuing development of a pronunciation resource for speech synthesis of Afrikaans by augmenting an existing pronunciation dictionary to include syllable boundaries and stress. Furthermore, different approaches for grapheme to phoneme conversion and syllabification derived from the dictionary are evaluated. Cross-validation experiments suggest that joint sequence models are effective at directly modelling pronunciations including syllable boundaries. Finally, some informal observations and demonstrations are presented regarding the integration of this work into a typical text-to-speech system.},
  year = {2016},
  booktitle = {Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech)},
  pages = {31-36},
  address = {Stellenbosch, South Africa},
  isbn = {978-1-5090-3335-5},
  doi = {10.1109/RoboMech.2016.7813143},
}
Kleynhans N, Hartman W, Van Niekerk DR, et al. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Procedia Computer Science. 2016;81. doi:10.1016/j.procs.2016.04.040.

We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically deteriorates STD performance. Analysis is performed in the context of the IARPA Babel program which focuses on rapid STD system development for under-resourced languages. Our results show that approaches that specifically target the modeling of code-switched words significantly improve the detection performance of these words.

@article{271,
  author = {Neil Kleynhans and William Hartman and Daniel Van Niekerk and Charl Van Heerden and Richard Schwartz and Stavros Tsakalidis and Marelie Davel},
  title = {Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection},
  abstract = {We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically deteriorates STD performance. Analysis is performed in the context of the IARPA Babel program which focuses on rapid STD system development for under-resourced languages. Our results show that approaches that specifically target the modeling of code-switched words significantly improve the detection performance of these words.},
  year = {2016},
  journal = {Procedia Computer Science},
  volume = {81},
  pages = {128-135},
  publisher = {Elsevier B.V.},
  address = {Yogyakarta, Indonesia},
  issn = {1877-0509},
  doi = {10.1016/j.procs.2016.04.040},
}
Leenen L, Meyer T. Semantic Technologies and Big Data: Analytics for Cyber Defence. International Journal of Cyber Warfare and Terrorism. 2016;6(3).

Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that have to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of the decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide counter-measures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.

@article{229,
  author = {Louise Leenen and Thomas Meyer},
  title = {Semantic Technologies and Big Data: Analytics for Cyber Defence},
  abstract = {Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that have to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of the decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide counter-measures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.},
  year = {2016},
  journal = {International Journal of Cyber Warfare and Terrorism},
  volume = {6},
  issue = {3},
}
van Niekerk L, Watson B. The Development and Evaluation of an Electronic Serious Game Aimed at the Education of Core Programming Skills. MA thesis; 2016. http://hdl.handle.net/10019.1/100119.

No Abstract

@mastersthesis{207,
  author = {L. van Niekerk and Bruce Watson},
  title = {The Development and Evaluation of an Electronic Serious Game Aimed at the Education of Core Programming Skills},
  abstract = {No Abstract},
  year = {2016},
  volume = {MA},
  url = {http://hdl.handle.net/10019.1/100119},
}
Kala JR, Viriri S, Moodley D. Leaf Classification Using Convexity Moments of Polygons. In: International Symposium on Visual Computing; 2016.

Research has shown that shape features can be used in the process of object recognition with promising results. However, due to the wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature, the Convexity Moments of Polygons, which is derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity Measure of Polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons, against 80% with the Convexity Measure of Polygons, using the Radial Basis Function (RBF) neural network classifier.

@inproceedings{161,
  author = {J.R. Kala and S. Viriri and Deshen Moodley},
  title = {Leaf Classification Using Convexity Moments of Polygons},
  abstract = {Research has shown that shape features can be used in the process of object recognition with promising results. However, due to the wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature, the Convexity Moments of Polygons, which is derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature compared to the Convexity Measure of Polygons in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons, against 80% with the Convexity Measure of Polygons, using the Radial Basis Function (RBF) neural network classifier.},
  year = {2016},
  booktitle = {International Symposium on Visual Computing},
  pages = {300-339},
  month = {14/12-16/12},
  isbn = {978-3-319-50832-0},
}
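The area-based convexity measure that the proposed Convexity Moments feature builds on has a compact definition: the ratio of a polygon's area to the area of its convex hull. The Python sketch below illustrates that standard measure only (it is not the authors' implementation, nor the Moments feature itself), using the shoelace formula and Andrew's monotone chain hull:

```python
def polygon_area(pts):
    """Area of a simple polygon via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(pts):
    """Convex hull via Andrew's monotone chain algorithm."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity_measure(pts):
    """Area-based convexity: area(P) / area(convex hull of P), in (0, 1]."""
    return polygon_area(pts) / polygon_area(convex_hull(pts))
```

For a convex polygon the measure is exactly 1; indentations in the outline pull it toward 0, which is what makes it a useful shape descriptor for leaf outlines.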
Coetzer W, Moodley D, Gerber A. Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data. PLoS ONE. 2016;11(11). http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf.

Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower-visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.

@article{159,
  author = {Willem Coetzer and Deshen Moodley and Aurona Gerber},
  title = {Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data},
  abstract = {Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower-visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.},
  year = {2016},
  journal = {PLoS ONE},
  volume = {11},
  pages = {1-15},
  issue = {11},
  url = {http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf},
}
Waltham M, Moodley D. An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments. In: Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). Johannesburg: ACM; 2016. doi: http://dx.doi.org/10.1145/2987491.2987513.

The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.

@inproceedings{157,
  author = {Michael Waltham and Deshen Moodley},
  title = {An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments},
  abstract = {The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.},
  year = {2016},
  booktitle = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {45},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {http://dx.doi.org/10.1145/2987491.2987513},
}
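As a rough illustration of the kind of mechanism the paper proposes (tracking an NPC's emotional state and using it to gate decisions in a behaviour tree), here is a minimal Python sketch. The `fear` variable, the node classes, and the 0.7 threshold are hypothetical illustrations, not taken from the paper:

```python
class Node:
    """Base class for behaviour tree nodes."""
    def tick(self, npc):
        raise NotImplementedError

class Selector(Node):
    """Composite node: runs children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return any(child.tick(npc) for child in self.children)

class Action(Node):
    """Leaf node: fires when its emotion-based condition holds."""
    def __init__(self, name, condition):
        self.name, self.condition = name, condition
    def tick(self, npc):
        if self.condition(npc):
            npc.last_action = self.name
            return True
        return False

class NPC:
    """Tracks a single scalar emotion ('fear') updated by game events."""
    def __init__(self):
        self.fear = 0.0
        self.last_action = None
    def on_event(self, delta):
        # Clamp the emotion to [0, 1] after each stimulus.
        self.fear = min(1.0, max(0.0, self.fear + delta))

# A tiny tree: flee when frightened, otherwise attack.
tree = Selector(Action("flee", lambda n: n.fear > 0.7),
                Action("attack", lambda n: n.fear <= 0.7))
```

Each game event nudges the emotion value, and the same tree then selects a different branch, which is the essence of letting emotional state influence NPC decision making.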
Clark A, Moodley D. A System for a Hand Gesture-Manipulated Virtual Reality Environment. In: Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). Johannesburg: ACM; 2016. doi: http://dx.doi.org/10.1145/2987491.2987511.

Extensive research has been done using machine learning techniques for hand gesture recognition (HGR) with camera-based devices such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in Unreal Engine 4. An average gesture classification time of 0.057ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of applying HGR machine learning techniques, previously used in non-VR scenarios such as Sign Language recognition, to VR.

@inproceedings{156,
  author = {A. Clark and Deshen Moodley},
  title = {A System for a Hand Gesture-Manipulated Virtual Reality Environment},
  abstract = {Extensive research has been done using machine learning techniques for hand gesture recognition (HGR) with camera-based devices such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in Unreal Engine 4. An average gesture classification time of 0.057ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of applying HGR machine learning techniques, previously used in non-VR scenarios such as Sign Language recognition, to VR.},
  year = {2016},
  booktitle = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {10},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {http://dx.doi.org/10.1145/2987491.2987511},
}
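The pipeline described in the abstract (a five-element normalized tip-to-palm feature vector fed to a kNN classifier) can be sketched in a few lines of Python. The function names, the example labels, and the `k=3` choice are illustrative assumptions, not the authors' code:

```python
import math
from collections import Counter

def feature_vector(palm, fingertips):
    """Five tip-to-palm distances, normalised so the largest is 1."""
    dists = [math.dist(palm, tip) for tip in fingertips]
    m = max(dists)
    return [d / m for d in dists] if m else dists

def knn_classify(sample, training, k=3):
    """Majority vote among the k nearest training feature vectors.

    `training` is a list of (feature_vector, label) pairs.
    """
    nearest = sorted(training, key=lambda t: math.dist(sample, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Normalising by the largest distance makes the feature vector invariant to hand size and distance from the sensor, which is one plausible reason such a lightweight descriptor classifies quickly.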