Research Publications

2019

Casini G, Straccia U, Meyer T. A Polynomial Time Subsumption Algorithm for Nominal Safe ELO⊥ under Rational Closure. Information Sciences. 2019;501. doi:10.1016/j.ins.2018.09.037.

Rational Closure (RC) is a well-known framework for non-monotonic reasoning in Description Logics (DLs). In this paper, we address the concept subsumption decision problem under RC for nominal safe ELO⊥, a notable and practically important DL representative of the OWL 2 profile OWL 2 EL. Our contribution here is to define a polynomial time subsumption procedure for nominal safe ELO⊥ under RC that relies entirely on a series of classical, monotonic EL⊥ subsumption tests. Therefore, any existing classical monotonic EL⊥ reasoner can be used as a black box to implement our method. We then also adapt the method to one of the known extensions of RC for DLs, namely Defeasible Inheritance-based DLs, without losing computational tractability.

@article{221,
  author = {Giovanni Casini and Umberto Straccia and Thomas Meyer},
  title = {A Polynomial Time Subsumption Algorithm for Nominal Safe ELO⊥ under Rational Closure},
  abstract = {Description Logics (DLs) under Rational Closure (RC) is a well-known framework for non-monotonic reasoning in DLs. In this paper, we address the concept subsumption decision problem under RC for nominal safe ELO⊥, a notable and practically important DL representative of the OWL 2 profile OWL 2 EL. Our contribution here is to define a polynomial time subsumption procedure for nominal safe ELO⊥ under RC that relies entirely on a series of classical, monotonic EL⊥ subsumption tests. Therefore, any existing classical monotonic EL⊥ reasoner can be used as a black box to implement our method. We then also adapt the method to one of the known extensions of RC for DLs, namely Defeasible Inheritance-based DLs without losing the computational tractability.},
  year = {2019},
  journal = {Information Sciences},
  volume = {501},
  pages = {588 - 620},
  publisher = {Elsevier},
  isbn = {0020-0255},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025518307436},
  doi = {https://doi.org/10.1016/j.ins.2018.09.037},
}
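The reduction at the heart of this result, ranking defeasible inclusions by exceptionality using nothing but calls to a classical reasoner, can be sketched generically. The sketch below is not the paper's ELO⊥ procedure; it is the standard rational-closure ranking construction, with hypothetical `subsumed` and `materialise` callbacks standing in for whatever classical subsumption checker is plugged in as the black box.

```python
# Generic sketch of the rational-closure ranking construction (not the
# paper's ELO⊥-specific procedure).  The classical reasoner is a black box:
# `subsumed(axioms, c, d)` is a hypothetical callback deciding classical
# subsumption "c is subsumed by d" with respect to `axioms`.

def rank_defeasible_axioms(strict, defeasible, subsumed, materialise, bottom="Nothing"):
    """Partition defeasible inclusions (C, D) into exceptionality ranks.

    strict:      set of classical axioms, passed to the reasoner unchanged
    defeasible:  set of pairs (C, D), read as "C usually implies D"
    subsumed:    callable(axioms, c, d) -> bool, a classical reasoner
    materialise: callable(defeasible_subset) -> set of classical axioms
                 encoding the defeasible inclusions (their materialisation)
    """
    ranks = []
    remaining = set(defeasible)
    while remaining:
        axioms = strict | materialise(remaining)
        # C is exceptional if, classically, it can have no instances at all.
        exceptional = {(c, d) for (c, d) in remaining if subsumed(axioms, c, bottom)}
        if exceptional == remaining:   # everything left is exceptional:
            ranks.append(remaining)    # these axioms get the top ("infinite") rank
            break
        ranks.append(remaining - exceptional)   # rank i: the non-exceptional axioms
        remaining = exceptional
    return ranks
```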
Leenen L, Meyer T. Artificial Intelligence and Big Data Analytics in Support of Cyber Defense. In: Developments in Information Security and Cybernetic Wars. United States of America: Information Science Reference, IGI Global; 2019. doi:10.4018/978-1-5225-8304-2.ch002.

Cybersecurity analysts rely on vast volumes of security event data to predict, identify, characterize, and deal with security threats. These analysts must understand and make sense of these huge datasets in order to discover patterns which lead to intelligent decision making and advance warnings of possible threats, and this ability requires automation. Big data analytics and artificial intelligence can improve cyber defense. Big data analytics methods are applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends, and other useful information. Artificial intelligence provides algorithms that can reason or learn and improve their behavior, and includes semantic technologies. A large number of automated systems are currently based on syntactic rules which are generally not sophisticated enough to deal with the level of complexity in this domain. An overview of artificial intelligence and big data technologies in cyber defense is provided, and important areas for future research are identified and discussed.

@inbook{220,
  author = {Louise Leenen and Thomas Meyer},
  title = {Artificial Intelligence and Big Data Analytics in Support of Cyber Defense},
  abstract = {Cybersecurity analysts rely on vast volumes of security event data to predict, identify, characterize, and deal with security threats. These analysts must understand and make sense of these huge datasets in order to discover patterns which lead to intelligent decision making and advance warnings of possible threats, and this ability requires automation. Big data analytics and artificial intelligence can improve cyber defense. Big data analytics methods are applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends, and other useful information. Artificial intelligence provides algorithms that can reason or learn and improve their behavior, and includes semantic technologies. A large number of automated systems are currently based on syntactic rules which are generally not sophisticated enough to deal with the level of complexity in this domain. An overview of artificial intelligence and big data technologies in cyber defense is provided, and important areas for future research are identified and discussed.},
  year = {2019},
  journal = {Developments in Information Security and Cybernetic Wars},
  pages = {42 - 63},
  publisher = {Information Science Reference, IGI Global},
  address = {United States of America},
  isbn = {9781522583042},
  doi = {10.4018/978-1-5225-8304-2.ch002},
}
Booth R, Casini G, Meyer T, Varzinczak I. On rational entailment for Propositional Typicality Logic. Artificial Intelligence. 2019;277. doi:10.1016/j.artint.2019.103178.

Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator capturing the most typical (alias normal or conventional) situations in which a given sentence holds. The semantics of PTL is in terms of ranked models as studied in the well-known KLM approach to preferential reasoning and therefore KLM-style rational consequence relations can be embedded in PTL. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate in many contexts. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we investigate three different (semantic) versions of entailment for PTL, each one based on the definition of rational closure as introduced by Lehmann and Magidor for KLM-style conditionals, and constructed using different notions of minimality.

@article{219,
  author = {Richard Booth and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {On rational entailment for Propositional Typicality Logic},
  abstract = {Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator capturing the most typical (alias normal or conventional) situations in which a given sentence holds. The semantics of PTL is in terms of ranked models as studied in the well-known KLM approach to preferential reasoning and therefore KLM-style rational consequence relations can be embedded in PTL. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate in many contexts. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we investigate three different (semantic) versions of entailment for PTL, each one based on the definition of rational closure as introduced by Lehmann and Magidor for KLM-style conditionals, and constructed using different notions of minimality.},
  year = {2019},
  journal = {Artificial Intelligence},
  volume = {277},
  pages = {103178},
  publisher = {Elsevier},
  isbn = {0004-3702},
  url = {https://www.sciencedirect.com/science/article/abs/pii/S000437021830506X?via%3Dihub},
  doi = {https://doi.org/10.1016/j.artint.2019.103178},
}
Mbonye V, Price CS. A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus. In: 2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019). Danvers, MA: IEEE; 2019.

Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality. Brady & Cronin Jr’s service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality. Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature. 373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students. Factor analysis was used to confirm the sub-constructs. Multiple regression analysis was used to test the model’s ability to predict Wi-Fi service quality. Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52). All the constructs were rated above the neutral score of 4. In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality. Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R² of 59.5%. However, when models for individual most-used locations (the library and lecture venues) were constructed, the R² improved. The model can be used to understand users’ perceptions of outcome quality, physical environment quality and interaction quality, which influence the quality of Wi-Fi performance, and to evaluate the Wi-Fi performance quality of different locations.

@{217,
  author = {V. Mbonye and C. Sue Price},
  title = {A model to evaluate the quality of Wi-Fi performance: Case study at UKZN Westville campus},
  abstract = {Understanding how satisfied users are with services is very important in the delivery of quality services and in improving them. While studies have investigated perceptions of Wi-Fi among students, there is still a gap in understanding the overall perception of quality of service in terms of the different factors that may affect Wi-Fi service quality.  Brady & Cronin Jr’s service quality model proposes that outcome quality, physical environment quality and interaction quality affect service quality.  Sub-constructs for the independent variables were generated, and Likert-scale items developed for each sub-construct, based on the literature.  373 questionnaires were administered to University of KwaZulu-Natal (UKZN) Westville campus students.  Factor analysis was to confirm the sub-constructs.  Multiple regression analysis was used to test the model’s ability to predict Wi-Fi service quality.
Of the three independent constructs, the outcome quality mean had the highest value (4.53), and it was similar to how the students rated service quality (4.52).  All the constructs were rated at above the neutral score of 4.  In the factor analysis, two physical environment quality items were excluded, and one service quality item was categorised with the expertise sub-construct of interaction quality.  Using multiple regression analysis, the model showed that the independent constructs predict service quality with an R2 of 59.5%.  However, when models for individual most-used locations (the library and lecture venues) were conducted, the R2 improved.  The model can be used to understand users’ perceptions of outcome quality, physical environment quality and interaction quality which influence the quality of Wi-Fi performance, and evaluate the Wi-Fi performance quality of different locations.},
  year = {2019},
  journal = {2nd International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD 2019)},
  pages = {291-297},
  month = {05/08 - 06/08},
  publisher = {IEEE},
  address = {Danvers MA},
  isbn = {978-1-5386-9235-6},
}
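The modelling step reported above (three construct scores predicting perceived service quality, R² of 59.5%) is an ordinary multiple linear regression; a minimal sketch with synthetic data and invented column meanings, not the study's questionnaire data:

```python
# Sketch of the regression used to test the model: outcome quality,
# physical environment quality and interaction quality as predictors of
# perceived Wi-Fi service quality.  Data and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 373                                    # number of questionnaires in the study
X = rng.uniform(1, 7, size=(n, 3))         # construct means on a 7-point Likert scale
true_coefs = np.array([0.6, 0.2, 0.3])     # invented "true" weights for the three constructs
y = X @ true_coefs + rng.normal(0, 0.5, n) # synthetic service-quality scores

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)
print("R^2:", r2_score(y, model.predict(X)))
```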
Nudelman Z, Moodley D, Berman S. Using Bayesian Networks and Machine Learning to Predict Computer Science Success. In: Annual Conference of the Southern African Computer Lecturers' Association. Springer; 2019. https://link.springer.com/chapter/10.1007/978-3-030-05813-5_14.

Bayesian Networks and Machine Learning techniques were evaluated and compared for predicting academic performance of Computer Science students at the University of Cape Town. Bayesian Networks performed similarly to other classification models. The causal links inherent in Bayesian Networks allow for understanding of the contributing factors for academic success in this field. The most effective indicators of success in first-year ‘core’ courses in Computer Science included the student’s scores for Mathematics and Physics as well as their aptitude for learning and their work ethos. It was found that unsuccessful students could be identified with ≈ 91% accuracy. This could help to increase throughput as well as student wellbeing at university.

@{216,
  author = {Z. Nudelman and Deshen Moodley and S. Berman},
  title = {Using Bayesian Networks and Machine Learning to Predict Computer Science Success},
  abstract = {Bayesian Networks and Machine Learning techniques were evaluated and compared for predicting academic performance of Computer Science students at the University of Cape Town. Bayesian Networks performed similarly to other classification models. The causal links inherent in Bayesian Networks allow for understanding of the contributing factors for academic success in this field. The most effective indicators of success in first-year ‘core’ courses in Computer Science included the student’s scores for Mathematics and Physics as well as their aptitude for learning and their work ethos. It was found that unsuccessful students could be identified with   ≈ 91% accuracy. This could help to increase throughput as well as student wellbeing at university.},
  year = {2019},
  journal = {Annual Conference of the Southern African Computer Lecturers' Association},
  pages = {207-222},
  month = {18/06/2018 - 20/06/2018},
  publisher = {Springer},
  isbn = {978-3-030-05813-5},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-05813-5_14},
}
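As a rough, hypothetical stand-in for the comparison reported above (a Bayesian classifier against other classification models for pass/fail prediction), the sketch below trains two scikit-learn classifiers on invented admission-style features and compares cross-validated accuracy:

```python
# Hypothetical sketch: compare a (naive) Bayesian classifier with logistic
# regression for predicting pass/fail, in the spirit of the study above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 500
# Invented features: Mathematics mark, Physics mark, aptitude-test score.
X = rng.uniform(40, 100, size=(n, 3))
# Invented rule: students with a high weighted score tend to pass.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 5, n) > 70).astype(int)

for name, clf in [("GaussianNB", GaussianNB()),
                  ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.2f}")
```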

2018

van der Merwe A, Gerber A. Guidelines for using Bloom’s Taxonomy Table as Alignment Tool between Goals and Assessment. In: SACLA. Springer; 2018.

In academia, lecturers are often appointed based on their research profile and not their teaching and learning (T&L) experience. Although universities do emphasize T&L, it might often not even be mentioned during interviews. In the field of education, lecturers are more aware of using tools such as Bloom’s Taxonomy during their T&L activities. However, in the field of information systems, few academic papers are available on how lecturers can align their goals with the assessment in their courses. In this paper, Bloom’s Taxonomy Table was used to evaluate the alignment of the goals of the case and the assessment done on a fourth-year level subject offered in the information systems field. The purpose of the paper was firstly to reflect on the practice of using Bloom’s Taxonomy Table as an evaluation tool and secondly to provide a set of guidelines for lecturers who want to use Bloom’s Taxonomy Table in alignment studies.

@{260,
  author = {Alta van der Merwe and Aurona Gerber},
  title = {Guidelines for using Bloom’s Taxonomy Table as Alignment Tool between Goals and Assessment},
  abstract = {In academia lecturers are often appointed based on their research profile and not their teaching and learning (T&L) experience. Although universities do emphasize T&L, it might often not even be mentioned during interviews. In the field of education lecturers are more aware of using tools such as Bloom’s Taxonomy during their T&L activities. However, in the field of information systems limited academic papers are available on how lecturers can align their goals with the assessment in their courses. In this paper Bloom’s Taxonomy Table was used to evaluate the alignment of goals of the case and the assessment done on a fourth-year level subject offered in the information systems field. The purpose of the paper was firstly to reflect on the practice of using Bloom’s Taxonomy Table as an evaluation tool and then secondly to provide a set of guidelines for lecturers who want to use Bloom’s Taxonomy Table in alignment studies.},
  year = {2018},
  journal = {SACLA},
  pages = {278 - 290},
  month = {18/06 - 20/06},
  publisher = {Springer},
  isbn = {978-0-720-80192-8},
}
Thomas A, Gerber A, van der Merwe A. Ontology-Based Spatial Pattern Recognition in Diagrams. In: Artificial Intelligence Applications and Innovations. Springer; 2018. https://www.springer.com/us/book/9783319920061.

Diagrams are widely used in our day to day communication. A knowledge of the spatial patterns used in diagrams is essential to read and understand them. In the context of diagrams, spatial patterns mean accepted spatial arrangements of graphical and textual elements used to represent diagram-specific concepts. In order to assist with the automated understanding of diagrams by computer applications, this paper presents an ontology-based approach to recognise diagram-specific concepts from the spatial patterns in diagrams. Specifically, relevant spatial patterns of diagrams are encoded in an ontology, and the automated instance classification feature of the ontology reasoners is utilised to map spatial patterns to diagram-specific concepts depicted in a diagram. A prototype of this approach to support automated recognition of UML and domain concepts from class diagrams and its performance are also discussed in this paper. This paper concludes with a reflection of the strengths and limitations of the proposed approach.

@{257,
  author = {Anitta Thomas and Aurona Gerber and Alta van der Merwe},
  title = {Ontology-Based Spatial Pattern Recognition in Diagrams},
  abstract = {Diagrams are widely used in our day to day communication. A knowledge of the spatial patterns used in diagrams is essential to read and understand them. In the context of diagrams, spatial patterns mean accepted spatial arrangements of graphical and textual elements used to represent diagram-specific concepts. In order to assist with the automated understanding of diagrams by computer applications, this paper presents an ontology-based approach to recognise diagram-specific concepts from the spatial patterns in diagrams. Specifically, relevant spatial patterns of diagrams are encoded in an ontology, and the automated instance classification feature of the ontology reasoners is utilised to map spatial patterns to diagram-specific concepts depicted in a diagram. A prototype of this approach to support automated recognition of UML and domain concepts from class diagrams and its performance are also discussed in this paper. This paper concludes with a reflection of the strengths and limitations of the proposed approach.},
  year = {2018},
  journal = {Artificial Intelligence Applications and Innovations},
  pages = {61 -72},
  month = {25/05 - 27/05},
  publisher = {Springer},
  isbn = {978-3-319-92007-8},
  url = {https://www.springer.com/us/book/9783319920061},
}
Gerber A. Computational Ontologies as Classification Artifacts in IS Research. In: AMCIS; 2018. https://aisel.aisnet.org/amcis2018/Semantics/Presentations/5/.

Based on previous work on classification in IS research, this paper reports on an experimental investigation into the use of computational ontologies as classification artifacts, given the classification approaches identified in information systems (IS) research. The set-theoretical basis of computational ontologies ensures particular suitability for classification functions, and classification was identified as an accepted approach to develop contributions to IS research. The main contribution of the paper is a set of guidelines that IS researchers could use when adopting a classification approach and constructing an ontology as the resulting classification artifact. The guidelines were extracted by mapping an ontology construction approach to the classification approaches of IS research. Ontology construction approaches have been developed in response to the significant adoption of computational ontologies in the broad field of computing and IS since the acceptance of the W3C standards for ontology languages. These W3C standards also resulted in the development of tools such as ontology editors and reasoners. The advantages of using computational ontologies as classification artifacts thus include standardized representation, as well as the availability of associated technologies such as reasoners that could, for instance, ensure that implicit assumptions are made explicit and that the ontology is consistent and satisfiable. The research results from this experimental investigation extend the current work on classification in IS research.

@{256,
  author = {Aurona Gerber},
  title = {Computational Ontologies as Classification Artifacts in IS Research},
  abstract = {Based on previous work on classification in IS research, this paper reports on an experimental investigation into the use of computational ontologies as classification artifacts, given the classification approaches identified in information systems (IS) research. The set-theoretical basis of computational ontologies ensures particular suitability for classification functions, and classification was identified as an accepted approach to develop contributions to IS research. The main contribution of the paper is a set of guidelines that IS researchers could use when adopting a classification approach and constructing an ontology as the resulting classification artifact. The guidelines were extracted by mapping an ontology construction approach to the classification approaches of IS research. Ontology construction approaches have been developed in response to the significant adoption of computational ontologies in the broad field of computing and IS since the acceptance of the W3C standards for ontology languages. These W3C standards also resulted in the development of tools such as ontology editors and reasoners. The advantages of using computational ontologies as classification artifacts thus include standardized representation, as well as the availability of associated technologies such as reasoners that could, for instance, ensure that implicit assumptions are made explicit and that the ontology is consistent and satisfiable. The research results from this experimental investigation extend the current work on classification in IS research.},
  year = {2018},
  journal = {AMCIS},
  month = {16/09 - 18/09},
  isbn = {978-0-9966831-6-6},
  url = {https://aisel.aisnet.org/amcis2018/Semantics/Presentations/5/},
}
Moodley D, Pillay A, Seebregts C. Establishing a Health Informatics Research Laboratory in South Africa. In: Digital Re-imagination Colloquium 2018. NEMISA; 2018. http://uir.unisa.ac.za/bitstream/handle/10500/25615/Digital%20Skills%20Proceedings%202018.pdf?sequence=1&isAllowed=y.

Aim/Purpose: The aim of this project was to explore models for stimulating health informatics innovation and capacity development in South Africa.
Background: There is generally a critical lack of health informatics innovation and capacity in South Africa and sub-Saharan Africa. This is despite the wide anticipation that digital health systems will play a fundamental role in strengthening health systems and improving service delivery.
Methodology: We established a program over four years to train Masters and Doctoral students and conducted research projects across a wide range of biomedical and health informatics technologies at a leading South African university. We also developed a Health Architecture Laboratory Innovation and Development Ecosystem (HeAL-IDE) designed to be a long-lasting and potentially reproducible output of the project.
Contribution: We were able to demonstrate a successful model for building innovation and capacity in a sustainable way. Key outputs included: (i) a successful partnership model; (ii) a sustainable HeAL-IDE; (iii) research papers; (iv) a world-class software product and several demonstrators; and (v) highly trained staff.
Findings: Our main findings are that: (i) it is possible to create a local ecosystem for innovation and capacity building that creates value for the partners (a university and a private non-profit company); (ii) the ecosystem is able to create valuable outputs that would be much less likely to have been developed singly by each partner; and (iii) the ecosystem could serve as a powerful model for adoption in other settings.
Recommendations for Practitioners: Non-profit companies and non-governmental organizations implementing health information systems in South Africa and other low-resource settings have an opportunity to partner with local universities for purposes of internal capacity development and assisting with the research, reflection and innovation aspects of their projects and programmes.
Recommendations for Researchers: Applied health informatics researchers working in low-resource settings could productively partner with local implementing organizations in order to gain a better understanding of the challenges and requirements at field sites and to accelerate the testing and deployment of health information technology solutions.
Impact on Society: This research demonstrates a model that can deliver valuable software products for public health.
Future Research: It would be useful to implement the model in other settings and research whether the model is more generally useful.

@{252,
  author = {Deshen Moodley and Anban Pillay and Chris Seebregts},
  title = {Establishing a Health Informatics Research Laboratory in South Africa},
  abstract = {Aim/Purpose 
The aim of this project was to explore models for stimulating health
informatics innovation and capacity development in South Africa.
Background 
There is generally a critical lack of health informatics innovation and capacity in South Africa and sub-Saharan Africa. This is despite the wide anticipation that digital health systems will play a fundamental role in strengthening health systems and improving service delivery
Methodology 
We established a program over four years to train Masters and Doctoral students and conducted research projects across a wide range of biomedical and health informatics technologies at a leading South African university. We also developed a Health Architecture Laboratory Innovation and Development Ecosystem (HeAL-IDE) designed to be a long-lasting and potentially reproducible output of the project.
Contribution 
We were able to demonstrate a successful model for building innovation and capacity in a sustainable way. Key outputs included: (i)a successful partnership model; (ii) a sustainable HeAL-IDE; (iii) research papers; (iv) a world-class software product and several
demonstrators; and (iv) highly trained staff.
Findings 
Our main findings are that: (i) it is possible to create a local ecosystem for innovation and capacity building that creates value for the partners (a university and a private non-profit company); (ii) the ecosystem is able to create valuable outputs that would be much less likely to have been developed singly by each partner, and; (iii) the ecosystem could serve as a powerful model for adoption in other settings.
Recommendations for Practitioners
Non-profit companies and non-governmental organizations implementing health information systems in South Africa and other low resource settings have an opportunity to partner with local universities for purposes of internal capacity development and assisting with the research, reflection and innovation aspects of their projects and programmes.
Recommendation for Researchers
Applied health informatics researchers working in low resource settings could productively partner with local implementing organizations in order to gain a better understanding of the challenges and requirements at field sites and to accelerate the testing and deployment of health information technology solutions.
Impact on Society 
This research demonstrates a model that can deliver valuable software products for public health.
Future Research 
It would be useful to implement the model in other settings and research whether the model is more generally useful},
  year = {2018},
  journal = {Digital Re-imagination Colloquium 2018},
  pages = {16 - 24},
  month = {13/03 - 15/03},
  publisher = {NEMISA},
  isbn = {978-0-6399275-0-3},
  url = {http://uir.unisa.ac.za/bitstream/handle/10500/25615/Digital%20Skills%20Proceedings%202018.pdf?sequence=1&isAllowed=y},
}
Watson B. The impact of using a contract-driven, test-interceptor based software development approach. In: Annual conference of The South African Institute of Computer Scientists and Information Technologists (SAICSIT 2018). New York: ACM; 2018.

A contract-driven development approach requires the formalization of component requirements in the form of a component contract. The Use Case, Responsibility Driven Analysis and Design (URDAD) methodology is based on the contract-driven development approach and uses contracts to capture user requirements and perform a technology-neutral design across layers of granularity. This is achieved by taking use-case based functional requirements through an iterative design process and generating various levels of granularity iteratively. In this project, the component contracts that were captured by utilizing the URDAD approach are used to generate test interceptors which validate whether, in the context of rendering component services, the component contracts are satisfied. To achieve this, Java classes and interfaces are annotated with pre- and postconditions to represent the contracts in code. Annotation processors are then used to automatically generate test-interceptor classes by processing the annotations. The test-interceptor classes encapsulate test logic and are interface-compatible with their underlying component counterparts. This enables test-interceptors to intercept service requests to the underlying counterpart components in order to verify contract adherence. The generated test interceptors can be used for unit testing as well as real-time component monitoring. This development approach, utilized within the URDAD methodology, would then result in unit and integration tests across levels of granularity. Empirical data from actual software development projects will be used to assess the impact of introducing such a development approach in real software development projects. In particular, the study assesses the impact on the quality attributes of the software development process, as well as the qualities of the software produced by the process. Process qualities measured include development productivity (the rate at which software is produced), correctness (the rate at which the produced software meets the client's requirements) and the certifiability of the software development process (which certifiability requirements are fully or partially addressed by the URDAD development approach). Software qualities measured include reusability (empirical and qualitative), simplicity (the inverse of the complexity measure) and bug density (number of defects in a module). The study aims to show conclusively how the approach impacts the creation of correct software which meets the client's requirements, how productivity is affected and whether the approach enhances or hinders certifiability. The study also aims to determine whether test-interceptors are a viable mechanism to produce high-quality tests that contribute to the creation of correct software. Furthermore, the study aims to determine whether the software produced by applying this approach yields improved reusability or not, whether the software becomes more or less complex and whether more or fewer bugs are induced.

@{211,
  author = {Bruce Watson},
  title = {The impact of using a contract-driven, test-interceptor based software development approach},
  abstract = {A contract-driven development approach requires the formalization of component requirements in the form of a component contract. The Use Case, Responsibility Driven Analysis and Design (URDAD) methodology is based on the contract-driven development approach and uses contracts to capture user requirements and perform a technology-neutral design across layers of granularity. This is achieved by taking use-case based functional requirements through an iterative design process and generating various levels of granularity iteratively. 
In this project, the component contracts that were captured by utilizing the URDAD approach are used to generate test interceptors which validate whether, in the context of rendering component services, the component contracts are satisfied. To achieve this, Java classes and interfaces are annotated with pre- and postconditions to represent the contracts in code. Annotation processors are then used to automatically generate test-interceptor classes by processing the annotations. The test-interceptor classes encapsulate test-logic and are interface-compatible with their underlying component counterparts. This enable test-interceptors to intercept service requests to the underlying counterpart components in order to verify contract adherence. The generated test interceptors can be used for unit testing as well as real-time component monitoring. This development approach, utilized within the URDAD methodology would then result in unit and integration tests across levels of granularity. 
Empirical data from actual software development projects will be used to assess the impact of introducing such a development approach in real software development projects. In particular, the study assesses the impact on the quality attributes of the software development process, as well as the qualities of the software produced by the process.
Process qualities measured include development productivity (the rate at which software is produced), correctness (the rate at which the produced software meets the clients requirements) and the certifiability of the software  development process (which certifiability requirements are fully or partially addressed by the URDAD development approach). Software qualities measured include reusability (empirical and qualitative), simplicity (the inverse of the complexity measure) and bug density (number of defects in a module). 
The study aims to show conclusively how the approach impacts the creation of correct software which meets the client's requirements, how productivity is affected and if the approach enhances or hinders certifiability. The study also aims to determine if test-interceptors are a viable mechanism to produce high-quality tests that contribute to the creation of correct software. Furthermore, the study aims to determine if the software produced by applying this approach yields improved reusability or not, if the software becomes more or less complex and if more or fewer bugs are induced.},
  year = {2018},
  journal = {Annual conference of The South African Institute of Computer Scientists and Information Technologists (SAICSIT 2018)},
  pages = {322-326},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {New York},
}
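The interception idea described above, wrapping a component so that every service request is checked against its contract before and after delegation, can be sketched in a few lines. This is a hypothetical Python analogue of the paper's annotation-driven Java test interceptors, not the URDAD tooling itself:

```python
# Hedged sketch of the test-interceptor idea: wrap a component method so
# that each call is checked against a precondition before delegation and a
# postcondition after it.  Names and the example component are invented.
def contract(pre, post):
    def wrap(method):
        def interceptor(self, *args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            result = method(self, *args, **kwargs)
            assert post(result, *args, **kwargs), "postcondition violated"
            return result
        return interceptor
    return wrap

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    @contract(pre=lambda amount: amount > 0,
              post=lambda result, amount: result >= 0)
    def withdraw(self, amount):
        self.balance -= amount
        return self.balance

acc = Account(100)
print(acc.withdraw(30))   # 70: both contract checks pass
```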
Watson B. Three Strategies for the Dead-Zone String Matching Algorithm. In: The Prague Stringology Conference. Prague, Czech Republic: Prague Stringology Club; 2018. http://www.stringology.org/.

No Abstract

@{210,
  author = {Bruce Watson},
  title = {Three Strategies for the Dead-Zone String Matching Algorithm},
  abstract = {No Abstract},
  year = {2018},
  journal = {The Prague Stringology Conference},
  pages = {117-128},
  month = {27/08-28/08},
  publisher = {Prague Stringology Club},
  address = {Prague, Czech Republic},
  isbn = {978-80-01-06484-9},
  url = {http://www.stringology.org/},
}
Watson B. Modelling the sensory space of varietal wines: Mining of large, unstructured text data and visualisation of style patterns. Scientific Reports. 2018;8(4987). www.nature.com/scientificreports.

No Abstract

@article{209,
  author = {Bruce Watson},
  title = {Modelling the sensory space of varietal wines: Mining of large, unstructured text data and visualisation of style patterns},
  abstract = {No Abstract},
  year = {2018},
  journal = {Scientific Reports},
  volume = {8},
  pages = {1-13},
  issue = {4987},
  publisher = {Springer Nature},
  url = {www.nature.com/scientificreports},
}
Watson B. Using CSP to Develop Quality Concurrent Software. In: Principled Software Development. Switzerland: Springer; 2018. https://doi.org/10.1007/978-3-319-98047-8.

A method for developing concurrent software is advocated that centres on using CSP to specify the behaviour of the system. A small example problem is used to illustrate the method. The problem is to develop a simulation system that keeps track of and reports on the least unique bid of multiple streams of randomly generated incoming bids. The problem’s required high-level behaviour is specified in CSP, refined down to the level of interacting processes and then verified for refinement and behavioural correctness using the FDR refinement checker. Heuristics are used to map the CSP processes to a Go implementation. Interpretive reflections are offered of the lessons learned as a result of the exercise.

@inbook{208,
  author = {Bruce Watson},
  title = {Using CSP to Develop Quality Concurrent Software},
  abstract = {A method for developing concurrent software is advocated that centres
on using CSP to specify the behaviour of the system. A small example problem
is used to illustrate the method. The problem is to develop a simulation system
that keeps track of and reports on the least unique bid of multiple streams of
randomly generated incoming bids. The problem’s required high-level behaviour is
specified in CSP, refined down to the level of interacting processes and then verified
for refinement and behavioural correctness using the FDR refinement checker.
Heuristics are used to map the CSP processes to a GO implementation. Interpretive
reflections are offered of the lessons learned as a result of the exercise.},
  year = {2018},
  journal = {Principled Software Development},
  pages = {165-184},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-319-98046-1},
  url = {https://doi.org/10.1007/978-3-319-98047-8},
}
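The chapter's running example, reporting the least unique bid among streams of randomly generated bids, has a simple sequential core; a minimal sketch of that core is given below (the chapter itself develops a CSP specification refined to a Go implementation):

```python
# Sequential sketch of the running example: the least unique bid is the
# smallest bid value that occurs exactly once in the stream.
from collections import Counter
from typing import Iterable, Optional

def least_unique_bid(bids: Iterable[int]) -> Optional[int]:
    counts = Counter(bids)
    unique = [value for value, count in counts.items() if count == 1]
    return min(unique) if unique else None

# Example: 7 is the smallest value appearing exactly once.
print(least_unique_bid([10, 7, 10, 12, 9, 9]))   # -> 7
```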
Meyer T, Leenen L. Semantic Technologies and Big Data Analytics for Cyber Defence. In: Information Retrieval and Management: Concepts, Methodologies, Tools, and Applications. IGI Global; 2018. https://researchspace.csir.co.za/dspace/bitstream/handle/10204/8932/Leenen_2016.pdf?sequence=1.

Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that has to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide countermeasures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.

@inbook{206,
  author = {Thomas Meyer and Louise Leenen},
  title = {Semantic Technologies and Big Data Analytics for Cyber Defence},
  abstract = {The Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that has to be understood in order to lead to intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and being able to understanding the significance of detected patterns are essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence by providing support for the processing and understanding of the huge amounts of information in the cyber environment.
The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm where the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of decisions required to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems.
This paper argues that both big data analytics and semantic technologies are necessary to provide counter measures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.},
  year = {2018},
  journal = {Information Retrieval and Management: Concepts, Methodologies, Tools, and Applications},
  pages = {1375-1388},
  publisher = {IGI Global},
  isbn = {9781522551911},
  url = {https://researchspace.csir.co.za/dspace/bitstream/handle/10204/8932/Leenen_2016.pdf?sequence=1},
}
Botha L, Meyer T, Peñaloza R. The Bayesian Description Logic BALC. In: International Workshop on Description Logics; 2018. http://ceur-ws.org/Vol-2211/.

Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives, thereby limiting their use in real world domains. The Bayesian DL BEL and its extensions have been introduced to deal with uncertain knowledge without assuming (probabilistic) independence between axioms. In this paper we combine the classical DL ALC with Bayesian Networks. Our new DL includes a solution to the consistency checking problem and changes to the tableaux algorithm that are not a part of BEL. Furthermore, BALC also supports probabilistic assertional information which was not studied for BEL. We present algorithms for four categories of reasoning problems for our logic; two versions of concept satisfiability (referred to as total concept satisfiability and partial concept satisfiability respectively), knowledge base consistency, subsumption, and instance checking. We show that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian Network is included in the size of the knowledge base.

@{205,
  author = {Leonard Botha and Thomas Meyer and Rafael Peñaloza},
  title = {The Bayesian Description Logic BALC},
  abstract = {Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives, thereby limiting their use in real world domains. The Bayesian DL BEL and its extensions have been introduced to deal with uncertain knowledge without assuming (probabilistic) independence between axioms. In this paper we combine the classical DL ALC with Bayesian Networks. Our new DL includes a solution to the consistency checking problem and changes to the tableaux algorithm that are not a part of BEL. Furthermore, BALC also supports probabilistic assertional information which was not studied for BEL. We present algorithms for four categories of reasoning problems for our logic; two versions of concept satisfiability (referred to as total concept satisfiability and partial concept satisfiability respectively), knowledge base consistency, subsumption, and instance checking. We show that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian Network is included in the size of the knowledge base.},
  year = {2018},
  journal = {International Workshop on Description Logics},
  month = {27/10-29/10},
  url = {http://ceur-ws.org/Vol-2211/},
}
de Waal A, Yoo K. Latent Variable Bayesian Networks Constructed Using Structural Equation Modelling. In: 2018 21st International Conference on Information Fusion (FUSION). IEEE; 2018. https://ieeexplore.ieee.org/abstract/document/8455240.

Bayesian networks in fusion systems often contain latent variables. They play an important role in fusion systems as they provide context which leads to better choices of data sources to fuse. Latent variables in Bayesian networks are mostly constructed by means of expert knowledge modelling. We propose using theory-driven structural equation modelling (SEM) to identify and structure latent variables in a Bayesian network. The linking of SEM and Bayesian networks is motivated by the fact that both methods can be shown to be causal models. We compare this approach to a data-driven approach where latent factors are induced by means of unsupervised learning. We identify appropriate metrics for URREF ontology criteria for both approaches.

@{204,
  author = {Alta de Waal and Keunyoung Yoo},
  title = {Latent Variable Bayesian Networks Constructed Using Structural Equation Modelling},
  abstract = {Bayesian networks in fusion systems often contain latent variables. They play an important role in fusion systems as they provide context which leads to better choices of data sources to fuse. Latent variables in Bayesian networks are mostly constructed by means of expert knowledge modelling. We propose using theory-driven structural equation modelling (SEM) to identify and structure latent variables in a Bayesian network. The linking of SEM and Bayesian networks is motivated by the fact that both methods can be shown to be causal models. We compare this approach to a data-driven approach where latent factors are induced by means of unsupervised learning. We identify appropriate metrics for URREF ontology criteria for both approaches.},
  year = {2018},
  journal = {2018 21st International Conference on Information Fusion (FUSION)},
  pages = {688-695},
  month = {10/07-13/07},
  publisher = {IEEE},
  isbn = {978-0-9964527-6-2},
  url = {https://ieeexplore.ieee.org/abstract/document/8455240},
}
Pretorius A, Kroon S, Kamper H. Learning Dynamics of Linear Denoising Autoencoders. In: 35th International Conference on Machine Learning. Sweden: Proceedings of Machine Learning Research (PMLR); 2018.

Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding is still lacking of how the input noise influences learning. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.

@{202,
  author = {Arnu Pretorius and Steve Kroon and H. Kamper},
  title = {Learning Dynamics of Linear Denoising Autoencoders},
  abstract = {Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding is still lacking of how the input noise influences learning. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.},
  year = {2018},
  journal = {35th International Conference on Machine Learning},
  pages = {4141-4150},
  month = {10/07-15/07},
  publisher = {Proceedings of Machine Learning Research (PMLR)},
  address = {Sweden},
  isbn = {1938-7228},
}
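The setting analysed in the paper, a linear autoencoder trained to reconstruct clean inputs from noise-corrupted copies, is easy to simulate; the sketch below uses plain gradient descent on synthetic data to make the setup concrete and does not reproduce the paper's analytic learning-dynamics expressions:

```python
# Minimal simulation of a linear denoising autoencoder: reconstruct clean
# inputs X from corrupted inputs X + noise, with weights W1 (encoder) and
# W2 (decoder).  Dimensions, noise level and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 2000, 20, 5                 # samples, input dim, hidden dim
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d)) / np.sqrt(d)  # correlated data

W1 = rng.normal(scale=0.1, size=(d, k))
W2 = rng.normal(scale=0.1, size=(k, d))
lr, noise_std = 0.05, 0.5

for step in range(500):
    X_noisy = X + noise_std * rng.normal(size=X.shape)   # corrupt the inputs
    X_hat = X_noisy @ W1 @ W2                            # linear reconstruction
    err = X_hat - X                                      # target is the *clean* data
    # Gradients of the mean squared reconstruction error.
    gW2 = (X_noisy @ W1).T @ err / n
    gW1 = X_noisy.T @ (err @ W2.T) / n
    W1 -= lr * gW1
    W2 -= lr * gW2

print("final reconstruction MSE:", float(np.mean((X @ W1 @ W2 - X) ** 2)))
```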
Berglund M, Drewes F, van der Merwe B. The Output Size Problem for String-to-Tree Transducers. Journal of Automata, Languages and Combinatorics. 2018;23(1). https://www.jalc.de/issues/2018/issue_23_1-3/jalc-2018-019-038.php.

The output size problem, for a string-to-tree transducer, is to determine the asymptotic behavior of the function describing the maximum size of output trees, with respect to the length of input strings. We show that the problem to determine, for a given regular expression, the worst-case matching time of a backtracking regular expression matcher, can be reduced to the output size problem. The latter can, in turn, be solved by determining the degree of ambiguity of a non-deterministic finite automaton.
Keywords: string-to-tree transducers, output size, backtracking regular expression matchers, NFA ambiguity

@article{201,
  author = {Martin Berglund and F. Drewes and Brink van der Merwe},
  title = {The Output Size Problem for String-to-Tree Transducers},
  abstract = {The output size problem, for a string-to-tree transducer, is to determine the asymptotic behavior of the function describing the maximum size of output trees, with respect to the length of input strings. We show that the problem to determine, for a given regular expression, the worst-case matching time of a backtracking regular expression matcher, can be reduced to the output size problem. The latter can, in turn, be solved by determining the degree of ambiguity of a non-deterministic finite automaton. 
Keywords: string-to-tree transducers, output size, backtracking regular expression matchers, NFA ambiguity},
  year = {2018},
  journal = {Journal of Automata, Languages and Combinatorics},
  volume = {23},
  pages = {19-38},
  issue = {1},
  publisher = {Institut für Informatik, Justus-Liebig-Universität Giessen},
  address = {Germany},
  isbn = {2567-3785},
  url = {https://www.jalc.de/issues/2018/issue_23_1-3/jalc-2018-019-038.php},
}
Casini G, Meyer T, Varzinczak I. Defeasible Entailment: from Rational Closure to Lexicographic Closure and Beyond. In: 7th International Workshop on Non-Monotonic Reasoning (NMR 2018); 2018. http://orbilu.uni.lu/bitstream/10993/37393/1/NMR2018Paper.pdf.

In this paper we present what we believe to be the first systematic approach for extending the framework for defeasible entailment first presented by Kraus, Lehmann, and Magidor—the so-called KLM approach. Drawing on the properties for KLM, we first propose a class of basic defeasible entailment relations. We characterise this basic framework in three ways: (i) semantically, (ii) in terms of a class of properties, and (iii) in terms of ranks on statements in a knowledge base. We also provide an algorithm for computing the basic framework. These results are proved through various representation results. We then refine this framework by defining the class of rational defeasible entailment relations. This refined framework is also characterised in three ways: semantically, in terms of a class of properties, and in terms of ranks on statements. We also provide an algorithm for computing the refined framework. Again, these results are proved through various representation results. We argue that the class of rational defeasible entailment relations—a strengthening of basic defeasible entailment which is itself a strengthening of the original KLM proposal—is worthy of the term rational in the sense that all of them can be viewed as appropriate forms of defeasible entailment. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our rational defeasible framework. We show that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.

@{200,
  author = {Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Defeasible Entailment: from Rational Closure to Lexicographic Closure and Beyond},
  abstract = {In this paper we present what we believe to be the first systematic approach for extending the framework for defeasible entailment first presented by Kraus, Lehmann, and Magidor—the so-called KLM approach. Drawing on the properties of the KLM approach, we first propose a class of basic defeasible entailment relations. We characterise this basic framework in three ways: (i) semantically, (ii) in terms of a class of properties, and (iii) in terms of ranks on statements in a knowledge base. We also provide an algorithm for computing the basic framework. These results are proved through various representation results. We then refine this framework by defining the class of rational defeasible entailment relations. This refined framework is also characterised in three ways: semantically, in terms of a class of properties, and in terms of ranks on statements. We also provide an algorithm for computing the refined framework. Again, these results are proved through various representation results.
We argue that the class of rational defeasible entailment relations—a strengthening of basic defeasible entailment which is itself a strengthening of the original KLM proposal—is worthy of the term rational in the sense that all of them can be viewed as appropriate forms of defeasible entailment. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our rational defeasible framework. We show that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.},
  year = {2018},
  journal = {7th International Workshop on Non-Monotonic Reasoning (NMR 2018)},
  pages = {109-118},
  month = {27/10-29/10},
  url = {http://orbilu.uni.lu/bitstream/10993/37393/1/NMR2018Paper.pdf},
}
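
As a rough illustration of the rank-based construction referred to in the abstract, the sketch below computes base ranks for a toy set of defeasible statements by repeatedly testing which antecedents are exceptional under classical entailment. The string-based formula encoding, the brute-force entails helper, and the bird/penguin example are illustrative assumptions, not the algorithm as stated in the paper.

from itertools import product

def entails(kb, goal, atoms):
    """Classical propositional entailment by brute-force truth tables (adequate for a toy vocabulary)."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(eval(f, {}, v) for f in kb) and not eval(goal, {}, v):
            return False
    return True

def base_ranks(defaults, atoms):
    """Assign each defeasible statement A |~ B the rank of its antecedent:
    rank 0 if A is consistent with all materialisations, rank 1 if A only
    becomes consistent once rank-0 statements are set aside, and so on."""
    remaining, ranks, level = list(defaults), {}, 0
    while remaining:
        materialised = [f"(not ({a})) or ({b})" for a, b in remaining]
        exceptional = [(a, b) for a, b in remaining
                       if entails(materialised, f"not ({a})", atoms)]
        if len(exceptional) == len(remaining):   # no progress: whatever is left gets rank infinity
            for d in remaining:
                ranks[d] = float("inf")
            break
        for d in remaining:
            if d not in exceptional:
                ranks[d] = level
        remaining, level = exceptional, level + 1
    return ranks

# Toy example: birds fly, penguins are birds, penguins do not fly.
defaults = [("b", "f"), ("p", "b"), ("p", "not f")]
print(base_ranks(defaults, ["b", "p", "f"]))
# -> {('b', 'f'): 0, ('p', 'b'): 1, ('p', 'not f'): 1}

Rational closure then answers a defeasible query by discarding statements from the lowest rank upwards until the query's antecedent is no longer exceptional, which is why the whole procedure reduces to a series of classical entailment checks.
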
Berglund M, Drewes F, van der Merwe B. On Regular Expressions with Backreferences and Transducers. In: 10th Workshop on Non-Classical Models of Automata and Applications (NCMA 2018); 2018.

Modern regular expression matching software features many extensions, some general while others are very narrowly specified. Here we consider the generalization of adding a class of operators which can be described by, e.g., finite-state transducers. Combined with backreferences they enable new classes of languages to be matched. The addition of finite-state transducers is shown to make membership testing undecidable. Following this result, we study the complexity of membership testing for various restricted cases of the model.

@inproceedings{199,
  author = {Martin Berglund and F. Drewes and Brink van der Merwe},
  title = {On Regular Expressions with Backreferences and Transducers},
  abstract = {Modern regular expression matching software features many extensions, some general while others are very narrowly specified. Here we consider the generalization of adding a class of operators which can be described by, e.g., finite-state transducers. Combined with backreferences they enable new classes of languages to be matched. The addition of finite-state transducers is shown to make membership testing undecidable. Following this result, we study the complexity of membership testing for various restricted cases of the model.},
  year = {2018},
  journal = {10th Workshop on Non-Classical Models of Automata and Applications (NCMA 2018)},
  pages = {1-19},
  month = {21/08-22/08},
}
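
For context on the feature being generalised, the snippet below shows how a single backreference in a Perl-compatible matcher such as Python's re already matches a non-regular language; it is an illustration only, not taken from the paper, which studies the further addition of transducer-defined operators.

import re

# (a+)b\1 matches exactly the strings a^n b a^n, a textbook non-regular language:
# the backreference \1 must repeat whatever the first group captured.
pattern = re.compile(r"^(a+)b\1$")
print(bool(pattern.match("aaabaaa")))   # True:  three a's on both sides of the b
print(bool(pattern.match("aaabaa")))    # False: the two runs of a's differ in length
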
Rens G, Meyer T, Nayak A. Maximizing Expected Impact in an Agent Reputation Network. In: 41st German Conference on AI, Berlin, Germany, September 24–28, 2018. Springer; 2018. https://www.springer.com/us/book/9783030001100.

We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.

@inproceedings{198,
  author = {Gavin Rens and Thomas Meyer and A. Nayak},
  title = {Maximizing Expected Impact in an Agent Reputation Network},
  abstract = {We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.},
  year = {2018},
  journal = {41st German Conference on AI, Berlin, Germany, September 24–28, 2018},
  pages = {99-106},
  month = {24/09-28/09},
  publisher = {Springer},
  isbn = {978-3-030-00110-0},
  url = {https://www.springer.com/us/book/9783030001100},
}
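
The generic POMDP belief update that any such framework builds on is shown below; the two-state "good/bad standing" transition and observation numbers are purely illustrative assumptions, not the reputation model defined in the paper.

import numpy as np

# After taking an action and receiving an observation, the new belief is
#   b'(s') ∝ O(o | s') * sum_s T(s' | s) * b(s), renormalised to sum to 1.
T = np.array([[0.9, 0.1],    # T[s, s']: transition probabilities under the chosen action
              [0.3, 0.7]])
O = np.array([0.8, 0.4])     # O[s']: likelihood of the received observation in each next state

def belief_update(b, T, O):
    unnormalised = O * (b @ T)           # predict with T, then weight by the observation likelihood
    return unnormalised / unnormalised.sum()

b = np.array([0.5, 0.5])                 # start out maximally uncertain over the two states
print(belief_update(b, T, O))            # [0.75 0.25]
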
Casini G, Fermé E, Meyer T, Varzinczak I. A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework. In: 16th International Conference on Principles of Knowledge Representation and Reasoning. United States of America: AAAI Press; 2018. https://dblp.org/db/conf/kr/kr2018.html.

Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.

@inproceedings{197,
  author = {Giovanni Casini and Eduardo Fermé and Thomas Meyer and Ivan Varzinczak},
  title = {A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework},
  abstract = {Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.},
  year = {2018},
  journal = {16th International Conference on Principles of Knowledge Representation and Reasoning},
  pages = {220-229},
  month = {27/10-02/11},
  publisher = {AAAI Press},
  address = {United States of America},
  isbn = {978-1-57735-803-9},
  url = {https://dblp.org/db/conf/kr/kr2018.html},
}
van der Merwe B, Berglund M, Bester W. Formalising Boost POSIX Regular Expression Matching. In: International Colloquium on Theoretical Aspects of Computing. Springer; 2018. https://link.springer.com/chapter/10.1007/978-3-030-02508-3_6.

Whereas Perl-compatible regular expression matchers typically exhibit some variation of leftmost-greedy semantics, those conforming to the POSIX standard are prescribed leftmost-longest semantics. However, the POSIX standard leaves some room for interpretation, and Fowler and Kuklewicz have done experimental work to confirm differences between various POSIX matchers. The Boost library has an interesting take on the POSIX standard, where it maximises the leftmost match not with respect to subexpressions of the regular expression pattern, but rather, with respect to capturing groups. In our work, we provide the first formalisation of Boost semantics, and we analyse the complexity of regular expression matching when using Boost semantics.

@inproceedings{196,
  author = {Brink van der Merwe and Martin Berglund and Willem Bester},
  title = {Formalising Boost POSIX Regular Expression Matching},
  abstract = {Whereas Perl-compatible regular expression matchers typically exhibit some variation of leftmost-greedy semantics, those conforming to the POSIX standard are prescribed leftmost-longest semantics. However, the POSIX standard leaves some room for interpretation, and Fowler and Kuklewicz have done experimental work to confirm differences between various POSIX matchers. The Boost library has an interesting take on the POSIX standard, where it maximises the leftmost match not with respect to subexpressions of the regular expression pattern, but rather, with respect to capturing groups. In our work, we provide the first formalisation of Boost semantics, and we analyse the complexity of regular expression matching when using Boost semantics.},
  year = {2018},
  journal = {International Colloquium on Theoretical Aspects of Computing},
  pages = {99-115},
  month = {17/02},
  publisher = {Springer},
  isbn = {978-3-030-02508-3},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-02508-3_6},
}
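
The contrast between the two semantics in the abstract is easy to observe directly; the snippet below uses Python's re as a stand-in for a Perl-compatible matcher and is an illustration only, not part of the paper's formalisation.

import re

# Perl-compatible (leftmost-greedy): the first alternative that succeeds wins.
print(re.match(r"a|ab", "ab").group())   # 'a'
# A POSIX leftmost-longest matcher, such as Boost in POSIX mode, must instead
# report 'ab', the longest match starting at the same position.
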
Ndaba M, Pillay A, Ezugwu A. An Improved Generalized Regression Neural Network for Type II Diabetes Classification. In: ICCSA 2018, LNCS 10963. Springer International Publishing AG; 2018.

This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best known GRNN techniques for type II diabetes diagnosis in terms of classification accuracy and computational time, achieving a classification accuracy of 86% with 83% sensitivity and 87% specificity. An area under the receiver operating characteristic (ROC) curve of 87% was obtained.

@inbook{195,
  author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu},
  title = {An Improved Generalized Regression Neural Network for Type II Diabetes Classification},
  abstract = {This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best known GRNN techniques for type II diabetes diagnosis in terms of classification accuracy and computational time, achieving a classification accuracy of 86% with 83% sensitivity and 87% specificity. An area under the receiver operating characteristic (ROC) curve of 87% was obtained.},
  year = {2018},
  journal = {ICCSA 2018, LNCS 10963},
  volume = {10963},
  pages = {659-671},
  publisher = {Springer International Publishing AG},
  isbn = {3319951718},
}
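
To make the centroid-based training idea concrete, here is a minimal sketch in which plain K-Means centroids (rather than the CVE-K-Means variant used in the paper) act as the pattern layer of a GRNN-style kernel estimator; the class name, hyperparameters, and synthetic data are all illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

class CentroidGRNN:
    """GRNN trained on cluster centroids instead of every training point."""
    def __init__(self, n_centroids=10, sigma=0.5):
        self.n_centroids, self.sigma = n_centroids, sigma

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centroids, n_init=10, random_state=0).fit(X)
        self.centroids_ = km.cluster_centers_
        # Each centroid carries the mean target value of its cluster.
        self.targets_ = np.array([y[km.labels_ == k].mean()
                                  for k in range(self.n_centroids)])
        return self

    def predict_proba(self, X):
        # Gaussian kernel weights between inputs and centroids (Nadaraya-Watson form).
        d2 = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return (w @ self.targets_) / w.sum(axis=1)

    def predict(self, X):
        return (self.predict_proba(X) >= 0.5).astype(int)

# Tiny synthetic smoke test (not the Pima Indians data used in the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = CentroidGRNN(n_centroids=6, sigma=1.0).fit(X, y)
print((model.predict(X) == y).mean())   # training accuracy on the toy data
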