Research Publications

2017

Watson B, Runge T, Schaefer I, Cleophas LGWA. Many-MADFAct: Concurrently Constructing MADFAs. In: Prague Stringology Conference 2017. Prague Stringology Club; 2017. https://dblp.org/db/conf/stringology/stringology2017.

No Abstract

@inproceedings{215,
  author = {Bruce Watson and T. Runge and I. Schaefer and L.G.W.A. Cleophas},
  title = {Many-MADFAct: Concurrently Constructing MADFAs},
  abstract = {No Abstract},
  year = {2017},
  journal = {Prague Stringology Conference 2017},
  pages = {127-142},
  month = {28/08-30/08},
  publisher = {Prague Stringology Club},
  isbn = {978-80-01-06193-0},
  url = {https://dblp.org/db/conf/stringology/stringology2017},
}
Watson B. Efficient pattern matching in degenerate strings with the Burrows-Wheeler transform. In: WCTA 2017, 12th Workshop on Compression, Text and Algorithms; 2017. pages.di.unipi.it/spire2017/wcta.html.

No Abstract

@inproceedings{214,
  author = {Bruce Watson},
  title = {Efficient pattern matching in degenerate strings with the Burrows-Wheeler transform},
  abstract = {No Abstract},
  year = {2017},
  journal = {WCTA 2017 12th Workshop on Compression, Text and Algorithms},
  pages = {1-7},
  month = {29/09},
  url = {pages.di.unipi.it/spire2017/wcta.html},
}
Watson B, Nxumalo M, Kourie DG, Cleophas LGWA. An Assessment of Algorithms for Deriving Failure Deterministic Finite Automata. South African Computer Journal. 2017;29(1). http://dx.doi.org/10.18489/sacj.v29i1.456.

No Abstract

@article{213,
  author = {Bruce Watson and M. Nxumalo and D.G. Kourie and L.G.W.A. Cleophas},
  title = {An Assessment of Algorithms for Deriving Failure Deterministic Finite Automata},
  abstract = {No Abstract},
  year = {2017},
  journal = {South African Computer Journal},
  volume = {29},
  pages = {43-68},
  issue = {1},
  isbn = {2313-7835},
  url = {http://dx.doi.org/10.18489/sacj.v29i1.456},
}
Watson B, Daykin JW. Indeterminate String Factorizations and Degenerate Text Transformations. Mathematics in Computer Science. 2017;11(2). https://core.ac.uk/download/pdf/81595959.pdf.

No Abstract

@article{212,
  author = {Bruce Watson and J.W. Daykin},
  title = {Indeterminate String Factorizations and Degenerate Text Transformations},
  abstract = {No Abstract},
  year = {2017},
  journal = {Mathematics in Computer Science},
  volume = {11},
  pages = {209-218},
  issue = {2},
  isbn = {1661-8270},
  url = {https://core.ac.uk/download/pdf/81595959.pdf},
}
de Waal A, Koen H, de Villiers JP, Roodt H. An expert-driven causal model of the rhino poaching problem. Ecological Modelling. 2017;347. https://www.sciencedirect.com/science/article/pii/S0304380016307621.

A significant challenge in ecological modelling is the lack of complete sets of high-quality data. This is especially true in the rhino poaching problem where data is incomplete. Although there are many poaching attacks, they can be spread over a vast surface area such as in the case of the Kruger National Park in South Africa, which is roughly the size of Israel. Bayesian networks are useful reasoning tools and can utilise expert knowledge when data is insufficient or sparse. Bayesian networks allow the modeller to incorporate data, expert knowledge, or any combination of the two. This flexibility of Bayesian networks makes them ideal for modelling complex ecological problems. In this paper an expert-driven model of the rhino poaching problem is presented. The development as well as the evaluation of the model is performed from an expert perspective. Independent expert evaluation is performed in the form of queries that test different scenarios. Structuring the rhino poaching problem as a causal network yields a framework that can be used to reason about the problem, as well as inform the modeller of the type of data that has to be gathered.

@article{191,
  author = {Alta de Waal and Hildegarde Koen and J.P. de Villiers and Henk Roodt},
  title = {An expert-driven causal model of the rhino poaching problem},
  abstract = {A significant challenge in ecological modelling is the lack of complete sets of high-quality data. This is especially true in the rhino poaching problem where data is incomplete. Although there are many poaching attacks, they can be spread over a vast surface area such as in the case of the Kruger National Park in South Africa, which is roughly the size of Israel. Bayesian networks are useful reasoning tools and can utilise expert knowledge when data is insufficient or sparse. Bayesian networks allow the modeller to incorporate data, expert knowledge, or any combination of the two. This flexibility of Bayesian networks makes them ideal for modelling complex ecological problems. In this paper an expert-driven model of the rhino poaching problem is presented. The development as well as the evaluation of the model is performed from an expert perspective. Independent expert evaluation is performed in the form of queries that test different scenarios. Structuring the rhino poaching problem as a causal network yields a framework that can be used to reason about the problem, as well as inform the modeller of the type of data that has to be gathered.},
  year = {2017},
  journal = {Ecological Modelling},
  volume = {347},
  pages = {29-39},
  publisher = {Elsevier},
  isbn = {0304-3800},
  url = {https://www.sciencedirect.com/science/article/pii/S0304380016307621},
}
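
To make the kind of expert-elicited causal network and scenario query described in this abstract concrete, here is a minimal sketch using the pgmpy library; the variables and probabilities are invented for illustration and are not the authors' rhino-poaching model.

# A minimal sketch (not the authors' model) of an expert-elicited causal
# Bayesian network and a scenario query, using the pgmpy library.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: patrol intensity and moonlight influence attempts.
model = BayesianNetwork([("Patrols", "Attempt"), ("Moonlight", "Attempt")])

cpd_patrols = TabularCPD("Patrols", 2, [[0.6], [0.4]])   # P(low), P(high)
cpd_moon = TabularCPD("Moonlight", 2, [[0.5], [0.5]])    # P(dark), P(bright)
# Expert-elicited P(Attempt | Patrols, Moonlight); columns follow the
# evidence order (Patrols, Moonlight): (0,0), (0,1), (1,0), (1,1).
cpd_attempt = TabularCPD(
    "Attempt", 2,
    [[0.5, 0.8, 0.85, 0.95],   # P(no attempt | ...)
     [0.5, 0.2, 0.15, 0.05]],  # P(attempt | ...)
    evidence=["Patrols", "Moonlight"], evidence_card=[2, 2],
)
model.add_cpds(cpd_patrols, cpd_moon, cpd_attempt)
assert model.check_model()

# Scenario query of the kind used for expert evaluation:
# how likely is an attempt on a dark night with low patrol intensity?
posterior = VariableElimination(model).query(
    variables=["Attempt"], evidence={"Patrols": 0, "Moonlight": 0})
print(posterior)
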
Gueorguiev V, Moodley D. Hyperparameter Optimization for Astronomy. 2017;Honours. http://projects.cs.uct.ac.za/honsproj/cgi-bin/view/2017/gueorguiev_henhaeyono_stopforth.zip/#downloads.

The task of phenomenon classification in astronomy provides a novel and challenging setting for the application of state-of-the-art techniques addressing the problem of combined algorithm selection and hyperparameter optimization (CASH) of machine learning algorithms, which find local applications such as at the data-intensive Square Kilometre Array (SKA). This work will use various algorithms for CASH to explore the possibility and efficacy of hyperparameter optimization on improving performance of machine learning techniques for astronomy. Then, with focus on the Galaxy Zoo project, these algorithms will be used to conduct an in-depth comparison of the state of the art in hyperparameter optimization (HPO) along with techniques that aim to improve performance on large datasets and expensive function evaluations. Finally, the likelihood for an integration with a cognitive vision system for astronomy will be examined by conducting a brief exploration into different feature extraction and selection methods.

@phdthesis{180,
  author = {V. Gueorguiev and Deshen Moodley},
  title = {Hyperparameter Optimization for Astronomy},
  abstract = {The task of phenomenon classification in astronomy provides a novel and challenging setting for the application of state-of-the-art techniques addressing the problem of combined algorithm selection and hyperparameter optimization (CASH) of machine learning algorithms, which find local applications such as at the data-intensive Square Kilometre Array (SKA). This work will use various algorithms for CASH to explore the possibility and efficacy of hyperparameter optimization on improving performance of machine learning techniques for astronomy. Then, with focus on the Galaxy Zoo project, these algorithms will be used to conduct an in-depth comparison of the state of the art in hyperparameter optimization (HPO) along with techniques that aim to improve performance on large datasets and expensive function evaluations. Finally, the likelihood for an integration with a cognitive vision system for astronomy will be examined by conducting a brief exploration into different feature extraction and selection methods.},
  year = {2017},
  volume = {Honours},
  publisher = {University of Cape Town},
  url = {http://projects.cs.uct.ac.za/honsproj/cgi-bin/view/2017/gueorguiev_henhaeyono_stopforth.zip/#downloads},
}
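
The combined algorithm selection and hyperparameter optimization (CASH) setting the thesis targets can be illustrated with a small scikit-learn sketch in which a single search ranges over both the choice of classifier and its hyperparameters; the dataset and search space below are stand-ins, not the project's Galaxy Zoo setup.

# A minimal CASH-style search sketch (assumed setup, not the thesis code):
# one grid jointly ranges over the classifier choice and its hyperparameters.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for galaxy-image features

pipe = Pipeline([("clf", SVC())])    # the "clf" step is swapped by the grid
search_space = [
    {"clf": [SVC()], "clf__C": [0.1, 1, 10], "clf__gamma": ["scale", "auto"]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300],
     "clf__max_depth": [None, 10]},
]
search = GridSearchCV(pipe, search_space, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
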
Watson B, Strauss T, Kourie DG, Cleophas LGWA. CSP for Parallelising Brzozowski’s DFA Construction Algorithm. In: The Role of Theory in Computer Science. World Scientific Publishing Co. Pte. Ltd.; 2017. https://doi.org/10.1142/9789813148208_0010.

No Abstract

@inbook{179,
  author = {Bruce Watson and T. Strauss and D.G. Kourie and L.G.W.A. Cleophas},
  title = {CSP for Parallelising Brzozowski’s DFA Construction Algorithm},
  abstract = {No Abstract},
  year = {2017},
  journal = {The Role of Theory in Computer Science},
  pages = {217-243},
  publisher = {World Scientific Publishing Co. Pte. Ltd.},
  isbn = {978-981-3148-19-2},
  url = {https://doi.org/10.1142/9789813148208_0010},
}
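
For readers unfamiliar with the algorithm being parallelised, the following sketch shows the classical, sequential core of Brzozowski's construction: DFA states are regular-expression derivatives, explored with a worklist until no new derivatives appear. It is an illustrative Python rendering, not the chapter's CSP formulation.

# A minimal sketch of sequential Brzozowski DFA construction via derivatives
# (the paper parallelises this step with CSP; this is just the classical core).
# Regexes are tuples: ('empty',), ('eps',), ('chr', a), ('alt', r, s),
# ('cat', r, s), ('star', r), built through simplifying constructors.

EMPTY, EPS = ('empty',), ('eps',)

def chr_(a): return ('chr', a)

def alt(r, s):
    if r == EMPTY: return s
    if s == EMPTY: return r
    return r if r == s else ('alt', r, s)

def cat(r, s):
    if r == EMPTY or s == EMPTY: return EMPTY
    if r == EPS: return s
    if s == EPS: return r
    return ('cat', r, s)

def star(r):
    if r in (EMPTY, EPS): return EPS
    return r if r[0] == 'star' else ('star', r)

def nullable(r):
    tag = r[0]
    if tag == 'eps' or tag == 'star': return True
    if tag == 'alt': return nullable(r[1]) or nullable(r[2])
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    return False  # empty, chr

def deriv(r, a):
    tag = r[0]
    if tag in ('empty', 'eps'): return EMPTY
    if tag == 'chr': return EPS if r[1] == a else EMPTY
    if tag == 'alt': return alt(deriv(r[1], a), deriv(r[2], a))
    if tag == 'star': return cat(deriv(r[1], a), r)
    head = cat(deriv(r[1], a), r[2])       # tag == 'cat'
    return alt(head, deriv(r[2], a)) if nullable(r[1]) else head

def brzozowski_dfa(r, alphabet):
    """Explore derivatives with a worklist; states are (simplified) regexes."""
    states, trans, todo = {r}, {}, [r]
    while todo:
        q = todo.pop()
        for a in alphabet:
            d = deriv(q, a)
            trans[(q, a)] = d
            if d not in states:
                states.add(d)
                todo.append(d)
    finals = {q for q in states if nullable(q)}
    return states, trans, r, finals

# Example: (a|b)*abb yields the familiar four-state DFA.
a, b = chr_('a'), chr_('b')
r = cat(star(alt(a, b)), cat(a, cat(b, b)))
states, trans, start, finals = brzozowski_dfa(r, 'ab')
print(len(states), "states,", len(finals), "accepting")
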
van der Merwe B, Weideman N, Berglund M. Turning evil regexes harmless. In: Conference of South African Institute of Computer Scientists and Information Technologists (SAICSIT'17). ACM; 2017. https://dl.acm.org/citation.cfm?id=3129416.

No Abstract

@inproceedings{178,
  author = {Brink van der Merwe and N. Weideman and Martin Berglund},
  title = {Turning evil regexes harmless},
  abstract = {No Abstract},
  year = {2017},
  journal = {Conference of South African Institute of Computer Scientists and Information Technologists (SAICSIT'17)},
  month = {26/09-28/09},
  publisher = {ACM},
  url = {https://dl.acm.org/citation.cfm?id=3129416},
}
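
The notion of an "evil" regex can be demonstrated directly: against a backtracking engine such as Python's re, a pattern like (a+)+$ takes time that grows roughly exponentially in the length of a rejecting input. The timing loop below is an assumed illustration of the problem the paper addresses, not the paper's repair technique.

# A small demonstration (assumed illustration, not the paper's tool) of why
# a regex like (a+)+$ is "evil": Python's backtracking engine takes time
# exponential in the input length when the match must fail.
import re
import time

evil = re.compile(r'(a+)+$')
for n in range(14, 23, 2):
    text = 'a' * n + 'b'          # forces failure after exploring many splits
    t0 = time.perf_counter()
    evil.match(text)
    print(n, f"{time.perf_counter() - t0:.3f}s")

# An equivalent harmless rewrite recognises the same strings in linear time:
safe = re.compile(r'a+$')
assert safe.match('a' * 40 + 'b') is None and safe.match('a' * 40)
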
Berglund M, Björklund H, Drewes F. Single-rooted DAGs in regular DAG languages: Parikh image and path languages. In: International Workshop on Tree Adjoining Grammars and Related Formalisms. The Association for Computational Linguistics (ACL); 2017. http://www.aclweb.org/anthology/W/W17/W17-62.pdf.

No Abstract

@inproceedings{177,
  author = {Martin Berglund and H. Björklund and F. Drewes},
  title = {Single-rooted DAGs in regular DAG languages: Parikh image and path languages},
  abstract = {No Abstract},
  year = {2017},
  journal = {International Workshop on Tree Adjoining Grammars and Related Formalisms},
  pages = {94-101},
  month = {04/09-06/09},
  publisher = {The Association for Computational Linguistics (ACL)},
  isbn = {978-1-945626-98-2},
  url = {http://www.aclweb.org/anthology/W/W17/W17-62.pdf},
}
Berglund M, van der Merwe B. Regular Expressions with Backreferences Re-examined. In: The Prague Stringology Conference (PSC 2017). Czech Technical University in Prague; 2017.

Most modern regular expression matching libraries (one of the rare exceptions being Google’s RE2) allow backreferences, operations which bind a substring to a variable allowing it to be matched again verbatim. However, different implementations not only vary in the syntax permitted when using backreferences, but both implementations and definitions in the literature offer up a number of different variants on how backreferences match. Our aim is to compare the various flavors by considering the formal languages that each can describe, resulting in the establishment of a hierarchy of language classes. Beyond the hierarchy itself, some complexity results are given, and as part of the effort on comparing language classes new pumping lemmas are established, and old ones extended to new classes.

@inproceedings{176,
  author = {Martin Berglund and Brink van der Merwe},
  title = {Regular Expressions with Backreferences Re-examined},
  abstract = {Most modern regular expression matching libraries (one of the rare exceptions being Google’s RE2) allow backreferences, operations which bind a substring to a variable allowing it to be matched again verbatim. However, different implementations not only vary in the syntax permitted when using backreferences, but both implementations and definitions in the literature offer up a number of different variants on how backreferences match. Our aim is to compare the various flavors by considering the formal languages that each can describe, resulting in the establishment of a hierarchy of language classes. Beyond the hierarchy itself, some complexity results are given, and as part of the effort on comparing language classes new pumping lemmas are established, and old ones extended to new classes.},
  year = {2017},
  journal = {The Prague Stringology Conference (PSC 2017)},
  pages = {30-41},
  month = {28/08-30/08},
  address = {Czech Technical University in Prague},
  isbn = {978-80-01-06193-0},
}
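
One concrete instance of the flavour differences the abstract refers to is the treatment of a backreference to a capture group that never participated in the match. The snippet below shows Python's behaviour; the contrast with other flavours is noted in the comments and is a general observation, not taken from the paper.

# A tiny illustration (assumed, Python-specific) of one semantic choice the
# paper compares: how a backreference to an *unmatched* capture group behaves.
import re

# \1 binds to whatever (ab*) captured and must be repeated verbatim.
assert re.fullmatch(r'(ab*)c\1', 'abbcabb')
assert re.fullmatch(r'(ab*)c\1', 'abbcab') is None

# When the optional group does not participate, Python treats \1 as failing,
# whereas some other flavours (e.g. ECMAScript) let it match the empty string,
# so the two flavours describe different languages for the same expression.
assert re.fullmatch(r'(a)?b\1', 'aba')
assert re.fullmatch(r'(a)?b\1', 'b') is None
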
Berglund M, van der Merwe B, Watson B, Weideman N. On the Semantics of Atomic Subgroups in Practical Regular Expressions. In: Implementation and Application of Automata, 22nd International Conference, CIAA 2017. Marne-la-Vallee, France: Springer; 2017. http://www.springer.com/978-3-319-60133-5.

Most regular expression matching engines have operators and features to enhance the succinctness of classical regular expressions, such as interval quantifiers and regular lookahead. In addition, matching engines in for example Perl, Java, Ruby and .NET, also provide operators, such as atomic operators, that constrain the backtracking behavior of the engine. The most common use is to prevent needless backtracking, but the operators will often also change the language accepted. As such it is essential to develop a theoretical sound basis for the matching semantics of regular expressions with atomic operators. We here establish that atomic operators preserve regularity, but are exponentially more succinct for some languages. Further we investigate the state complexity of deterministic and non-deterministic finite automata accepting the language corresponding to a regular expression with atomic operators, and show that emptiness testing is PSPACE-complete.

@inproceedings{175,
  author = {Martin Berglund and Brink van der Merwe and Bruce Watson and N. Weideman},
  title = {On the Semantics of Atomic Subgroups in Practical Regular Expressions},
  abstract = {Most regular expression matching engines have operators and features to enhance the succinctness of classical regular expressions, such as interval quantifiers and regular lookahead. In addition, matching engines in for example Perl, Java, Ruby and .NET, also provide operators, such as atomic operators, that constrain the backtracking behavior of the engine. The most common use is to prevent needless backtracking, but the operators will often also change the language accepted. As such it is essential to develop a theoretical sound basis for the matching semantics of regular expressions with atomic operators. We here establish that atomic operators preserve regularity, but are exponentially more succinct for some languages. Further we investigate the state complexity of deterministic and non-deterministic finite automata accepting the language corresponding to a regular expression with atomic operators, and show that emptiness testing is PSPACE-complete.},
  year = {2017},
  journal = {Implementation and Application of Automata, 22nd International Conference, CIAA 2017},
  pages = {14-26},
  month = {27/06-30/06},
  publisher = {Springer},
  address = {Marne-la-Vallee, France},
  isbn = {978-3-319-60133-5},
  url = {http://www.springer.com/978-3-319-60133-5},
}
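
A small example helps to see why atomic operators "will often also change the language accepted" rather than merely pruning backtracking. The sketch below assumes Python 3.11+, whose re module accepts the (?>...) atomic-group syntax; it illustrates the general phenomenon and is not code from the paper.

# A minimal sketch of how an atomic group changes the accepted language, not
# just the matching effort (requires Python 3.11+, where re supports (?>...)).
import re

# Without the atomic group, the engine backtracks: after "bc" fails to leave
# a "c", it retries the alternative "b", and the whole pattern matches "abc".
assert re.fullmatch(r'a(?:bc|b)c', 'abc')

# Inside (?>...) the first successful alternative is locked in, backtracking
# into the group is forbidden, and the same input is now rejected.
assert re.fullmatch(r'a(?>bc|b)c', 'abc') is None
assert re.fullmatch(r'a(?>bc|b)c', 'abcc')
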
Fischer B, Esterhuizen M, Greene GJ. Visualizing and Exploring Software Version Control Repositories using Interactive Tag Clouds over Formal Concept Lattices. Information and Software Technology (Elsevier). 2017;87. https://www.sciencedirect.com/science/article/pii/S0950584916304050?via%3Dihub.

Context: version control repositories contain a wealth of implicit information that can be used to answer many questions about a project’s development process. However, this information is not directly accessible in the repositories and must be extracted and visualized. Objective: the main objective of this work is to develop a flexible and generic interactive visualization engine called ConceptCloud that supports exploratory search in version control repositories. Method: ConceptCloud is a flexible, interactive browser for SVN and Git repositories. Its main novelty is the combination of an intuitive tag cloud visualization with an underlying concept lattice that provides a formal structure for navigation. ConceptCloud supports concurrent navigation in multiple linked but individually customizable tag clouds, which allows for multi-faceted repository browsing, and scriptable construction of unique visualizations. Results: we describe the mathematical foundations and implementation of our approach and use ConceptCloud to quickly gain insight into the team structure and development process of three projects. We perform a user study to determine the usability of ConceptCloud. We show that untrained participants are able to answer historical questions about a software project better using ConceptCloud than using a linear list of commits. Conclusion: ConceptCloud can be used to answer many difficult questions such as “What has happened in this project while I was away?” and “Which developers collaborate?”. Tag clouds generated from our approach provide a visualization in which version control data can be aggregated and explored interactively.

@article{174,
  author = {Bernd Fischer and M. Esterhuizen and G.J. Greene},
  title = {Visualizing and Exploring Software Version Control Repositories using Interactive Tag Clouds over Formal Concept Lattices},
  abstract = {Context: version control repositories contain a wealth of implicit information that can be used to answer many questions about a project’s development process. However, this information is not directly accessible in the repositories and must be extracted and visualized.
Objective: the main objective of this work is to develop a flexible and generic interactive visualization engine called ConceptCloud that supports exploratory search in version control repositories.
Method: ConceptCloud is a flexible, interactive browser for SVN and Git repositories. Its main novelty is the combination of an intuitive tag cloud visualization with an underlying concept lattice that provides a formal structure for navigation. ConceptCloud supports concurrent navigation in multiple linked but individually customizable tag clouds, which allows for multi-faceted repository browsing, and scriptable construction of unique visualizations.
Results: we describe the mathematical foundations and implementation of our approach and use ConceptCloud to quickly gain insight into the team structure and development process of three projects. We perform a user study to determine the usability of ConceptCloud. We show that untrained participants are able to answer historical questions about a software project better using ConceptCloud than using a linear list of commits.
Conclusion: ConceptCloud can be used to answer many difficult questions such as “What has happened in this project while I was away?” and “Which developers collaborate?”. Tag clouds generated from our approach provide a visualization in which version control data can be aggregated and explored interactively.},
  year = {2017},
  journal = {Information and Software Technology (Elsevier)},
  volume = {87},
  pages = {223-241},
  issue = {2017},
  url = {https://www.sciencedirect.com/science/article/pii/S0950584916304050?via%3Dihub},
}
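
The pairing of tag clouds with a concept lattice rests on formal concept analysis: every set of tags determines the set of commits carrying all of them, and pairs that are closed in both directions form the lattice's concepts. The toy sketch below (made-up data, not ConceptCloud's implementation) enumerates the concepts of a four-commit context.

# A toy sketch (not ConceptCloud itself) of the underlying idea: formal
# concepts derived from a commit-by-tag context, whose intents could feed a
# tag cloud sized by extent.
from itertools import combinations

# context: each commit is tagged with its author and the files it touched
context = {
    "c1": {"alice", "parser.c"},
    "c2": {"alice", "lexer.c"},
    "c3": {"bob", "parser.c"},
    "c4": {"bob", "parser.c", "lexer.c"},
}
attributes = set().union(*context.values())

def extent(tags):
    return {c for c, ts in context.items() if tags <= ts}

def intent(commits):
    return set.intersection(*(context[c] for c in commits)) if commits else set(attributes)

# A formal concept is a pair (commits, tags) closed in both directions.
concepts = set()
for r in range(len(attributes) + 1):
    for tags in combinations(sorted(attributes), r):
        ext = extent(set(tags))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(itt))
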
Fischer B, Dunaiski M, Greene GJ. Exploratory Search of Academic Publication and Citation Data using Interactive Tag Cloud Visualizations. Scientometrics (Springer). 2017;110(3). https://link.springer.com/article/10.1007%2Fs11192-016-2236-3.

Acquiring an overview of an unfamiliar discipline and exploring relevant papers and journals is often a laborious task for researchers. In this paper we show how exploratory search can be supported on a large collection of academic papers to allow users to answer complex scientometric questions which traditional retrieval approaches do not support optimally. We use our ConceptCloud browser, which makes use of a combination of concept lattices and tag clouds, to visually present academic publication data (specifically, the ACM Digital Library) in a browsable format that facilitates exploratory search. We augment this dataset with semantic categories, obtained through automatic keyphrase extraction from papers’ titles and abstracts, in order to provide the user with uniform keyphrases of the underlying data collection. We use the citations and references of papers to provide additional mechanisms for exploring relevant research by presenting aggregated reference and citation data not only for a single paper but also across topics, authors and journals, which is novel in our approach. We conduct a user study to evaluate our approach in which we asked 34 participants, from different academic backgrounds with varying degrees of research experience, to answer a variety of scientometric questions using our ConceptCloud browser. Participants were able to answer complex scientometric questions using our ConceptCloud browser with a mean correctness of 73%, with the user’s prior research experience having no statistically significant effect on the results.

@article{173,
  author = {Bernd Fischer and M. Dunaiski and G.J. Greene},
  title = {Exploratory Search of Academic Publication and Citation Data using Interactive Tag Cloud Visualizations},
  abstract = {Acquiring an overview of an unfamiliar discipline and exploring relevant papers and journals is often a laborious task for researchers. In this paper we show how exploratory search can be supported on a large collection of academic papers to allow users to answer complex scientometric questions which traditional retrieval approaches do not support optimally. We use our ConceptCloud browser, which makes use of a combination of concept lattices and tag clouds, to visually present academic publication data (specifically, the ACM Digital Library) in a browsable format that facilitates exploratory search. We augment this dataset with semantic categories, obtained through automatic keyphrase extraction from papers’ titles and abstracts, in order to provide the user with uniform keyphrases of the underlying data collection. We use the citations and references of papers to provide additional mechanisms for exploring relevant research by presenting aggregated reference and citation data not only for a single paper but also across topics, authors and journals, which is novel in our approach. We conduct a user study to evaluate our approach in which we asked 34 participants, from different academic backgrounds with varying degrees of research experience, to answer a variety of scientometric questions using our ConceptCloud browser. Participants were able to answer complex scientometric questions using our ConceptCloud browser with a mean correctness of 73%, with the user’s prior research experience having no statistically significant effect on the results.},
  year = {2017},
  journal = {Scientometrics (Springer)},
  volume = {110},
  pages = {1539-1571},
  issue = {3},
  address = {Netherlands},
  isbn = {0138-9130},
  url = {https://link.springer.com/article/10.1007%2Fs11192-016-2236-3},
}
Britz K, Varzinczak I. Context-based defeasible subsumption for dSROIQ. In: 13th International Symposium on Commonsense Reasoning; 2017.

The description logic dSROIQ is a decidable extension of SROIQ that supports defeasible reasoning in the KLM tradition. It features a parameterised preference order on binary relations in a domain of interpretation, which allows for the use of defeasible roles in complex concepts, as well as in defeasible concept and role subsumption, and in defeasible role assertions. In this paper, we address an important limitation both in dSROIQ and in other defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept subsumption to a single preference order on objects. We do this by inducing preference orders on objects from preference orders on roles, and use these to relativise defeasible subsumption. This yields a notion of contextualised defeasible subsumption, with contexts described by roles.

@inproceedings{169,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Context-based defeasible subsumption for dSROIQ},
  abstract = {The description logic dSROIQ is a decidable extension of SROIQ that supports defeasible reasoning in the KLM tradition. It features a parameterised preference order on binary relations in a domain of interpretation, which allows for the use of defeasible roles in complex concepts, as well as in defeasible concept and role subsumption, and in defeasible role assertions. In this paper, we address an important limitation both in dSROIQ and in other defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept subsumption to a single preference order on objects. We do this by inducing preference orders on objects from preference orders on roles, and use these to relativise defeasible subsumption. This yields a notion of contextualised defeasible subsumption, with contexts described by roles.},
  year = {2017},
  journal = {13th International Symposium on Commonsense Reasoning},
  month = {06/11-08/11},
}
Britz K, Varzinczak I. Towards defeasible SROIQ. 2017. http://ceur-ws.org/Vol-1879/.

We present a decidable extension of the Description Logic SROIQ that supports defeasible reasoning in the KLM tradition, and extends it through the introduction of defeasible roles. The semantics of the resulting DL dSROIQ extends the classical semantics with a parameterised preference order on binary relations in a domain of interpretation. This allows for the use of defeasible roles in complex concepts, as well as in defeasible concept and role subsumption, and in defeasible role assertions. Reasoning over dSROIQ ontologies is made possible by a translation of entailment to concept satisfiability relative to an RBox only. A tableau algorithm then decides on consistency of dSROIQ-concepts in the preferential semantics.

@misc{168,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Towards defeasible SROIQ},
  abstract = {We present a decidable extension of the Description Logic SROIQ that supports defeasible reasoning in the KLM tradition, and extends it through the introduction of defeasible roles. The semantics of the resulting DL dSROIQ extends the classical semantics with a parameterised preference order on binary relations in a domain of interpretation. This allows for the use of defeasible roles in complex concepts, as well as in defeasible concept and role subsumption, and in defeasible role assertions.  Reasoning over dSROIQ ontologies is made possible by a translation of entailment to concept satisfiability relative to an RBox only. A tableau algorithm then decides on consistency of dSROIQ-concepts in the preferential semantics.},
  year = {2017},
  isbn = {ISSN 1613-0073},
  url = {http://ceur-ws.org/Vol-1879/},
}
Casini G, Meyer T. Belief Change in a Preferential Non-Monotonic Framework. In: International Joint Conference on Artificial Intelligence (IJCAI-17); 2017.

Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we show that we can also integrate the two formalisms by studying belief change within a (preferential) non-monotonic framework. This integration relies heavily on the identification of the monotonic core of a non-monotonic framework. We consider belief change operators in a non-monotonic propositional setting with a view towards preserving consistency. These results can also be applied to the preservation of coherence—an important notion within the field of logic-based ontologies. We show that the standard AGM approach to belief change can be adapted to a preferential non-monotonic framework, with the definition of expansion, contraction, and revision operators, and corresponding representation results. Surprisingly, preferential AGM belief change, as defined here, can be obtained in terms of classical AGM belief change.

@inproceedings{167,
  author = {Giovanni Casini and Thomas Meyer},
  title = {Belief Change in a Preferential Non-Monotonic Framework},
  abstract = {Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we show that we can also integrate the two formalisms by studying belief change within a (preferential) non-monotonic framework. This integration relies heavily on the identification of the monotonic core of a non-monotonic framework. We consider belief change operators in a non-monotonic propositional setting with a view towards preserving consistency. These results can also be applied to the preservation of coherence—an important notion within the field of logic-based ontologies. We show that the standard AGM approach to belief change can be adapted to a preferential non-monotonic framework, with the definition of expansion, contraction, and revision operators, and corresponding representation results. Surprisingly, preferential AGM belief change, as defined here, can be obtained in terms of classical AGM belief change.},
  year = {2017},
  journal = {International Joint Conference on Artificial Intelligence (IJCAI-17)},
  pages = {929-935},
  month = {19/08-25/08},
  isbn = {978-0-9992411-0-3},
}
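
As background for the expansion, contraction and revision operators mentioned in the abstract, the classical AGM operators are interdefinable via the Levi and Harper identities; these are the standard propositional identities, not the paper's preferential versions.

% Standard (classical) AGM background assumed by the abstract above, not the
% paper's preferential operators: revision and contraction are interdefinable.
\[
  K \ast \varphi \;=\; (K \div \lnot\varphi) + \varphi
  \qquad\text{(Levi identity)}
\]
\[
  K \div \varphi \;=\; K \cap (K \ast \lnot\varphi)
  \qquad\text{(Harper identity)}
\]
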
Mouton F, Teixeira M, Meyer T. Benchmarking a Mobile Implementation of the Social Engineering Prevention Training Tool. In: Information Security for South Africa (ISSA); 2017.

As the nature of information stored digitally becomes more important and confidential, the security of the systems put in place to protect this information needs to be increased. The human element, however, remains a vulnerability of the system and it is this vulnerability that social engineers attempt to exploit. The Social Engineering Attack Detection Model version 2 (SEADMv2) has been proposed to help people identify malicious social engineering attacks. Prior to this study, the SEADMv2 had not been implemented as a user friendly application or tested with real subjects. This paper describes how the SEADMv2 was implemented as an Android application. This Android application was tested on 20 subjects, to determine whether it reduces the probability of a subject falling victim to a social engineering attack or not. The results indicated that the Android implementation of the SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks. The Android application also significantly reduced the number of subjects that fell victim to malicious social engineering attacks, bidirectional communication social engineering attacks and indirect communication social engineering attacks. The Android application did not have a statistically significant effect on harmless scenarios and unidirectional communication social engineering attacks.

@inproceedings{166,
  author = {F. Mouton and M. Teixeira and Thomas Meyer},
  title = {Benchmarking a Mobile Implementation of the Social Engineering Prevention Training Tool},
  abstract = {As the nature of information stored digitally becomes more important and confidential, the security of the systems put in place to protect this information needs to be increased. The human element, however, remains a vulnerability of the system and it is this vulnerability that social engineers attempt to exploit. The Social Engineering Attack Detection Model version 2 (SEADMv2) has been proposed to help people identify malicious social engineering attacks. Prior to this study, the SEADMv2 had not been implemented as a user friendly application or tested with real subjects. This paper describes how the SEADMv2 was implemented as an Android application. This Android application was tested on 20 subjects, to determine whether it reduces the probability of a subject falling victim to a social engineering attack or not. The results indicated that the Android implementation of the SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks. The Android application also significantly reduced the number of subjects that fell victim to malicious social engineering attacks, bidirectional communication social engineering attacks and indirect communication social engineering attacks. The Android application did not have a statistically significant effect on harmless scenarios and unidirectional communication social engineering attacks.},
  year = {2017},
  journal = {Information Security for South Africa (ISSA)},
  pages = {106-116},
  month = {16/08-17/08},
  isbn = {978-1-5386-0545-5},
}
Booth R, Casini G, Meyer T, Varzinczak I. Extending Typicality for Description Logics. 2017. http://orbilu.uni.lu/bitstream/10993/32165/1/TforDL-Technical_report.pdf.

Recent extensions of description logics for dealing with different forms of non-monotonic reasoning don’t take us beyond the case of defeasible subsumption. In this paper we enrich the DL EL⊥ with a (constrained version of) a typicality operator •, the intuition of which is to capture the most typical members of a class, providing us with the DL EL•⊥. We argue that EL•⊥ is the smallest step one can take to increase the expressivity beyond the case of defeasible subsumption for DLs, while still retaining all the rationality properties an appropriate notion of defeasible subsumption is required to satisfy, and investigate what an appropriate notion of non-monotonic entailment for EL•⊥ should look like.

@misc{165,
  author = {Richard Booth and Giovanni Casini and Thomas Meyer and Ivan Varzinczak},
  title = {Extending Typicality for Description Logics},
  abstract = {Recent extensions of description logics for dealing with different forms of non-monotonic reasoning don’t take us beyond the case of defeasible subsumption. In this paper we enrich the DL EL⊥ with a (constrained version of) a typicality operator •, the intuition of which is to capture the most typical members of a class, providing us with the DL EL•⊥. We argue that EL•⊥ is the smallest step one can take to increase the expressivity beyond the case of defeasible subsumption for DLs, while still retaining all the rationality properties an appropriate notion of defeasible subsumption is required to satisfy, and investigate what an appropriate notion of non-monotonic entailment for EL•⊥ should look like.},
  year = {2017},
  url = {http://orbilu.uni.lu/bitstream/10993/32165/1/TforDL-Technical_report.pdf},
}
Rens G, Meyer T. Imagining Probabilistic Belief Change as Imaging. 2017. https://arxiv.org/pdf/1705.01172.pdf.

Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI’s definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.

@misc{164,
  author = {Gavin Rens and Thomas Meyer},
  title = {Imagining Probabilistic Belief Change as Imaging},
  abstract = {Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI’s definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.},
  year = {2017},
  url = {https://arxiv.org/pdf/1705.01172.pdf},
}
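
The basic contrast between conditioning and imaging that EDI generalises can be seen on a three-world example: conditioning renormalises the probability already on the evidence-satisfying worlds, while imaging transports each excluded world's mass to its closest satisfying world. The sketch below illustrates generic Lewis-style imaging with an invented closeness choice, not the paper's EDI operators.

# A small numeric sketch (generic Lewis-style imaging, not the paper's EDI
# framework) contrasting Bayesian conditioning with imaging on three worlds.
# Worlds are valuations of (rain, wet); we change belief by the evidence "wet".
worlds = {            # prior probabilities
    ("rain", "wet"): 0.5,
    ("no-rain", "wet"): 0.1,
    ("no-rain", "dry"): 0.4,
}

def satisfies(w):
    return w[1] == "wet"

# Bayesian conditioning: renormalise the mass already on "wet"-worlds.
z = sum(p for w, p in worlds.items() if satisfies(w))
conditioned = {w: (p / z if satisfies(w) else 0.0) for w, p in worlds.items()}

# Imaging: each "dry"-world ships its whole mass to its closest "wet"-world
# (closeness is an extra modelling choice; here the no-rain dry world is
# taken to be closest to the no-rain wet world).
closest = {("no-rain", "dry"): ("no-rain", "wet")}
imaged = {w: (p if satisfies(w) else 0.0) for w, p in worlds.items()}
for w, p in worlds.items():
    if not satisfies(w):
        imaged[closest[w]] += p

print("conditioning:", conditioned)   # rain gets 5/6 of the mass
print("imaging:     ", imaged)        # rain keeps only its prior 0.5
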
Gerber A, Morar N, Meyer T, Eardley C. Ontology-based support for taxonomic functions. Ecological Informatics. 2017;41. https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f.

This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions). Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy. Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explore the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology and implement a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.

@article{163,
  author = {Aurona Gerber and Nishal Morar and Thomas Meyer and C. Eardley},
  title = {Ontology-based support for taxonomic functions},
  abstract = {This paper reports on an investigation into the use of ontology technologies to support taxonomic functions. Support for taxonomy is imperative given several recent discussions and publications that voiced concern over the taxonomic impediment within the broader context of the life sciences. Taxonomy is defined as the scientific classification, description and grouping of biological organisms into hierarchies based on sets of shared characteristics, and documenting the principles that enforce such classification. Under taxonomic functions we identified two broad categories: the classification functions concerned with identification and naming of organisms, and secondly classification functions concerned with categorization and revision (i.e. grouping and describing, or revisiting existing groups and descriptions).
Ontology technologies within the broad field of artificial intelligence include computational ontologies that are knowledge representation mechanisms using standardized representations that are based on description logics (DLs). This logic base of computational ontologies provides for the computerized capturing and manipulation of knowledge. Furthermore, the set-theoretical basis of computational ontologies ensures particular suitability towards classification, which is considered as a core function of systematics or taxonomy.
Using the specific case of Afrotropical bees, this experimental research study represents the taxonomic knowledge base as an ontology, explore the use of available reasoning algorithms to draw the necessary inferences that support taxonomic functions (identification and revision) over the ontology and implement a Web-based application (the WOC). The contributions include the ontology, a reusable and standardized computable knowledge base of the taxonomy of Afrotropical bees, as well as the WOC and the evaluation thereof by experts.},
  year = {2017},
  journal = {Ecological Informatics},
  volume = {41},
  pages = {11-23},
  publisher = {Elsevier},
  isbn = {1574-9541},
  url = {https://ac.els-cdn.com/S1574954116301959/1-s2.0-S1574954116301959-main.pdf?_tid=487687ca-01b3-11e8-89aa-00000aacb35e&acdnat=1516873196_6a2c94e428089403763ccec46613cf0f},
}
Seebregts C, Pillay A, Crichton R, Singh S, Moodley D. 14 Enterprise Architectures for Digital Health. Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care. 2017. https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp.

• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory. • The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.

@article{162,
  author = {Chris Seebregts and Anban Pillay and Ryan Crichton and S. Singh and Deshen Moodley},
  title = {14 Enterprise Architectures for Digital Health},
  abstract = {• Several different paradigms and standards exist for creating digital health architectures that are mostly complementary, but sometimes contradictory. • The potential benefits of using EA approaches and tools are that they help to ensure the appropriate use of standards for interoperability and data storage and exchange, and encourage the creation of reusable software components and metadata.},
  year = {2017},
  journal = {Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care},
  pages = {173-182},
  publisher = {MIT Press},
  isbn = {978-0262533201},
  url = {https://books.google.co.za/books?id=8p-rDgAAQBAJ&pg=PA173&lpg=PA173&dq=14+Enterprise+Architectures+for+Digital+Health&source=bl&ots=i6SQzaXiPp&sig=zDLJ6lIqt3Xox3Lt5LNCuMkUoJ4&hl=en&sa=X&ved=0ahUKEwivtK6jxPDYAhVkL8AKHXbNDY0Q6AEINDAB#v=onepage&q=14%20Enterp},
}
Adeleke JA, Moodley D, Rens G, Adewumi AO. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control. Sensors. 2017;17(4). http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf.

Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.

@article{160,
  author = {Jude Adeleke and Deshen Moodley and Gavin Rens and A.O. Adewumi},
  title = {Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control},
  abstract = {Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.},
  year = {2017},
  journal = {Sensors},
  volume = {17},
  pages = {1-23},
  issue = {4},
  publisher = {MDPI},
  isbn = {1424-8220},
  url = {http://pubs.cs.uct.ac.za/archive/00001219/01/sensors-17-00807.pdf},
}
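
The sliding-window prediction step described in the abstract can be sketched in a few lines: a window of recent PM2.5 readings is the feature vector, and the target is whether the next half hour exceeds a limit. The data, window sizes and threshold below are invented for illustration; the paper's framework additionally embeds such a model in a Semantic Sensor Web with stream reasoning.

# A rough sketch (assumed data and threshold, not the paper's deployment) of
# the sliding-window idea: an MLP classifies whether PM2.5 will exceed a
# limit over the next half hour from the last few readings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
pm25 = 20 + 10 * np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 3, 2000)

WINDOW, HORIZON, LIMIT = 6, 6, 25.0   # six five-minute readings each way
X = np.array([pm25[i:i + WINDOW] for i in range(len(pm25) - WINDOW - HORIZON)])
y = np.array([pm25[i + WINDOW:i + WINDOW + HORIZON].max() > LIMIT
              for i in range(len(pm25) - WINDOW - HORIZON)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
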