Research Publications

2016

Casini G, Meyer T. Using Defeasible Information to Obtain Coherence. In: Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR). AAAI Press; 2016. https://dl.acm.org/doi/10.5555/3032027.3032097.

We consider the problem of obtaining coherence in a propositional knowledge base using techniques from Belief Change. Our motivation comes from the field of formal ontologies where coherence is interpreted to mean that a concept name has to be satisfiable. In the propositional case we consider here, this translates to a propositional formula being satisfiable. We define belief change operators in a framework of nonmonotonic preferential reasoning. We show how the introduction of defeasible information using contraction operators can be an effective means for obtaining coherence.

@inproceedings{360,
  author = {Giovanni Casini and Thomas Meyer},
  title = {Using Defeasible Information to Obtain Coherence},
  abstract = {We consider the problem of obtaining coherence in a propositional knowledge base using techniques from Belief Change. Our motivation comes from the field of formal ontologies where coherence is interpreted to mean that a concept name has to be satisfiable. In the propositional case we consider here, this translates to a propositional formula being satisfiable. We define belief change operators in a framework of nonmonotonic preferential reasoning. We show how the introduction of defeasible information using contraction operators can be an effective means for obtaining coherence.},
  year = {2016},
  booktitle = {Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR)},
  pages = {537-540},
  month = {25/04 - 29/04},
  publisher = {AAAI Press},
  url = {https://dl.acm.org/doi/10.5555/3032027.3032097},
}
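
In the propositional setting above, coherence reduces to satisfiability of a formula. A minimal sketch of that condition, assuming a toy brute-force checker (the function names are ours; the paper's actual machinery is contraction operators over preferential models, which is not reconstructed here):

```python
from itertools import product

def is_satisfiable(formula, atoms):
    """Brute-force satisfiability: try every truth assignment.

    `formula` maps an assignment (dict atom -> bool) to True/False;
    `atoms` lists the atom names it mentions.
    """
    return any(formula(dict(zip(atoms, values)))
               for values in product([False, True], repeat=len(atoms)))

# The KB {p -> q, p -> ~q} leaves p unsatisfiable ("incoherent"),
# mirroring an unsatisfiable concept name in an ontology.
kb = lambda v: (not v["p"] or v["q"]) and (not v["p"] or not v["q"])
coherent_p = is_satisfiable(lambda v: kb(v) and v["p"], ["p", "q"])
print(coherent_p)  # False: no model of the KB makes p true
```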
Rens G, Casini G, Meyer T. On Revision of Partially Specified Convex Probabilistic Belief Bases. In: European Conference on Artificial Intelligence (ECAI). IOS Press; 2016. https://www.researchgate.net/publication/307577667_On_Revision_of_Partially_Specified_Convex_Probabilistic_Belief_Bases.

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy methods are reasonable, yet yield different results.

@inproceedings{359,
  author = {Gavin Rens and Giovanni Casini and Thomas Meyer},
  title = {On Revision of Partially Specified Convex Probabilistic Belief Bases},
  abstract = {We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. An analysis of the approach is done against six rationality postulates. The expressivity of the belief bases under consideration is rather restricted, but has some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy methods are reasonable, yet yield different results.},
  year = {2016},
  booktitle = {European Conference on Artificial Intelligence (ECAI)},
  pages = {921-929},
  month = {29/08 - 2/09},
  publisher = {IOS Press},
  url = {https://www.researchgate.net/publication/307577667_On_Revision_of_Partially_Specified_Convex_Probabilistic_Belief_Bases},
}
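
The revision step above is a form of Lewis imaging. A hedged sketch of plain imaging on a single, fully specified distribution, assuming Hamming distance between worlds as the notion of closeness (the paper revises sets of 'boundary' distributions and its imaging variant differs in detail):

```python
# Worlds are tuples of truth values; `dist` maps worlds to probabilities.

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def image(dist, satisfies):
    """Move each world's mass to its nearest world satisfying the observation.

    Ties go to the first minimum here; general imaging may split mass.
    """
    allowed = [w for w in dist if satisfies(w)]
    revised = {w: 0.0 for w in dist}
    for w, p in dist.items():
        nearest = min(allowed, key=lambda v: hamming(w, v))
        revised[nearest] += p
    return revised

# Worlds are (p, q) truth pairs; observe q.
prior = {(1, 1): 0.1, (1, 0): 0.4, (0, 1): 0.2, (0, 0): 0.3}
print(image(prior, lambda w: w[1] == 1))  # all mass shifts onto q-worlds
```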
Van Niekerk DR. Syllabification for Afrikaans speech synthesis. In: Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech). Stellenbosch, South Africa; 2016. doi:10.1109/RoboMech.2016.7813143.

This paper describes the continuing development of a pronunciation resource for speech synthesis of Afrikaans by augmenting an existing pronunciation dictionary to include syllable boundaries and stress. Furthermore, different approaches for grapheme to phoneme conversion and syllabification derived from the dictionary are evaluated. Cross-validation experiments suggest that joint sequence models are effective at directly modelling pronunciations including syllable boundaries. Finally, some informal observations and demonstrations are presented regarding the integration of this work into a typical text-to-speech system.

@inproceedings{285,
  author = {Daniel Van Niekerk},
  title = {Syllabification for Afrikaans speech synthesis},
  abstract = {This paper describes the continuing development of a pronunciation resource for speech synthesis of Afrikaans by augmenting an existing pronunciation dictionary to include syllable boundaries and stress. Furthermore, different approaches for grapheme to phoneme conversion and syllabification derived from the dictionary are evaluated. Cross-validation experiments suggest that joint sequence models are effective at directly modelling pronunciations including syllable boundaries. Finally, some informal observations and demonstrations are presented regarding the integration of this work into a typical text-to-speech system.},
  year = {2016},
  booktitle = {Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech)},
  pages = {31-36},
  address = {Stellenbosch, South Africa},
  isbn = {978-1-5090-3335-5},
  doi = {10.1109/RoboMech.2016.7813143},
}
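
One way to let a joint-sequence model capture syllable boundaries, as the abstract suggests, is to fold boundary markers into the phone string so they are predicted like any other symbol. A sketch under that assumption (the transcription is a toy example, not an entry from the actual dictionary):

```python
# Encode syllable boundaries as ordinary symbols in the phone string,
# so a joint-sequence (graphone-style) G2P model learns them alongside
# the phones themselves.

BOUNDARY = "."

def encode(phones, syllable_lengths):
    """Insert a boundary symbol after each syllable's phones."""
    out, i = [], 0
    for n in syllable_lengths:
        out.extend(phones[i:i + n])
        out.append(BOUNDARY)
        i += n
    return out[:-1]  # drop the trailing boundary

# "tafel" -> t a . f @ l  (two syllables; toy transcription)
print(" ".join(encode(["t", "a", "f", "@", "l"], [2, 3])))
```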
Kleynhans N, Hartman W, Van Niekerk DR, et al. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Procedia Computer Science. 2016;81. doi:10.1016/j.procs.2016.04.040.

We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection (STD) system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments and typically deteriorates STD performance. Analysis is performed in the context of the IARPA Babel program, which focuses on rapid STD system development for under-resourced languages. Our results show that approaches that specifically target the modeling of code-switched words significantly improve the detection performance of these words.

@article{271,
  author = {Neil Kleynhans and William Hartman and Daniel Van Niekerk and Charl Van Heerden and Richard Schwartz and Stavros Tsakalidis and Marelie Davel},
  title = {Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection},
  abstract = {We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection (STD) system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments and typically deteriorates STD performance. Analysis is performed in the context of the IARPA Babel program, which focuses on rapid STD system development for under-resourced languages. Our results show that approaches that specifically target the modeling of code-switched words significantly improve the detection performance of these words.},
  year = {2016},
  journal = {Procedia Computer Science},
  volume = {81},
  pages = {128-135},
  publisher = {Elsevier B.V.},
  address = {Yogyakarta, Indonesia},
  issn = {1877-0509},
  doi = {10.1016/j.procs.2016.04.040},
}
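
A hedged illustration of one modeling strategy consistent with the paper's goal: mapping English phones onto the closest phones in the Swahili inventory so that the existing acoustic models cover code-switched words. The inventories and the mapping below are illustrative guesses, not the paper's actual mapping:

```python
# Map English phones without a Swahili counterpart onto surrogate
# Swahili phones; phones shared by both inventories pass through.
EN_TO_SW = {"ae": "a", "ih": "i", "uh": "u", "th": "t", "dh": "d"}

def map_pronunciation(english_phones):
    """Replace each English phone with its Swahili surrogate (identity if absent)."""
    return [EN_TO_SW.get(p, p) for p in english_phones]

print(map_pronunciation(["th", "ae", "ng", "k", "s"]))  # ['t', 'a', 'ng', 'k', 's']
```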
Leenen L, Meyer T. Semantic Technologies and Big Data: Analytics for Cyber Defence. International Journal of Cyber Warfare and Terrorism. 2016;6(3).

Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that have to be understood in order to support intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm in which the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of the decisions that have to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide countermeasures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.

@article{229,
  author = {Louise Leenen and Thomas Meyer},
  title = {Semantic Technologies and Big Data: Analytics for Cyber Defence},
  abstract = {Governments, military forces and other organisations responsible for cybersecurity deal with vast amounts of data that have to be understood in order to support intelligent decision making. Due to the vast amounts of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by providing support for the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends and other useful information. Semantic technology is a knowledge representation paradigm in which the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules. These rules are generally not sophisticated enough to deal with the complexity of the decisions that have to be made. The incorporation of semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide countermeasures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.},
  year = {2016},
  journal = {International Journal of Cyber Warfare and Terrorism},
  volume = {6},
  issue = {3},
}
van Niekerk L, Watson B. The Development and Evaluation of an Electronic Serious Game Aimed at the Education of Core Programming Skills. MA thesis; 2016. http://hdl.handle.net/10019.1/100119.

No Abstract

@mastersthesis{207,
  author = {L. van Niekerk and Bruce Watson},
  title = {The Development and Evaluation of an Electronic Serious Game Aimed at the Education of Core Programming Skills},
  abstract = {No Abstract},
  year = {2016},
  type = {MA thesis},
  url = {http://hdl.handle.net/10019.1/100119},
}
Kala JR, Viriri S, Moodley D. Leaf Classification Using Convexity Moments of Polygons. In: International Symposium on Visual Computing; 2016.

Research has shown that shape features can be used in the process of object recognition with promising results. However, due to a wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature: the Convexity Moments of Polygons, derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature, compared to the Convexity Measure of Polygons, in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons and 80% with the Convexity Measure of Polygons, using a Radial Basis Function (RBF) neural network classifier.

@inproceedings{161,
  author = {J.R. Kala and S. Viriri and Deshen Moodley},
  title = {Leaf Classification Using Convexity Moments of Polygons},
  abstract = {Research has shown that shape features can be used in the process of object recognition with promising results. However, due to a wide variety of shape descriptors, selecting the right one remains a difficult task. This paper presents a new shape recognition feature: the Convexity Moments of Polygons, derived from the Convexity Measure of Polygons. A series of experiments based on the FLAVIA image dataset was performed to demonstrate the accuracy of the proposed feature, compared to the Convexity Measure of Polygons, in the field of leaf classification. A classification rate of 92% was obtained with the Convexity Moments of Polygons and 80% with the Convexity Measure of Polygons, using a Radial Basis Function (RBF) neural network classifier.},
  year = {2016},
  booktitle = {International Symposium on Visual Computing},
  pages = {300-339},
  month = {14/12-16/12},
  isbn = {978-3-319-50832-0},
}
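
For orientation, the classical area-based convexity measure of a polygon is area(P) / area(convex_hull(P)), and the paper derives its Convexity Moments feature from such a measure. A minimal sketch of the base measure only, assuming scipy is available (the moments themselves are not reconstructed here):

```python
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(vertices):
    """Shoelace formula for a simple polygon given as an (n, 2) array."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def convexity_measure(vertices):
    hull_area = ConvexHull(vertices).volume  # ".volume" is area in 2D
    return polygon_area(vertices) / hull_area

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(convexity_measure(square))  # 1.0: a convex shape scores exactly 1
```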
Coetzer W, Moodley D, Gerber A. Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data. PLoS ONE. 2016;11(11). http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf.

Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.

@article{159,
  author = {Willem Coetzer and Deshen Moodley and Aurona Gerber},
  title = {Eliciting and Representing High-Level Knowledge Requirements to Discover Ecological Knowledge in Flower-Visiting Data},
  abstract = {Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics.},
  year = {2016},
  journal = {PLoS ONE},
  volume = {11},
  pages = {1-15},
  issue = {11},
  url = {http://pubs.cs.uct.ac.za/archive/00001127/01/journal.pone.0166559.pdf},
}
Waltham M, Moodley D. An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments. In: Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). Johannesburg: ACM; 2016. doi:10.1145/2987491.2987513.

The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.

@inproceedings{157,
  author = {Michael Waltham and Deshen Moodley},
  title = {An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments},
  abstract = {The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPC). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of non-human characters in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternate and simple mechanism to incorporate emotion in a behaviour tree is proposed and tested. Initial tests of the mechanism show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.},
  year = {2016},
  booktitle = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {45},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {10.1145/2987491.2987513},
}
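
The abstract does not spell out the proposed emotion mechanism, but a generic illustration of folding an emotional state into behaviour-tree selection might look as follows (class and attribute names are ours, not the paper's):

```python
class EmotionState:
    def __init__(self, anger=0.0, fear=0.0):
        self.anger, self.fear = anger, fear

class Action:
    def __init__(self, name, utility):
        self.name, self.utility = name, utility  # utility: EmotionState -> float
    def tick(self, emotions):
        print(f"executing {self.name}")
        return True

class EmotionalSelector:
    """Selector that tries children in order of current emotional appeal."""
    def __init__(self, children):
        self.children = children
    def tick(self, emotions):
        for child in sorted(self.children, key=lambda c: -c.utility(emotions)):
            if child.tick(emotions):
                return True
        return False

npc = EmotionalSelector([
    Action("attack", lambda e: e.anger - e.fear),
    Action("flee", lambda e: e.fear),
])
npc.tick(EmotionState(anger=0.2, fear=0.9))  # high fear -> "executing flee"
```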
Clark A, Moodley D. A System for a Hand Gesture-Manipulated Virtual Reality Environment. In: Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016). Johannesburg: ACM; 2016. doi:10.1145/2987491.2987511.

Extensive research has been done on machine learning techniques for hand gesture recognition (HGR) using camera-based devices such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in Unreal Engine 4. An average gesture classification time of 0.057 ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of applying to VR the HGR machine learning techniques previously used in non-VR scenarios such as Sign Language recognition.

@inproceedings{156,
  author = {A. Clark and Deshen Moodley},
  title = {A System for a Hand Gesture-Manipulated Virtual Reality Environment},
  abstract = {Extensive research has been done on machine learning techniques for hand gesture recognition (HGR) using camera-based devices such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in Unreal Engine 4. An average gesture classification time of 0.057 ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of applying to VR the HGR machine learning techniques previously used in non-VR scenarios such as Sign Language recognition.},
  year = {2016},
  booktitle = {Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2016)},
  pages = {10},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {Johannesburg},
  isbn = {978-1-4503-4805-8},
  doi = {10.1145/2987491.2987511},
}
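
The classification step described above is concrete enough to sketch: a kNN classifier over feature vectors of five normalized fingertip-to-palm distances. The sample vectors below are made up; in the actual system the distances come from Leap Motion hand data:

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Majority vote among the k training vectors closest to x."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# One row per training sample: [thumb, index, middle, ring, pinky].
train_X = np.array([[0.9, 0.9, 0.9, 0.9, 0.9],   # open hand
                    [0.2, 0.2, 0.2, 0.2, 0.2],   # fist
                    [0.2, 0.9, 0.9, 0.2, 0.2]])  # "peace" sign
train_y = ["open", "fist", "peace"]
print(knn_classify(train_X, train_y, np.array([0.85, 0.9, 0.8, 0.9, 0.9]), k=1))
```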
Van Heerden CJ, Kleynhans N, Davel MH. Improving the Lwazi ASR baseline. In: Interspeech. San Francisco, USA; 2016. doi:10.21437/Interspeech.2016-1412.

We investigate the impact of recent advances in speech recognition techniques for under-resourced languages. Specifically, we review earlier results published on the Lwazi ASR corpus of South African languages, and experiment with additional acoustic modeling approaches. We demonstrate large gains by applying current state-of-the-art techniques, even if the data itself is neither extended nor improved. We analyze the various performance improvements observed, report on comparative performance per technique – across all eleven languages in the corpus – and discuss the implications of our findings for under-resourced languages in general.

@inproceedings{153,
  author = {Charl Van Heerden and Neil Kleynhans and Marelie Davel},
  title = {Improving the Lwazi ASR baseline},
  abstract = {We investigate the impact of recent advances in speech recognition techniques for under-resourced languages. Specifically, we review earlier results published on the Lwazi ASR corpus of South African languages, and experiment with additional acoustic modeling approaches. We demonstrate large gains by applying current state-of-the-art techniques, even if the data itself is neither extended nor improved. We analyze the various performance improvements observed, report on comparative performance per technique – across all eleven languages in the corpus – and discuss the implications of our findings for under-resourced languages in general.},
  year = {2016},
  booktitle = {Interspeech},
  pages = {3529-3538},
  month = {08/09-12/09},
  address = {San Francisco, USA},
  doi = {10.21437/Interspeech.2016-1412},
}
Lapalme J, Gerber A, van der Merwe A, Zachman J, de Vries M, Hinkelmann K. Exploring the future of enterprise architecture: A Zachman perspective. Computers in Industry. 2016;79. http://www.sciencedirect.com/science/journal/01663615/79.

No Abstract

@misc{150,
  author = {James Lapalme and Aurona Gerber and Alta van der Merwe and John Zachman and Marne de Vries and Knut Hinkelmann},
  title = {Exploring the future of enterprise architecture: A Zachman perspective},
  abstract = {No Abstract},
  year = {2016},
  url = {http://www.sciencedirect.com/science/journal/01663615/79},
}
Harmse H, Britz K, Gerber A. Armstrong Relations for Ontology Design and Evaluation. 2016. http://ceur-ws.org/Vol-1577/.

No Abstract

@misc{149,
  author = {Henriette Harmse and Katarina Britz and Aurona Gerber},
  title = {Armstrong Relations for Ontology Design and Evaluation},
  abstract = {No Abstract},
  year = {2016},
  url = {http://ceur-ws.org/Vol-1577/},
}
Hinkelmann K, Gerber A, Karagiannis D, Thoenssen B, van der Merwe A, Woitsch R. A new paradigm for the continuous alignment of business and IT: Combining enterprise architecture modelling and enterprise ontology. Computers in Industry. 2016;79. http://www.sciencedirect.com/science/article/pii/S0166361515300270.

No Abstract

@article{148,
  author = {Knut Hinkelmann and Aurona Gerber and Dimitris Karagiannis and Barbara Thoenssen and Alta van der Merwe and Robert Woitsch},
  title = {A new paradigm for the continuous alignment of business and IT: Combining enterprise architecture modelling and enterprise ontology},
  abstract = {No Abstract},
  year = {2016},
  journal = {Computers in Industry},
  volume = {79},
  publisher = {Elsevier},
  url = {http://www.sciencedirect.com/science/article/pii/S0166361515300270},
}
van der Merwe B. The Output Size Problem for String-to-Tree Transducers. 2016. http://www.ims.uni-stuttgart.de/events/TTATT2016.

No Abstract

@misc{146,
  author = {Brink van der Merwe},
  title = {The Output Size Problem for String-to-Tree Transducers},
  abstract = {No Abstract},
  year = {2016},
  url = {http://www.ims.uni-stuttgart.de/events/TTATT2016},
}
van der Merwe B. Analyzing Matching Time Behavior of Backtracking Regular Expression Matchers by Using Ambiguity of NFA. In: 21st International Conference on Implementation and Application of Automata; 2016.

No Abstract

@inproceedings{145,
  author = {Brink van der Merwe},
  title = {Analyzing Matching Time Behavior of Backtracking Regular Expression Matchers by Using Ambiguity of NFA},
  abstract = {No Abstract},
  year = {2016},
  booktitle = {21st International Conference on Implementation and Application of Automata},
  pages = {322-334},
  month = {19/07-22/07},
}
Kroon S, Le Roux PB, Bester W. DSaaS: A Cloud Service for Persistent Data Structures. In: CLOSER, 6th International Conference on Cloud Computing and Services Science. Portugal; 2016.

No Abstract

@inproceedings{143,
  author = {Steve Kroon and PB Le Roux and Willem Bester},
  title = {DSaaS: A Cloud Service for Persistent Data Structures},
  abstract = {No Abstract},
  year = {2016},
  booktitle = {CLOSER, 6th International Conference on Cloud Computing and Services Science},
  pages = {37-48},
  month = {23/04-25/04},
  address = {Portugal},
  isbn = {978-989-758-182-3},
}
de Villiers H, Wiehman S. Semantic Segmentation of Bioimages Using Convolutional Neural Networks. In: 2016 International Joint Conference on Neural Networks (IJCNN). Singapore; 2016.

Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live/dead assay dataset from the Broad Bioimage Benchmark Collection. These architectures include a standard convolutional network which produces single pixel outputs, as well as Fully Convolutional Networks (FCN) for patch prediction. It was shown that the custom image processing pipeline, which achieved a worm segmentation accuracy of 94%, was outperformed by all of the architectures considered, with the best being 97.3% achieved by a FCN with a single downsampling layer. These results demonstrate the promise of employing convolutional neural network architectures as an alternative to ad-hoc image processing pipelines on optical microscopy images of C. elegans.

@inproceedings{142,
  author = {Hennie de Villiers and Stiaan Wiehman},
  title = {Semantic Segmentation of Bioimages Using Convolutional Neural Networks},
  abstract = {Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live/dead assay dataset from the Broad Bioimage Benchmark Collection. These architectures include a standard convolutional network which produces single pixel outputs, as well as Fully Convolutional Networks (FCN) for patch prediction. It was shown that the custom image processing pipeline, which achieved a worm segmentation accuracy of 94%, was outperformed by all of the architectures considered, with the best being 97.3% achieved by a FCN with a single downsampling layer. These results demonstrate the promise of employing convolutional neural network architectures as an alternative to ad-hoc image processing pipelines on optical microscopy images of C. elegans.},
  year = {2016},
  booktitle = {2016 International Joint Conference on Neural Networks (IJCNN)},
  pages = {624-631},
  month = {24/07-29/07},
  address = {Singapore},
  isbn = {978-1-5090-0620-5},
}
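
The best-performing architecture reported above is a fully convolutional network with a single downsampling layer. A hedged PyTorch sketch of that shape (channel counts, kernel sizes and the number of classes are guesses, not the paper's configuration):

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """FCN with one downsampling step, producing per-pixel class scores."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # the single downsampling layer
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                    align_corners=False)
        self.classify = nn.Conv2d(32, num_classes, 1)  # 1x1 conv -> class scores

    def forward(self, x):
        return self.classify(self.upsample(self.encode(x)))

scores = TinyFCN()(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 2, 64, 64]): a score map per class
```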