### 2015

Giwa O, Davel MH. Text-based Language Identification of Multilingual Names. In: Pattern Recognition Association of South Africa (PRASA). Port Elizabeth, South Africa; 2015. doi: 10.1109/RoboMech.2015.7359517.

Text-based language identification (T-LID) of isolated words has been shown to be useful for various speech processing tasks, including pronunciation modelling and data categorisation. When the words to be categorised are proper names, the task becomes more difficult: not only do proper names often have idiosyncratic spellings, they are also often considered to be multilingual. We, therefore, investigate how an existing T-LID technique can be adapted to perform multilingual word classification. That is, given a proper name, which may be either mono- or multilingual, we aim to determine how accurately we can predict how many possible source languages the word has, and what they are. Using a Joint Sequence Model-based approach to T-LID and the SADE corpus - a newly developed proper names corpus of South African names - we experiment with different approaches to multilingual T-LID. We compare posterior-based and likelihood-based methods and obtain promising results on a challenging task.

@inproceedings{289,
author = {Oluwapelumi Giwa and Marelie Davel},
title = {Text-based Language Identification of Multilingual Names},
abstract = {Text-based language identification (T-LID) of isolated words has been shown to be useful for various speech processing tasks, including pronunciation modelling and data categorisation. When the words to be categorised are proper names, the task becomes more difficult: not only do proper names often have idiosyncratic spellings, they are also often considered to be multilingual. We, therefore, investigate how an existing T-LID technique can be adapted to perform multilingual word classification. That is, given a proper name, which may be either mono- or multilingual, we aim to determine how accurately we can predict how many possible source languages the word has, and what they are. Using a Joint Sequence Model-based approach to T-LID and the SADE corpus - a newly developed proper names corpus of South African names - we experiment with different approaches to multilingual T-LID. We compare posterior-based and likelihood-based methods and obtain promising results on a challenging task.},
year = {2015},
journal = {Pattern Recognition Association of South Africa (PRASA)},
pages = {166-171},
address = {Port Elizabeth, South Africa},
isbn = {978-1-4673-7450-7, 978-1-4673-7449-1},
doi = {10.1109/RoboMech.2015.7359517},
}

Davel MH, Barnard E, Van Heerden CJ, et al. Exploring minimal pronunciation modeling for low resource languages. In: Interspeech. Dresden, Germany; 2015.

Pronunciation lexicons can range from fully graphemic (modeling each word using the orthography directly) to fully phonemic (first mapping each word to a phoneme string). Between these two options lies a continuum of modeling options. We analyze techniques that can improve the accuracy of a graphemic system without requiring significant effort to design or implement. The analysis is performed in the context of the IARPA Babel project, which aims to develop spoken term detection systems for previously unseen languages rapidly, and with minimal human effort. We consider techniques related to letter-to-sound mapping and language-independent syllabification of primarily graphemic systems, and discuss results obtained for six languages: Cebuano, Kazakh, Kurmanji Kurdish, Lithuanian, Telugu and Tok Pisin.

@inproceedings{288,
author = {Marelie Davel and Etienne Barnard and Charl Van Heerden and William Hartman and Damianos Karakos and Richard Schwartz and Stavros Tsakalidis},
title = {Exploring minimal pronunciation modeling for low resource languages},
abstract = {Pronunciation lexicons can range from fully graphemic (modeling each word using the orthography directly) to fully phonemic (first mapping each word to a phoneme string). Between these two options lies a continuum of modeling options. We analyze techniques that can improve the accuracy of a graphemic system without requiring significant effort to design or implement. The analysis is performed in the context of the IARPA Babel project, which aims to develop spoken term detection systems for previously unseen languages rapidly, and with minimal human effort. We consider techniques related to letter-to-sound mapping and language-independent syllabification of primarily graphemic systems, and discuss results obtained for six languages: Cebuano, Kazakh, Kurmanji Kurdish, Lithuanian, Telugu and Tok Pisin.},
year = {2015},
journal = {Interspeech},
pages = {538-542},
}

Badenhorst J, Davel MH. Synthetic triphones from trajectory-based feature distributions. In: Pattern Recognition Association of South Africa (PRASA). Port Elizabeth, South Africa; 2015. doi:10.1109/RoboMech.2015.7359509.

We experiment with a new method to create synthetic models of rare and unseen triphones in order to supplement limited automatic speech recognition (ASR) training data. A trajectory model is used to characterise seen transitions at the spectral level, and these models are then used to create features for unseen or rare triphones. We find that a fairly restricted model (piece-wise linear with three line segments per channel of a diphone transition) is able to represent training data quite accurately. We report on initial results when creating additional triphones for a single-speaker data set, finding small but significant gains, especially when adding additional samples of rare (rather than unseen) triphones.

@inproceedings{287,
author = {Jaco Badenhorst and Marelie Davel},
title = {Synthetic triphones from trajectory-based feature distributions},
abstract = {We experiment with a new method to create synthetic models of rare and unseen triphones in order to supplement limited automatic speech recognition (ASR) training data. A trajectory model is used to characterise seen transitions at the spectral level, and these models are then used to create features for unseen or rare triphones. We find that a fairly restricted model (piece-wise linear with three line segments per channel of a diphone transition) is able to represent training data quite accurately. We report on initial results when creating additional triphones for a single-speaker data set, finding small but significant gains, especially when adding additional samples of rare (rather than unseen) triphones.},
year = {2015},
journal = {Pattern Recognition Association of South Africa (PRASA)},
pages = {118-122},
address = {Port Elizabeth, South Africa},
isbn = {978-1-4673-7450-7, 978-1-4673-7449-1},
doi = {10.1109/RoboMech.2015.7359509},
}

Newman G, Fischer B. Language Fuzzing with Name Binding. 2015;Honours.

The language fuzzing with name binding project generates syntactically valid test programs that exercise the name binding semantics of a language processor. We introduce a generation algorithm and a tool, NameFuzz, for the test suite generation. It achieves this by parsing an ANTLR grammar representing the context-free grammar of the language, along with the language's name binding rules in the NaBL meta-language. The test sentences are intended to be either accepted (positive test cases) or rejected (failing test cases) by the language processor. The intention is to promote confidence in the language processor as far as semantic correctness is concerned. The generated test suite is syntactically correct, but the limitations of not taking type checking into account, and of lacking a method to evaluate expressions, lead to a large number of test sentences that are semantically incorrect. To a degree these limitations are overcome by the combinatorial nature of the generation algorithm, which ensures that each possible type-correct sentence is generated as well.

@thesis{137,
author = {G. Newman and Bernd Fischer},
title = {Language Fuzzing with Name Binding},
abstract = {The language fuzzing with name binding project generates syntactically valid test programs that exercise the name binding semantics of a language processor. We introduce a generation algorithm and a tool, NameFuzz, for the test suite generation. It achieves this by parsing an ANTLR grammar representing the context-free grammar of the language, along with the language's name binding rules in the NaBL meta-language. The test sentences are intended to be either accepted (positive test cases) or rejected (failing test cases) by the language processor. The intention is to promote confidence in the language processor as far as semantic correctness is concerned. The generated test suite is syntactically correct, but the limitations of not taking type checking into account, and of lacking a method to evaluate expressions, lead to a large number of test sentences that are semantically incorrect. To a degree these limitations are overcome by the combinatorial nature of the generation algorithm, which ensures that each possible type-correct sentence is generated as well.},
year = {2015},
volume = {Honours},
}

Breytenbach JA, Fischer B. Progressive Software Design Tool. 2015;Honours.

Visualising software can be a tedious and cluttered affair with the design process and development often being out of sync. Some development methodologies even largely do away with the design entirely and focus on short bursts of coding and validation to make sure the project is still on the right track. This document focuses on deriving a methodology and subsequent tool to iteratively and progressively expand concepts, the understanding of the project and development cycles. An existing visualisation is adapted to better suit the needs of the designer by providing the ability to view the project from different layers of abstraction in one concise visualisation.

@thesis{136,
author = {J. Breytenbach and Bernd Fischer},
title = {Progressive Software Design Tool},
abstract = {Visualising software can be a tedious and cluttered affair with the design process and development often being out of sync. Some development methodologies even largely do away with the design entirely and focus on short bursts of coding and validation to make sure the project is still on the right track. This document focuses on deriving a methodology and subsequent tool to iteratively and progressively expand concepts, the understanding of the project and development cycles. An existing visualisation is adapted to better suit the needs of the designer by providing the ability to view the project from different layers of abstraction in one concise visualisation.},
year = {2015},
volume = {Honours},
}

Britz K, Casini G, Meyer T, Moodley K, Sattler U, Varzinczak I. Rational Defeasible Reasoning for Expressive Description Logics. 2015.

In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. This representation result paves the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.

@misc{130,
author = {Katarina Britz and Giovanni Casini and Thomas Meyer and Kody Moodley and U. Sattler and Ivan Varzinczak},
title = {Rational Defeasible Reasoning for Expressive Description Logics},
abstract = {In this paper, we enrich description logics (DLs) with non-monotonic reasoning features in a number of ways. We start by investigating a notion of defeasible conditional in the spirit of KLM-style defeasible consequence. In particular, we consider a natural and intuitive semantics for defeasible subsumption in terms of DL interpretations enriched with a preference relation. We propose and investigate syntactic properties (à la Gentzen) for both preferential and rational conditionals and prove representation results for the description logic ALC. This representation result paves the way for more effective decision procedures for defeasible reasoning in DLs. We then move to non-monotonicity in DLs at the level of entailment. We investigate versions of entailment in the context of both preferential and rational subsumption, relate them to preferential and rational closure, and show that computing them can be reduced to classical ALC entailment. This provides further evidence that our semantic constructions are appropriate in a non-monotonic DL setting. One of the barriers to evaluating performance scalability of rational closure is the absence of naturally occurring DL-based ontologies with defeasible features. We overcome this barrier by devising an approach to introduce defeasible subsumption into classical real world ontologies. This culminates in a set of semi-natural defeasible ontologies that is used, together with a purely artificial set, to test our rational closure algorithms. We found that performance is scalable on the whole with no major bottlenecks.},
year = {2015},
}

Kroon S, Nienaber S, Booysen MJ. A Comparison of Low-Cost Monocular Vision Techniques for Pothole Distance Estimation. In: IEEE Symposium Series on Computational Intelligence: IEEE Symposium on Computational Intelligence in Vehicles and Transportation Systems. ; 2015.

No Abstract

@inproceedings{129,
author = {Steve Kroon and S. Nienaber and M.J. Booysen},
title = {A Comparison of Low-Cost Monocular Vision Techniques for Pothole Distance Estimation},
abstract = {No Abstract},
year = {2015},
journal = {IEEE Symposium Series on Computational Intelligence: IEEE Symposium on Computational Intelligence in Vehicles and Transportation Systems},
pages = {419-426},
month = {08/12-10/12},
}

van der Merwe B, Visser WC, van der Merwe H, Nel SEA, Tkachuk O. Environment Modeling Using Runtime Values for JPF-Android. ACM SIGSOFT Software Engineering Notes. 2015;40(6). http://dx.doi.org/10.1145/2830719.2830727.

No Abstract

@article{128,
author = {Brink van der Merwe and W.C. Visser and Heila van der Merwe and S.E.A. Nel and O. Tkachuk},
title = {Environment Modeling Using Runtime Values for JPF-Android},
abstract = {No Abstract},
year = {2015},
journal = {ACM SIGSOFT Software Engineering Notes},
volume = {40},
pages = {1-5},
issue = {6},
publisher = {ACM},
url = {http://dx.doi.org/10.1145/2830719.2830727},
}

Adeleke JA, Moodley D. An Ontology for Proactive Indoor Environmental Quality Monitoring and Control. In: The 2015 Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '15). New York, NY, USA; 2015.

Proactive monitoring and control of indoor air quality in homes where there are pregnant mothers and infants is essential for healthy development and well-being of children. This is especially true in low income households where cooking practices and exposure to harmful pollutants produced by nearby industries can negatively impact on a healthy home environment. Interdisciplinary expert knowledge is required to make sense of dynamic and complex environmental phenomena from multivariate low level sensor observations and high level human activities to detect health risks and enact decisions about control. We have developed an ontology for indoor environmental quality monitoring and control based on an ongoing real world case study in Durban, South Africa. We implemented an Indoor Air Quality Index and a thermal comfort index which can be automatically determined by reasoning on the ontology. We evaluated the ontology by populating it with test sensor data and showing how it can be queried to analyze health risk situations and determine control actions. Our evaluation shows that the ontology can be used for real world indoor monitoring and control applications in resource constrained settings.

@inproceedings{127,
author = {Jude Adeleke and Deshen Moodley},
title = {An Ontology for Proactive Indoor Environmental Quality Monitoring and Control},
abstract = {Proactive monitoring and control of indoor air quality in homes where there are pregnant mothers and infants is essential for healthy development and well-being of children. This is especially true in low income households where cooking practices and exposure to harmful pollutants produced by nearby industries can negatively impact on a healthy home environment. Interdisciplinary expert knowledge is required to make sense of dynamic and complex environmental phenomena from multivariate low level sensor observations and high level human activities to detect health risks and enact decisions about control. We have developed an ontology for indoor environmental quality monitoring and control based on an ongoing real world case study in Durban, South Africa. We implemented an Indoor Air Quality Index and a thermal comfort index which can be automatically determined by reasoning on the ontology. We evaluated the ontology by populating it with test sensor data and showing how it can be queried to analyze health risk situations and determine control actions. Our evaluation shows that the ontology can be used for real world indoor monitoring and control applications in resource constrained settings.},
year = {2015},
journal = {The 2015 Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '15)},
month = {28/09-30/09},
isbn = {978-1-4503-3683-3},
}

Fischer B, Greene GJ. Interactive tag cloud visualization of software version control repositories. Software Visualization (VISSOFT). 2015. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332415&isnumber=7332403.

No Abstract

@article{126,
author = {Bernd Fischer and G.J. Greene},
title = {Interactive tag cloud visualization of software version control repositories},
abstract = {No Abstract},
year = {2015},
journal = {Software Visualization (VISSOFT)},
pages = {56-65},
url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332415&isnumber=7332403},
}

Britz K, Klarman S. Ontology learning from interpretations in lightweight description logics. In: 25th International Conference on Inductive Logic Programming. ; 2015.

Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital --- yet, hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic $\mathcal{EL}$ and define corresponding learning algorithms.

@inproceedings{125,
author = {Katarina Britz and Simon Klarman},
title = {Ontology learning from interpretations in lightweight description logics},
abstract = {Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital --- yet, hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic $\mathcal{EL}$ and define corresponding learning algorithms.},
year = {2015},
journal = {25th International Conference on Inductive Logic Programming},
month = {20/08-22/08},
}

de Vries M, Gerber A, van der Merwe A. The enterprise engineering domain. In: Advances in Enterprise Engineering IX. Springer; 2015. http://link.springer.com/chapter/10.1007%2F978-3-319-19297-0_4.

No Abstract

@inbook{124,
author = {Marne de Vries and Aurona Gerber and Alta van der Merwe},
title = {The enterprise engineering domain},
abstract = {No Abstract},
year = {2015},
journal = {Advances in Enterprise Engineering IX},
publisher = {Springer},
isbn = {978-3-319-19296-3},
}

van der Merwe A, Naidoo R, Gerber A. Understanding familiarization processes with Design Science Research: A social representation analysis. SACJ. 2015;65.

No Abstract

@article{123,
author = {Alta van der Merwe and Rennie Naidoo and Aurona Gerber},
title = {Understanding familiarization processes with Design Science Research: A social representation analysis.},
abstract = {No Abstract},
year = {2015},
journal = {SACJ},
volume = {65},
issn = {2313-7835},
}

Kotzé P, van der Merwe A, Gerber A. Design Science Research as Research Approach in Doctoral Studies. In: AMCIS 2015, the 2015 Americas Conference on Information Systems. ; 2015.

No Abstract

@inproceedings{122,
author = {Paula Kotzé and Alta van der Merwe and Aurona Gerber},
title = {Design Science Research as Research Approach in Doctoral Studies},
abstract = {No Abstract},
year = {2015},
journal = {AMCIS 2015, the 2015 Americas Conference on Information Systems},
month = {13/08-15/08},
}

Gerber MC, Gerber A, van der Merwe A. The Conceptual Framework for Financial Reporting as a Domain Ontology. In: AMCIS 2015, the 2015 Americas Conference on Information Systems. ; 2015.

No Abstract

@inproceedings{121,
author = {Mathinus Gerber and Aurona Gerber and Alta van der Merwe},
title = {The Conceptual Framework for Financial Reporting as a Domain Ontology},
abstract = {No Abstract},
year = {2015},
journal = {AMCIS 2015, the 2015 Americas Conference on Information Systems},
month = {13/08-15/08},
}

Lapalme J, Gerber A, van der Merwe A, Zachman J, de Vries M, Hinkelmann K. Exploring the future of enterprise architecture: A Zachman perspective. Computers in Industry. 2015. http://www.sciencedirect.com/science/article/pii/S0166361515300166.

Today, and for the foreseeable future, organizations will face ever-increasing levels of complexity and uncertainty. Many believe that enterprise architecture (EA) will help organizations address such difficult terrain by guiding the design of adaptive and resilient enterprises and their information systems. This paper presents the “Grand Challenges” that we believe will challenge organizations in the future and need to be addressed by enterprise architecture. As a first step in using enterprise architecture as a solution for overcoming identified challenges, the Zachman Enterprise Architecture Framework is used to guide and structure the discussion. The paper presents the “Grand Challenges” and discusses promising theories and models for addressing them. In addition, current advances in the field of enterprise architecture that have begun to address the challenges will be presented. In conclusion, final thoughts on the future of enterprise architecture as a research field and a profession are offered.

@article{120,
author = {James Lapalme and Aurona Gerber and Alta van der Merwe and John Zachman and Marne de Vries and Knut Hinkelmann},
title = {Exploring the future of enterprise architecture: A Zachman perspective},
abstract = {Today, and for the foreseeable future, organizations will face ever-increasing levels of complexity and uncertainty. Many believe that enterprise architecture (EA) will help organizations address such difficult terrain by guiding the design of adaptive and resilient enterprises and their information systems. This paper presents the “Grand Challenges” that we believe will challenge organizations in the future and need to be addressed by enterprise architecture. As a first step in using enterprise architecture as a solution for overcoming identified challenges, the Zachman Enterprise Architecture Framework is used to guide and structure the discussion. The paper presents the “Grand Challenges” and discusses promising theories and models for addressing them. In addition, current advances in the field of enterprise architecture that have begun to address the challenges will be presented. In conclusion, final thoughts on the future of enterprise architecture as a research field and a profession are offered.},
year = {2015},
journal = {Computers in Industry},
publisher = {Elsevier},
url = {http://www.sciencedirect.com/science/article/pii/S0166361515300166},
}

Hinkelmann K, Gerber A, Karagiannis D, Thoenssen B, van der Merwe A, Woitsch R. A new paradigm for the continuous alignment of business and IT: Combining enterprise architecture modelling and enterprise ontology. Computers in Industry. 2015. http://www.sciencedirect.com/science/article/pii/S0166361515300270.

The paper deals with Next Generation Enterprise Information Systems in the context of Enterprise Engineering. The continuous alignment of business and IT in a rapidly changing environment is a grand challenge for today's enterprises. The ability to react timeously to continuous and unexpected change is called agility and is an essential quality of the modern enterprise. Being agile has consequences for the engineering of enterprises and enterprise information systems. In this paper a new paradigm for next generation enterprise information systems is proposed, which shifts the development approach of model-driven engineering to continuous alignment of business and IT for the agile enterprise. It is based on a metamodelling approach, which supports both human-interpretable graphical enterprise architecture and machine-interpretable enterprise ontologies. Furthermore, next generation enterprise information systems are described, which embed modelling tools and algorithms for model analysis.

@article{119,
author = {Knut Hinkelmann and Aurona Gerber and Dimitris Karagiannis and Barbara Thoenssen and Alta van der Merwe and Robert Woitsch},
title = {A new paradigm for the continuous alignment of business and IT: Combining enterprise architecture modelling and enterprise ontology.},
abstract = {The paper deals with Next Generation Enterprise Information Systems in the context of Enterprise Engineering. The continuous alignment of business and IT in a rapidly changing environment is a grand challenge for today's enterprises. The ability to react timeously to continuous and unexpected change is called agility and is an essential quality of the modern enterprise. Being agile has consequences for the engineering of enterprises and enterprise information systems. In this paper a new paradigm for next generation enterprise information systems is proposed, which shifts the development approach of model-driven engineering to continuous alignment of business and IT for the agile enterprise. It is based on a metamodelling approach, which supports both human-interpretable graphical enterprise architecture and machine-interpretable enterprise ontologies. Furthermore, next generation enterprise information systems are described, which embed modelling tools and algorithms for model analysis.},
year = {2015},
journal = {Computers in Industry},
publisher = {Elsevier},
url = {http://www.sciencedirect.com/science/article/pii/S0166361515300270},
}

Thomas A, Gerber A, van der Merwe A. Visual Syntax of UML Class and Package Diagram Constructs as an Ontology. In: KEOD 2015, the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management. ; 2015.

Diagrams are often studied as visual languages with an abstract and a concrete syntax (concrete syntax is often referred to as visual syntax), where the latter contains the visual representations of the concepts in the former. A formal specification of the concrete syntax is useful in diagram processing applications as well as in achieving unambiguous understanding of diagrams. Unified Modeling Language (UML) is a commonly used modeling language to represent software models using its diagrams. Class and package diagrams are two diagrams of UML. The motivation for this work is twofold; UML lacks a formal visual syntax specification and ontologies are under-explored for visual syntax specifications. The work in this paper, therefore, explores using ontologies for visual syntax specifications by specifying the visual syntax of a set of UML class and package diagram constructs as an ontology in the Web ontology language, OWL. The reasoning features of the ontology reasoners are then used to verify the visual syntax specification. Besides formally encoding the visual syntax of numerous UML constructs, the work also demonstrates the general value of using OWL for visual syntax specifications.

@inproceedings{118,
author = {Anitta Thomas and Aurona Gerber and Alta van der Merwe},
title = {Visual Syntax of UML Class and Package Diagram Constructs as an Ontology},
abstract = {Diagrams are often studied as visual languages with an abstract and a concrete syntax (concrete syntax is often referred to as visual syntax), where the latter contains the visual representations of the concepts in the former. A formal specification of the concrete syntax is useful in diagram processing applications as well as in achieving unambiguous understanding of diagrams. Unified Modeling Language (UML) is a commonly used modeling language to represent software models using its diagrams. Class and package diagrams are two diagrams of UML. The motivation for this work is twofold; UML lacks a formal visual syntax specification and ontologies are under-explored for visual syntax specifications. The work in this paper, therefore, explores using ontologies for visual syntax specifications by specifying the visual syntax of a set of UML class and package diagram constructs as an ontology in the Web ontology language, OWL. The reasoning features of the ontology reasoners are then used to verify the visual syntax specification. Besides formally encoding the visual syntax of numerous UML constructs, the work also demonstrates the general value of using OWL for visual syntax specifications.},
year = {2015},
journal = {KEOD 2015, the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management},
month = {12/11-14/11},
isbn = {978-989-758-158-8},
}

Casini G, Straccia U, Meyer T. A Polynomial Time Subsumption Algorithm for EL⊥ under Rational Closure. 2015.

No Abstract

@misc{117,
author = {Giovanni Casini and Umberto Straccia and Thomas Meyer},
title = {A Polynomial Time Subsumption Algorithm for EL⊥ under Rational Closure},
abstract = {No Abstract},
year = {2015},
}

Britz K, Klarman S. Towards unsupervised ontology learning from data. 2015. http://ceur-ws.org/Vol-1423/.

Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital --- yet, hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic EL and define corresponding learning algorithms.

@misc{116,
author = {Katarina Britz and Simon Klarman},
title = {Towards unsupervised ontology learning from data},
abstract = {Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital --- yet, hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic EL and define corresponding learning algorithms.},
year = {2015},
publisher = {CEUR-WS, Vol. 1423},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-1423/},
}

Booth R. On the Entailment Problem for a Logic of Typicality. In: IJCAI 2015; 2015.

Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate. We investigate different (semantic) versions of entailment for PTL, based on the notion of Rational Closure as defined by Lehmann and Magidor for KLM-style conditionals, and constructed using minimality. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we define two primary forms of entailment for PTL and discuss their advantages and disadvantages.

@inproceedings{115,
author = {Richard Booth},
title = {On the Entailment Problem for a Logic of Typicality},
abstract = {Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate. We investigate different (semantic) versions of entailment for PTL, based on the notion of Rational Closure as defined by Lehmann and Magidor for KLM-style conditionals, and constructed using minimality. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we define two primary forms of entailment for PTL and discuss their advantages and disadvantages.},
year = {2015},
booktitle = {IJCAI 2015},
month = {25/07-31/07},
}
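The Rational Closure construction referred to in the abstract above (due to Lehmann and Magidor) can be sketched for the propositional case. The following is a minimal brute-force illustration, not code from any of the papers listed here; the "penguin" knowledge base is the standard toy example from the non-monotonic reasoning literature, and all identifiers are chosen for illustration only.

```python
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

def models(formula):
    """All truth assignments (as dicts over ATOMS) satisfying `formula`."""
    return [m for vals in product([True, False], repeat=len(ATOMS))
            for m in [dict(zip(ATOMS, vals))] if formula(m)]

def entails(premise, conclusion):
    """Classical entailment: every model of `premise` satisfies `conclusion`."""
    return all(conclusion(m) for m in models(premise))

def conj(fs):      return lambda m: all(f(m) for f in fs)
def implies(a, b): return lambda m: (not a(m)) or b(m)
def neg(a):        return lambda m: not a(m)

bird    = lambda m: m["bird"]
penguin = lambda m: m["penguin"]
flies   = lambda m: m["flies"]

# Defaults "antecedent |~ consequent", each materialised as an implication.
defaults = [(bird, flies), (penguin, bird), (penguin, neg(flies))]

def rank_defaults(defaults):
    """Partition defaults into ranks: a default is exceptional w.r.t. a set
    K when the materialisation of K entails the negation of its antecedent."""
    ranks, remaining = [], list(defaults)
    while remaining:
        mat = conj([implies(a, c) for a, c in remaining])
        exceptional = [d for d in remaining if entails(mat, neg(d[0]))]
        if len(exceptional) == len(remaining):  # totally exceptional tail
            ranks.append(remaining)
            break
        ranks.append([d for d in remaining if d not in exceptional])
        remaining = exceptional
    return ranks

def rc_entails(c, d, defaults):
    """c |~ d is in the Rational Closure iff, after discarding ranks from
    the bottom until c is consistent with the remaining materialisation,
    that materialisation together with c classically entails d."""
    ranks = rank_defaults(defaults)
    for i in range(len(ranks) + 1):
        rest = [x for r in ranks[i:] for x in r]
        mat = conj([implies(a, q) for a, q in rest])
        if models(conj([mat, c])):  # c is consistent at this level
            return entails(conj([mat, c]), d)
    return True
```

With these defaults, birds still typically fly, while the more specific penguin defaults survive the discarding step, so `rc_entails(penguin, neg(flies), defaults)` holds and `rc_entails(penguin, flies, defaults)` does not. The papers above study this construction for richer settings (PTL's typicality operator and the description logic EL⊥), where semantic minimality, rather than brute-force model enumeration, does the work.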

Casini G, Meyer T, Moodley K, Varzinczak I, Sattler U. Introducing Defeasibility into OWL Ontologies. In: The International Semantic Web Conference; 2015.

In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottle-necks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottle-necks in our evaluation.

@inproceedings{113,
author = {Giovanni Casini and Thomas Meyer and Kody Moodley and Ivan Varzinczak and Uli Sattler},
title = {Introducing Defeasibility into OWL Ontologies},
abstract = {In recent years, various approaches have been developed for representing and reasoning with exceptions in OWL. The price one pays for such capabilities, in terms of practical performance, is an important factor that is yet to be quantified comprehensively. A major barrier is the lack of naturally occurring ontologies with defeasible features - the ideal candidates for evaluation. Such data is unavailable due to absence of tool support for representing defeasible features. In the past, defeasible reasoning implementations have favoured automated generation of defeasible ontologies. While this suffices as a preliminary approach, we posit that a method somewhere in between these two would yield more meaningful results. In this work, we describe a systematic approach to modify real-world OWL ontologies to include defeasible features, and we apply this to the Manchester OWL Repository to generate defeasible ontologies for evaluating our reasoner DIP (Defeasible-Inference Platform). The results of this evaluation are provided together with some insights into where the performance bottle-necks lie for this kind of reasoning. We found that reasoning was feasible on the whole, with surprisingly few bottle-necks in our evaluation.},
year = {2015},
booktitle = {The International Semantic Web Conference},
month = {11/10-15/10},
}