Research Publications
2022
Passive acoustic monitoring with hydrophones makes it possible to detect the presence of marine animals over large areas. For monitoring to be cost-effective, this process should be fully automated. We explore a new approach to detecting whale calls, using an end-to-end neural architecture and traditional speech features. We compare the results of the new approach with a convolutional neural network (CNN) applied to spectrograms, currently the standard approach to whale call detection. Experiments are conducted using the “Acoustic trends for the blue and fin whale library” from the Australian Antarctic Data Centre (AADC). We experiment with different types of speech features (mel frequency cepstral coefficients and filter banks) and different ways of framing the task. We demonstrate that a time delay neural network is a viable solution for whale call detection, with the additional benefit that spectrogram tuning – required to obtain high-quality spectrograms in challenging acoustic conditions – is no longer necessary. While the initial speech feature-based system (accuracy 96%) did not outperform the CNN (accuracy 98%) when trained on exactly the same dataset, it presents a viable approach to explore further.
@inbook{508, author = {Edrich Fourie and Marelie Davel and Jaco Versfeld}, title = {Neural speech processing for whale call detection}, abstract = {Passive acoustic monitoring with hydrophones makes it possible to detect the presence of marine animals over large areas. For monitoring to be cost-effective, this process should be fully automated. We explore a new approach to detecting whale calls, using an end-to-end neural architecture and traditional speech features. We compare the results of the new approach with a convolutional neural network (CNN) applied to spectrograms, currently the standard approach to whale call detection. Experiments are conducted using the “Acoustic trends for the blue and fin whale library” from the Australian Antarctic Data Centre (AADC). We experiment with different types of speech features (mel frequency cepstral coefficients and filter banks) and different ways of framing the task. We demonstrate that a time delay neural network is a viable solution for whale call detection, with the additional benefit that spectrogram tuning – required to obtain high-quality spectrograms in challenging acoustic conditions – is no longer necessary. While the initial speech feature-based system (accuracy 96%) did not outperform the CNN (accuracy 98%) when trained on exactly the same dataset, it presents a viable approach to explore further.}, year = {2022}, journal = {Southern African Conference for AI Research (SACAIR)}, volume = {1734}, pages = {276 - 290}, month = {November 2022}, publisher = {Springer, Cham}, doi = {https://doi.org/10.1007/978-3-031-22321-1_19}, }
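The paper above compares spectrogram-based CNNs with traditional speech features such as filter banks. As a rough illustration of what "filter bank" features are, the following is a minimal numpy-only sketch of log mel filter bank extraction; all framing and filter parameters here are illustrative defaults, not the settings used in the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def log_mel_features(signal, sr=16000, frame_len=400, hop=160,
                     n_fft=512, n_filters=40):
    # Frame the signal, apply a Hamming window, take the power spectrum.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    fb = mel_filterbank(n_filters, n_fft, sr)
    return np.log(power @ fb.T + 1e-10)   # shape: (n_frames, n_filters)

# One second of synthetic low-frequency audio stands in for a hydrophone recording.
sig = np.sin(2 * np.pi * 30 * np.arange(16000) / 16000)
feats = log_mel_features(sig)
print(feats.shape)
```

In practice a library such as librosa or python_speech_features would be used; the point of the sketch is only that these features are computed directly from framed audio, with no spectrogram tuning step.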
Classification margins are commonly used to estimate the generalization ability of machine learning models. We present an empirical study of these margins in artificial neural networks. A global estimate of margin size is usually used in the literature. In this work, we point out seldom considered nuances regarding classification margins. Notably, we demonstrate that some types of training samples are modelled with consistently small margins while affecting generalization in different ways. By showing a link with the minimum distance to a different-target sample and the remoteness of samples from one another, we provide a plausible explanation for this observation. We support our findings with an analysis of fully-connected networks trained on noise-corrupted MNIST data, as well as convolutional networks trained on noise-corrupted CIFAR10 data.
@inbook{505, author = {Marthinus Theunissen and Coenraad Mouton and Marelie Davel}, title = {The Missing Margin: How Sample Corruption Affects Distance to the Boundary in ANNs}, abstract = {Classification margins are commonly used to estimate the generalization ability of machine learning models. We present an empirical study of these margins in artificial neural networks. A global estimate of margin size is usually used in the literature. In this work, we point out seldom considered nuances regarding classification margins. Notably, we demonstrate that some types of training samples are modelled with consistently small margins while affecting generalization in different ways. By showing a link with the minimum distance to a different-target sample and the remoteness of samples from one another, we provide a plausible explanation for this observation. We support our findings with an analysis of fully-connected networks trained on noise-corrupted MNIST data, as well as convolutional networks trained on noise-corrupted CIFAR10 data.}, year = {2022}, journal = {Artificial Intelligence Research (SACAIR 2022), Communications in Computer and Information Science}, volume = {1734}, pages = {78 - 92}, month = {November 2022}, publisher = {Springer, Cham}, doi = {https://doi.org/10.48550/arXiv.2302.06925}, }
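The margins studied in the paper above must be estimated for neural networks, but the underlying notion is exact for a linear classifier: the signed distance from a sample to the decision boundary. A minimal sketch, with made-up weights:

```python
import numpy as np

# Toy linear two-class classifier: the margin of a sample is its signed
# distance to the hyperplane w.x + b = 0 (exact in the linear case).
w = np.array([3.0, 4.0])
b = -1.0

def margin(x, y):
    # y in {-1, +1}; a positive margin means the sample is correctly classified,
    # and its magnitude is the distance to the boundary.
    return y * (w @ x + b) / np.linalg.norm(w)

x = np.array([1.0, 2.0])
print(margin(x, +1))  # (3*1 + 4*2 - 1) / 5 = 2.0
```

For ANNs the boundary is piecewise nonlinear, so margins are typically approximated (e.g. via input gradients), which is what makes the sample-level nuances in the paper worth studying.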
We propose a new framework to improve automatic speech recognition (ASR) systems in resource-scarce environments using a generative adversarial network (GAN) operating on acoustic input features. The GAN is used to enhance the features of mismatched data prior to decoding, or can optionally be used to fine-tune the acoustic model. We achieve improvements that are comparable to multi-style training (MTR), but at a lower computational cost. With less than one hour of data, an ASR system trained on good quality data, and evaluated on mismatched audio is improved by between 11.5% and 19.7% relative word error rate (WER). Experiments demonstrate that the framework can be very useful in under-resourced environments where training data and computational resources are limited. The GAN does not require parallel training data, because it utilises a baseline acoustic model to provide an additional loss term that guides the generator to create acoustic features that are better classified by the baseline.
@article{492, author = {Walter Heymans and Marelie Davel and Charl Van Heerden}, title = {Efficient acoustic feature transformation in mismatched environments using a Guided-GAN}, abstract = {We propose a new framework to improve automatic speech recognition (ASR) systems in resource-scarce environments using a generative adversarial network (GAN) operating on acoustic input features. The GAN is used to enhance the features of mismatched data prior to decoding, or can optionally be used to fine-tune the acoustic model. We achieve improvements that are comparable to multi-style training (MTR), but at a lower computational cost. With less than one hour of data, an ASR system trained on good quality data, and evaluated on mismatched audio is improved by between 11.5% and 19.7% relative word error rate (WER). Experiments demonstrate that the framework can be very useful in under-resourced environments where training data and computational resources are limited. The GAN does not require parallel training data, because it utilises a baseline acoustic model to provide an additional loss term that guides the generator to create acoustic features that are better classified by the baseline.}, year = {2022}, journal = {Speech Communication}, volume = {143}, pages = {10 - 20}, month = {09/2022}, doi = {https://doi.org/10.1016/j.specom.2022.07.002}, }
The accurate estimation of channel state information (CSI) is an important aspect of wireless communications. In this paper, a multi-layer perceptron (MLP) is developed as a CSI estimator in long-term evolution (LTE) transmission conditions. The representation of the CSI data is investigated in conjunction with batch normalisation and the representational ability of MLPs. It is found that discontinuities in the representational feature space can cripple an MLP’s ability to accurately predict CSI when noise is present. Different ways in which to mitigate this effect are analysed and a solution developed, initially in the context of channels that are only affected by additive white Gaussian noise. The developed architecture is then applied to more complex channels with various delay profiles and Doppler spread. The performance of the proposed MLP is shown to be comparable with LTE minimum mean squared error (MMSE), and to outperform least square (LS) estimation over a range of channel conditions.
@inproceedings{491, author = {Andrew Oosthuizen and Marelie Davel and Albert Helberg}, title = {Multi-Layer Perceptron for Channel State Information Estimation: Design Considerations}, abstract = {The accurate estimation of channel state information (CSI) is an important aspect of wireless communications. In this paper, a multi-layer perceptron (MLP) is developed as a CSI estimator in long-term evolution (LTE) transmission conditions. The representation of the CSI data is investigated in conjunction with batch normalisation and the representational ability of MLPs. It is found that discontinuities in the representational feature space can cripple an MLP’s ability to accurately predict CSI when noise is present. Different ways in which to mitigate this effect are analysed and a solution developed, initially in the context of channels that are only affected by additive white Gaussian noise. The developed architecture is then applied to more complex channels with various delay profiles and Doppler spread. The performance of the proposed MLP is shown to be comparable with LTE minimum mean squared error (MMSE), and to outperform least square (LS) estimation over a range of channel conditions.}, year = {2022}, journal = {Southern Africa Telecommunication Networks and Applications Conference (SATNAC)}, pages = {94 - 99}, month = {08/2022}, address = {Fancourt, George}, }
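The LS baseline the paper above compares against is simple to state: at known pilot symbols X, the least-squares channel estimate is H_LS = Y / X. A minimal numpy sketch with illustrative values (not the paper's channel models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Least-squares channel estimation at pilot positions: H_ls = Y / X.
# All values here are synthetic; the paper evaluates against LTE channel
# models with delay profiles and Doppler spread.
n = 64                                                        # pilot subcarriers
H = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # true channel
X = np.ones(n)                                                # known pilot symbols
noise = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
Y = H * X + noise                                             # received pilots

H_ls = Y / X                                                  # LS estimate
mse = np.mean(np.abs(H_ls - H) ** 2)
print(mse)
```

LS simply divides out the pilots, so its error is the full noise power; MMSE (and, in the paper, the MLP) improves on this by exploiting channel statistics.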
Building computational models of agents in dynamic, partially observable and stochastic environments is challenging. We propose a cognitive computational model of sugarcane growers’ daily decision-making to examine sugarcane supply chain complexities. Growers make decisions based on uncertain weather forecasts; cane dryness; unforeseen emergencies; and the mill’s unexpected call for delivery of a different amount of cane. The Belief-Desire-Intention (BDI) architecture has been used to model cognitive agents in many domains, including agriculture. However, typical implementations of this architecture have represented beliefs symbolically, so uncertain beliefs are usually not catered for. Here we show that a BDI architecture, enhanced with a dynamic decision network (DDN), suitably models sugarcane grower agents’ repeated daily decisions. Using two complex scenarios, we demonstrate that the agent selects the appropriate intention, and suggests how the grower should act adaptively and proactively to achieve his goals. In addition, we provide a mapping for using a DDN in a BDI architecture. This architecture can be used for modelling sugarcane grower agents in an agent-based simulation. The mapping of the DDN’s use in the BDI architecture enables this work to be applied to other domains for modelling agents’ repeated decisions in partially observable, stochastic and dynamic environments.
@article{488, author = {C. Sue Price and Deshen Moodley and Anban Pillay and Gavin Rens}, title = {An adaptive probabilistic agent architecture for modelling sugarcane growers’ decision-making}, abstract = {Building computational models of agents in dynamic, partially observable and stochastic environments is challenging. We propose a cognitive computational model of sugarcane growers’ daily decision-making to examine sugarcane supply chain complexities. Growers make decisions based on uncertain weather forecasts; cane dryness; unforeseen emergencies; and the mill’s unexpected call for delivery of a different amount of cane. The Belief-Desire-Intention (BDI) architecture has been used to model cognitive agents in many domains, including agriculture. However, typical implementations of this architecture have represented beliefs symbolically, so uncertain beliefs are usually not catered for. Here we show that a BDI architecture, enhanced with a dynamic decision network (DDN), suitably models sugarcane grower agents’ repeated daily decisions. Using two complex scenarios, we demonstrate that the agent selects the appropriate intention, and suggests how the grower should act adaptively and proactively to achieve his goals. In addition, we provide a mapping for using a DDN in a BDI architecture. This architecture can be used for modelling sugarcane grower agents in an agent-based simulation. The mapping of the DDN’s use in the BDI architecture enables this work to be applied to other domains for modelling agents’ repeated decisions in partially observable, stochastic and dynamic environments.}, year = {2022}, journal = {South African Computer Journal}, volume = {34}, pages = {152-191}, issue = {1}, url = {https://sacj.cs.uct.ac.za/index.php/sacj/article/view/857}, doi = {https://doi.org/10.18489/sacj.v34i1.857}, }
Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this might become a problem. While it seems unproblematic to realize that being angry at your car for breaking down is unfitting, can the same be said for AI-systems? In this paper, therefore, I will investigate the so-called “reactive attitudes”, and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI-systems do not undermine our responsibility ascriptions.
@article{487, author = {Fabio Tollon}, title = {Responsibility gaps and the reactive attitudes}, abstract = {Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this might become a problem. While it seems unproblematic to realize that being angry at your car for breaking down is unfitting, can the same be said for AI-systems? In this paper, therefore, I will investigate the so-called “reactive attitudes”, and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI-systems do not undermine our responsibility ascriptions.}, year = {2022}, journal = {AI and Ethics}, publisher = {Springer}, url = {https://link.springer.com/article/10.1007/s43681-022-00172-6}, doi = {https://doi.org/10.1007/s43681-022-00172-6}, }
We report on the development of two reference corpora for the analysis of Sepedi-English code-switched speech in the context of automatic speech recognition. For the first corpus, possible English events were obtained from an existing corpus of transcribed Sepedi-English speech. The second corpus is based on the analysis of radio broadcasts: actual instances of code switching were transcribed and reproduced by a number of native Sepedi speakers. We describe the process to develop and verify both corpora and perform an initial analysis of the newly produced data sets. We find that, in naturally occurring speech, the frequency of code switching is unexpectedly high for this language pair, and that the continuum of code switching (from unmodified embedded words to loanwords absorbed into the matrix language) makes this a particularly challenging task for speech recognition systems.
@article{483, author = {Thipe Modipa and Marelie Davel}, title = {Two Sepedi‑English code‑switched speech corpora}, abstract = {We report on the development of two reference corpora for the analysis of Sepedi-English code-switched speech in the context of automatic speech recognition. For the first corpus, possible English events were obtained from an existing corpus of transcribed Sepedi-English speech. The second corpus is based on the analysis of radio broadcasts: actual instances of code switching were transcribed and reproduced by a number of native Sepedi speakers. We describe the process to develop and verify both corpora and perform an initial analysis of the newly produced data sets. We find that, in naturally occurring speech, the frequency of code switching is unexpectedly high for this language pair, and that the continuum of code switching (from unmodified embedded words to loanwords absorbed into the matrix language) makes this a particularly challenging task for speech recognition systems.}, year = {2022}, journal = {Language Resources and Evaluation}, volume = {56}, publisher = {Springer}, address = {South Africa}, url = {https://rdcu.be/cO6lD}, doi = {https://doi.org/10.1007/s10579-022-09592-6}, }
Mismatched data is a challenging problem for automatic speech recognition (ASR) systems. One of the most common techniques used to address mismatched data is multi-style training (MTR), a form of data augmentation that attempts to transform the training data to be more representative of the testing data; and to learn robust representations applicable to different conditions. This task can be very challenging if the test conditions are unknown. We explore the impact of different MTR styles on system performance when testing conditions are different from training conditions in the context of deep neural network hidden Markov model (DNN-HMM) ASR systems. A controlled environment is created using the LibriSpeech corpus, where we isolate the effect of different MTR styles on final system performance. We evaluate our findings on a South African call centre dataset that contains noisy, WAV49-encoded audio.
@article{480, author = {Walter Heymans and Marelie Davel and Charl Van Heerden}, title = {Multi-style Training for South African Call Centre Audio}, abstract = {Mismatched data is a challenging problem for automatic speech recognition (ASR) systems. One of the most common techniques used to address mismatched data is multi-style training (MTR), a form of data augmentation that attempts to transform the training data to be more representative of the testing data; and to learn robust representations applicable to different conditions. This task can be very challenging if the test conditions are unknown. We explore the impact of different MTR styles on system performance when testing conditions are different from training conditions in the context of deep neural network hidden Markov model (DNN-HMM) ASR systems. A controlled environment is created using the LibriSpeech corpus, where we isolate the effect of different MTR styles on final system performance. We evaluate our findings on a South African call centre dataset that contains noisy, WAV49-encoded audio.}, year = {2022}, journal = {Communications in Computer and Information Science}, volume = {1551}, pages = {111 - 124}, publisher = {Southern African Conference for Artificial Intelligence Research}, address = {South Africa}, doi = {https://doi.org/10.1007/978-3-030-95070-5_8}, }
While deep neural networks (DNNs) have become a standard architecture for many machine learning tasks, their internal decision-making process and general interpretability is still poorly understood. Conversely, common decision trees are easily interpretable and theoretically well understood. We show that by encoding the discrete sample activation values of nodes as a binary representation, we are able to extract a decision tree explaining the classification procedure of each layer in a ReLU-activated multilayer perceptron (MLP). We then combine these decision trees with existing feature attribution techniques in order to produce an interpretation of each layer of a model. Finally, we provide an analysis of the generated interpretations, the behaviour of the binary encodings and how these relate to sample groupings created during the training process of the neural network.
@article{479, author = {Coenraad Mouton and Marelie Davel}, title = {Exploring layerwise decision making in DNNs}, abstract = {While deep neural networks (DNNs) have become a standard architecture for many machine learning tasks, their internal decision-making process and general interpretability is still poorly understood. Conversely, common decision trees are easily interpretable and theoretically well understood. We show that by encoding the discrete sample activation values of nodes as a binary representation, we are able to extract a decision tree explaining the classification procedure of each layer in a ReLU-activated multilayer perceptron (MLP). We then combine these decision trees with existing feature attribution techniques in order to produce an interpretation of each layer of a model. Finally, we provide an analysis of the generated interpretations, the behaviour of the binary encodings and how these relate to sample groupings created during the training process of the neural network.}, year = {2022}, journal = {Communications in Computer and Information Science}, volume = {1551}, pages = {140 - 155}, publisher = {Artificial Intelligence Research (SACAIR 2021)}, doi = {https://doi.org/10.1007/978-3-030-95070-5_10}, }
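The key device in the paper above is encoding which ReLU units fire for each sample as a binary code; samples sharing a code lie in the same linear region of that layer. A minimal numpy sketch with a random (untrained, purely illustrative) layer shows the encoding itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random ReLU layer standing in for one layer of a trained MLP
# (the weights here are illustrative, not from a trained model).
W = rng.normal(size=(2, 8))
b = rng.normal(size=8)

X = rng.normal(size=(100, 2))       # 100 toy input samples
h = np.maximum(X @ W + b, 0.0)      # ReLU activations, shape (100, 8)

# Binary encoding: which units are active (> 0) for each sample.
codes = (h > 0).astype(int)

# Samples sharing a code fall in the same linear region of the layer;
# in the paper, a decision tree is then fit on these discrete codes
# to explain the layer's classification behaviour.
unique_codes = np.unique(codes, axis=0)
print(codes.shape, len(unique_codes))
```

Fitting the tree itself (e.g. with scikit-learn's `DecisionTreeClassifier` on `codes` as features) is then a standard supervised step.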
We explore how machine learning (ML) and Bayesian networks (BNs) can be combined in a personal health agent (PHA) for the detection and interpretation of electrocardiogram (ECG) characteristics. We propose a PHA that uses ECG data from wearables to monitor heart activity, and interprets and explains the observed readings. We focus on atrial fibrillation (AF), the commonest type of arrhythmia. The absence of a P-wave in an ECG is the hallmark indication of AF. Four ML models are trained to classify an ECG signal based on the presence or absence of the P-wave: multilayer perceptron (MLP), logistic regression, support vector machine, and random forest. The MLP is the best performing model with an accuracy of 89.61% and an F1 score of 88.68%. A BN representing AF risk factors is developed based on expert knowledge from the literature and evaluated using Pitchforth and Mengersen’s validation framework. The P-wave presence or absence as determined by the ML model is input into the BN. The PHA is evaluated using sample use cases to illustrate how the BN can explain the occurrence of AF using diagnostic reasoning. This gives the most likely AF risk factors for the individual.
@inbook{478, author = {Tezira Wanyana and Mbithe Nzomo and C. Sue Price and Deshen Moodley}, title = {Combining Machine Learning and Bayesian Networks for ECG Interpretation and Explanation}, abstract = {We explore how machine learning (ML) and Bayesian networks (BNs) can be combined in a personal health agent (PHA) for the detection and interpretation of electrocardiogram (ECG) characteristics. We propose a PHA that uses ECG data from wearables to monitor heart activity, and interprets and explains the observed readings. We focus on atrial fibrillation (AF), the commonest type of arrhythmia. The absence of a P-wave in an ECG is the hallmark indication of AF. Four ML models are trained to classify an ECG signal based on the presence or absence of the P-wave: multilayer perceptron (MLP), logistic regression, support vector machine, and random forest. The MLP is the best performing model with an accuracy of 89.61% and an F1 score of 88.68%. A BN representing AF risk factors is developed based on expert knowledge from the literature and evaluated using Pitchforth and Mengersen’s validation framework. The P-wave presence or absence as determined by the ML model is input into the BN. The PHA is evaluated using sample use cases to illustrate how the BN can explain the occurrence of AF using diagnostic reasoning. This gives the most likely AF risk factors for the individual.}, year = {2022}, journal = {Proceedings of the 8th International Conference on Information and Communication Technologies for Ageing Well and e-Health - ICT4AWE}, pages = {81-92}, publisher = {SciTePress}, address = {INSTICC}, isbn = {978-989-758-566-1}, doi = {https://doi.org/10.5220/0011046100003188}, }
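The diagnostic step in the paper above, where the ML model's P-wave output is fed into the BN, is at heart a Bayes-rule update. A minimal sketch with made-up numbers (not the paper's network parameters):

```python
# Minimal Bayes-rule sketch of the diagnostic step: the ML model reports
# "P-wave absent" and we update the probability of atrial fibrillation (AF).
# All probabilities below are illustrative, not taken from the paper's BN.
p_af = 0.1                  # prior probability of AF
p_absent_given_af = 0.9     # P(P-wave absent | AF)
p_absent_given_not = 0.05   # P(P-wave absent | no AF)

# Total probability of observing an absent P-wave.
p_absent = p_absent_given_af * p_af + p_absent_given_not * (1 - p_af)

# Posterior probability of AF given the observation.
p_af_given_absent = p_absent_given_af * p_af / p_absent
print(round(p_af_given_absent, 3))  # 0.09 / 0.135 = 0.667
```

A full BN chains many such updates over risk-factor variables, which is what lets it rank the most likely risk factors for an individual.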
Stock markets are dynamic systems that exhibit complex intra-share and inter-share temporal dependencies. Spatial-temporal graph neural networks (ST-GNN) are emerging DNN architectures that have yielded high performance for flow prediction in dynamic systems with complex spatial and temporal dependencies such as city traffic networks. In this research, we apply three state-of-the-art ST-GNN architectures, i.e. Graph WaveNet, MTGNN and StemGNN, to predict the closing price of shares listed on the Johannesburg Stock Exchange (JSE) and attempt to capture complex inter-share dependencies. The results show that ST-GNN architectures, specifically Graph WaveNet, produce superior performance relative to an LSTM and are potentially capable of capturing complex intra-share and inter-share temporal dependencies in the JSE. We found that Graph WaveNet outperforms the other approaches over short-term and medium-term horizons. This work is one of the first studies to apply these ST-GNNs to share price prediction.
@article{443, author = {Kialan Pillay and Deshen Moodley}, title = {Exploring Graph Neural Networks for Stock Market Prediction on the JSE}, abstract = {Stock markets are dynamic systems that exhibit complex intra-share and inter-share temporal dependencies. Spatial-temporal graph neural networks (ST-GNN) are emerging DNN architectures that have yielded high performance for flow prediction in dynamic systems with complex spatial and temporal dependencies such as city traffic networks. In this research, we apply three state-of-the-art ST-GNN architectures, i.e. Graph WaveNet, MTGNN and StemGNN, to predict the closing price of shares listed on the Johannesburg Stock Exchange (JSE) and attempt to capture complex inter-share dependencies. The results show that ST-GNN architectures, specifically Graph WaveNet, produce superior performance relative to an LSTM and are potentially capable of capturing complex intra-share and inter-share temporal dependencies in the JSE. We found that Graph WaveNet outperforms the other approaches over short-term and medium-term horizons. This work is one of the first studies to apply these ST-GNNs to share price prediction.}, year = {2022}, journal = {Communications in Computer and Information Science}, volume = {1551}, pages = {95-110}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-95070-5}, url = {https://link.springer.com/chapter/10.1007/978-3-030-95070-5_7}, doi = {https://doi.org/10.1007/978-3-030-95070-5_7}, }
This research proposes an architecture and prototype implementation of a knowledge-based system for automating share evaluation and investment decision making on the Johannesburg Stock Exchange (JSE). The knowledge acquired from an analysis of the investment domain for a value investing approach is represented in an ontology. A Bayesian network, developed using the ontology, is used to capture the complex causal relations between different factors that influence the quality and value of individual shares. The system was found to adequately represent the decision-making process of investment professionals and provided superior returns to selected benchmark JSE indices from 2012 to 2018.
@inproceedings{442, author = {Rachel Drake and Deshen Moodley}, title = {INVEST: Ontology Driven Bayesian Networks for Investment Decision Making on the JSE}, abstract = {This research proposes an architecture and prototype implementation of a knowledge-based system for automating share evaluation and investment decision making on the Johannesburg Stock Exchange (JSE). The knowledge acquired from an analysis of the investment domain for a value investing approach is represented in an ontology. A Bayesian network, developed using the ontology, is used to capture the complex causal relations between different factors that influence the quality and value of individual shares. The system was found to adequately represent the decision-making process of investment professionals and provided superior returns to selected benchmark JSE indices from 2012 to 2018.}, year = {2022}, journal = {Second Southern African Conference for AI Research (SACAIR 2021)}, pages = {252-273}, month = {06/12/2021-10/12/2021}, address = {Online}, isbn = {978-0-620-94410-6}, url = {https://2021.sacair.org.za/proceedings/}, }
Within complex societies, social communities are distinguishable based on social interactions. The interactions can be between members or communities and can range from simple conversations between family members and friends to complex interactions that represent the flow of money, information, or power. In our modern digital society, social media platforms present unique opportunities to study social networks through social network analysis (SNA). Social media platforms are usually representative of a specific user group, and Twitter, a microblogging platform, is characterised by the fast distribution of news and often provocative opinions, as well as social mobilizing, which makes it popular for political interactions. The nature of Twitter generates a valuable SNA data source for investigating political conversations and communities, and in related research, specific archetypal conversation patterns between communities were identified that allow for unique interpretations of conversations about a topic. This paper reports on a study where social network analysis (SNA) was performed on Twitter data about political events in 2021 in South Africa. The purpose was to determine which distinct conversation patterns could be detected in datasets collected, as well as what could be derived from these patterns given the South African political landscape and perceptions. The results indicate that conversations in the South African political landscape are less polarized than expected. Conversations often manifest broadcast patterns from key influencers in addition to tight crowds or community clusters. Tight crowds or community clusters indicate intense conversation across communities that exhibits diverse opinions and perspectives on a topic. The results may be of value for researchers that aim to understand social media conversations within the South African society.
@article{434, author = {Aurona Gerber}, title = {The Detection of Conversation Patterns in South African Political Tweets through Social Network Analysis}, abstract = {Within complex societies, social communities are distinguishable based on social interactions. The interactions can be between members or communities and can range from simple conversations between family members and friends to complex interactions that represent the flow of money, information, or power. In our modern digital society, social media platforms present unique opportunities to study social networks through social network analysis (SNA). Social media platforms are usually representative of a specific user group, and Twitter, a microblogging platform, is characterised by the fast distribution of news and often provocative opinions, as well as social mobilizing, which makes it popular for political interactions. The nature of Twitter generates a valuable SNA data source for investigating political conversations and communities, and in related research, specific archetypal conversation patterns between communities were identified that allow for unique interpretations of conversations about a topic. This paper reports on a study where social network analysis (SNA) was performed on Twitter data about political events in 2021 in South Africa. The purpose was to determine which distinct conversation patterns could be detected in datasets collected, as well as what could be derived from these patterns given the South African political landscape and perceptions. The results indicate that conversations in the South African political landscape are less polarized than expected. Conversations often manifest broadcast patterns from key influencers in addition to tight crowds or community clusters. Tight crowds or community clusters indicate intense conversation across communities that exhibits diverse opinions and perspectives on a topic. The results may be of value for researchers that aim to understand social media conversations within the South African society.}, year = {2022}, journal = {Communications in Computer and Information Science}, volume = {1551}, pages = {15-31}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-95070-5}, url = {https://link.springer.com/chapter/10.1007/978-3-030-95070-5_2}, doi = {https://doi.org/10.1007/978-3-030-95070-5_2}, }
Knowledge representation and reasoning (KRR) is an approach to artificial intelligence (AI) in which a system has some information about the world represented formally (a knowledge base), and is able to reason about this information. Defeasible reasoning is a non-classical form of reasoning that enables systems to reason about knowledge bases which contain seemingly contradictory information, thus allowing for exceptions to assertions. Currently, systems which support defeasible entailment for propositional logic are ad hoc, and few and far between, and little to no work has been done on improving the scalability of defeasible reasoning algorithms. We investigate the scalability of defeasible entailment algorithms, and propose optimised versions thereof, as well as present a tool to perform defeasible entailment checks using these algorithms. We also present a knowledge base generation tool which can be used for testing implementations of these algorithms.
@inproceedings{428, author = {Joel Hamilton and Joonsoo Park and Aidan Bailey and Thomas Meyer}, title = {An Investigation into the Scalability of Defeasible Reasoning Algorithms}, abstract = {Knowledge representation and reasoning (KRR) is an approach to artificial intelligence (AI) in which a system has some information about the world represented formally (a knowledge base), and is able to reason about this information. Defeasible reasoning is a non-classical form of reasoning that enables systems to reason about knowledge bases which contain seemingly contradictory information, thus allowing for exceptions to assertions. Currently, systems which support defeasible entailment for propositional logic are ad hoc, and few and far between, and little to no work has been done on improving the scalability of defeasible reasoning algorithms. We investigate the scalability of defeasible entailment algorithms, and propose optimised versions thereof, as well as present a tool to perform defeasible entailment checks using these algorithms. We also present a knowledge base generation tool which can be used for testing implementations of these algorithms.}, year = {2022}, booktitle = {Second Southern African Conference for Artificial Intelligence}, pages = {235-251}, month = {06/12-10/12}, publisher = {SACAIR 2021 Organising Committee}, address = {Online}, isbn = {978-0-620-94410-6}, url = {https://2021.sacair.org.za/proceedings/}, }
Belief revision and belief update are approaches to represent and reason with knowledge in artificial intelligence. Previous empirical studies have shown that human reasoning is consistent with non-monotonic logic and postulates of defeasible reasoning, belief revision and belief update. We extended previous work, which tested natural language translations of the postulates of defeasible reasoning, belief revision and belief update with human reasoners via surveys, in three respects. Firstly, we only tested postulates of belief revision and belief update, taking the position that belief change aligns more with human reasoning than non-monotonic defeasible reasoning. Secondly, we decomposed the postulates of revision and update into material implication statements of the form “If x is the case, then y is the case”, each containing a premise and a conclusion, and then translated the premises and conclusions into natural language. Thirdly, we asked human participants to judge each component of the postulate for plausibility. In our analysis, we measured the strength of the association between the premises and the conclusion of each postulate. We used Possibility theory to determine whether the postulates hold with our participants in general. Our results showed that our participants’ reasoning is consistent with postulates of belief revision and belief update when judging the premises and conclusion of the postulate separately.
@inproceedings{427, author = {Clayton Baker and Tommie Meyer}, title = {Belief Change in Human Reasoning: An Empirical Investigation on MTurk}, abstract = {Belief revision and belief update are approaches to represent and reason with knowledge in artificial intelligence. Previous empirical studies have shown that human reasoning is consistent with non-monotonic logic and postulates of defeasible reasoning, belief revision and belief update. We extended previous work, which tested natural language translations of the postulates of defeasible reasoning, belief revision and belief update with human reasoners via surveys, in three respects. Firstly, we only tested postulates of belief revision and belief update, taking the position that belief change aligns more with human reasoning than non-monotonic defeasible reasoning. Secondly, we decomposed the postulates of revision and update into material implication statements of the form “If x is the case, then y is the case”, each containing a premise and a conclusion, and then translated the premises and conclusions into natural language. Thirdly, we asked human participants to judge each component of the postulate for plausibility. In our analysis, we measured the strength of the association between the premises and the conclusion of each postulate. We used Possibility theory to determine whether the postulates hold with our participants in general. Our results showed that our participants’ reasoning is consistent with postulates of belief revision and belief update when judging the premises and conclusion of the postulate separately.}, year = {2022}, booktitle = {Second Southern African Conference for AI Research (SACAIR 2022)}, pages = {218-234}, month = {06/12/2021-10/12/2021}, publisher = {SACAIR 2021 Organising Committee}, address = {Online}, isbn = {978-0-620-94410-6}, url = {https://2021.sacair.org.za/proceedings/}, }
Explanation services are a crucial aspect of symbolic reasoning systems but they have not been explored in detail for defeasible formalisms such as KLM. We evaluate prior work on the topic with a focus on KLM propositional logic and find that a form of defeasible explanation initially described for Rational Closure which we term weak justification can be adapted to Relevant and Lexicographic Closure as well as described in terms of intuitive properties derived from the KLM postulates. We also consider how a more general definition of defeasible explanation known as strong explanation applies to KLM and propose an algorithm that enumerates these justifications for Rational Closure.
@inproceedings{426, author = {Lloyd Everett and Emily Morris and Tommie Meyer}, title = {Explanation for KLM-Style Defeasible Reasoning}, abstract = {Explanation services are a crucial aspect of symbolic reasoning systems but they have not been explored in detail for defeasible formalisms such as KLM. We evaluate prior work on the topic with a focus on KLM propositional logic and find that a form of defeasible explanation initially described for Rational Closure which we term weak justification can be adapted to Relevant and Lexicographic Closure as well as described in terms of intuitive properties derived from the KLM postulates. We also consider how a more general definition of defeasible explanation known as strong explanation applies to KLM and propose an algorithm that enumerates these justifications for Rational Closure.}, year = {2022}, booktitle = {Artificial Intelligence Research. SACAIR 2021.}, volume = {1551}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-95069-9}, url = {https://link.springer.com/book/10.1007/978-3-030-95070-5}, doi = {10.1007/978-3-030-95070-5_13}, }
2021
We investigate the use of silhouette coefficients in cluster analysis for speaker diarisation, with the dual purpose of unsupervised fine-tuning during domain adaptation and determining the number of speakers in an audio file. Our main contribution is to demonstrate the use of silhouette coefficients to perform per-file domain adaptation, which we show to deliver an improvement over per-corpus domain adaptation. Secondly, we show that this method of silhouette-based cluster analysis can be used to accurately determine more than one hyperparameter at the same time. Finally, we propose a novel method for calculating the silhouette coefficient of clusters using a PLDA score matrix as input.
@inproceedings{482, author = {Lucas Van Wyk and Marelie Davel and Charl Van Heerden}, title = {Unsupervised fine-tuning of speaker diarisation pipelines using silhouette coefficients}, abstract = {We investigate the use of silhouette coefficients in cluster analysis for speaker diarisation, with the dual purpose of unsupervised fine-tuning during domain adaptation and determining the number of speakers in an audio file. Our main contribution is to demonstrate the use of silhouette coefficients to perform per-file domain adaptation, which we show to deliver an improvement over per-corpus domain adaptation. Secondly, we show that this method of silhouette-based cluster analysis can be used to accurately determine more than one hyperparameter at the same time. Finally, we propose a novel method for calculating the silhouette coefficient of clusters using a PLDA score matrix as input.}, year = {2021}, booktitle = {Southern African Conference for Artificial Intelligence Research}, pages = {202 - 216}, month = {06/12/2021 - 10/12/2021}, address = {South Africa}, isbn = {978-0-620-94410-6}, url = {https://2021.sacair.org.za/proceedings/}, }
We investigate the effect of a reduced modulation scheme pool on a CNN-based automatic modulation classifier. Similar classifiers in the literature are typically used to classify sets of five or more different modulation types [1] [2], whereas our analysis is of a CNN classifier that classifies between only two modulation types, 16-QAM and 8-PSK. While implementing the network, we observe that the network’s classification accuracy improves for lower SNR instead of decreasing as expected. This analysis exposes characteristics of such classifiers that can be used to improve CNN classifiers on larger sets of modulation types. We show that presenting the SNR data as an extra data point to the network can significantly increase classification accuracy.
@inproceedings{481, author = {Andrew Oosthuizen and Marelie Davel and Albert Helberg}, title = {Exploring CNN-based automatic modulation classification using small modulation sets}, abstract = {We investigate the effect of a reduced modulation scheme pool on a CNN-based automatic modulation classifier. Similar classifiers in the literature are typically used to classify sets of five or more different modulation types [1] [2], whereas our analysis is of a CNN classifier that classifies between only two modulation types, 16-QAM and 8-PSK. While implementing the network, we observe that the network’s classification accuracy improves for lower SNR instead of decreasing as expected. This analysis exposes characteristics of such classifiers that can be used to improve CNN classifiers on larger sets of modulation types. We show that presenting the SNR data as an extra data point to the network can significantly increase classification accuracy.}, year = {2021}, booktitle = {Southern Africa Telecommunication Networks and Applications Conference}, pages = {20 - 24}, month = {21/11/2021 - 23/11/2021}, address = {South Africa}, url = {https://www.satnac.org.za/proceedings}, }
The COVID-19 pandemic and the subsequent response by governments to introduce national lockdown regulations have confined individuals to their residential premises. As a result, no recreational or sport activities are allowed outside the (often small) boundaries of family homes, a situation often rapidly introducing social isolation. Research has proven that emotional coping mechanisms, such as sport, can lower the burden of stress and uncertainty on individuals. However, without the availability of this coping mechanism, many individuals have been forced to use virtual sport training technology to keep active. This preliminary quantitative study investigated the role of technology, in particular virtual sport training technology (if any), used by cyclists as an emotional coping mechanism during a period of national lockdown. The results of an online survey indicated that sport, in general, has always been an emotional coping mechanism during normal challenging situations but that slightly more respondents used sport as a mechanism during the lockdown period. Respondents indicated that virtual cycling training technology enabled them to continue using their normal coping mechanism even in a period of national lockdown. One of the benefits of a virtual training environment is the ability to socialize by riding with virtual team members. Surprisingly, the number of cyclists who preferred riding alone in the virtual cycling environment was slightly more than the cyclists who preferred to join scheduled rides with virtual team members. The research is the first step towards an in-depth investigation into the adoption of technology as an emotional coping mechanism in stressful environments.
@article{439, author = {Sunet Eybers and Aurona Gerber}, title = {A Preliminary Investigation into the Role of Virtual Sport Training Technology as Emotional Coping Mechanism During a National Pandemic Lockdown}, abstract = {The COVID-19 pandemic and the subsequent response by governments to introduce national lockdown regulations have confined individuals to their residential premises. As a result, no recreational or sport activities are allowed outside the (often small) boundaries of family homes, a situation often rapidly introducing social isolation. Research has proven that emotional coping mechanisms, such as sport, can lower the burden of stress and uncertainty on individuals. However, without the availability of this coping mechanism, many individuals have been forced to use virtual sport training technology to keep active. This preliminary quantitative study investigated the role of technology, in particular virtual sport training technology (if any), used by cyclists as an emotional coping mechanism during a period of national lockdown. The results of an online survey indicated that sport, in general, has always been an emotional coping mechanism during normal challenging situations but that slightly more respondents used sport as a mechanism during the lockdown period. Respondents indicated that virtual cycling training technology enabled them to continue using their normal coping mechanism even in a period of national lockdown. One of the benefits of a virtual training environment is the ability to socialize by riding with virtual team members. Surprisingly, the number of cyclists who preferred riding alone in the virtual cycling environment was slightly more than the cyclists who preferred to join scheduled rides with virtual team members. 
The research is the first step towards an in-depth investigation into the adoption of technology as an emotional coping mechanism in stressful environments.}, year = {2021}, journal = {Lecture Notes in Networks and Systems}, volume = {186}, pages = {186-194}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-66093-2}, url = {https://link.springer.com/chapter/10.1007/978-3-030-66093-2_18}, doi = {10.1007/978-3-030-66093-2_18}, }
Social communities play a significant role in understanding complex societies, from communities formed by support interactions between friends and family to community structures that depict the flow of information, money and power. With the emergence of the internet, the nature of social networks changed because communities could form disassociated from physical location, and social network analysis (SNA) on social media such as Twitter and Facebook emerged as a distinct research field. Studies suggest that Twitter feeds have a significant influence on the views and opinions of society, and subsequently the formation of communities. This paper reports on a study where social network analysis was performed on Twitter feeds in South Africa around the 2019 elections to detect distinct patterns within the overall network. In the datasets that were analysed, a specific network pattern, namely Broadcast Networks, was observed. A Broadcast Network typically reflects central hubs such as media houses, political parties or influencers whose messages are repeated without interaction or discussion. Our results indicate that there were few discussions and interactions and that messages were broadcast from central nodes even though the general experience of Twitter users during this time was of intense discussions and differences in opinion.
@article{438, author = {Aurona Gerber and Stephanie Strachan}, title = {Network Patterns in South African Election Tweets}, abstract = {Social communities play a significant role in understanding complex societies, from communities formed by support interactions between friends and family to community structures that depict the flow of information, money and power. With the emergence of the internet, the nature of social networks changed because communities could form disassociated from physical location, and social network analysis (SNA) on social media such as Twitter and Facebook emerged as a distinct research field. Studies suggest that Twitter feeds have a significant influence on the views and opinions of society, and subsequently the formation of communities. This paper reports on a study where social network analysis was performed on Twitter feeds in South Africa around the 2019 elections to detect distinct patterns within the overall network. In the datasets that were analysed, a specific network pattern, namely Broadcast Networks, was observed. A Broadcast Network typically reflects central hubs such as media houses, political parties or influencers whose messages are repeated without interaction or discussion. Our results indicate that there were few discussions and interactions and that messages were broadcast from central nodes even though the general experience of Twitter users during this time was of intense discussions and differences in opinion.}, year = {2021}, journal = {Lecture Notes in Networks and Systems}, volume = {186}, pages = {3-13}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-66093-2}, url = {https://link.springer.com/chapter/10.1007/978-3-030-66093-2_1}, doi = {10.1007/978-3-030-66093-2_1}, }
Many organisations turn to enterprise architecture (EA) to assist with the alignment of business and information technology. While some of these organisations succeed in the development and implementation of EA, many of them fail to manage EA after implementation. Because of the specific focus on the management of EA during and after the initial implementation, the enterprise architecture management (EAM) field was developed. EAM is characterised by many dimensions or elements. It is a challenge to select the dimensions that should be managed and that are vital for successful EA practice. In this study, we executed a systematic literature review (SLR) of primary EA and EAM literature with the aim of identifying dimensions regarded as key areas of EAM. The main contribution of this work is a concept map of the essential EAM dimensions with their relationships. The results of the SLR indicate that dimensions that used to be considered important or seemed to be the most essential for EA, such as frameworks, EA principles and reference models, are no longer emphasised as strongly, and more focus is placed on people, skills, communication and governance when considering EAM literature and EAM maturity.
@article{437, author = {Trishan Marimuthu and Alta van der Merwe and Aurona Gerber}, title = {A systematic literature review of essential enterprise architecture management dimensions}, abstract = {Many organisations turn to enterprise architecture (EA) to assist with the alignment of business and information technology. While some of these organisations succeed in the development and implementation of EA, many of them fail to manage EA after implementation. Because of the specific focus on the management of EA during and after the initial implementation, the enterprise architecture management (EAM) field was developed. EAM is characterised by many dimensions or elements. It is a challenge to select the dimensions that should be managed and that are vital for successful EA practice. In this study, we executed a systematic literature review (SLR) of primary EA and EAM literature with the aim of identifying dimensions regarded as key areas of EAM. The main contribution of this work is a concept map of the essential EAM dimensions with their relationships. The results of the SLR indicate that dimensions that used to be considered important or seemed to be the most essential for EA, such as frameworks, EA principles and reference models, are no longer emphasised as strongly, and more focus is placed on people, skills, communication and governance when considering EAM literature and EAM maturity.}, year = {2021}, journal = {Lecture Notes in Networks and Systems}, volume = {235}, pages = {381-391}, publisher = {Springer}, address = {Singapore}, isbn = {978-981-16-2377-6}, url = {https://link.springer.com/chapter/10.1007/978-981-16-2377-6_36}, doi = {10.1007/978-981-16-2377-6_36}, }
The global Covid-19 pandemic caused havoc in higher education teaching routines and several residential institutions encouraged instructors to convert existing modules to flipped classrooms as part of an online, blended learning strategy. Even though this seems a reasonable request, instructors straightaway encountered challenges which include a vague concept of what an online flipped classroom entails within a higher education context, a lack of guidelines for converting an existing module, facilitating learner engagement as well as unique challenges for inclusion of all learners in a digitally divided developing country in Covid-19 lockdown. In order to respond, we embarked on a study to identify the distinguishing characteristics of flipped classrooms to understand the as-is and to-be scenarios using a systematic literature review. The characteristics were used to develop design considerations to convert to an online flipped classroom for higher education, taking our diverse learner profiles into account. We subsequently converted a short module in an information systems department and briefly report on our experience.
@article{436, author = {Aurona Gerber and Sunet Eybers}, title = {Converting to Inclusive Online Flipped Classrooms in Response to Covid-19 Lockdown}, abstract = {The global Covid-19 pandemic caused havoc in higher education teaching routines and several residential institutions encouraged instructors to convert existing modules to flipped classrooms as part of an online, blended learning strategy. Even though this seems a reasonable request, instructors straightaway encountered challenges which include a vague concept of what an online flipped classroom entails within a higher education context, a lack of guidelines for converting an existing module, facilitating learner engagement as well as unique challenges for inclusion of all learners in a digitally divided developing country in Covid-19 lockdown. In order to respond, we embarked on a study to identify the distinguishing characteristics of flipped classrooms to understand the as-is and to-be scenarios using a systematic literature review. The characteristics were used to develop design considerations to convert to an online flipped classroom for higher education, taking our diverse learner profiles into account. We subsequently converted a short module in an information systems department and briefly report on our experience.}, year = {2021}, journal = {South African Journal of Higher Education}, volume = {35}, pages = {34-57}, issue = {4}, isbn = {1753-5913}, url = {https://journals.co.za/doi/10.20853/35-4-4285}, doi = {10.20853/35-4-4285}, }
Blockchain is the underlying technology behind Bitcoin, the first digital currency, and due to the rapid growth of Bitcoin, there is significant interest in blockchain as the enabler of digital currencies due to the consensus distributed ledger model. The rise and the success of alternative cryptocurrencies such as Ethereum and Ripple have supported the development of blockchain technology, but the performance of blockchain applications has been documented as a significant obstacle for adoption. At the core of blockchain is a consensus protocol, which plays a key role in maintaining the safety, performance and efficiency of the blockchain network. Several consensus protocols exist, and the use of the right consensus protocol is crucial to ensure adequate performance of any blockchain application. However, there is a lack of documented overview studies even though there is agreement in the literature about the importance and understanding of blockchain consensus protocols. In this study, we adopt a systematic literature review (SLR) to investigate the current status of consensus protocols used for blockchain together with the identified limitations of these protocols. The results of this study include an overview of different consensus protocols as well as consensus protocol limitations and will be of value for any practitioner or scholar that is interested in blockchain applications.
@article{435, author = {Sikho Luzipo and Aurona Gerber}, title = {A Systematic Literature Review of Blockchain Consensus Protocols}, abstract = {Blockchain is the underlying technology behind Bitcoin, the first digital currency, and due to the rapid growth of Bitcoin, there is significant interest in blockchain as the enabler of digital currencies due to the consensus distributed ledger model. The rise and the success of alternative cryptocurrencies such as Ethereum and Ripple have supported the development of blockchain technology, but the performance of blockchain applications has been documented as a significant obstacle for adoption. At the core of blockchain is a consensus protocol, which plays a key role in maintaining the safety, performance and efficiency of the blockchain network. Several consensus protocols exist, and the use of the right consensus protocol is crucial to ensure adequate performance of any blockchain application. However, there is a lack of documented overview studies even though there is agreement in the literature about the importance and understanding of blockchain consensus protocols. In this study, we adopt a systematic literature review (SLR) to investigate the current status of consensus protocols used for blockchain together with the identified limitations of these protocols. The results of this study include an overview of different consensus protocols as well as consensus protocol limitations and will be of value for any practitioner or scholar that is interested in blockchain applications.}, year = {2021}, journal = {Lecture Notes in Computer Science}, volume = {12896}, pages = {580-595}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-85447-8}, url = {https://link.springer.com/chapter/10.1007/978-3-030-85447-8_48}, doi = {10.1007/978-3-030-85447-8_48}, }
We extend the expressivity of classical conditional reasoning by introducing context as a new parameter. The enriched conditional logic generalises the defeasible conditional setting in the style of Kraus, Lehmann, and Magidor, and allows for a refined semantics that is able to distinguish, for example, between expectations and counterfactuals. In this paper we introduce the language for the enriched logic and define an appropriate semantic framework for it. We analyse which properties generally associated with conditional reasoning are still satisfied by the new semantic framework, provide a suitable representation result, and define an entailment relation based on Lehmann and Magidor’s generally-accepted notion of Rational Closure.
@inproceedings{430, author = {Giovanni Casini and Tommie Meyer and Ivan Varzinczak}, title = {Contextual Conditional Reasoning}, abstract = {We extend the expressivity of classical conditional reasoning by introducing context as a new parameter. The enriched conditional logic generalises the defeasible conditional setting in the style of Kraus, Lehmann, and Magidor, and allows for a refined semantics that is able to distinguish, for example, between expectations and counterfactuals. In this paper we introduce the language for the enriched logic and define an appropriate semantic framework for it. We analyse which properties generally associated with conditional reasoning are still satisfied by the new semantic framework, provide a suitable representation result, and define an entailment relation based on Lehmann and Magidor’s generally-accepted notion of Rational Closure.}, year = {2021}, booktitle = {35th AAAI Conference on Artificial Intelligence}, pages = {6254-6261}, month = {02/02/2021-09/02/2021}, publisher = {AAAI Press}, address = {Online}, }
We extend the KLM approach to defeasible reasoning to be applicable to a restricted version of first-order logic. We describe defeasibility for this logic using a set of rationality postulates, provide an appropriate semantics for it, and present a representation result that characterises the semantic description of defeasibility in terms of the rationality postulates. Based on this theoretical core, we then propose a version of defeasible entailment that is inspired by Rational Closure as it is defined for defeasible propositional logic and defeasible description logics. We show that this form of defeasible entailment is rational in the sense that it adheres to our rationality postulates. The work in this paper is the first step towards our ultimate goal of introducing KLM-style defeasible reasoning into the family of Datalog+/- ontology languages.
@inproceedings{429, author = {Giovanni Casini and Tommie Meyer and Guy Paterson-Jones}, title = {KLM-Style Defeasibility for Restricted First-Order Logic}, abstract = {We extend the KLM approach to defeasible reasoning to be applicable to a restricted version of first-order logic. We describe defeasibility for this logic using a set of rationality postulates, provide an appropriate semantics for it, and present a representation result that characterises the semantic description of defeasibility in terms of the rationality postulates. Based on this theoretical core, we then propose a version of defeasible entailment that is inspired by Rational Closure as it is defined for defeasible propositional logic and defeasible description logics. We show that this form of defeasible entailment is rational in the sense that it adheres to our rationality postulates. The work in this paper is the first step towards our ultimate goal of introducing KLM-style defeasible reasoning into the family of Datalog+/- ontology languages.}, year = {2021}, booktitle = {19th International Workshop on Non-Monotonic Reasoning}, pages = {184-193}, month = {03/11/2021-05/11/2021}, address = {Online}, url = {https://drive.google.com/open?id=1WSIl3TOrXBhaWhckWN4NLXoD9AVFKp5R}, }