Ethics of AI Research Publications

2022

Tollon F. Responsibility gaps and the reactive attitudes. AI and Ethics. 2022. doi:10.1007/s43681-022-00172-6.

Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this might become a problem. While it seems unproblematic to realize that being angry at your car for breaking down is unfitting, can the same be said for AI-systems? In this paper, therefore, I will investigate the so-called “reactive attitudes”, and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI-systems do not undermine our responsibility ascriptions.

@article{487,
  author = {Fabio Tollon},
  title = {Responsibility gaps and the reactive attitudes},
  abstract = {Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this might become a problem. While it seems unproblematic to realize that being angry at your car for breaking down is unfitting, can the same be said for AI-systems? In this paper, therefore, I will investigate the so-called “reactive attitudes”, and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI-systems do not undermine our responsibility ascriptions.},
  year = {2022},
  journal = {AI and Ethics},
  publisher = {Springer},
  url = {https://link.springer.com/article/10.1007/s43681-022-00172-6},
  doi = {10.1007/s43681-022-00172-6},
}

2021

Tollon F. Designed to Seduce: Epistemically Retrograde Ideation and YouTube's Recommender System. International Journal of Technoethics (IJT). 2021;12(2). doi:10.4018/IJT.2021070105.

Up to 70% of all watch time on YouTube is due to the suggested content of its recommender system. This system has been found, by virtue of its design, to be promoting conspiratorial content. In this paper, the author firstly critiques the value neutrality thesis regarding technology, showing it to be philosophically untenable. This means that technological artefacts can influence what people come to value (or perhaps even embody values themselves) and change the moral evaluation of an action. Secondly, he introduces the concept of an affordance, borrowed from the literature on ecological psychology. This concept allows him to make salient how technologies come to solicit certain kinds of actions from users, making such actions more or less likely, and in this way influencing the kinds of things one comes to value. Thirdly, he critically assesses the results of a study by Alfano et al. He makes use of the literature on affordances, introduced earlier, to shed light on how these technological systems come to mediate our perception of the world and influence action.

@article{415,
  author = {Fabio Tollon},
  title = {Designed to Seduce: Epistemically Retrograde Ideation and YouTube's Recommender System},
  abstract = {Up to 70% of all watch time on YouTube is due to the suggested content of its recommender system. This system has been found, by virtue of its design, to be promoting conspiratorial content. In this paper, the author firstly critiques the value neutrality thesis regarding technology, showing it to be philosophically untenable. This means that technological artefacts can influence what people come to value (or perhaps even embody values themselves) and change the moral evaluation of an action. Secondly, he introduces the concept of an affordance, borrowed from the literature on ecological psychology. This concept allows him to make salient how technologies come to solicit certain kinds of actions from users, making such actions more or less likely, and in this way influencing the kinds of things one comes to value. Thirdly, he critically assesses the results of a study by Alfano et al. He makes use of the literature on affordances, introduced earlier, to shed light on how these technological systems come to mediate our perception of the world and influence action.},
  year = {2021},
  journal = {International Journal of Technoethics (IJT)},
  volume = {12},
  number = {2},
  publisher = {IGI Global},
  isbn = {9781799861492},
  url = {https://www.igi-global.com/gateway/article/281077},
  doi = {10.4018/IJT.2021070105},
}
Tollon F. Artifacts and affordances: from designed properties to possibilities for action. AI & SOCIETY Journal of Knowledge, Culture and Communication. 2021;36(1). doi:10.1007/s00146-021-01155-7.

In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, I raise epistemic and metaphysical issues with respect to designed properties embodying value. The concept of an affordance, borrowed from ecological psychology, provides a more philosophically fruitful grounding to the potential way(s) in which artifacts might embody values. This is due to the way in which it incorporates key insights from perception more generally, and how we go about determining possibilities for action in our environment specifically. The affordance account as it is presented by Klenk, however, is insufficient. I therefore argue that we understand affordances based on whether they are meaningful, and, secondly, that we grade them based on their force.

@article{386,
  author = {Fabio Tollon},
  title = {Artifacts and affordances: from designed properties to possibilities for action},
  abstract = {In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, I raise epistemic and metaphysical issues with respect to designed properties embodying value. The concept of an affordance, borrowed from ecological psychology, provides a more philosophically fruitful grounding to the potential way(s) in which artifacts might embody values. This is due to the way in which it incorporates key insights from perception more generally, and how we go about determining possibilities for action in our environment specifically. The affordance account as it is presented by Klenk, however, is insufficient. I therefore argue that we understand affordances based on whether they are meaningful, and, secondly, that we grade them based on their force.},
  year = {2021},
  journal = {AI & SOCIETY Journal of Knowledge, Culture and Communication},
  volume = {36},
  number = {1},
  publisher = {Springer},
  url = {https://link.springer.com/article/10.1007%2Fs00146-021-01155-7},
  doi = {10.1007/s00146-021-01155-7},
}

2020

Tollon F. The artificial view: toward a non-anthropocentric account of moral patiency. Ethics and Information Technology. 2020;22(4). doi:10.1007/s10676-020-09540-4.

In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions regarding sentience ascription, and by extension how we identify moral patients. The main difference between the argument I provide here and traditional arguments surrounding moral attributability is that I do not necessarily defend the view that internal states ground our ascriptions of moral patiency. This is in contrast to views such as those defended by Singer (1975, 2011) and Torrance (2008), where concepts such as sentience play starring roles. I will raise both conceptual and epistemic issues with regards to this sense of sentience. While this does not preclude the usage of sentience outright, it suggests that we should be more careful in our usage of internal mental states to ground our moral ascriptions. Following from this I suggest other avenues for further exploration into machine moral patiency which may not have the same shortcomings as the Organic View.

@article{387,
  author = {Fabio Tollon},
  title = {The artificial view: toward a non-anthropocentric account of moral patiency},
  abstract = {In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions regarding sentience ascription, and by extension how we identify moral patients. The main difference between the argument I provide here and traditional arguments surrounding moral attributability is that I do not necessarily defend the view that internal states ground our ascriptions of moral patiency. This is in contrast to views such as those defended by Singer (1975, 2011) and Torrance (2008), where concepts such as sentience play starring roles. I will raise both conceptual and epistemic issues with regards to this sense of sentience. While this does not preclude the usage of sentience outright, it suggests that we should be more careful in our usage of internal mental states to ground our moral ascriptions. Following from this I suggest other avenues for further exploration into machine moral patiency which may not have the same shortcomings as the Organic View.},
  year = {2020},
  journal = {Ethics and Information Technology},
  volume = {22},
  number = {4},
  publisher = {Springer},
  url = {https://link.springer.com/article/10.1007%2Fs10676-020-09540-4},
  doi = {10.1007/s10676-020-09540-4},
}
Friedman C. Human-Robot Moral Relations: Human Interactants as Moral Patients of Their Own Agential Moral Actions Towards Robots. Communications in Computer and Information Science. 2020;1342. doi:10.1007/978-3-030-66151-9_1.

This paper contributes to the debate in the ethics of social robots on how or whether to treat social robots morally by way of considering a novel perspective on the moral relations between human interactants and social robots. This perspective is significant as it allows us to circumnavigate debates about the (im)possibility of robot consciousness and moral patiency (debates which often slow down discussion on the ethics of HRI), thus allowing us to address actual and urgent current ethical issues in relation to human-robot interaction. The paper considers the different ways in which human interactants may be moral patients in the context of interaction with social robots: robots as conduits of human moral action towards human moral patients; humans as moral patients to the actions of robots; and human interactants as moral patients of their own agential moral actions towards social robots. This third perspective is the focal point of the paper. The argument is that due to perceived robot consciousness, and the possibility that the immoral treatment of social robots may morally harm human interactants, there is a unique moral relation between humans and social robots wherein human interactants are both the moral agents of their actions towards robots, as well as the actual moral patients of those agential moral actions towards robots. Robots, however, are no more than perceived moral patients. This discussion further adds to debates in the context of robot moral status, and the consideration of the moral treatment of robots in the context of human-robot interaction.

@article{385,
  author = {Cindy Friedman},
  title = {Human-Robot Moral Relations: Human Interactants as Moral Patients of Their Own Agential Moral Actions Towards Robots},
  abstract = {This paper contributes to the debate in the ethics of social robots on how or whether to treat social robots morally by way of considering a novel perspective on the moral relations between human interactants and social robots. This perspective is significant as it allows us to circumnavigate debates about the (im)possibility of robot consciousness and moral patiency (debates which often slow down discussion on the ethics of HRI), thus allowing us to address actual and urgent current ethical issues in relation to human-robot interaction. The paper considers the different ways in which human interactants may be moral patients in the context of interaction with social robots: robots as conduits of human moral action towards human moral patients; humans as moral patients to the actions of robots; and human interactants as moral patients of their own agential moral actions towards social robots. This third perspective is the focal point of the paper. The argument is that due to perceived robot consciousness, and the possibility that the immoral treatment of social robots may morally harm human interactants, there is a unique moral relation between humans and social robots wherein human interactants are both the moral agents of their actions towards robots, as well as the actual moral patients of those agential moral actions towards robots. Robots, however, are no more than perceived moral patients. This discussion further adds to debates in the context of robot moral status, and the consideration of the moral treatment of robots in the context of human-robot interaction.},
  year = {2020},
  journal = {Communications in Computer and Information Science},
  volume = {1342},
  pages = {3-20},
  publisher = {Springer},
  isbn = {978-3-030-66151-9},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-66151-9_1},
  doi = {10.1007/978-3-030-66151-9_1},
}
Ruttkamp-Bloem E. The Quest for Actionable AI Ethics. Communications in Computer and Information Science. 2020;1342. doi:10.1007/978-3-030-66151-9_3.

In the face of the fact that AI ethics guidelines currently, on the whole, seem to have no significant impact on AI practices, the quest of AI ethics to ensure trustworthy AI is in danger of becoming nothing more than a nice ideal. Serious work is to be done to ensure AI ethics guidelines are actionable. To this end, in this paper, I argue that AI ethics should be approached 1) in a multi-disciplinary manner focused on concrete research in the discipline of the ethics of AI and 2) as a dynamic system on the basis of virtue ethics in order to work towards enabling all AI actors to take responsibility for their own actions and to hold others accountable for theirs. In conclusion, the paper emphasises the importance of understanding AI ethics as playing out on a continuum of interconnected interests across academia, civil society, public policy-making and the private sector, and a novel notion of ‘AI ethics capital’ is put on the table as outcome of actionable AI ethics and essential ingredient for sustainable trustworthy AI.

@article{384,
  author = {Emma Ruttkamp-Bloem},
  title = {The Quest for Actionable AI Ethics},
  abstract = {In the face of the fact that AI ethics guidelines currently, on the whole, seem to have no significant impact on AI practices, the quest of AI ethics to ensure trustworthy AI is in danger of becoming nothing more than a nice ideal. Serious work is to be done to ensure AI ethics guidelines are actionable. To this end, in this paper, I argue that AI ethics should be approached 1) in a multi-disciplinary manner focused on concrete research in the discipline of the ethics of AI and 2) as a dynamic system on the basis of virtue ethics in order to work towards enabling all AI actors to take responsibility for their own actions and to hold others accountable for theirs. In conclusion, the paper emphasises the importance of understanding AI ethics as playing out on a continuum of interconnected interests across academia, civil society, public policy-making and the private sector, and a novel notion of ‘AI ethics capital’ is put on the table as outcome of actionable AI ethics and essential ingredient for sustainable trustworthy AI.},
  year = {2020},
  journal = {Communications in Computer and Information Science},
  volume = {1342},
  pages = {34-52},
  publisher = {Springer},
  isbn = {978-3-030-66151-9},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-66151-9_3},
  doi = {10.1007/978-3-030-66151-9_3},
}

2017

Psillos S, Ruttkamp-Bloem E. Scientific realism: quo vadis? Introduction: new thinking about scientific realism. Synthese. 2017;194(4). doi:10.1007/s11229-017-1493-x.

This Introduction has two foci: the first is a discussion of the motivation for and the aims of the 2014 conference on New Thinking about Scientific Realism in Cape Town, South Africa, and the second is a brief contextualization of the contributed articles in this special issue of Synthese in the framework of the conference. Each focus is discussed in a separate section.

@article{416,
  author = {Stathis Psillos and Emma Ruttkamp-Bloem},
  title = {Scientific realism: quo vadis? Introduction: new thinking about scientific realism},
  abstract = {This Introduction has two foci: the first is a discussion of the motivation for and the aims of the 2014 conference on New Thinking about Scientific Realism in Cape Town, South Africa, and the second is a brief contextualization of the contributed articles in this special issue of Synthese in the framework of the conference. Each focus is discussed in a separate section.},
  year = {2017},
  journal = {Synthese},
  volume = {194},
  pages = {3187-3201},
  number = {4},
  publisher = {Springer},
  issn = {0039-7857, 1573-0964},
  doi = {10.1007/s11229-017-1493-x},
}

2015

Ruttkamp-Bloem E. Repositioning realism. Philosophia Scientiæ. 2015;19(1). doi:10.4000/philosophiascientiae.1042.

"Naturalised realism" is presented as a version of realism which is more compatible with the history of science than convergent or explanationist forms of realism. The account is unpacked according to four theses: 1) Whether realism is warranted with regards to a particular theory depends on the kind and quality of evidence available for that theory; 2) Reference is about causal interaction with the world; 3) Most of science happens somewhere in between instrumentalism and scientific realism on a continuum of stances towards the status of theories; 4) The degree to which realism is warranted has something to do with the degree to which theories successfully refer, rather than with the truth of theories.

@article{417,
  author = {Emma Ruttkamp-Bloem},
  title = {Repositioning realism},
  abstract = {"Naturalised realism" is presented as a version of realism which is more compatible with the history of science than convergent or explanationist forms of realism. The account is unpacked according to four theses: 1) Whether realism is warranted with regards to a particular theory depends on the kind and quality of evidence available for that theory; 2) Reference is about causal interaction with the world; 3) Most of science happens somewhere in between instrumentalism and scientific realism on a continuum of stances towards the status of theories; 4) The degree to which realism is warranted has something to do with the degree to which theories successfully refer, rather than with the truth of theories.},
  year = {2015},
  journal = {Philosophia Scientiæ},
  volume = {19},
  pages = {85-98},
  number = {1},
  publisher = {Université Nancy 2},
  issn = {1281-2463},
  doi = {10.4000/philosophiascientiae.1042},
}