AI Safety & Alignment
AI safety & alignment is about ensuring that the intelligent systems we develop act in accordance with human values and do not cause us harm. While this might seem trivial at first ("just program them not to kill us"), developing super-intelligent machines that are connected to all of human knowledge and can interface with human beings is a highly technical challenge that requires serious consideration.
If managed correctly, the arrival of AI could create vast wealth for humanity. If this technological development is approached callously, however, the consequences may threaten humanity's future.
OpenAI research scholar.
Data scientist and AI ethics advocate.
Co-director of the Center for Artificial Intelligence Research.
Lead AI research scientist at InstaDeep.
The discussion will be structured into two sections and will be followed by 30 minutes of audience Q&A with the panellists.
The first part of the discussion will tackle the issue of AI safety broadly. Why is AI alignment something we should be concerned about? How are people trying to ensure that AI will be aligned with human values? What is the most effective way to mitigate the potential existential risk posed by AI?
Subsequently, the discussion will turn to the relationship between AI development and the African continent. Even if we successfully manage the alignment problem and AI does not pose an immediate existential risk to humanity, it will nonetheless be a highly disruptive force for the world and the global economy. Many say AI will be the next great industrial revolution. If that is the case, how can we ensure that this technology does not only benefit the developed world but has positive effects in Africa as well? Or will powerful AI in the hands of the developed world simply lead to greater exploitation of Africa?
We will use Stuart Russell's 2019 book Human Compatible as a framework for the first part of the discussion. If you have not read the book, we recommend reading a summary of it on the EA website here.
Another useful resource is the talk given by Pelonomi Moiloa at the Deep Learning IndabaX in 2019. The talk was titled Protecting Machines From Us - Ethics And Bias and can be found here.
Register for the event here: https://docs.google.com/forms/d/e/1FAIpQLScsNebs6R2LAeDLeqO0pSBqG5fywj8BZgAu5OTvAXdQkhzHBA/viewform