IPT Talk Series 2023-24
21 November 2023, 20.00-21.00 (Greece time) (Online)
(University of Edinburgh)
The Future of Autonomy
The first thing we need to understand is that AI algorithms embody moral values. This does not mean that algorithms are moral agents, but only that when we accept the operation of algorithms, their operations, like all operations (e.g., evaluations, decision-making), embody values which we bring into our lives. This is what MIT, for example, discovered: that some algorithms are biased, failing to recognise people with black faces as people. Such biases can be corrected through additional training of the algorithms. Algorithmic values interfere with our lives in other ways too, for example through infringements of our privacy or of other such values. Again, such infringements can be avoided or corrected with further training of the algorithms. On this basis – the possibility of correcting and retraining algorithms – the AI community has called for REGULATING AI, so that partially trained algorithms do not enter the market. My main aim in today’s talk is to argue that REGULATED AI is HARMFUL to humanity. It is harmful because we will trust REGULATED AI precisely because it is REGULATED, and we will therefore gradually surrender our AUTONOMY to AI’s decision-making about everything, in the era of AI-Governance which we are rapidly approaching. REGULATED AI is the pathway to BENEVOLENT AUTOCRACY, at first and at best!
Dory Scaltsas studied Philosophy and Mathematics at Duke University and continued in Philosophy at Brandeis University and Oxford University, where he received his Doctorate in Philosophy. After teaching as a Lecturer in Philosophy at Oxford University for a few years, Dory was appointed to the Philosophy department at Edinburgh University, from which he retired as Chair of Ancient Greek Philosophy in 2018. Since then, Dory has focused on designing and creating Museums of Hellenic Culture and of Hellenic Wisdom, which has brought him to AI-Wisdom.
Watch Video here