How could it be sufficient to seek an ‘AI ethics’ to arrange the algorithm for a good society when machine learning models are so deeply implicated in arranging the matter of what good looks like? An algorithm could be rendered compliant with AI codes of ethics and yet (because it is not reducible to its source code, because it modifies itself through every exposure, every extracted feature) it will continue to learn, to generate thresholds of the good and the normal, to recognise and misrecognise, and to infer future intent. One may feel that something of oneself is protected, and yet the clustered attributes continue to supply the conditions for future arbitrary actions against unknown others. What would happen if one began instead from the algorithm as already an ethico-political arrangement of propositions? In my book Cloud Ethics I propose a different way of thinking about the ethics of algorithms, one that does not belong to a paradigm of transparency and accountability but instead begins from the opacity and partiality of all forms of giving an account, human and algorithmic. The apparent opacity of the algorithm should not pose an entirely new problem for us, for the difficulty of locating a clear-sighted account of action was already present.

The Data Then and Now speaker series explores the social and organizational history of data and data practices in order to better understand the current data-intensive moment through its antecedents and continuities. Everyone interested is welcome to attend.

For more information about this event, contact the eScience Institute at escienceadmin@uw.edu