“What Makes Us Human in the Age of AI?” Series
The Georgetown Humanities Initiative’s interdisciplinary series “What Makes Us Human in the Age of AI?,” organized in collaboration with the Center for Digital Ethics, hosted two stimulating events on the role of intrinsically human attributes, and of the humanities, in confronting the challenges of the latest digital and technological revolution.
“Why Should We Care About Understanding AI?”
October 30
On October 30, Prof. Will Fleisher (Department of Philosophy; Center for Digital Ethics), in his talk “Why Should We Care About Understanding AI?,” explored ethical questions raised by the opacity of AI tools, both for the public and for developers.
The most sophisticated AI tools are so large and complex, and trained on so much data, that not even their developers fully understand why they function the way they do. AI models are even more opaque to the general public, as much of the knowledge of how they are developed is kept secret by their developers. And even when the models are open source, that openness does little to aid the comprehension of those without advanced technical training.
This opacity has raised concerns about the use of complex AI tools in a democratic society. Transparency is a requirement for democratic legitimacy. Moreover, some have argued that people have a right to an explanation for how they are treated.
Prof. Fleisher claimed that there is something fundamentally right about these concerns: we often do need to understand a tool before it is permissible to use it. However, explaining why AI opacity poses a problem for legitimacy and the right to explanation is more complicated than it seems. There is already a great deal of opacity in our understanding of existing, non-AI technologies. For instance, drugs are commonly prescribed even when their mechanism of action is not understood. Moreover, our governments currently operate with a great deal of opacity, and in some cases this opacity does not seem problematic.
Prof. Fleisher convincingly showed that, if we are to ground the importance of understanding AI, we need a better explanation of what that understanding does for us, and why we should care.
An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age
November 8
On November 8, Professor David W. Bates (Rhetoric, UC Berkeley) presented his new book An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age (U of Chicago Press, 2024), followed by a conversation with Professors Katherine Chandler (SFS Program in Culture and Politics) and Daniel Shore (Department of English).
An Artificial History of Natural Intelligence provides a new history of human intelligence that argues that humans know themselves by knowing their machines. We imagine that we are both in control of and controlled by our bodies—autonomous and yet automatic. This entanglement, according to David Bates, emerged in the seventeenth century when humans first built and compared themselves with machines. Reading varied thinkers from Descartes to Kant to Turing, Bates reveals how time and time again technological developments offered new ways to imagine how the body’s automaticity worked alongside the mind’s autonomy. Tracing these evolving lines of thought, An Artificial History of Natural Intelligence offers a new theorization of the human as a being that is dependent on technology and produces itself as an artificial automaton without a natural, outside origin.