A recently released Inside Higher Ed survey of campus chief technology officers finds a mix of uncertainty and excitement about the potential influence of generative AI on campus operations.
While 46 percent of those surveyed are “very or extremely enthusiastic about AI’s potential,” almost two-thirds say institutions are not prepared to handle the rise of AI.
I’d like to suggest that these CTOs (and anyone else involved in making these decisions) read two recent books that dive into both artificial intelligence and the impact of enterprise software on higher education institutions.
The books are Smart University: Student Surveillance in the Digital Age by Lindsay Weinberg, director of the Tech Justice Lab at the John Martinson Honors College of Purdue University, and AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan, a professor of computer science at Princeton, and Sayash Kapoor, a Ph.D. candidate in computer science there.
How could we have two books of such relevance to the current discussion of AI, given that ChatGPT wasn’t publicly released until November 2022, less than two years ago?
As Narayanan and Kapoor show, what we currently think of as “artificial intelligence” has deep roots that reach back to the earliest days of computer science, and even before that in some cases. The book takes a broad view of all manner of algorithmic reasoning used in the service of predicting or guiding human behavior and does so in a way that effectively translates the technical to the practical.
A significant chunk of the book is focused on the limits of algorithmic prediction, including the kinds of technology now routinely used in higher ed admissions and academic affairs departments. What they conclude about this technology is not encouraging: The book is titled AI Snake Oil for a reason.
Larded with case studies, the book helps us understand the important boundaries around what data can tell us, particularly when it comes to making predictions about events yet to come. Data can tell us many things, but the authors remind us that some systems are inherently chaotic. Take weather, one of the examples in the book. On the one hand, hurricane modeling has gotten so good that predictions of the path of Hurricane Milton made more than a week in advance were within 10 miles of its eventual landfall in Florida.
But the extreme rainfall of Hurricane Helene in western North Carolina, which produced what’s being called a “1,000-year flood,” was not predicted, resulting in significant chaos and numerous excess deaths. One pattern among consumers taken in by AI snake oil is crediting the algorithmic analysis for the successes (Milton) while waving away the failures (Helene) as aberrations, but individual lives are lived as aberrations, are they not?
The AI Snake Oil chapter “Why AI Can’t Predict the Future” is particularly important both for laypeople (like college administrators) who may be required to make policy based on algorithmically generated conclusions and, I would argue, for the entire field of computer science when it comes to applied AI. Narayanan and Kapoor repeatedly argue that many of the studies showing the efficacy of AI-mediated predictions are fundamentally flawed at the design level, essentially run so that the models end up predicting foregone conclusions baked into the data and the study design.
This circular process winds up hiding limits and biases that distort the behaviors and choices on the other end of the AI’s conclusions. Students whose likely success is predicted by algorithms using data like their socioeconomic status may be counseled out of more competitive (and more lucrative) majors based on aggregates that do not reflect them as individuals.
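To see the circularity in miniature, here is a minimal, hypothetical sketch in Python (my illustration, not the authors’; every name and number is invented). A simulated “advisor risk flag” recorded after outcomes were already known makes a student-success model look prescient; remove it, and very little genuine predictive power remains.

```python
# A hypothetical sketch of circular prediction: a feature that quietly
# encodes the outcome makes the model look far more accurate than it is.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Simulated student data: a socioeconomic index and a weak aptitude signal.
ses = rng.normal(0, 1, n)
aptitude = rng.normal(0, 1, n)

# "Completed degree" depends only weakly on either feature.
completed = (0.3 * aptitude + 0.1 * ses + rng.normal(0, 1, n)) > 0

# Leaky feature: an "advisor risk flag" assigned after outcomes were known,
# so it is effectively a noisy copy of the label itself.
risk_flag = np.where(rng.random(n) < 0.9, ~completed, completed).astype(int)

def evaluate(features):
    """Train a simple classifier and report held-out accuracy."""
    X = np.column_stack(features)
    X_tr, X_te, y_tr, y_te = train_test_split(X, completed, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

print("With the leaky flag:   ", round(evaluate([ses, aptitude, risk_flag]), 2))  # roughly 0.9
print("Without the leaky flag:", round(evaluate([ses, aptitude]), 2))              # roughly 0.6
```

The inflated number on the first line is the kind of result that ends up in a vendor’s slide deck; the second line is closer to what the model actually knows about any individual student.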
While the authors recognize the desirability of attempting to bring some sense of rationality to these chaotic events, they repeatedly show how much of the predictive analytics industry is built on a combination of bad science and wishful thinking.
The authors don’t go so far as to say it, but they suggest that companies pushing AI snake oil, particularly around predictive analytics, are basically inevitable, and so the work of resistance falls to the properly informed individual, who must recognize when we’re being sold glossy marketing without sufficient substance underneath.
Weinberg’s Smart University unpacks some of the snake oil that universities have bought by the barrelful, to the detriment of both students and the purported mission of the university.
Weinberg argues that surveillance of student behavior, starting before students even enroll, as they are tracked as applicants, and extending through all aspects of their interactions with the institution—academics, extracurriculars, degree progress—is part of the larger “financialization” of higher education.
She says using technology to track student behavior is viewed as “a means of acting more entrepreneurial, building partnerships with private firms, and taking on their characteristics and marketing strategies,” efforts that “are often imagined as vehicles for universities to counteract a lack of public funding sources and preserve their rankings in an education market students are increasingly priced out of.”
In other words, schools have turned to technology as a means to achieve efficiencies to make up for the fact that they don’t have enough funding to treat students as individual humans. It’s a grim picture that I feel like I’ve lived through for the last 20-plus years.
Chapter after chapter, Weinberg demonstrates how the embrace of surveillance ultimately harms students. Its use in student recruiting and retention enshrines historic patterns of discrimination around race and socioeconomic class. The rise of tech-mediated “wellness” applications has proved only alienating, suggesting to students that if they can’t be helped by what an app has to offer, they can’t be helped at all—and perhaps do not belong at an institution.
In the concluding chapter, Weinberg argues that an embrace of surveillance technology, much of it mediated through various forms of what we should recognize as artificial intelligence, has resulted in institutions accepting an austerity mindset that over and over devalues human labor and student autonomy in favor of efficiency and market logics.
Taken together, these books do not instill confidence in how institutions will respond to the arrival of generative AI. They show how easily and quickly values around human agency and autonomy have been shunted aside for what are often phantom promises of improved operations and increased efficiency.
These books provide plenty of evidence that when it comes to generative AI, we should be wary of “transforming” our institutions so thoroughly that humans become an afterthought.