Sunday, November 17, 2024

Q&A with author of “Smart University”

Colleges are increasingly employing digital technology that can track students’ movements through campus, monitor how much time they spend on learning management systems, flag those who need advising and nudge some toward certain courses, among other uses.

“Higher education is becoming increasingly synonymous with digital surveillance in the United States,” Lindsay Weinberg writes in the introduction of her new book, Smart University (Johns Hopkins University Press, 2024).

Released this month, the book documents the rise of this technology on campus, how colleges rely on its tools and the problems they could present. The technology, while pitched as a way to reduce costs and improve campus sustainability, can actually perpetuate racial and economic inequalities in the higher education system, argues Weinberg, a clinical associate professor at Purdue University.

“Surveillance of student behavior forms the foundation of the smart university, often in ways that prove harmful to students—particularly those who are already marginalized within the academy,” she writes.

Weinberg uses the term “smart university” to describe digital transformations in the sector, but it goes beyond the technology implemented on campuses. “These initiatives emerge from and enact visions of what is most necessary for the future of higher education,” she writes.

Weinberg spoke with Inside Higher Ed about her new book. The interview has been edited for length and clarity.

Q: In the book, you’re pretty skeptical about how universities are using surveillance technology and big data tools such as personalization and predictive analytics. Why is that, and what do you say to those who argue that they are just trying to help students graduate?

A: Even on paper, there are times when these tools do deliver. One example I talk about in the book is Georgia State University, where using predictive analytics did increase students’ persistence through the degree. But it also ended up steering a lot of students of color toward lower-earning degree paths because of historical biases in these data sets. It really depends on who gets to define what a successful tool looks like.

And in the book I’m also really interested in thinking about which solutions get framed as outside of what’s possible to address in higher ed. These technologies are very much geared toward trying to nudge and shape students’ individual conduct and behavior. But when it comes to institutional accountability, issues of public policy and long-standing challenges of discrimination in higher ed, these are framed as beyond what’s possible to redress.

It’s really that idea that I’m trying to trouble in the book. I think sometimes we’re using technology in lieu of addressing some of these more deeply entrenched structural problems that are related to issues of austerity and discrimination.

Q: Throughout the book, you talk about how technology can further entrench the racial and economic inequities already in place in the system. How exactly does that work?

A: It depends on the tool. With predictive analytics, it’s past patterns of discrimination in those data sets being amplified and reproduced when the data are plugged into these types of tools.

Another example is the use of Amazon Echo Dots in campus dormitories: you can ask Alexa why tuition is so high and not get an answer to that question, but library hours are readily shared. So it’s also about the discretionary power over what seems possible to ask through some of these tools.

Another example that comes up in the book is WellTrack. In terms of how it thinks about student mental health, it’s very much about individual self-regulation and addressing one’s own thinking patterns, but not necessarily issues of discrimination on campus or not getting adequate support from a mentor, which are more institutional and structural failures. So again, it really depends on the tool.

There’s also increasingly pervasive surveillance related to campus security, and much of it doubles as a way of hindering campus protests. We’re in a particularly fraught moment where we’re seeing free speech under attack at universities. Some of the marketing of these tools is really a way to hedge against the risk of increasing campus unrest, and much of that unrest is related to racial justice causes.

Q: Was there one particular tool where you saw this issue the most in terms of reinforcing the racial and economic inequalities?

A: I think it’s really across the board. At the end of the day, higher ed is not really designed to support all students. Historically, it was designed to support students who are already structurally advantaged in many ways. So whether it’s by omission, erasure, not interrogating the data that’s being used to build these systems, or … who’s even at the table to make decisions about how problems for higher ed get framed, these issues run all the way down. They just take on particular guises, depending on the tool.

Q: What’s at stake overall?

A: Sometimes what’s at stake is student privacy. Sometimes it’s that private interests are shaping the direction of research and institutional priorities. They’re not neutral or objective; they have a particular stake in the game. In that sense, I think it undermines the idea that research is supposed to be a scholarly, critical or at least somewhat objective enterprise, or at least one rooted in a commitment to the public good, especially at a public university. That’s really dangerous.

As long as higher ed remains defunded, and as long as we see pushback against efforts to desegregate higher education (I think the overturning of affirmative action is an example of this), the university is going to be a motor of a class-based and unequal society, as opposed to a means of addressing those issues. But I am not hopeless, and another thing I try to show in the book is that surveillance is always coupled with resistance.

If power were so effective as to create a perfect system of domination or control, there’d be no need for surveillance. Surveillance exists precisely because there’s anxiety around the possibility of people resisting and refusing. And I think we see students and faculty engaging in resistance in ways big and small, individual and collective.

Q: What’s one example of the resistance that you’ve seen?

A: There have been long-standing movements for the full cancellation of student loans that I think are a really big part of the story. We’ve also seen pushback against anti-Black police brutality, coupled with a push to get researchers to stop partnering with cities on predictive policing tools. So we see the ways that technology research and development gets paired with these longer struggles happening on college campuses.

Q: How does student privacy factor into this conversation?

A: A lot of folks have an idea of the university as benevolent. It’s like, OK, my university may have a lot of information about me, but I trust that they’re using it to support my education, or I trust that they’re going to be good stewards of that information. So it’s also about helping people think more critically about universities, both historically and presently. Then the stakes of what it means for an institution to be collecting your data become a bit more powerful.

Q: You’ve mentioned austerity and the decline in state support of higher education throughout the book. What’s the connection to the rise of smart universities?

A: This plays out in so many ways, but just to give you an example: the emphasis that’s put on getting students to graduate as fast as possible is largely driven by an austerity mindset. Tuition costs are skyrocketing, and students don’t have the means to go to college for four-plus years. Getting them through a degree path as quickly as possible, and using data about them and historical data sets to guide and shape their trajectory, is a way of thinking about student success within that austerity logic. That’s just one example.

Even the emphasis being placed on public-private partnerships and bringing corporate stakeholders into the university is in part because there’s less funding to support students, so these deals become increasingly attractive. And it happens all the way down: universities that aren’t especially well ranked find themselves in an increasingly competitive higher education marketplace, which puts pressure on them to use these types of tools to compete with one another for students.

Q: How do you see the use of big tech as being at odds with the mission of higher education and the work of educating students?

A: For me, a lot of these tools are not really in the interest of the university as a public good. It’s the logic of a student making an individual investment in their education, and these tools are designed to shepherd that investment to get as much of a return as possible. So I think that’s part of it.

It’s also a symbol of how private interests are shaping dominant priorities and discourses. Even when we think about the influx of ChatGPT and a lot of these big tech–produced AI tools, higher ed has been all too eager to adopt them as the most innovative form of pedagogy possible. That’s just an example of the ways big tech is seen as a model for learning and teaching that needs to be emulated in order to keep pace with technological development. That’s very dangerous. University communities need to be setting their own priorities through democratic processes, and I think they need to be willing to look at these more deeply entrenched, historically long-standing issues, because it’s those issues that are making higher education persistently unequal and unfair.
