Tuesday, December 3, 2024

Universities must beware of reliance on big AI (opinion)

Across the Anglophone world, universities are stretched to the breaking point. In the United States, nearly 100 universities closed in the past two years, and Project 2025 proposes closing the Department of Education. In England, at least 67 universities are restructuring programs and cutting jobs. In Australia, a recent federal report concluded that its universities have “neither the capacity nor capability to deliver what the nation needs.” And in Aotearoa New Zealand, the government has established two working groups to assess the health of the entire university and science sectors.

Meanwhile, higher education finds itself increasingly beholden to the education-technology industry. Ed-tech companies sell hardware and software—often built with artificial intelligence—that claim to enhance the research and teaching operations of universities. Today, many Anglophone universities already pay for services from ed-tech companies such as Turnitin, Grammarly and Studiosity, all of which use AI in their products. That’s in addition to the annual licenses that universities purchase from software-as-a-service companies like Microsoft, Google and Adobe, whose products also contain AI.

Because these AI products are so expensive to create and operate, Silicon Valley AI companies need to squeeze more money from the university sector to turn a profit. But how much do universities need Silicon Valley AI?

The AI Sector Is Bleeding Money

Despite all the recent buzz about generative AI, the sector is struggling. Take ChatGPT’s parent company, OpenAI, for example. It expects to lose $5 billion in 2024. It recently lost its chief technology officer, chief research officer and another vice president, and only three of its original eleven founders remain. In an effort to attract more venture capital investments, OpenAI recently announced plans to “restructure its core business into a for-profit benefit corporation.” But it’s not clear if OpenAI even has a profitable product to sell.

OpenAI has around 10 million ChatGPT subscriptions. But the cloud-computing infrastructure required to train and run generative AI is enormously expensive to build and operate, which makes it difficult for AI companies to turn a profit. Simply put: Scaling generative AI is expensive. So expensive, in fact, that some critics speculate that the venture-backed AI bubble will burst and OpenAI will fail in the coming years.

To offset the exorbitant costs of operating generative AI at scale, OpenAI has pursued massive venture capital funding rounds. In its most recent round, it raised a record-breaking $6.6 billion. That’s remarkable, especially for a company whose business model is still a losing proposition: Currently, OpenAI spends $2.35 to make a dollar. But in Silicon Valley, the business plan often matters less than the story. And the story that OpenAI sells to investors is growth. That’s where universities come in.

AI Companies Need Universities

Silicon Valley AI companies need to convince university leaders that their AI products are essential to winning external research funding, scaling teaching capacity and saving money. If successful, critics suggest, this could amount to a “corporate takeover of higher education.” Currently, though, higher education is still scrambling to sort out its relationship with AI. Arizona State University—long an early mover in ed tech—has already announced a partnership with OpenAI. At the same time, Rutgers University’s Center for Cultural Analysis launched a new interdisciplinary journal published by Duke University Press called Critical AI. And the Modern Language Association partnered with the Conference on College Composition and Communication to publish a series of research-backed recommendations for educators who assign writing in the age of AI.

At most universities, scholars and administrators remain divided about AI’s potential virtues and vices. Early adopters see first-mover advantages for universities that integrate AI into their research and teaching systems in an effort to maximize efficiencies in time, resources, workflows and outputs. On the other hand, researchers have documented the many problems with using AI-driven digital technologies in education, including increasing inequity, racial and gender biases, misinformation, disinformation, energy costs and the contribution to climate change, as well as violations of privacy, copyright, intellectual property and Indigenous data sovereignty.

In this divided environment, AI companies are throwing a new curveball at universities: AI teaching clones.

AI Teaching Clones and Their Costs

AI companies are now touting the rollout of “AI agents.” Educators can train these AI agents on their own course materials, transforming them into AI clones of the instructor that can interact with students 24/7. In one promotional video, an instructor praises the AI agent for helping him teach a course with more than 800 students. Of course, as I’ve written elsewhere, “another way to improve the teaching of such a large course is to hire more teachers.” Still, it’s not surprising to see universities expressing interest in AI teaching clones given the way “the university itself has become a service.”

But here’s the problem: We don’t yet know the full cost of AI teaching agents. They may be free or cheap during the development and market-penetration phases, but the underlying cloud computing costs remain extremely high. A senior engineer tells me that, because of these costs, companies with AI products are likely to shift in the coming years from a subscription model to a consumption pricing model. In other words, after a critical mass of institutions has become dependent on subscription software with AI capabilities, these companies will try to offload the high costs of AI by charging customers based on how much computing their usage consumes. For universities that have committed to AI teaching clones, such a pricing shift would almost certainly lead to a gigantic jump in costs. Will AI clones be cheaper than teachers then?
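To make the pricing-shift argument concrete, here is a minimal sketch of how the two models diverge. Every figure below is invented purely for illustration—the seat fee, per-token price, and usage rates are assumptions, not real vendor prices—but the structure shows why usage-based billing can balloon once students interact with an AI clone at scale.

```python
# Hypothetical comparison of subscription vs. consumption pricing for an
# AI teaching clone. All numbers are illustrative assumptions, not quotes
# from any real vendor.

def subscription_cost(seats: int, annual_fee_per_seat: float) -> float:
    """Flat per-seat license: cost is fixed regardless of how much students use it."""
    return seats * annual_fee_per_seat

def consumption_cost(queries: int, tokens_per_query: int,
                     price_per_million_tokens: float) -> float:
    """Usage-based billing: every student interaction adds to the bill."""
    total_tokens = queries * tokens_per_query
    return total_tokens / 1_000_000 * price_per_million_tokens

# The 800-student course from the promotional video, with assumed usage:
# each student asks the clone 300 questions a year, ~3,000 tokens per exchange.
students = 800
flat = subscription_cost(students, annual_fee_per_seat=30.0)   # $24,000
usage = consumption_cost(queries=students * 300,
                         tokens_per_query=3_000,
                         price_per_million_tokens=100.0)       # $72,000

print(f"subscription: ${flat:,.0f} / year")
print(f"consumption:  ${usage:,.0f} / year")
```

Under these (made-up) assumptions, the same course costs three times as much once the vendor switches billing models—and the bill now grows with every additional student question, which is exactly the dependency the article warns about.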

Plus there are the environmental costs. Microsoft’s emissions have increased by 30 percent due to energy-hungry data centers, which makes it highly unlikely that it will meet its goal of being carbon negative by 2030. Many universities also aim to be carbon neutral in the coming years. But the amount of energy that it takes to build and operate a fleet of AI teaching clones makes such green goals a fantasy. Will universities follow Microsoft and renege on their green commitments to keep up with the AI arms race? And if “AI is pushing the world toward an energy crisis,” is it really worth the financial and environmental costs to replace educators with AI chatbots?

While many university stakeholders may sympathize with these arguments that question the value of Silicon Valley AI, FOMO hits hard in a sector facing such financial instability. I’ve heard some say that missing out on AI feels like missing out on the internet. But I’m not convinced that’s the right metaphor. In its current state, mainstream generative AI seems less like the internet and more like blockchain: It’s an energy-sapping technological craze that, despite its hypothesized disruptive potential, currently delivers few useful products and little value to investors. Generative AI only seems like a bigger invention than the internet because of the AI hype espoused by the “new artificial intelligentsia,” who have much to gain from our collective belief in its transformative potential.

Alternative AI, Indigenous AI

Instead of swiftly adopting whatever new AI tools ed-tech companies push on universities, what if universities actively invested in AI alternatives driven by academics or local community leaders? In Aotearoa New Zealand, Te Hiku Media—which contributes to the Indigenous AI initiative—offers a provocative alternative.

Te Hiku Media is a Māori-owned media organization that saw the need for a Māori-language speech-recognition tool. Instead of advocating for multinational corporations to make their tools more inclusive and accessible to Māori speakers—something that would have exposed Indigenous communities to exploitation—Te Hiku Media built their own speech-recognition tool by crowdsourcing audio through their community networks. Crucially, Te Hiku Media see themselves as guardians rather than owners of this language tool. By prioritizing stewardship and Indigenous data sovereignty, Te Hiku Media models a way of building generative language technologies according to different, more just ideologies than the extractive logics that dominate ed tech and its AI tools.

Te Hiku Media is not, of course, the only tech and media organization that offers innovative alternatives that universities could learn from and potentially collaborate with. Here are others: Mijente, Media Justice, Allied Media Projects, Athena, Data for Black Lives, Our Data Bodies, May First Movement Technology, No Tech for Apartheid, 7amleh, Algorithmic Justice League and Data Workers’ Inquiry (I borrow this list from Ruha Benjamin’s incisive critique of “AI evangelists” in LARB).

For too long, the ed-tech tail has wagged the university dog. In most cases, that relationship has benefited the ed-tech companies more than university students or researchers. But universities have a chance to shift that relationship now, before Silicon Valley AI systems become entrenched in higher education. While the AI evangelists want us to believe that their own AI tools are inevitable and necessary, Benjamin reminds us that “we do have a choice … there are other worlds.”

Collin Bjork is a senior lecturer in English and media studies at Massey University in Aotearoa New Zealand.
