Tuesday, November 5, 2024

Can we trust AI in qualitative research? (opinion)

Walt Whitman wrote, “I am large, I contain multitudes.” In qualitative social science, the line serves both as a celebration of what makes us human and as a warning about the limits of using artificial intelligence to analyze data.

While AI can emulate the pattern finding of qualitative research in social science, it lacks an identifiable human perspective. This matters because in qualitative work it’s important to articulate the investigator’s positionality—how the researcher connects to the research—to promote trust in the findings.

Trained on a vast body of human knowledge, technologies like ChatGPT are not a self that contains multitudes, but multitudes absent of a self. By design, these tools cannot have the single, describable point of view, and thus the positionality, required to promote trust.

For overworked faculty and students, using ChatGPT as a research assistant is a tempting alternative to the laborious task of analyzing mountains of text by hand. While there are many qualitative research methods, a common approach involves multiple cycles of meaning making within the data. Investigators tag portions of data with “codes” that capture either explicit phrasing or implicit meanings, and then group those codes into patterns through additional cycles. For example, in analyzing interview transcripts in a study of college attrition, you may first find codes such as “financial needs,” “first-generation status” and “parental support.” In another cycle of coding, these may be grouped into a larger theme around familial factors. A minimal sketch of this structure appears below.
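
To make the two-cycle structure concrete, here is a minimal sketch in Python. Everything in it, the excerpts, the codes and the “familial factors” theme, is invented for illustration; it is a toy data structure, not a real analysis tool or any particular software’s data model.

    # Toy illustration of two-cycle qualitative coding; all excerpts,
    # codes and themes below are invented for this example.
    from collections import defaultdict

    # First cycle: tag transcript excerpts with descriptive codes.
    first_cycle = [
        ("I couldn't cover tuition once my hours were cut.", "financial needs"),
        ("No one in my family had gone to college before.", "first-generation status"),
        ("My parents wanted me to stay closer to home.", "parental support"),
    ]

    # Second cycle: group related codes under a broader theme.
    themes = {"familial factors": {"first-generation status", "parental support"}}

    # Collect the excerpts that fall under each theme.
    by_theme = defaultdict(list)
    for excerpt, code in first_cycle:
        for theme, codes in themes.items():
            if code in codes:
                by_theme[theme].append(excerpt)

    print(dict(by_theme))
    # {'familial factors': ['No one in my family had gone to college before.',
    #                       'My parents wanted me to stay closer to home.']}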

While this is an oversimplification, it is clear that this sort of pattern finding is a key strength of current open-access AI tools. But using AI in this manner overlooks the impact of researcher identity and context in qualitative research.

There are four key reasons why hopping on the AI train too early could be troublesome for the future of qualitative work.

  1. The researcher is just as important as the research.

Good qualitative studies have something in common: They reject the notion of objectivity and embrace the subjective nature of interpretative work. Their authors acknowledge that the findings are shaped by the context and background of the researcher. This practice of carefully considering positionality, while not yet the norm across the wide diversity of social science fields, is gaining momentum. With the rapid adoption of AI tools for research, it becomes particularly critical to highlight the complexities of how investigators relate to the work they do.

  2. AI is not neutral.

We know that AI can hallucinate and produce false information. But even if this weren’t the case, there is another issue: Technology is never neutral. It is always imbued with the biases and experiences of its creators. Add to this that AI tools draw from a massive medley of perspectives across the internet on any given topic, and that experts admit we don’t know how AI makes the decisions it does (the black-box problem). If we agree that articulating positionality is key to supporting the trustworthiness of qualitative research, then we should pause seriously before adopting AI for wholesale analysis in interpretative studies.

  3. Adoption of AI tools can have a negative impact on the training of new researchers.

In the same way educators worry that leaning on AI too early in the learning process may undermine an understanding of the fundamentals, there are implications for the training of new qualitative researchers. This is a broader consideration than the trustworthiness of results. Manual qualitative coding builds a skill set and a deeper understanding of the nature of interpretative research. Further, articulating and acting upon how you as a researcher shape the analysis is no easy task, even for seasoned investigators; it requires a level of self-reflection and patience that many may feel is not worth the effort. It’s nearly impossible to ask new researchers to appreciate positionality without their going through the process of manually coding data themselves.

  4. Unlike a human researcher, AI can’t safeguard our data.

It’s not only the positionality of the researcher that’s missing when we use open-access AI tools for data analysis. Institutions require safeguards for the information participants provide to research studies. While consent forms can certainly disclose that data will be used within an AI platform, the black-box factor means we can’t truly obtain informed consent from participants about what is happening with their data. Off-line options, such as locally run models, may be available, but they require computing resources and knowledge that are out of reach for most who would benefit.
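
As one illustration of what such an off-line option can look like, the sketch below uses the open-source Hugging Face transformers library to run a small summarization model entirely on the researcher’s own machine, so transcripts never leave it. The library call is real, but the model choice and file name are assumptions for illustration, not a recommendation from the author.

    # A minimal sketch of a locally run summarizer; requires the
    # transformers library and a one-time model download, after which
    # no data is sent to an external service.
    from transformers import pipeline

    # "sshleifer/distilbart-cnn-12-6" is one small, openly available
    # summarization model; any comparable local model would do.
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    # "interview_01.txt" is a hypothetical transcript file; very long
    # transcripts would need to be chunked before summarizing.
    with open("interview_01.txt", encoding="utf-8") as f:
        transcript = f.read()

    # Summarize locally; the transcript stays on this machine.
    result = summarizer(transcript, max_length=130, min_length=30,
                        do_sample=False, truncation=True)
    print(result[0]["summary_text"])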

So, can we trust the use of AI in qualitative research?

While AI can serve as a pseudo–research assistant, or can potentially strengthen the trustworthiness of the qualitative research process when used to audit findings, it should be applied cautiously in its current form. Of particular importance is the recognition that AI cannot, at this time, provide the context and positionality that qualitative research requires. Instead, potentially useful applications of AI in qualitative research include generating general summaries or helping organize thoughts. These supplementary tasks, and others like them, can help streamline the research process without denying the importance of the connection between the researcher and the study.

Even if we could trust AI, should we use it for qualitative analysis?

Lastly, there is a philosophical argument to be made. If we had an AI capable of qualitative analysis in a manner we found acceptable, should we use it? Much like art, qualitative research can be a celebration of humanity. When researcher self-awareness, important questions and robust methods come together, the result is a glimpse into a rich and detailed subset of our world. It’s the context and humanity that the researcher brings that make these studies worth writing and worth reading. If we reduce the role of the qualitative scholar to that of AI prompt generator, the passion for investigating the human experience may fade as well. To study humans, particularly in an open and interpretative way, requires a human touch.

Andrew Gillen is an assistant teaching professor in the College of Engineering at Northeastern University. His research focuses on engineering education.
