Author outlines AI in education ‘Bill of Rights’


Artificial intelligence illustration with silhouette of human brain, coding symbols. Credit: Public Domain, via Creative Commons Zero.

LAWRENCE – Because ignoring the artificial intelligence elephant in the room is no longer feasible, the author of a new “Blueprint for an AI Bill of Rights in Education” has proposed some principles for dealing with it.


The editors of the new journal Critical AI published the article, written by Kathryn Conrad, University of Kansas professor of English, online in July as a sneak preview of their February 2024 issue because they “were keen to get it out so that it could be helpful as people had conversations about the place of AI in education,” Conrad said.

Conrad’s scholarly work has centered on intersections of technology and culture, usually in the context of turn-of-the-20th-century Irish modernism.

Since November 2022, when the private company OpenAI introduced ChatGPT, a large language model chatbot that generates written responses to questions posed by users, leaders in any number of fields have pondered its implications.

After exploring the capabilities of the technology and related research, Conrad said she concluded that universities could be leaders calling for the responsible use of generative AI.

And while the initial buzz around ChatGPT, and AI more broadly, in education centered on its potential to write term papers for students, Conrad has delved deeply into other issues as well, from its potential to surveil users to its built-in algorithmic biases.

“What I've been working on, from both a scholarly and pedagogical standpoint, is critical AI literacy,” Conrad said. “And that means knowing something about how generative AI works as well as the ethics of these models, including the labor and copyright issues they entail, and some of the privacy and surveillance concerns that they raise.”

Conrad said students are already aware of the potential of ChatGPT and similar AI tools, and they deserve guidance on proper usage in a university context, just as their teachers do.

“I like to say that for education, AI answered a question that no one was asking,” Conrad said. “Nobody in education asked for chatbots. But ChatGPT and other models came down to us anyway. And, as I mention in the article, they came down from people who are not concerned primarily with education. OpenAI CEO Sam Altman is a college dropout, and he has been openly hostile, in some cases, to higher education. He has said he's going to start an OpenAI Academy that's presumably run by chatbots. So that raises the question of whether or why we might want to adopt these tools. We shouldn't take for granted that these are specific tools that we have to use, or that we have to use uncritically.”

Conrad said that, far from taking “a technophobic perspective,” her research and that of her colleagues in the new journal “is bringing technologists into conversation with humanists and social scientists to tease out some of the larger, interesting issues around the deployment of these technologies.”

After much reading and many discussions with colleagues about AI in education, Conrad said of her intervention, “I decided it really needed to be a question of rights — student rights, as well — because we have responsibility as educators to protect them.”

In the article, Conrad acknowledged the White House Office of Science and Technology Policy’s 2022 “Blueprint for an AI Bill of Rights” and extended it for education.

Educators, she wrote, should have:

  • Input on institutional decisions to buy and implement AI tools
  • Input on policies regarding usage
  • Professional development (i.e., training)
  • Autonomy
  • Protection of legal rights

Her proposed rights for students:

  • Guidance on whether and how AI tools are to be used in class
  • Privacy and creative control of their own work
  • Appeal rights, if charged with academic misconduct related to AI
  • Notice “when an instructor or institution is using an automated process to assess your assignments”
  • Protection of legal rights

It is important, according to Conrad, to understand what the technology can and cannot do. She said that while ChatGPT can, for instance, write an essay or a legal brief, it is not always factual or accurate — often, the chatbot simply fabricates responses.

“It's an important part of critical AI literacy to explain to users — students in this case, but also faculty — that there is never a guarantee that the output is going to be right,” Conrad said. “It is designed to be plausible, which is a different thing entirely.”

And while she said that educators “cannot ignore” AI, Conrad argued that universities in particular, with their potential for high-level cross-disciplinary work, could help lead the way to a better future.

“We have the potential to develop technologies that are trained on ethically obtained datasets, that have privacy protections built in, that are ethically deployed. This is a place we could potentially lead,” she said.


Thu, 08/31/2023

Author: Rick Hellman

Media contact: Rick Hellman, KU News Service, 785-864-8852