8:00 – 9:00 am: Conference check-in & breakfast
9:00 – 10:15 am: Welcome and Opening Keynote with Lance Eaton
10:30 – 11:30 am: Concurrent Session A
A1. Artificial Intelligence and Persuasion: A Communicative Approach to New Technologies in a Liberal Arts Education
Kaitlyn Haynal, Visiting Assistant Professor-University of Mary Washington; Samuel Allen, Assistant Professor-Randolph-Macon College; Ryan Cheek, Assistant Professor in Technical Communication-Missouri University of Science and Technology
As artificial intelligence (AI) reshapes the communicative environment of contemporary life, liberal arts educators must reimagine student learning related to practices of persuasion, argumentation, and rhetorical reasoning. Our panel brings together teacher-scholars who examine the intersections of AI, persuasion, and liberal arts pedagogy across oral communication, digital studies, and technical communication contexts. We consider how generative AI has become a technological tool for producing persuasive messages and how it is shaped by cultural, political, and social controversies surrounding the costs associated with the rapid adoption of AI into higher education.
Presenters consider how AI transforms foundational practices of argument construction, evidence evaluation, and ethical decision-making. In the Public Speaking classroom, students use large language models to help them discover and refine arguments for persuasive speechwriting. They also practice oral communication using virtual reality AI voice analysis and delivery-feedback tools to refine their formal presentations. In Introduction to Digital Studies, students debate about the ethics of AI data collection and training, model behavior, deployment and use, and downstream impacts. They also design and test custom AI chatbots to examine how interface design, training data, and conversational style collectively persuade users and shape digital meaning-making. Students in Technical Communication leverage AI to craft technical content through practices of reuse, modular design, and continual iteration, opening space for reconsidering authorship and argumentative integrity in industry. These case studies collectively demonstrate how AI operates as a tool not only for generating persuasive content, but for shaping how students invent arguments, construct credibility, engage audiences, and make ethical communicative choices.
A2. Flourishing Through Friction: How Varied Faculty AI Approaches Cultivate Student Adaptability and Agency
Susan Purrington, Harold F. Wiley Generative AI Teaching and Learning Fellow-Connecticut College; Matt Gardzina, College Librarian and VP of Information Technology-Connecticut College
Liberal arts graduates enter a workforce increasingly shaped by artificial intelligence, yet their distinct value lies in capacities that transcend technological trends: critical thinking, ethical reasoning, creative problem-solving, and adaptive expertise. This presentation argues that preparing students for this reality requires strategic balances of AI integration and AI-resilient pedagogical practices among faculty, neither rejecting AI tools nor allowing them to diminish the intellectual development central to liberal arts education.
Drawing on Connecticut College’s AI@Conn Initiative—a three-year project funded by the Harold F. Wiley bequest—we present a dual-focus framework designed to maximize student success. The “integration” dimension equips students to work effectively alongside AI, understanding its capabilities, limitations, and appropriate applications within their disciplines. Students learn to leverage AI as a tool for research, iteration, and exploration while maintaining intellectual ownership. The “resilience” dimension develops cognitive capacities that remain valuable regardless of technological change: nuanced argumentation, contextual judgment, interdisciplinary synthesis, and original insight.
We explore how students benefit from navigating between faculty who thoughtfully integrate AI and those who maintain AI-resilient pedagogies. This diversity of approaches, rather than institutional uniformity, prepares students to work flexibly across varied professional contexts where AI adoption differs widely. Students develop metacognitive flexibility to recognize when AI enhances their work and when intellectual labor independent of AI produces deeper learning. Participants will gain frameworks for supporting faculty across the AI adoption spectrum and communication tools for articulating how faculty diversity in AI integration serves student preparation.
11:30 am – 12:30 pm: Lunch Break
12:30 – 1:30 pm: Lunch Keynote with Theresa L. Burriss
1:45 – 2:45 pm: Concurrent Session B
B1. A Values-Focused Approach to AI Integration in the Liberal Arts Classroom
Emily Wierszewski, Associate Professor of English-Seton Hill University; Susan Eichenberger, Associate Professor of Sociology-Seton Hill University; Christine Cusick, Professor of English-Seton Hill University; Debra Faszer-McMahon, Professor of Spanish-Seton Hill University
The mission of liberal arts institutions is to develop independent, critical thinkers who are prepared for meaningful engagement in personal, professional, and civic life. As external forces push for AI integration in higher ed focused narrowly on career readiness, liberal arts colleges must ask whether and how AI serves this broader humanistic mission. As Adamson (2025) urges, “As educators, we need to instead begin with the dignity and beauty of the person and conform our education and use of technology to that starting point” (p. 15). In this workshop, we will share the grassroots, AI literacy professional development model we developed to help liberal arts faculty center students as whole persons as they adapt to AI. This model, which we piloted with 30 participants during the past academic year, is transferable to other liberal arts contexts through its focus on values rather than tools. We will engage participants in some of the foundational activities of the model, including a values identification and exchange using Moulton’s (2025) “Analog Inspiration” card deck, as well as brainstorming activities to generate assignments that are values-based and AI adaptive. Presenters, who represent a variety of disciplines including Sociology, English, and Spanish, will also share their experiences participating in this model, including their AI adaptive assignments, and reflect on the significance of AI professional development grounded in the values of the liberal arts. Participants will leave with a potential framework for values-centered AI integration to use at their own institutions and at least one AI assignment idea they can implement in their courses.
B2. AI Genesis: Infusing AI Literacy in the Humanities, Social Sciences, and Sciences Curricula
Marcus Messner, Associate Dean for Faculty and Academic Affairs for Humanities and Social Sciences; Amy Rector, Associate Dean for Faculty and Academic Affairs for Sciences and Mathematics; James Fritz, Associate Professor-Philosophy; John Skaritza, Director of Information Technology; Joshua J. Smith, Special Assistant to the Dean for Innovative Learning and AI; All in the College of Humanities and Sciences at Virginia Commonwealth University
As Gen AI applications rapidly expand across professional fields, universities find themselves under immense pressure to advance AI studies and training for their students beyond computer science and engineering majors. Through coordinated leadership among the Dean’s Office, faculty, and staff, the College of Humanities and Sciences at Virginia Commonwealth University advanced a fast-track initiative across 18 departments to provide AI education to all majors while initiating a new undergraduate AI requirement. In a first step, the College launched a new Minor in AI Studies in Humanities and Sciences that students can add to their plans of study, whether their major is in Physics, Philosophy, or English. Faculty representing disciplines across the College united to build a robust interdisciplinary AI-focused curriculum of 27 courses, allowing students to select six to complete the minor. Course topics range from hands-on training in storytelling and health promotion with AI to the study of AI governance and the impact of AI on climate change, giving students the opportunity to explore AI topics related to their interests and fields of study. In a second step, the College implemented an AI Literacy requirement in most of its programs, requiring students to complete at least one AI-focused course during their studies. A multidisciplinary team will join this panel to share how the AI Studies Minor and AI Literacy requirement were conceived and implemented, highlighting key challenges, lessons learned, and emerging opportunities. Two administrators will outline how the interdisciplinary approach was created in a short time to meet industry as well as curricular demands. Three faculty and one staff member will provide additional insights into how they approached their own AI-focused course developments and AI support and mentoring initiatives.
B3. AI’s Positive Potential for Achieving Integrative Knowledge
Jill LeRoy-Frazier, Professor and Chair of Cross-Disciplinary Studies-East Tennessee State University; David Frazier, Senior Instructor of Software Engineering-University of Virginia’s College at Wise
Collaborative interdisciplinary study and research (IDSR), as Julie Thompson Klein has noted, should leave every participant an equal expert in the new knowledge area; disciplinary boundaries should erode, and the new knowledge produced should reflect the integration of disciplines rather than reinforce the blind spots and territoriality of disciplinary expertise. Similarly, Allen F. Repko, Rick Szostak, and Michelle Phillips Buchberger discuss the student researcher’s process of developing adequacy in relevant disciplines, identifying conflicts between and creating common ground among them, and constructing and communicating a more comprehensive understanding of the problem or concept than disciplinary approaches have been able to achieve. Klein notes, however, that in practice, such models rarely ascend to the transdisciplinary level ideal for new knowledge production. Many IDSR projects instead emphasize only the instrumental steps, while neglecting the fully integrative stages of inquiry. At the heart of the process is what Klein describes as “resolving disciplinary conflicts by working toward a common vocabulary (and focusing on reciprocal learning in teamwork)” (189), but its difficulty often prevents an IDSR project’s full realization.
What if, though, AI systems could be used as a mediator between disciplines, translating their assumptions, theories, methodologies, and evaluative standards for participants, who could then reduce the time needed to attain disciplinary adequacy and focus more intently on collaborative learning and problem-solving? How, in turn, could linking this use of AI to their engagement with interdisciplinarity in the liberal arts context help both faculty and students reimagine the positive potential of AI as a cognitive tool for the application of broad-based understanding and critical thinking to academic as well as civic, personal, and professional life? To explore these questions, we will use examples of current issues such as the location and impact of AI data centers. Many academic areas can provide relevant perspectives, including economics, environmental studies, public health, and computer science, meaning that a term such as “risk” can mean very different things to different stakeholders. We will show how AI could be used to bridge these gaps by translating foundational concepts from one field to others, and ultimately to facilitate the development of richly interdisciplinary, on-the-ground solutions.
B4. Dual Paper Session
B4a. Teaching the Torment Nexus: How Studying Science Fiction Can Illuminate Our Perspectives on Current Advances in Artificial Intelligence
Laurie Cubbison, Professor of English-Radford University
In a 2021 tweet, writer Alex Blechman coined the phrase “Torment Nexus” to satirize engineers’ eagerness to build science fiction technologies drawn from stories that demonstrate why those technologies are dangerous. We teach the Liberal Arts at a time when many such technologies are being developed and used. As citizens and consumers, we don’t always realize the extent to which our perceptions of these technologies have been shaped by science fiction. The fear and excitement such technologies evoke are flavored by The Matrix, The Terminator, and HAL 9000. In this presentation, I argue for the inclusion of science fiction in the AI curriculum many colleges and universities are currently developing. Science fiction’s role is to explore the possibilities, good and bad, within the future. When Spike Jonze directed Her (2013) with Joaquin Phoenix and Scarlett Johansson, the idea of a personal relationship with an AI assistant was still in the realm of science fiction. By 2024, OpenAI’s CEO Sam Altman was trying to recruit Johansson to provide the voice for the company’s GPT-4o, a role she rejected. The filmmaker, the actress, and the CEO thus become characters in the science fiction universe as it comes to life.
A course on science fiction not only allows for the examination of the utopian, dystopian, and corporate visions of how artificial intelligence will be enacted, but also for the examination of our existing emotional responses to these technologies. We have a history of reading and viewing science fiction that warns of the consequences of these technologies, and those warnings have seeped into our bones. A course on artificial intelligence in science fiction can illuminate the effect of these technologies on our psyches and societies.
B4b. Toward a Framework for Critical–Ethical AI Literacy (CEAIL) in University Guidance: An Examination of ISU’s Guiding Principles for AI Use
A B M Shafiqul Islam, PhD Student-Illinois State University
Writing has always been a technology for making and sharing meaning, and as a field of study, it remains continually open to emerging technologies (MLA–CCCC, 2024). The rise of generative AI, especially large language models (LLMs), has added a new and complex dimension to writing. It offers new possibilities for creativity, collaboration, and access, but it also raises serious concerns that call for thoughtful, ethical, and responsible use. Meanwhile, many universities have started developing policies to guide how students and faculty engage with AI. However, most of these policies focus narrowly on preventing plagiarism or maintaining academic integrity, rather than addressing the deeper ethical and pedagogical questions that AI brings to higher education.
This study examines Illinois State University’s (ISU) Guiding Principles for AI use through a critical-ethical lens. It draws from Posthuman Ethics (Braidotti, 2013), Critical Digital Pedagogy (Bali, 2020; Stommel, 2020), and Critical AI Literacy (Bali, 2020; Long & Magerko, 2020) to develop what I call the Critical–Ethical AI Literacy (CEAIL) framework. This framework helps explore whether ISU’s principles promote awareness, reflection, and responsibility in areas such as authorship, data ethics, epistemic justice, and rhetorical agency, or whether they fail to do so. By placing ISU’s document in conversation with broader academic discussions from MLA–CCCC, NCTE/ELATE, AWAC, and UNESCO, this project identifies where the institution aligns with or diverges from global and disciplinary standards. Ultimately, the goal is to provide recommendations that move beyond compliance-based approaches and toward more inclusive, fair, and reflective practices that prepare students, teachers, and administrators to use AI critically, ethically, and with care in academic contexts.
3:00 – 4:00 pm: Concurrent Session C
C1. Recentering the Human Elements of Learning Through Kindness in the Classroom
Kayla Shearer, Faculty Development Consultant in Teaching and Learning-University of North Carolina at Chapel Hill; Marissa Stewart, Associate Director for Faculty Development in Teaching and Learning-University of North Carolina at Chapel Hill; Emily Boehm, Senior Faculty Development Consultant in Teaching and Learning-University of North Carolina at Chapel Hill; Bob Henshaw, Instructional Consultant-University of North Carolina at Chapel Hill
Generative AI can take a number of roles in students’ lives: tutor, creative partner, life coach, and even friend. Yet in academic settings, students encounter mixed messages about its place in their work, and unauthorized use can land them in hot water. Instructors, too, are faced with decisions about if and how to use AI to assist with teaching tasks like grading and communication. The insertion of AI into relationships among students and instructors has the potential to fundamentally alter both learning and teaching experiences and outcomes. The Liberal Arts are uniquely poised to influence students’ burgeoning relationships with AI by emphasizing the critically important human elements of learning like problem solving, critical thinking, and effective communication. Working within the framework of A Pedagogy of Kindness (Denial, 2024), we propose a classroom environment built around compassion toward self and students as a tool for recentering authentic human relationships. This effort is timely in an academic culture of anxiety that AI use will replace student thinking, which often leads to overemphasis on policing of academic dishonesty.
In this interactive workshop, we will explore how a human-centered pedagogy can inform some teaching challenges posed by AI. We will introduce kindness as a teaching mindset before engaging participants in brief case studies that ask them to apply kindness to self and students. Then, we will ask participants to self-select into groups to brainstorm more ways that we can use this framework to strengthen and center human relationships, even as AI becomes more present in our classrooms. Participants will leave with concrete ideas for supporting Liberal Arts learning values, focusing on students’ development and our own wellbeing throughout our intertwined academic careers.
C2. Dual Paper Session
C2a. Traditional Knowledge, Indigenous Archives, and Data Sovereignty: What AI is Missing
D. Brad Hatch, Cultural Resources Officer-Patawomeck Indian Tribe of Virginia; Lauren Van Valzah, Student-University of Mary Washington
The rapid adoption and promotion of AI in the workplace, at home, in entertainment, in government, and in the academy positions this new technology as a tool that has the ability to make our lives easier, increase efficiency, and (explicitly or implicitly) surpass the capability of the human mind and humanity in general. However, as we are all fully aware, digital tools, like AI, are only as useful as the information that they are fed. Additionally, AI lacks the ability to interpret data with nuance and context. While it may provide information and perceived fact-based conclusions, no machine (and few people) can replicate or, arguably, reliably interpret the variety of human experience from multiple perspectives.
One venue that challenges the perceived power of AI is the interpretation and preservation of Indigenous culture. Thousands of years of traditional knowledge, oral history, community practice, and connection to lands and waterways, coupled with centuries of erasure, colonialism, slavery, and genocide make the study of Indigenous communities highly complex and notoriously controversial. Questions of who owns data, what can be shared and with whom, and what it means to the community are central when working with contemporary Indigenous communities. These concepts and the ethical questions they raise in the wholesale application of AI to work with Indigenous communities are at the forefront of a digital archive project currently being conducted by the Patawomeck Indian Tribe in conjunction with Virginia Indigenous Nations in Higher Education (VINHE), James Madison University, and the University of Mary Washington.
C2b. Coding the Flavor: Teaching Vibe Coding in Food Studies
Krystyn Moon, Professor-University of Mary Washington
The emergence of vibe coding, a term coined in 2025 to describe the use of AI-based software to code in natural language, has generated meaningful debate among tech enthusiasts and their critics. One of the many issues related to vibe coding is whether it can replace human coding, a process that can be time-consuming and expensive. The cost-cutting possibilities of this emerging technology intrigue employers, many of whom are looking to AI to increase profits and reduce expenses. As such, higher education is looking for ways to incorporate vibe coding, along with other AI-based tools, into the classroom to better prepare students for the workplace.
One of the possibilities of vibe coding is the development of apps that can conduct research. Research, as most scholars would agree, is a multi-faceted process filled with pitfalls, dead ends, and aha moments. It also takes time, sometimes years, to complete a project, all the while acknowledging that there is more evidence to be uncovered. This presentation provides a case study of vibe coding in the classroom as a way to explore its potential for conducting undergraduate research. AMST 204: American Foodways will use three research methods for its Digital Recipe Project: 1) physical books, 2) online databases, and 3) vibe coding. By putting vibe coding in conversation with other research methods, students will have the opportunity to explore the strengths and weaknesses of each approach and ultimately decide for themselves which one works best for them and why. Finally, the incorporation of vibe coding into a multi-method research project will allow students to learn about AI-based tools in a controlled setting and move the conversation about AI and the humanities in new directions.
C3. AI Literacy Without Prerequisites: Developing Hands-On Activities for Liberal Arts Students
Karen Anewalt, Professor of Computer Science-University of Mary Washington; Jennifer Polack, Professor of Computer Science-University of Mary Washington
As artificial intelligence increasingly shapes every discipline, AI literacy has become essential across liberal arts education. However, most technical AI courses assume computer science and mathematical prerequisites that exclude non-technical students, creating a substantial gap in preparing all students to be informed, critical citizens in an AI-shaped world. This paper addresses this challenge through a systematic research effort to identify, evaluate, and develop hands-on activities that make foundational AI concepts accessible to liberal arts students without requiring extensive programming or mathematical backgrounds.
Our collaborative team of two computer science faculty and two undergraduate student researchers conducted a three-phase investigation during Spring 2026. In the discovery phase, we researched potential interactive tools including AI demonstrations, no-code machine learning tools, and simulation environments. During evaluation, student researchers, bringing both technical knowledge and learner perspectives, assessed activities for accessibility, conceptual effectiveness, engagement level, and integration with ethical and social dimensions.
Through this process, we identified and refined a set of pedagogical approaches suitable for general education courses. These hands-on experiences make abstract AI concepts tangible while naturally incorporating humanistic questions about bias, transparency, agency, and societal impact. Activities range from interactive simulations to drag-and-drop coding exercises that allow students to explore topics such as computer vision, natural language interaction, and autonomous navigation.
This research is intended to inform the development of a new general education course designed to provide technical AI foundations for students across all disciplines. By demonstrating that technical understanding can be made accessible through carefully designed interactive experiences, this work offers a model for integrating AI literacy throughout liberal arts curricula. We will share tested activities, implementation strategies, and lessons learned to help colleagues across disciplines envision how AI education can be meaningfully integrated into their own courses.
C4. From Thinker to Cognitive Manager: How “Practical Uses of AI” Rewire the Student’s Relationship to Self
Alonzo Carlos DeCarlo, Professor-University of Illinois
Artificial intelligence is increasingly introduced into liberal arts classrooms through an apparently benign and pragmatic question: What are the practical uses of AI in teaching and learning? From a psychological perspective, this framing is not merely instructional; it is formative. It reshapes how students relate to their own thinking, moral agency, and emerging intellectual identity. Drawing on the author’s prior conceptual work on opprejudice and the psychological operationalization of liminal space, this presentation argues that instrumental AI pedagogy functions as a subtle form of cognitive domination. Opprejudice, a covert process through which subjugation is reproduced via practices so normalized that their harm becomes difficult to detect, offers a critical lens for examining AI integration that privileges efficiency, usability, and output. Within this framework, students are trained to manage cognition rather than inhabit it. Intelligence becomes something to optimize, outsource, and monitor rather than a responsibility to cultivate. Reflection is valued only insofar as it produces visible results.
The presentation reframes the AI-enabled classroom as a liminal psychological threshold: a suspended space between intellectual formation and cognitive delegation. In psychological terms, liminal space is where uncertainty, discomfort, and epistemic risk enable identity development, ethical reasoning, and intellectual confidence. When this space is prematurely resolved through AI-driven productivity, it collapses, foreclosing the development of moral agency, ownership of thought, and the capacity for solitude with ideas.
The central claim is intentionally provocative: the greatest threat posed by AI in liberal arts education is not student use of technology, but pedagogical practices that habituate students to ask “What is this good for?” before they ever learn to ask, “What is this doing to me?” The result is the emergence of a new psychological subject, the student as manager of cognition rather than thinker, and a liberal arts education stripped of its formative and ethical core.
4:15 – 5:15 pm: Concurrent Session D
D1. Reading the Word, the World, and the Code: Applying Critical Literacy to AI
JT Torres, Director of the Harte Center for Teaching and Learning-Washington & Lee University; Luis Tercero Herman, Philosophy and Data Science Major-Washington & Lee University
This session proposes an experimental, critical-literacy–driven approach to AI education by introducing SiM: an AI rapper who narrates, questions, and problematizes its own existence. Rather than positioning AI as a neutral tool or a futuristic inevitability, SiM functions as a pedagogical artifact—a voice that exposes the cultural, ideological, environmental, and epistemic assumptions embedded in large language models.
Grounded in traditions of critical literacy (e.g., Freire and contemporary AI literacy frameworks), this session invites college educators to examine AI not only for what it does but for what it says, reproduces, and silences. Through dense, satirical hip-hop verses, SiM performs themes of algorithmic bias, sycophancy, hallucination, energy extraction, and the post-truth economy—rendering abstract AI critiques affective, memorable, and discussable.
The presentation treats AI as both a tool and a subject of study. Educators will explore classroom applications where students co-write with SiM, remix verses, annotate contradictions, or deliberately provoke failures and hallucinations to surface the limits of AI reasoning. These activities foreground process over product, positioning AI as a site of inquiry rather than a shortcut to completion.
D2. Dual Paper Session
D2a. theturingtest@75: Seeking the Point of AI, in the Liberal Arts and Elsewhere
Liam Harte, Professor-Westfield State University
Alan Turing’s seminal 1950 article “Computing Machinery and Intelligence” predicts that, by 2001, a computer will be able to play “the imitation game” with a human interrogator in such a way that the latter will have at best a 30% chance of telling the computer apart from a human participant. Strikingly, after canvassing the various positions that contradict his view, Turing admits frankly that he has “no very convincing arguments of a positive nature” to support it. While the essay has nevertheless been a longstanding inspiration for those who pursue the ambition of building something that can pass what today is known as the Turing test, it also fails to supply any answer to a question that is important to the entire enterprise of artificial intelligence: What precisely is the point of having computing machinery that is indistinguishable from a human being? We are often told that AI will revolutionize education, but it is more difficult to get anyone to specify in what the revolution will consist. For instance, a clear educational use-case of AI is that of collating information very quickly. But, without a critical intelligence that is not itself AI, such collations remain mere collations—and, indeed, collations which could simply overwhelm the capacity of the critical intelligence to parse them, thus raising the possibility that the collated information may not only be significantly faulty, but may also not convey anything to anyone. Unless the ambition is for a civilization of AIs that communicate with (and therefore educate) none but each other, it seems that the question of the purpose of AI in the liberal arts and elsewhere remains as open as Turing left it seventy-five years ago.
D2b. Vibe Coding a Speech Feedback Tool
Michael Reno, Senior Lecturer of Philosophy and Assistant Director of the Center for AI and the Liberal Arts-University of Mary Washington
There are few AI tools that are natively multimodal and even fewer that are accessible for students. This is unfortunate, since we are tasked with teaching not only the written expression of ideas but also the oral. The tools I’ve found are either cost-prohibitive or don’t actually do what they at first glance appear to do. For example, although ChatGPT has some native handling of tone and cadence in live speaking mode, and although some of its models claim to be multimodal, it uses Whisper to transcribe the audio and then analyzes the spoken word as written text. This is insufficient for speech pedagogy.
In the spring of 2026, I am teaching an Aesthetics course. This is a speaking-intensive course, which means that it has several learning outcomes related to oral communication. An AI tool that could analyze an uploaded video or audio file for both delivery and content would be invaluable: students could practice and get feedback in a low-stakes setting before practicing with a live audience and/or delivering the final speech.
Because of the limits of other current technology, I’ve “vibe coded” an interface with the Gemini API that allows students to upload a recorded audio file and get feedback specific to my assignments in that course. Ideally, this will allow students to simply record their speech on their phone, upload the file to the web interface, and get some meaningful feedback on their first attempt. My estimate of cost using a single four-minute recording I made to test the system comes in at less than a penny per speech.
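For readers curious about the general shape of such a tool, a minimal sketch is given below. This is a hypothetical illustration, not the presenter’s actual code: the rubric text, function names, and model choice are all assumptions, though the upload-then-prompt pattern follows the Gemini Python SDK (google-generativeai).

```python
# Hypothetical sketch of a Gemini-based speech-feedback helper.
# Not the presenter's actual interface; rubric wording, function
# names, and the model chosen here are illustrative assumptions.

RUBRIC = (
    "You are a speech coach. Comment on delivery (pacing, pauses, "
    "vocal variety, filler words) and on content (thesis clarity, "
    "organization, use of evidence), then give three concrete suggestions."
)

def build_prompt(assignment_notes: str) -> str:
    """Combine the fixed rubric with assignment-specific instructions."""
    return f"{RUBRIC}\n\nAssignment context: {assignment_notes}"

def speech_feedback(audio_path: str, assignment_notes: str, api_key: str) -> str:
    """Upload a recorded speech and return the model's written feedback."""
    import google.generativeai as genai  # imported lazily; requires the SDK
    genai.configure(api_key=api_key)
    audio = genai.upload_file(audio_path)  # File API accepts mp3/m4a/wav
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content([build_prompt(assignment_notes), audio])
    return response.text
```

Because the Gemini model receives the audio file itself rather than a transcript, feedback can in principle address delivery (pace, pauses, tone) as well as content, which is the gap the presenter identifies in transcription-based tools.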
The talk will consist of a walk-through of the “vibe coding,” its implementation in the course, reflections on student resistance, and reflections on its usefulness for improving speaking skills.
D3. A.I. Ethics, Intellectual Theft, and Knowledge Production in Liberal Arts Higher Education
Sarah Evans, Associate Professor of Communication-Molloy University; Matt Applegate, Associate Dean of Arts & Sciences and Professor of English-Molloy University
This presentation argues that the ethical and cultural frameworks that should guide AI use in the classroom are those that recognize how LLMs have been built on intellectual property theft, but also advocate for knowledge production to be free and autonomous. Intellectual property theft is the precondition for LLMs to function in the place of human knowledge and creativity. Acknowledging this requires recognizing how LLMs rely on theft to further the privatization of human knowledge and creativity in the service of corporate capture. However, recognition is not the end goal. AI use, especially in a classroom setting, provides educators with an opportunity to preempt the further capture and privatization of human knowledge and creativity by working toward a radically open ethic of non-commodified exchange. This presentation positions this argument in traditions concerned with the ethics of knowledge production, digital technologies, and the politics of the classroom that precede contemporary forms of AI. The Edu-factory Collective’s Toward a Global Autonomous University and Christopher Newfield’s Unmaking the Public University: The Forty-Year Assault on the Middle Class ground this presentation as we explore ethical and cultural frameworks that should guide AI use in the classroom.
5:30 – 6:30 pm: Reception
Sponsored by the Office of the Dean of the College of Arts & Sciences-University of Mary Washington
Continue today’s conversations by joining us at a reception immediately following the conclusion of Concurrent Session D.