As young adults navigate an increasingly digital world, concerns are growing about the potential for unhealthy relationships with social media and chatbots. But what if these AI interactions weren’t simply distractions? At MIT, a new course is exploring the possibility of designing chatbots as “moral partners” – digital guides focused on positive social interaction rather than addictive engagement.
A Unique Interdisciplinary Approach
The course, 6.S061/21A.S02 (Humane User Experience Design, or Humane UXD), was born from a friendship between Professor Arvind Satyanarayan, a computer scientist, and Professor Graham Jones, an anthropologist. Combining these two disciplines, the class encourages students to design artificial intelligence chatbots in ways that help users improve themselves.
Humane UXD is an upper-level computer science class that is also cross-listed with anthropology, allowing students to fulfill a humanities requirement. Professors Satyanarayan and Jones utilize methods from linguistic anthropology to teach students how to integrate human interactional and interpersonal needs into programming.
The professors met several years ago while co-advising a doctoral student’s research on data visualization for visually impaired people. According to Professor Jones, “There’s a way in which you don’t really fully externalize what you know or how you think until you’re teaching.” Professor Satyanarayan added that anthropology’s methods, like interviews and observation studies, have been “watered down” in the field of human-computer interaction, and this class aims to reintroduce those valuable techniques.
Student Projects Demonstrate Potential
Students in the class have already developed several promising projects, built with Google’s Gemini. One project, “Pond,” is designed to help college graduates navigate the challenges of independent adult life, offering advice on social life, professional development, and practical skills. Another, “News Nest,” aims to help young people engage with credible news sources in a fun and transparent way, using ten colorful bird characters to guide users.
A third project, “M^3 (Multi-Agent Murder Mystery),” experiments with making AI humane through engaging gameplay, incorporating chatbots powered by Gemini, ChatGPT, Grok, and Claude into a social deduction murder mystery.
The curriculum has already shown practical benefits: one student secured a chatbot startup internship directly linked to the skills learned in the class.
Looking Ahead
The MIT Morningside Academy for Design Curriculum Program, which funded the course, is currently accepting applications for the 2026-27 academic year, with a deadline of Friday, March 20. The success of this course could inspire similar interdisciplinary programs at other universities, and further development of these “humane” chatbots could influence the broader AI industry, encouraging more ethical and user-centered designs.
Frequently Asked Questions
What is Humane UXD?
Humane UXD, or 6.S061/21A.S02, is an undergraduate class at MIT that combines anthropology and computer science to teach students how to design artificial intelligence chatbots in humane ways.
Who created the class?
Professor Arvind Satyanarayan, a computer scientist, and Professor Graham Jones, an anthropologist, created the class last summer with a grant from the MIT Morningside Academy for Design.
What is the goal of the class?
The goal of the class is to encourage students to design AI chatbots that help users improve themselves and act as social guides, rather than simply being addictive distractions.
As AI becomes increasingly integrated into our daily lives, how can we ensure that these technologies are designed to support and enhance human well-being?
