Displacement? Productivity among the Machines

Session Overview

Date and Time: Tuesday, August 20, 4 – 5:30pm

Location: Broad Theater, Santa Monica College Performing Arts Center, Los Angeles

Discussant: Shaheen Amir, California Office of Data & Innovation

Livestream: Free for EPIC Members

It’s frequently suggested that AI should support humans, rather than replace them. This is a critical moment to shape if and how that will happen. These presentations pinpoint vital human capabilities and human-AI collaborations, offering frameworks to guide the development of effective socio-technologies.

▶️ Download this deck for links to free articles, case studies, and videos in the EPIC Library related to this session.

Presentations

Experts-in-the-Loop: Why Humans Will Not Be Displaced by Machines When There Is “No Right Answer”

PAPER PRESENTATION

Janneke Van Hofwegen, Senior UX Researcher, Google
Tom Hoy, Partner, Stripe Partners

Today many EPIC practitioners are guiding the implementation of LLM technology across a variety of business domains, just as they guided previous generations of expert systems (Beckwith and Sherry, 2020). Ethnographers are therefore positioned to shape the future definition of expertise across many industries, with significant downstream consequences.

Our paper makes three contributions to support ethnographers seeking to influence these shifting foundations. First, it provides a robust interdisciplinary approach for defining the ongoing value of human expertise, including how to build bridges with data science and engineering colleagues to maximize impact. Second, it provides a clear, domain-agnostic framework that serves as a starting point for researchers seeking to conceptualize the space within their specific field. Third, it situates this new technology within the broader trajectory of expert systems, helping practitioners frame and contextualize their insights for wider organizational hierarchies.

Janneke Van Hofwegen is a UX Researcher on the Real World Journeys team on Google Search. She employs ethnographic and quantitative methods to learn how people are inspired by and decide to embark on experiences in the world. She holds a PhD in Linguistics from Stanford University. Her linguistics work focuses on understanding human behavior and stylistic expression through the lens of language and group/individual identity.

Tom Hoy is Partner and co-founder of Stripe Partners. His expertise lies in integrating social science, data science, and design to solve concrete business and product challenges. The frameworks developed by Tom’s teams guide the activity of clients including Apple, Spotify and Google. Prior to co-founding Stripe Partners, Tom was a leader in the social innovation field, growing a hackathon network in South London to several hundred members to address local causes.

From Efficiency to Empathy: Ethnographic Perspectives on AI Chatbots in Customer Support

CASE STUDY PRESENTATION

David Rheams, Senior Business Architect, Atlassian

This case study examines the research methods used in a three-month ethnographic project conducted at Atlassian, which focused on how engineers interacted with an AI chatbot in customer support. It addresses the challenges of integrating AI with existing communication practices and underscores the necessity of human oversight. The findings advocate for ethical AI integration and offer lessons learned from researching sensitive topics around labor and technology. This research challenges the view of AI as merely a tool for efficiency, highlighting its potential to enhance human communication and relationships in the workplace.

David Rheams is a Senior Business Architect at Atlassian and a Lecturer at the University of Texas at Dallas in the Arts, Humanities, and Technology department. With over two decades of experience working in technology and communications, David specializes in the intersection of communications, technology, and culture. His current research focuses on the ethnographic study of AI usage in customer support, offering insights into how AI tools are co-produced by and impact human practices.

How Commoditized Empathy Can Impede Mutual Understanding

PECHAKUCHA PRESENTATION

Nicole Laborde, Senior Design Researcher and Ethnographer, Sutherland Labs
Marise Phillips, Senior Director, Service Design, Sutherland Labs
Kellie Hodge, Senior Research Director, Sutherland Labs
Jamie Taylor, Visual Designer, Sutherland Labs

In the customer support industry, a foundational business strategy is to create the illusion of proximity between agents and the customers they serve. Providers in this sector guarantee a level of human availability that, as they say, follows the sun. During COVID, this illusion of proximity fell away when support workers, displaced from contact center production floors, began answering calls from home. And while knowledge workers enjoyed the privilege of building virtual camaraderie with their fellow work-at-homers, support agents often experienced the opposite. Sounds of life (animals, children, street noise) leaked through the phone. These signals of distance and “foreignness” could provoke irritation, even ire, from customers. This PechaKucha posits an opportunity for our work as ethnographers to generate a new norm where homogeneity is not a condition for empathy. Too often, in the current state, neither agents nor customers are satisfied. Might we, as ethnographers, work to embed empathy into corporate policies in a way that truly embraces diversity? And, in the case of contact centers, is there a way we can generate empathy to encourage reciprocity so that both parties – the agent and the customer – achieve their goals of seeing and feeling seen?

A cultural anthropologist, Nicole Laborde brings over 20 years of research experience in international and domestic settings, often among vulnerable populations and on uncomfortable topics. She sees ethnography, empathy and storytelling as central to her work, and has a talent for respectfully eliciting stories and making meaning out of experiences. Out of the office, she loves thrift shopping with her daughters, practices karate, and cares for too many pets.

Marise Phillips leads the service design practice at Sutherland. Her experience in participatory design facilitation empowers cross-functional teams to co-create empathy maps, ecosystems, journey maps, and experience blueprints. At Wells Fargo, she managed the customer insights team shaping digital banking experiences whilst helping the bank become a more agile organization. In her spare time, she leads ethnographic initiatives to promote ecosystem restoration in the San Francisco Bay.

Kellie Hodge has a diverse background in design research and project management. Kellie works as the Senior Director of Design Research at Sutherland Labs, where she oversees research efforts. Prior to that, she held the role of Principal Design Researcher at the same company. Kellie also has experience as a Principal Design Research Lead at Nuna Inc, where she focused on healthcare industry partnerships. Off duty she loves sailing and snowboarding with her family.

Jamie Taylor brings his passion for storytelling and visual communication into his work as a service designer. Before joining Sutherland, Jamie tried his hand at many professions: freelance design, running a ski business, owning a print shop, working at a brewery, and taxi driving – just to name a few! In his spare time Jamie loves to adventure; his interests include rock climbing, hiking, and spending time in the garden with his wife and two daughters.

Shifting Mental Models of GenAI

PAPER PRESENTATION

Soojin Jeong, Head of Insights, AI User Research, Google
Anoop Sinha, Research Director, AI & Future Technologies, Google

Our study explores the evolving relationship between humans and AI, highlighting how AI mental models are shaped among consumers and why trust is central to that relationship. We argue that while predictability in AI is crucial, it alone is not enough to foster trust. Interaction design addresses some trust issues, but the lack of real consequences for AI systems that breach trust remains a challenge. Until AI systems face tangible repercussions for trust violations, human trust will remain limited and conditional. By suggesting practical solutions for accountability, our research contributes to the development of socio-technologies that prioritize human capabilities and foster productive human-AI relationships.

Soojin Jeong is a passionate user researcher and team builder with 23 years of experience in the most advanced and fast-moving technology categories, specializing in future product innovation and AI UX. Soojin has built innovation insights teams at Samsung HQ (Korea), Intel (Japan & US), Meta, and Google, and is dedicated to discovering user insights that are critical to making AI technology more humane, natural, and accepted (not feared or distrusted).

Anoop Sinha is Research Director in AI & Future Technologies at Google. His current research interests include new interfaces, interaction research, and society-centered AI – applications with the potential for beneficial impact on society. Anoop has a PhD in Computer Science from UC Berkeley with a focus on Human-Computer Interaction.