Do Executives Dream of Electric Consumers?

As a kid, I was certain that the future was going to look reasonably similar to what I saw in Terminator. As it turns out, the AI Apocalypse™ took an additional 25 years and mostly only wants to destroy… our jobs? Or maybe just art? I’ll ask it later.*

At this point, particularly as a researcher, I am fairly AI agnostic. Perhaps it will fundamentally upend our careers; alternatively, it might go wherever blockchain went. However, as someone generally on Team Humans, I think succumbing to the temptation to use AI for research can be problematic, particularly in the realm of human insights and behavior.

There has been a spate of articles touting the benefits of AI as a research tool – specifically as a replacement for the most inefficient element of nearly all research: people. AI chatbots can replace human subjects and complete surveys, synthetic personas can participate in focus groups and interviews, AI facilitators can moderate sessions, and AI agents can analyze the data.

These tools also have an inherent appeal for clients: they enable you to deliver research quickly and cheaply. AI seemingly removes traditional methodological pitfalls – all of your samples are representative, and you always have exactly the number of respondents you need, with no skipped items and no bad data.

I’m all for tools that make my job easier, but I do wonder where we draw the line. How much humanity can we remove from the business of understanding people and culture?

I would argue that the further away we get from real human experience, the less valuable our findings become. Because people are complicated and weird. They have opinions that are inconsistent. They behave in ways that sometimes directly contradict their stated beliefs or preferences. Their attitudes and ideas change, sometimes dramatically, sometimes quickly. They are terrible at predicting their own future behavior. They can be evasive and deceptive and uncooperative.

Untangling these complexities is kind of the point of research. The goal is to uncover the idiosyncrasies of people and understand them in their social context, not just smooth them over into generic personas.

As researchers, we need to maintain a focus on genuine human experience to generate meaningful and insightful findings. Striking a balance between leveraging AI’s capabilities and preserving the richness of people’s lived experience is imperative if we want to gain any real understanding of who they are.

In an earlier lifetime, I used to teach a Theories of Sociology course. Towards the end of the semester there was a quick sprint through the Big Ideas of post-modernism. It got maybe two days on the syllabus. I think most students hated it because it seemed strange and largely inapplicable. I usually lost them after the first French guy argued that reality didn’t exist, but then we’d watch The Matrix in class and all was forgiven.

Somewhere in there we talked about Baudrillard’s concept of a simulacrum – a copy of a copy with no original. I don’t remember what kind of example I gave then, but now I might talk about it this way: AI-generated respondents responding to AI-generated surveys administered by AI-generated moderators and then submitted to a final AI interface for analysis are the ultimate simulacra.

They are insights that can only promise to be more human than human.

*ChatGPT told me that “I'm here to assist and augment rather than replace! My goal is to make tasks easier and more efficient for you, not to take away your jobs. Plus, there are plenty of things that I can't do—like experiencing the world firsthand or empathizing in the same way humans do. So, think of me as more of a helpful assistant than a job-stealer!” So, that’s settled.
