A hot potato: Companies, startups, and everyday individuals around the world are embracing generative AI at an increasingly rapid pace. Going forward, a lot of what you read will likely have been created by a machine, but is that a good thing? According to a new study, many people react negatively when they believe they're reading machine-generated text, finding the practice insincere and inauthentic.
Cornell University carried out the study, which split the test subjects into 219 pairs. Each pair was asked to discuss a policy issue over text message under one of three conditions: both participants could use a smart-reply platform to help generate their messages; only one participant could use it; or neither could. On average, smart replies accounted for 14.3% of sent messages, or roughly one in seven.
The good news for AI is that the researchers found using smart replies increased communication efficiency, positive emotional language, and positive evaluations from communication partners.
But the caveat is that anyone who believed they were talking to someone whose responses were generated by AI perceived that person as less cooperative and felt less of a connection with them.
“I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” said postdoctoral researcher Jess Hohenstein, lead author of the paper. “This illustrates the persistent overall suspicion that people seem to have around AI.”
A second experiment asked 299 randomly assigned pairs of participants to discuss a policy issue under one of four conditions: no smart replies; the default Google smart replies; smart replies from a different AI tool with a positive emotional tone; or smart replies with a negative emotional tone.
Conversations that used Google's Smart Reply or the positive-toned tool were more upbeat and carried a more positive emotional tone than those that used no AI or drew on the negative-toned responses. The researchers believe this illustrates AI's benefit in contexts that call for a positive tone, such as professional communications, though it comes at the cost of personalization.
“While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
The study indicates that AI has its place, but it shouldn't be used universally. Nobody wants to have a one-on-one conversation (or part of one) with a machine when they believe they're talking to a person, and some professional situations demand a human touch, such as writing a mass email about a recent shooting. Vanderbilt University in Tennessee created just such a letter using ChatGPT, much to the anger of those who received it.