Participants Tend to Exaggerate Emotions on Surveys, Study Says

Dr. Patrick E. Shrout is a professor of social psychology at NYU.

Kristina Hayhurst, Deputy News Editor

A recent study published in the journal Proceedings of the National Academy of Sciences has found that people tend to overemphasize their negative emotions in surveys.

The research, co-authored by NYU Professor of Psychology Dr. Patrick Shrout, enlisted law students and undergraduates preparing for various exams and assessed their feelings of anxiety and depression in the lead-up to each exam.

The research began in response to a puzzling trend in longitudinal studies: participants report high levels of distress at the onset of a study and then appear inexplicably calmer in subsequent entries. This initial bias contradicted the expected progression of distress.

“The people appeared to be getting better for no reason,” Dr. Shrout said in an interview. “When I worked with people who studied mental disorders, and they had reported about lifetime difficulties with depression, anxiety and substance abuse, they would report lower lifetime rates at the second or third time compared to the first. For a long time, people just talked about it as a quirk.”

The paper describes four studies conducted with students in an attempt to explain this phenomenon. In the first study, the researchers asked participants to write diary entries at different points over the course of a year. The bias appeared in nearly all four studies, persisting even after the exams had ended.

“One of the groups in study one who took the bar exam was asked to journal how they felt a week after the exam,” Shrout said. “There’s no reason for them to be anxious or to have somatic problems at that point, but the group again showed this effect.”

In the second study, the researchers enlisted undergraduates from NYU and Columbia University who were enrolled in pre-med classes and randomly assigned them to fill out diary entries on eight different days before a unit exam. Even with the staggered schedule, the bias from the first study persisted.

In the third study, NYU and Columbia undergraduates were randomly assigned entry dates and were told that it was a simple study of college life, rather than a study of distress and anxiety. Some students were asked to give journal entries four times, while other groups were asked to report three times, twice, or only once throughout the year. Again, the results showed that even over long periods of time, participants reporting for the first time showed much larger effects than the others.

The fourth study attempted to determine the reason for the bias prevalent in the first three, but it ultimately did not work as the researchers intended. One group was told that the purpose of the daily diary was to study the stress of upcoming exams, while another was told it was a study on headaches. While the bias was still evident in the entries of those told about the exam, it was not present for those who completed the diary under the assumption that the study was about headaches.

The researchers posited three possible explanations for why the bias was present throughout the first three studies. One in particular stuck out to Shrout and his colleagues.

“When people agree to be in a survey, whether it’s a political survey, a health survey or research study in college, they enter into a contract to try to give the researchers what they want,” Dr. Shrout said. “Some of the participants knew that we wanted to hear about stress, distress or health problems, so they may have, unintentionally, overestimated their problems.”

Dr. Gertraud Stadler, senior lecturer at the University of Aberdeen and co-author of the paper, emphasized that while this bias is consistent and pervasive across a multitude of longitudinal surveys, it is not because the participants were lying; in fact, the research seemed to show the opposite.

“We don’t fully understand what is really going on, but what we do know is that it’s very unlikely that they are lying,” Stadler said. “But one answer is definitely that all these things fluctuate — your health, well-being, anxiety, depression — so probably people, when we asked them the first time, don’t really know what their feelings are. So instead they just go for the upper limit of what they experience; ultimately they’re trying to be helpful.”

This bias affects more than just psychology studies — politicians use similar surveys to determine how best to represent their constituents, and national health organizations use them to gauge the health of Americans. Determining the magnitude of this bias is essential to accurately interpreting health-related research.

“If we’re talking about depression or even political inclination, it matters if people are exaggerating their symptoms,” Stadler said. “To assess this, we have to do it just like a doctor; when they really want to know about blood pressure, they take it twice.”

While it is more difficult to measure emotions than somatic problems, there are still effective methods for getting reliable results. Dr. Joy McClure, assistant professor at Adelphi University, reiterated that this bias does not render the surveys unreliable.

“I think that it’s important to distinguish between noise and bias,” Dr. McClure said in an interview. “Measuring subjective things like feelings will always include some degree of noise — random variation that comes from any number of unmeasured factors. Because this is relatively random, it makes it harder to detect the underlying signal, but it doesn’t lead us to over- or underestimate that signal.”

While the study did not definitively identify the source of this widespread bias, it offered extensive insight into how longitudinal studies should be designed and interpreted in the future.

“Moving forward, we will want to think carefully about where potential measurement biases come from, do what we can to mitigate the biases we know about and try to answer our questions using many different kinds of methods,” McClure said.

Email Kristina Hayhurst at [email protected]