Conducting our research

Difficulties with research

How do the research difficulties discussed in the video manifest in our lab?

  1. Complexity: Our lab collects data on a very complex human behavior: language.

  2. Variation within individuals: The same person can give us a different measurement from one time to another. If we ask them to rate how grammatical a sentence is, and then ask them to rate the same sentence 30 trials later, they might provide a different response.

  3. Variation between individuals: The way one person responds to something might be different from the way another person responds to the exact same thing. For example, in a reaction time task, children might respond significantly more slowly than adults, and even within a group, every child might have a different baseline speed with which they are able to respond. In rating scale studies, each person might differ in the way they use the rating scale; if the scale is 1 to 5, one person might never rate anything lower than a 3, while another is happy to rate some things as low as a 1.

  4. Measuring changes people: Demand characteristics are aspects of experiments that influence how participants respond. In our experiments, we usually sit in the room with the participants. This might cause them to respond in a certain way, because they feel more pressure to perform or be “correct”; or they might try to read the experimenter’s reaction to their responses to judge whether or not they are right, and adjust future responses based on their observations. In rating scale studies, participants are known to make certain assumptions - for example, that there are the same number of correct and incorrect answers on the “test” - and these assumptions alter the responses they make.

How do we reduce the influence of these difficulties?

These problems are always there, but we can be aware of them and adjust our research studies to try to reduce their influence and get more accurate data. In our lab we do this by:

  1. Keeping the study as consistent as possible between individuals and groups. If we give children stickers, we also have to give them to adults. If we give one participant positive feedback, we give them all positive feedback.

  2. Keeping instructions as consistent as possible between participants (and groups). If we explain the rating scale in a particular way to one person, we must explain it that same way to every other person (more on this in the experimenter bias section). This also means we need to explain things in the same way to both children and adults.

  3. Measuring an individual’s response to a given stimulus multiple times, then taking the average of their response.

  4. Normalizing data after it is collected, before comparing across individuals. For example, instead of comparing participants’ raw reaction times, we normalize their reaction time data (using z-scores or similar methods) and then compare the normalized reaction times.

  5. Being aware that the way we frame the experiment might change the way participants perform the task, so we keep the framing of the experiment the same for every individual participant.

  6. Being aware that sitting in the room with the participant can influence the way they respond in the experiment, so we ensure that we keep this consistent across individuals and groups (e.g., if we sit in the room with kids, we should also sit in the room with adults).

  7. Being aware that any changes we make to the experiment can have an influence on the data we collect. Before implementing any changes - no matter how small they seem - we run them by the entire research team.

  8. Running multiple conditions of an experiment in exactly the same way, under exactly the same circumstances. This allows us to ensure that any changes we observe across conditions are likely due to the variable we are manipulating (and not the demand characteristics of the task).
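Steps 3 and 4 above (averaging an individual's repeated measurements, then normalizing before comparing across individuals) can be sketched in a few lines of Python. The participant IDs and reaction times below are made up for illustration, and z-scoring is just one of the normalization methods mentioned:

```python
from statistics import mean, stdev

# Hypothetical raw reaction times (in ms), several trials per participant
raw_rts = {
    "child_01": [850, 920, 780, 990],
    "adult_01": [430, 410, 465, 455],
}

def zscore(values):
    """Normalize a list of values to z-scores (mean 0, sd 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Step 3: average each participant's repeated measurements
avg_rt = {pid: mean(rts) for pid, rts in raw_rts.items()}

# Step 4: z-score within each participant, so that differences in
# baseline speed do not swamp the comparison across individuals
norm_rts = {pid: zscore(rts) for pid, rts in raw_rts.items()}
```

After this step, a fast adult and a slow child are on the same scale: a z-score of +1 means "one standard deviation slower than that person's own average," whichever group they belong to.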

Experimenter Bias

Questions to consider:

  1. How was Clever Hans able to perform arithmetic? How did researchers make this discovery? What happened when experimenter bias was removed?
  2. What was the twist in the “bright” rats and “dull” rats experiment? What explains the difference the students observed between these two groups?
  3. What kinds of experimenter bias might happen in our research studies? Can you think of some ways to reduce or eliminate experimenter bias?

How is our lab vulnerable to experimenter bias?

There are a number of ways in which our experiments can be influenced by experimenter bias. For example:

  1. We sit in the room with the participant. Just like with Clever Hans, if the experimenter knows what the participant should be doing (e.g., what results we are hypothesizing), they might be giving off some unconscious cues to the participant. Importantly, the experimenter may not even realize they are doing this.
  2. We provide feedback to the participant. Because our participants are responding to questions or making decisions right in front of us, they often look to us to tell them whether or not they are correct. The experimenter might, without realizing it, offer feedback differently for responses they feel are “correct” and those they feel are “incorrect”.
  3. We sometimes know what condition the participant is in. Sometimes, even though we try to avoid this, it is not possible for the experimenter to be “blind” to the condition the participant is in. Just like the “bright” and “dull” rats, the experimenter’s expectations about what will happen in each condition could cause them to (1) judge the child’s responses differently or (2) unknowingly send signals that influence what the participant does. The same is true for our transcribers and coders - their knowledge about the experiment’s hypotheses could influence the way they score the data.

What lab systems do we have to protect against bias and confounds?

To try to protect us from experimenter bias, we employ a number of lab systems whenever we can:

  • Randomization of subject assignment. We try to assign the participant to an experimental condition randomly, so you as the experimenter are not aware of the condition (you are “blind”).
  • Experimenters are as blind as possible. Even when we can’t fully randomize, we try to keep the experimenter as blind as possible to the language or pattern the participant is being exposed to. For example, we have the participant wear headphones so the experimenter cannot hear (or at least cannot hear clearly) what the stimuli are.
  • Consistent feedback. When verbal feedback is provided, we require experimenters to deliver the same verbal feedback on every trial. To double check this is the case, we have transcribers and coders explicitly listen and code for feedback so we can analyze whether or not differential feedback was given.
  • We instruct the experimenter to interfere as little as possible. Even though the experimenter is often in the room, we ask them to deliver the instructions and feedback as written in the protocol, and otherwise to interfere with the experiment as little as possible to reduce the chances of unintentional bias.
  • We have an explicit protocol to follow. For each experiment, we have an explicit protocol to follow to ensure that each participant is run in exactly the same way. We have checks in place so a supervisor can determine whether or not a protocol was followed.
  • Transcribers and coders are blind to condition. When possible, we keep our transcribers and coders blind to the experimental condition as well.
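The first system above - randomized, blinded assignment - can be sketched in Python. The participant IDs, condition names, and seed here are hypothetical; in practice the generated list would be stored somewhere the experimenter cannot see it:

```python
import random

def make_assignments(participant_ids, conditions=("A", "B"), seed=42):
    """Pre-generate a balanced, shuffled condition assignment so the
    experimenter never chooses (and ideally never sees) the condition."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    # Repeat the conditions enough times to cover every participant
    pool = list(conditions) * -(-len(participant_ids) // len(conditions))
    rng.shuffle(pool)
    return dict(zip(participant_ids, pool))

# Hypothetical participants; each is randomly assigned "A" or "B"
assignments = make_assignments(["P01", "P02", "P03", "P04"])
```

Because the pool is balanced before shuffling, each condition ends up with (roughly) the same number of participants, while the order in which conditions are handed out is still random.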

What are the consequences of experimenter bias in an experiment?

If we find that experimenter bias has influenced the data we’ve collected, there are a few things we will have to do:

  • First, we will have to re-evaluate all the data, resulting in substantial work for everyone on the research team (e.g., transcribers and coders may have to re-transcribe and re-code all of the data).
  • We might decide we have no choice but to exclude some or all of the data collected by an experimenter. This means we would have to collect new, unbiased data to replace it.
  • In extreme cases, if the bias is discovered after a paper has already been published, we may need to retract a published research paper.

In short: tell someone if you are at all concerned that data being collected or coded in our lab might be biased in some way. We are a team and are all helping each other protect against these things. You will never get in trouble for pointing out a suspected bias, nor will the experimenter who may have introduced it. We are all human and we make mistakes; the goal is to catch mistakes as quickly as we can so we can solve the problem as quickly as possible.