
2.5: Evaluating Psychological Research

  • An example of why base rates matter: a researcher hypothesizes that people of German descent are especially prone to alcohol use disorder. She starts with a random sample of 10,000 people from the city of Inebriated, Indiana.
    • Participants are asked about their drinking habits and national background.
    • She finds that 1,200 citizens of Inebriated (12 percent of the sample) meet official diagnostic criteria for alcohol use disorder.

  • It's easy to forget about base rates when interpreting findings.
    • Base rates aren't particularly vivid and often "lurk in the background" of our minds.
    • The base rate of people of German descent in Indiana is 25 times higher than the base rate of people of Norwegian descent.
  • The fact that there are 15 times as many alcoholics of German descent as of Norwegian descent in Inebriated therefore doesn't support her hypothesis; the quick calculation below makes this concrete.
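To see why, divide each group's count of affected people by that group's size. A minimal sketch in Python, using made-up absolute counts chosen only to preserve the 25:1 base-rate ratio and the 15:1 ratio of affected individuals from the example:

```python
# Hypothetical counts: only the 25:1 and 15:1 ratios come from the example.
german_population = 2500       # residents of German descent (25x the Norwegian figure)
norwegian_population = 100     # residents of Norwegian descent

german_alcoholics = 150        # 15x the Norwegian figure
norwegian_alcoholics = 10

german_rate = german_alcoholics / german_population           # 0.06 -> 6%
norwegian_rate = norwegian_alcoholics / norwegian_population  # 0.10 -> 10%

print(f"Rate among German descent:    {german_rate:.0%}")
print(f"Rate among Norwegian descent: {norwegian_rate:.0%}")
# Once base rates are factored in, the per-person rate in the German-descent
# group is actually lower (15/25 = 0.6 of the Norwegian-descent rate).
```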
  • We've focused on misuses and abuses of statistics.
    • We want to immunize you against statistical errors you're likely to encounter in the newspaper, on TV, as well as on the internet and social media.
    • Statistics are a wonderful set of tools that can help us understand behavior.
    • It's best to keep a middle course when evaluating statistics, between dismissing them out of hand and accepting them uncritically.
    • When it comes to psychology, we should keep our minds open, but not so open that our brains fall out.
  • Next, we look at how to detect and correct flaws in research designs.
    • These same skills help us evaluate psychological claims that appear in the popular media.
    • Many of the studies reported there aren't trustworthy.
  • Most psychological journals send their submitted articles to reviewers who screen them for quality control.
    • Peer reviewers need to identify flaws that could undermine a study's findings and conclusions, as well as tell researchers how to do the study better next time.
    • We've learned the key ingredients of a psychological experiment and the pitfalls that can lead experiments to go wrong, so let's try our hand at being peer reviewers.
    • Doing this can make us better consumers of real-world research.
  • We will present descriptions of two studies that contain at least one hidden flaw.
    • Try to figure out what's wrong with the study by reading it.
    • Try to come up with a way of fixing the flaw.
  • You can see how close you came by reading the paragraph after it.
  • An investigator is testing the hypothesis that subliminal self-help tapes increase self-esteem.
    • She tells participants to play the tape for an hour each night as they fall asleep, over a period of two months.
  • Participants' self-esteem is measured at the beginning of the study and again after two months.
  • This "experiment" is not an experiment at all.
    • There is no random assignment of participants to experimental or control groups.
    • There is no manipulation of an independent variable.
    • A variable is anything that can vary; an independent variable is the variable the experimenter manipulates.
    • All participants received the same treatment: playing the subliminal self-help tape every night.
    • We don't know if the increase in self-esteem was the result of the tape or not.
    • It could have been due to a number of other factors, such as placebo effects or the increases in self-esteem that naturally occur over the course of one's freshman year (Ruling Out Rival Hypotheses). A sketch of one fix, adding random assignment and a control group, appears after this item.
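A minimal sketch of the repaired design in Python, under the assumption (not stated in the original) that the control group would receive a placebo tape with no subliminal message; participant IDs are hypothetical:

```python
import random

# Hypothetical repair of the subliminal-tape study: randomly assign participants
# to an experimental group (real tape) and a control group (placebo tape), so the
# tape itself is the only manipulated independent variable.
participants = [f"P{i:03d}" for i in range(1, 101)]  # hypothetical participant IDs

random.shuffle(participants)
experimental_group = participants[:50]  # plays the subliminal self-help tape nightly
control_group = participants[50:]       # plays a placebo tape with no hidden message

# Self-esteem would then be measured in both groups before and after two months,
# and the difference *between groups* compared.
print(len(experimental_group), len(control_group))  # 50 50
```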
  • Dr. Art E. Fact is interested in determining if a new treatment, Anger Expression Therapy, is effective in treating anxiety.
    • 100 people with anxiety disorders are assigned to two groups.
    • Anger Expression Therapy is administered to the experimental group.
    • The control group is placed on a waiting list and does not receive treatment.
    • At the end of the study, Dr. Fact interviews his patients and finds that the rate of anxiety disorders is lower in the experimental group than in the control group.
  • The experiment looks okay on its surface.
  • But Dr. Fact didn't control for two important pitfalls: placebo effects and experimenter expectancy effects.
  • People in the control group know they're not receiving a treatment, while people in the experimental group know they are, so any group difference could reflect a placebo effect.
  • One fix is an attention-placebo control condition, in which a counselor provides attention to patients but no formal therapy.
    • The counselor might chat with her patients once a week for the same amount of time that patients receiving Anger Expression Therapy spend with their therapist.
  • The design is also vulnerable to experimenter expectancy effects.
    • Dr. Fact knows which patients are in which group and could subtly influence those who receive the treatment to report less anxiety than those who don't.
    • To control for this effect, the dependent variable (the client's level of anxiety at the end of the study) should be measured by someone who doesn't know which clients were in the treatment and control conditions.
  • In short, Dr. Fact should be blind to group assignment when he interviews patients at the end of the study, or the final interviews should be conducted by someone else who is.
  • News stories are prone to faulty conclusions because reporters are subject to the same biases that we are.
  • It's important to keep a few tips in mind when evaluating psychological reports in the media.
  • The principle of "consider the source" applies especially to websites.
  • We should place more trust in findings from primary sources, such as the original journal articles themselves, than from secondary sources, such as newspapers, magazines, or websites that only report findings from primary sources.
  • Sharpening is the tendency to emphasize, or even exaggerate, the central details of a study; leveling is the tendency to minimize or omit the less central details.
    • Secondary sources in the news media need to engage in a certain amount of sharpening and leveling when reporting studies, because they can't possibly describe every minor detail of an investigation.
    • Too much leveling and sharpening can result in a misleading picture.
  • A headline produced this way may not be flatly wrong, but it can oversimplify what the researchers actually found.
  • We can easily be misled by the coverage of a story.
    • The appearance of balance that reporters create by giving representatives of both sides of a story equal air time is not the same as genuine scientific controversy.
    • The news media often include comments from "experts" on opposing sides of an issue to make a story appear more balanced.
  • Imagine a newspaper story about a study that provides scientific evidence against ESP.
    • The first four paragraphs might be devoted to a description of the study, while the last four paragraphs present criticisms of the study from ESP advocates.
    • Readers could easily come away with the impression that the scientific evidence for ESP is split down the middle, with half of the research supporting it and half contradicting it.
    • It's easy to overlook the fact that the last four paragraphs contained no scientific evidence at all.
    • The scientific evidence regarding ESP is mostly negative.
  • One reason why it's hard to think scientifically about research is that we're constantly bombarded with media reports that give us poor role models for interpreting research.
    • These tips should help us become better consumers of psychological science in everyday life and make better real-world decisions.
  • Consider a final example: a man who's bald is weighing whether to try a hair-growth remedy.
    • The company's advertisement begins with "Grow back a full head of hair in only 3 weeks," and the task is to evaluate whether the claim is accurate and, if not, what's wrong with it.
    • "We've received confirmation of this remarkable claim from dozens of satisfied customers," the ad continues.
  • This claim isn't terribly different from the actual ads for hair-loss remedies.
  • Consider the six principles of scientific thinking.
    • As you evaluate the claim, note that the company apparently didn't conduct a controlled study in the first place, let alone attempt to replicate its findings (replicability).
  • The claim in the ad is open to a host of alternative explanations (ruling out rival hypotheses).
    • It appears to be backed by nothing more than a small group of satisfied customers.
  • The evidence for the assertion is weak.
    • The testimonials are essentially anecdotes, which are almost always a weak source of scientific support, and the satisfied customers may have been using other hair-growth products at the same time.
  • Not every critical thinking principle is equally relevant to this scenario.
  • The claim that the product allows customers to regrow hair is, in principle, falsifiable: it could be tested by conducting multiple well-controlled experiments (falsifiability).
    • As it stands, it's an extraordinary claim supported by evidence from only a handful of people, and it remains open to many alternative explanations (extraordinary claims).
  • Research methods demand analytical thinking, because scientific reasoning often requires us to question, and at times override, our intuitions about the world.
  • Although animal research has its critics, it has produced advances in our understanding of human learning, brain physiology, and psychological treatment, to mention only a few areas; for many critical psychological questions, there are simply no good alternatives to using animals.
  • Naturalistic observation, case studies, self-report measures, and surveys are important research designs.
    • Case studies can be useful for generating hypotheses but are limited for testing them rigorously.
    • Self-report measures and surveys ask people about themselves; they can provide a wealth of useful information but have disadvantages, such as response sets.
  • Statistics are the language of psychological research.
    • The mean is the average of all scores, the median is the middle score, and the mode is the most frequent score; the mean is the measure of central tendency most sensitive to extreme scores.
    • The range and the standard deviation are measures of variability (see the short sketch below).
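A minimal Python sketch of these descriptive statistics, using the standard library's statistics module and a made-up set of scores:

```python
import statistics

scores = [2, 4, 4, 5, 7, 9, 11]  # made-up example scores

mean = statistics.mean(scores)           # average of all scores -> 6
median = statistics.median(scores)       # middle score when sorted -> 5
mode = statistics.mode(scores)           # most frequent score -> 4
score_range = max(scores) - min(scores)  # a simple measure of variability -> 9
std_dev = statistics.stdev(scores)       # sample standard deviation -> ~3.16

print(mean, median, mode, score_range, round(std_dev, 2))
```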
  • Correlational studies don't allow us to draw causal conclusions, but they do let us establish whether variables are related and predict one from the other.
  • Placebo effects and experimenter expectancy effects are frequent pitfalls in experimental designs that can lead to erroneous conclusions.
  • Failing to take base rates into account is one frequent method of manipulating statistics for the purposes of persuasion.
  • A true experiment requires manipulation of an independent variable and inclusion of an appropriate control condition to rule out placebo effects, along with careful attention to alternative explanations of observed effects.
  • Concerns about the ethical treatment of research participants have led research facilities, such as colleges and universities, to establish boards that review all research involving human participants.
    • Such research requires informed consent and may call for a full debriefing at the end of the research session.
  • To evaluate psychological claims in the news and elsewhere in the popular media, we should bear in mind that few reporters have formal psychological training.
    • We should beware of excessive sharpening and leveling when considering media claims, and be on the lookout for pseudosymmetry.
