Key Research Methods & Study Design to Know for AP Psychology

What You Need to Know

AP Psych research questions usually test whether you can: (1) identify a study type, (2) label variables and controls, (3) spot what can and can’t be concluded (especially causation), and (4) evaluate design quality (bias, confounds, ethics, validity/reliability).

Big idea: different methods answer different questions.

  • Descriptive methods (case study, naturalistic observation, survey): describe behavior, can’t show cause.
  • Correlational studies: measure relationship between variables, can predict, but can’t show cause.
  • Experiments: manipulate an independent variable and measure a dependent variable with random assignment to infer cause-and-effect.

Core “language” you must nail:

  • Population vs sample
  • Random sampling (for representativeness) vs random assignment (for causality)
  • Independent variable (IV), dependent variable (DV), confounding variables
  • Operational definition (exactly how you measure/manipulate variables)
  • Reliability (consistency) vs validity (accuracy)
  • Correlation coefficient r (strength/direction)

Critical reminder: Only true experiments with random assignment allow causal conclusions.

Step-by-Step Breakdown

A. How to dissect any research scenario (fast)
  1. Name the method
    • Are they just observing/describing? → descriptive
    • Are they measuring two variables to see if they relate? → correlational
    • Are they manipulating something and using random assignment? → experiment
  2. Identify the variables (if any)
    • IV: what’s manipulated (or the grouping factor in quasi-experiments)
    • DV: what’s measured as the outcome
  3. Check for causation requirements
    • Manipulation of IV + random assignment + control of confounds → causation is possible
  4. Find the operational definitions
    • How exactly are “stress,” “aggression,” “memory,” etc. measured?
  5. Evaluate design quality
    • Sampling bias? Confounds? Demand characteristics? Experimenter bias? Placebo effects?
  6. Evaluate ethics
    • Informed consent? Deception justified + debrief? Protection from harm? Confidentiality?
B. Designing a solid experiment (what AP expects)
  1. State a testable hypothesis
    • Example: “If participants sleep 8 hours, then their recall score will be higher than participants who sleep 4 hours.”
  2. Operationally define the IV and DV
    • IV levels: 8 hours vs 4 hours of sleep
    • DV: “number of words recalled from a 20-word list”
  3. Choose the design
    • Between-subjects: different participants in each condition
    • Within-subjects (repeated measures): same participants experience all conditions
  4. Get participants and assign
    • Random sampling helps the results generalize to the population
    • Random assignment helps equalize groups (reduces confounds)
  5. Control confounds
    • Hold constant instructions, time of day, environment
    • Use counterbalancing for order effects (within-subjects)
  6. Reduce bias
    • Single-blind: participants don’t know condition
    • Double-blind: participants and researchers interacting with them don’t know condition
  7. Run procedure + collect DV
  8. Analyze + conclude
    • Look for differences between conditions; report whether data support hypothesis
    • If applicable, interpret the p-value / significance (conceptually)
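The design steps above can be sketched as a toy simulation. Everything here is invented for illustration: the participants, the assumed effect size, and the noise — it only shows how random assignment, operational definitions, and the group comparison fit together.

```python
import random

# Minimal simulation of the sleep-and-recall design (all data invented).
random.seed(1)  # fixed seed so the sketch is reproducible

participants = list(range(40))
random.shuffle(participants)          # random assignment equalizes groups on average
eight_hour_group = participants[:20]  # IV level 1: 8 hours of sleep
four_hour_group = participants[20:]   # IV level 2: 4 hours of sleep

def recall_score(hours):
    """Operationalized DV: number of words recalled from a 20-word list (simulated)."""
    base = 10 if hours == 8 else 7    # assumed 3-word benefit, for illustration only
    return min(20, max(0, base + random.randint(-3, 3)))

eight_scores = [recall_score(8) for _ in eight_hour_group]
four_scores = [recall_score(4) for _ in four_hour_group]

mean_8 = sum(eight_scores) / len(eight_scores)
mean_4 = sum(four_scores) / len(four_scores)
# Because the IV was manipulated AND assignment was random, a reliable gap
# between mean_8 and mean_4 would support a causal conclusion.
```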
C. Reading correlational results (quick)
  1. Identify the two measured variables.
  2. Read r:
    • sign (+/−) = direction
    • magnitude = strength
  3. State prediction only (never “causes”).
  4. Mention the two big correlation problems:
    • directionality problem
    • third-variable problem

Key Formulas, Rules & Facts

Research methods: what they do and don’t prove

| Method | What you do | What you can conclude | Big limitations / notes |
| --- | --- | --- | --- |
| Case study | In-depth study of one person/small group | Rich detail; generates hypotheses | Low generalizability; can’t infer causation |
| Naturalistic observation | Observe in real settings | Real-world behavior | Observer effect; hard to control variables |
| Survey / interview | Self-report attitudes/behaviors | Describes opinions/traits | Wording effects, social desirability bias, sampling bias |
| Correlational study | Measure 2+ variables and compute association | Prediction; relationship strength/direction | No causation (directionality + third variable) |
| Experiment | Manipulate IV; measure DV; random assignment | Cause-and-effect (if well-controlled) | Artificiality; ethics; still can have confounds |
| Quasi-experiment | Groups are pre-existing (e.g., gender) | Differences between groups | No random assignment → weak causal claims |

Sampling and assignment (high-yield distinctions)

| Term | Definition | Why it matters | What it affects |
| --- | --- | --- | --- |
| Population | Whole group you want to generalize to | Sets the target of inference | Generalizability |
| Sample | Subset actually studied | Must represent population | Generalizability |
| Random sampling | Every population member has equal chance to be chosen | Reduces sampling bias | External validity (generalization) |
| Random assignment | Participants randomly placed into conditions | Equalizes groups on average | Internal validity (causal inference) |

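The sampling/assignment distinction maps cleanly onto two different standard-library operations; the student roster below is hypothetical:

```python
import random

# Hypothetical population of 100 students (names invented).
population = [f"student_{i}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the sketch is reproducible

# Random SAMPLING: every population member has an equal chance of being chosen.
# This is about representativeness → external validity.
sample = random.sample(population, 20)

# Random ASSIGNMENT: the sampled participants are randomly allocated to conditions.
# This is about equalizing groups → internal validity / causal inference.
shuffled = sample[:]
random.shuffle(shuffled)
experimental_group = shuffled[:10]
control_group = shuffled[10:]
```

Note that the two steps are independent: a study can randomly assign without having randomly sampled (common in lab research), which is exactly the trap AP questions test.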
Variables and control
  • Independent variable (IV): factor the researcher manipulates.
  • Dependent variable (DV): outcome the researcher measures.
  • Confounding variable: any factor that varies with the IV and could explain DV changes.
  • Control group: baseline comparison group (often placebo).
  • Experimental group: receives treatment/IV level.
  • Constants: factors kept the same across conditions.
Operational definitions (make vague variables measurable)
  • Operational definition: exact procedures used to define/manipulate a variable.
    • Example: “Stress” = score on a perceived-stress questionnaire.
    • Example: “Aggression” = number of times a participant blasts noise at an opponent.
Correlation essentials
  • Correlation coefficient r ranges from −1.00 to +1.00.
    • Sign = direction (positive/negative)
    • Absolute value |r| = strength
  • r = 0 means no linear relationship (a curved relationship could still exist).
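These essentials can be checked numerically. Below, `pearson_r` is a hand-rolled helper (not required for AP) and both data sets are invented:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength + direction of a LINEAR relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: more sleep, fewer absences → strong NEGATIVE correlation.
sleep = [4, 5, 6, 7, 8, 9]
absences = [9, 8, 6, 5, 3, 2]
print(round(pearson_r(sleep, absences), 2))  # negative and close to −1

# r = 0 does NOT mean "no relationship at all": a perfect curve can have r = 0.
xs = [-2, -1, 0, 1, 2]
ys = [x ** 2 for x in xs]  # perfect curvilinear (U-shaped) relationship
print(round(pearson_r(xs, ys), 2))  # 0.0 — no *linear* relationship
```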
Descriptive statistics (AP-level)

| Concept | What it tells you | Formula (when relevant) | Quick notes |
| --- | --- | --- | --- |
| Mean | Average | x̄ = Σx / n | Sensitive to outliers |
| Median | Middle score | (procedure, not formula) | Robust to outliers |
| Mode | Most frequent | (procedure) | Useful for categories |
| Range | Spread | range = x_max − x_min | Very outlier-sensitive |
| Standard deviation | Typical distance from mean | σ = √(Σ(x − μ)² / N) (population) | Larger σ = more variability |
| Z-score | How many SDs from mean | z = (x − μ) / σ | Compares scores across distributions |

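These formulas map directly onto Python's standard `statistics` module; the quiz scores below are made up to show how one outlier pulls the mean but not the median:

```python
import statistics as stats

# Hypothetical quiz scores (invented for illustration; 21 is the outlier).
scores = [10, 12, 12, 15, 21]

mean = stats.mean(scores)           # average: x̄ = Σx / n — pulled up by the 21
median = stats.median(scores)       # middle score — robust to outliers
mode = stats.mode(scores)           # most frequent score — also works for categories
spread = max(scores) - min(scores)  # range = x_max − x_min — very outlier-sensitive
sigma = stats.pstdev(scores)        # population SD: σ = √(Σ(x − μ)² / N)

# Z-score: how many SDs the top score sits above the mean; z-scores let you
# compare scores drawn from different distributions.
z = (21 - mean) / sigma
```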
Inferential statistics (what AP expects conceptually)
  • Statistical significance: result is unlikely due to chance alone.
    • Often reported as p < 0.05 (not a guarantee; just a convention).
  • Null hypothesis: assumes “no real effect” or “no difference.”
  • Confidence: stronger evidence when effect is large, variability is small, and sample size is larger.
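AP only requires the concept, but "unlikely due to chance alone" becomes concrete with a permutation test: if the null hypothesis were true, how often would randomly relabeling participants produce a difference at least as large as the observed one? The recall scores below are invented:

```python
import random

# Invented words-recalled scores for two groups of 6.
caffeine = [14, 15, 13, 16, 15, 14]
placebo = [11, 12, 10, 12, 11, 13]

observed = sum(caffeine) / len(caffeine) - sum(placebo) / len(placebo)

random.seed(0)  # fixed seed so the sketch is reproducible
pooled = caffeine + placebo
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)  # a "null world" where group labels mean nothing
    fake_diff = (sum(pooled[:6]) - sum(pooled[6:])) / 6
    if fake_diff >= observed:
        extreme += 1

# p-value: proportion of null-world shuffles at least as extreme as the data.
p_value = extreme / trials  # small p (conventionally < 0.05) → unlikely by chance alone
```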
Validity and reliability (often tested)

| Term | Meaning | How you improve it | Common confusion |
| --- | --- | --- | --- |
| Reliability | Consistency (repeatable results) | Standardized procedures; clear scoring; test-retest | Reliable ≠ valid |
| Validity | Measures what it claims to measure | Good operational definition; control confounds | A test can be reliable but invalid |
| Internal validity | Confidence IV caused DV change | Random assignment; control confounds; blinding | Threatened by confounds |
| External validity | Generalizability to real world/population | Random sampling; realistic settings | Threatened by biased samples |

Key experiment features to reduce bias
  • Placebo effect: improvement due to expectations, not treatment.
  • Single-blind: participants unaware of condition.
  • Double-blind: participants + researchers interacting with them unaware.
  • Demand characteristics: cues that tell participants what behavior is expected.
  • Experimenter bias: researcher expectations influence results.
Ethics (must-know AP principles)
  • Informed consent (unless justified exception)
  • Protect from harm / minimal risk
  • Confidentiality
  • Right to withdraw
  • Debriefing (especially if deception used)
  • Justified deception only when necessary and followed by debriefing

Examples & Applications

Example 1: Identify method + conclusion

A psychologist watches children on a playground and records how often they share toys.

  • Method: naturalistic observation
  • What you can conclude: description of sharing behavior in that setting
  • What you cannot conclude: whether parenting style causes sharing (no manipulation/control)
Example 2: Correlation interpretation (what to say)

Study finds r = −0.62 between hours of sleep and number of days absent from school.

  • Direction: negative (more sleep ↔ fewer absences)
  • Strength: moderately strong (because |r| is relatively high)
  • Correct conclusion: sleep predicts absences (or vice versa)
  • Add the caveat: could be third variable (e.g., health) or directionality (chronic illness reduces sleep and increases absences)
Example 3: Turn a question into an experiment

Question: “Does caffeine improve memory?”

  • IV: caffeine dose (e.g., 0 mg vs 200 mg)
  • DV: memory score (e.g., number of words recalled)
  • Control: same study time, same word list difficulty, same environment
  • Random assignment: participants randomly placed into caffeine vs placebo
  • Blinding: ideally double-blind to reduce placebo + experimenter bias
  • Conclusion allowed: if groups differ reliably, caffeine may cause the memory change
Example 4: Quasi-experiment (common AP trap)

Researcher compares anxiety scores between teenagers who use social media > 4 hours/day and those who use it < 1 hour/day, but teens choose their own usage.

  • This is not a true experiment (no random assignment).
  • Best label: quasi-experiment or correlational (depending on framing).
  • Correct conclusion: association/differences, not causation.

Common Mistakes & Traps

  1. Confusing random sampling with random assignment

    • Wrong: “They randomly assigned participants, so it generalizes to everyone.”
    • Why wrong: random assignment helps internal validity, not representativeness.
    • Fix: sampling → generalize; assignment → causation.
  2. Claiming correlation proves causation

    • Wrong: “Because r is strong, X causes Y.”
    • Why wrong: the directionality and third-variable problems can never be ruled out.
    • Fix: say “predicts,” “is associated with,” “relates to.”
  3. Mislabeling IV and DV

    • Wrong: calling the outcome the IV.
    • Why wrong: IV is what you manipulate; DV is what you measure.
    • Fix: ask “What did the researcher change?” (IV) and “What did they record?” (DV).
  4. Forgetting operational definitions

    • Wrong: “They measured intelligence.”
    • Why wrong: AP expects how (IQ test score? GPA? puzzle performance?).
    • Fix: always restate variables in measurable terms.
  5. Ignoring confounds in experiments

    • Wrong: assuming random assignment magically eliminates all confounds.
    • Why wrong: confounds can still occur via procedure differences, attrition, demand characteristics.
    • Fix: look for any variable that differs between groups besides the IV.
  6. Thinking r = 0 means “no relationship at all”

    • Wrong: “No connection.”
    • Why wrong: it means no linear relationship; could be curvilinear.
    • Fix: phrase it as “no linear association found.”
  7. Mixing up reliability and validity

    • Wrong: “It’s reliable, so it’s valid.”
    • Why wrong: a measure can be consistent but consistently wrong.
    • Fix: reliability = consistent; validity = accurate.
  8. Overstating what surveys show

    • Wrong: “The survey proves teens are more depressed.”
    • Why wrong: surveys are self-report and often biased by wording and social desirability.
    • Fix: say “reports suggest…” and note possible biases.

Memory Aids & Quick Tricks

| Trick / mnemonic | Helps you remember | When to use |
| --- | --- | --- |
| “Sample = Select; Assign = Allocate” | Random sampling selects people; random assignment allocates to groups | Any question mixing generalization vs causation |
| “IV is what I Vary” | IV is the manipulated factor | Labeling variables quickly |
| “DV is what you Detect/Describe (data)” | DV is the measured outcome | Labeling variables quickly |
| “Correlation: 2 problems = D + T” | Directionality and Third variable | Explaining why correlation ≠ causation |
| “Reliable = Repeatable; Valid = Verifies truth” | Reliability vs validity | Test/measure questions |
| “Blind blocks bias” | Blinding reduces expectancy effects | Placebo/experimenter bias scenarios |

Quick Review Checklist

  • You can name: case study, naturalistic observation, survey, correlational study, experiment, quasi-experiment.
  • You can state what each method can conclude (describe, predict, or cause).
  • You can identify population vs sample and explain sampling bias.
  • You can distinguish random sampling (generalize) from random assignment (causation).
  • You can label IV, DV, and at least one plausible confounding variable.
  • You can write an operational definition for vague variables.
  • You can interpret r (direction + strength) and say why it’s not causation.
  • You can explain placebo effect, demand characteristics, experimenter bias, and why double-blind helps.
  • You can define reliability, validity, internal validity, external validity.
  • You can list core ethics: informed consent, protect from harm, confidentiality, right to withdraw, debriefing.

You’re closest to full points when you automatically ask: “What method is this, what can it prove, and what’s the biggest flaw?”