Key Research Methods & Study Design to Know for AP Psychology
What You Need to Know
AP Psych research questions usually test whether you can: (1) identify a study type, (2) label variables and controls, (3) spot what can and can’t be concluded (especially causation), and (4) evaluate design quality (bias, confounds, ethics, validity/reliability).
Big idea: different methods answer different questions.
- Descriptive methods (case study, naturalistic observation, survey): describe behavior, can’t show cause.
- Correlational studies: measure relationship between variables, can predict, but can’t show cause.
- Experiments: manipulate an independent variable and measure a dependent variable with random assignment to infer cause-and-effect.
Core “language” you must nail:
- Population vs sample
- Random sampling (for representativeness) vs random assignment (for causality)
- Independent variable (IV), dependent variable (DV), confounding variables
- Operational definition (exactly how you measure/manipulate variables)
- Reliability (consistency) vs validity (accuracy)
- Correlation coefficient (strength/direction)
Critical reminder: Only true experiments with random assignment allow causal conclusions.
Step-by-Step Breakdown
A. How to dissect any research scenario (fast)
- Name the method
- Are they just observing/describing? → descriptive
- Are they measuring two variables to see if they relate? → correlational
- Are they manipulating something and using random assignment? → experiment
- Identify the variables (if any)
- IV: what’s manipulated (or the grouping factor in quasi-experiments)
- DV: what’s measured as the outcome
- Check for causation requirements
- Manipulation of IV + random assignment + control of confounds → causation is possible
- Find the operational definitions
- How exactly are “stress,” “aggression,” “memory,” etc. measured?
- Evaluate design quality
- Sampling bias? Confounds? Demand characteristics? Experimenter bias? Placebo effects?
- Evaluate ethics
- Informed consent? Deception justified + debrief? Protection from harm? Confidentiality?
B. Designing a solid experiment (what AP expects)
- State a testable hypothesis
- Example: “If participants sleep 8 hours, then their recall score will be higher than participants who sleep 4 hours.”
- Operationally define the IV and DV
- IV levels: 8 hours vs 4 hours of sleep
- DV: “number of words recalled from a 20-word list”
- Choose the design
- Between-subjects: different participants in each condition
- Within-subjects (repeated measures): same participants experience all conditions
- Get participants and assign
- Random sampling helps generalize to population
- Random assignment helps equalize groups (reduces confounds)
- Control confounds
- Hold constant instructions, time of day, environment
- Use counterbalancing for order effects (within-subjects)
- Reduce bias
- Single-blind: participants don’t know condition
- Double-blind: participants and researchers interacting with them don’t know condition
- Run procedure + collect DV
- Analyze + conclude
- Look for differences between conditions; report whether data support hypothesis
- If applicable, interpret the p-value / significance (conceptually)
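The final analyze-and-conclude step can be sketched with a simple permutation test, which captures what a p-value means conceptually: how often shuffling the group labels (chance alone) produces a difference at least as large as the one observed. All recall scores below are made-up numbers for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical recall scores (words remembered) — illustrative data only.
sleep_8h = [12, 14, 15, 13, 16, 14]   # experimental condition
sleep_4h = [10, 11, 9, 12, 10, 11]    # comparison condition

observed_diff = sum(sleep_8h) / len(sleep_8h) - sum(sleep_4h) / len(sleep_4h)

# Permutation test: under the null hypothesis ("no real effect"), the group
# labels are interchangeable. Shuffle labels many times and count how often
# chance alone produces a difference at least this large.
pooled = sleep_8h + sleep_4h
n = len(sleep_8h)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(observed_diff, p_value)
```

A small p-value (conventionally below .05) means the observed difference would rarely arise by chance alone, so the data support the hypothesis.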
C. Reading correlational results (quick)
- Identify the two measured variables.
- Read r:
- sign (+/−) = direction
- magnitude = strength
- State prediction only (never “causes”).
- Mention the two big correlation problems:
- directionality problem
- third-variable problem
Key Formulas, Rules & Facts
Research methods: what they do and don’t prove
| Method | What you do | What you can conclude | Big limitations / notes |
|---|---|---|---|
| Case study | In-depth study of one person/small group | Rich detail; generates hypotheses | Low generalizability; can’t infer causation |
| Naturalistic observation | Observe in real settings | Real-world behavior | Observer effect; hard to control variables |
| Survey / interview | Self-report attitudes/behaviors | Describes opinions/traits | Wording effects, social desirability bias, sampling bias |
| Correlational study | Measure 2+ variables and compute association | Prediction; relationship strength/direction | No causation (directionality + third variable) |
| Experiment | Manipulate IV; measure DV; random assignment | Cause-and-effect (if well-controlled) | Artificiality; ethics; still can have confounds |
| Quasi-experiment | Groups are pre-existing (e.g., gender) | Differences between groups | No random assignment → weak causal claims |
Sampling and assignment (high-yield distinctions)
| Term | Definition | Why it matters | What it affects |
|---|---|---|---|
| Population | Whole group you want to generalize to | Sets the target of inference | Generalizability |
| Sample | Subset actually studied | Must represent population | Generalizability |
| Random sampling | Every population member has equal chance to be chosen | Reduces sampling bias | External validity (generalization) |
| Random assignment | Participants randomly placed into conditions | Equalizes groups on average | Internal validity (causal inference) |
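The sampling-vs-assignment distinction above can be sketched in a few lines of Python (the population size, sample size, and labels are made up for illustration):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000 people (labels are illustrative).
population = [f"person_{i}" for i in range(1000)]

# Random SAMPLING: every member has an equal chance of being chosen.
# This targets external validity (generalizing back to the population).
sample = random.sample(population, 40)

# Random ASSIGNMENT: shuffle the sample, then split into conditions.
# This targets internal validity (equalizing groups on average).
random.shuffle(sample)
treatment_group = sample[:20]
control_group = sample[20:]

print(len(treatment_group), len(control_group))  # prints: 20 20
```

Note that the two operations are independent: a study can have random assignment without random sampling (common in lab experiments), which supports causal claims but limits generalization.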
Variables and control
- Independent variable (IV): factor the researcher manipulates.
- Dependent variable (DV): outcome the researcher measures.
- Confounding variable: any factor that varies with the IV and could explain DV changes.
- Control group: baseline comparison group (often placebo).
- Experimental group: receives treatment/IV level.
- Constants: factors kept the same across conditions.
Operational definitions (make vague variables measurable)
- Operational definition: exact procedures used to define/manipulate a variable.
- Example: “Stress” = score on a perceived-stress questionnaire.
- Example: “Aggression” = number of times a participant blasts noise at an opponent.
Correlation essentials
- Correlation coefficient r ranges from −1 to +1.
- Sign = direction (positive/negative)
- Absolute value = strength
- r = 0 means no linear relationship (a curved relationship could still exist).
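Computing r from scratch makes the direction/strength reading concrete; the sleep and absence numbers below are made up for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: ranges from -1 to +1."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of sleep vs. days absent (illustrative only).
sleep = [5, 6, 7, 8, 9]
absences = [8, 6, 5, 3, 2]

r = pearson_r(sleep, absences)
print(round(r, 2))  # prints: -0.99 — negative sign = direction; |r| near 1 = strong
```

Even with r this close to −1, the correct conclusion is still only prediction, never causation.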
Descriptive statistics (AP-level)
| Concept | What it tells you | Formula (when relevant) | Quick notes |
|---|---|---|---|
| Mean | Average | sum of scores ÷ number of scores | Sensitive to outliers |
| Median | Middle score | (procedure, not formula) | Robust to outliers |
| Mode | Most frequent | (procedure) | Useful for categories |
| Range | Spread | highest score − lowest score | Very outlier-sensitive |
| Standard deviation | Typical distance from mean | square root of the mean squared deviation (population) | Larger = more variability |
| Z-score | How many SDs from mean | z = (score − mean) ÷ SD | Compares scores across distributions |
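All of these descriptive statistics are available in Python's standard statistics module; the quiz scores below are made up for illustration:

```python
import statistics

# Hypothetical quiz scores (illustrative only).
scores = [70, 75, 80, 80, 95]

mean = statistics.mean(scores)            # 80
median = statistics.median(scores)        # 80
mode = statistics.mode(scores)            # 80
data_range = max(scores) - min(scores)    # 25
sd = statistics.pstdev(scores)            # population standard deviation

# Z-score: how many SDs the top score (95) sits above the mean.
z = (95 - mean) / sd
print(mean, median, mode, data_range, round(z, 2))
```

Note the outlier sensitivity the table describes: the 95 pulls the mean and range up, while the median stays at the middle score.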
Inferential statistics (what AP expects conceptually)
- Statistical significance: result is unlikely due to chance alone.
- Often reported as p < .05 (not a guarantee; just a convention).
- Null hypothesis: assumes “no real effect” or “no difference.”
- Confidence: stronger evidence when effect is large, variability is small, and sample size is larger.
Validity and reliability (often tested)
| Term | Meaning | How you improve it | Common confusion |
|---|---|---|---|
| Reliability | Consistency (repeatable results) | Standardized procedures; clear scoring; test-retest | Reliable ≠ valid |
| Validity | Measures what it claims to measure | Good operational definition; control confounds | A test can be reliable but invalid |
| Internal validity | Confidence IV caused DV change | Random assignment; control confounds; blinding | Threatened by confounds |
| External validity | Generalizability to real world/population | Random sampling; realistic settings | Threatened by biased samples |
Key experiment features to reduce bias
- Placebo effect: improvement due to expectations, not treatment.
- Single-blind: participants unaware of condition.
- Double-blind: participants + researchers interacting with them unaware.
- Demand characteristics: cues that tell participants what behavior is expected.
- Experimenter bias: researcher expectations influence results.
Ethics (must-know AP principles)
- Informed consent (unless justified exception)
- Protect from harm / minimal risk
- Confidentiality
- Right to withdraw
- Debriefing (especially if deception used)
- Justified deception only when necessary and followed by debriefing
Examples & Applications
Example 1: Identify method + conclusion
A psychologist watches children on a playground and records how often they share toys.
- Method: naturalistic observation
- What you can conclude: description of sharing behavior in that setting
- What you cannot conclude: whether parenting style causes sharing (no manipulation/control)
Example 2: Correlation interpretation (what to say)
A study finds r = −.60 between hours of sleep and number of days absent from school.
- Direction: negative (more sleep ↔ fewer absences)
- Strength: moderately strong (because |r| = .60 is relatively high)
- Correct conclusion: sleep predicts absences (or vice versa)
- Add the caveat: could be a third variable (e.g., health) or directionality (chronic illness reduces sleep and increases absences)
Example 3: Turn a question into an experiment
Question: “Does caffeine improve memory?”
- IV: caffeine condition (e.g., caffeinated drink vs placebo)
- DV: memory score (e.g., number of words recalled)
- Control: same study time, same word list difficulty, same environment
- Random assignment: participants randomly placed into caffeine vs placebo
- Blinding: ideally double-blind to reduce placebo + experimenter bias
- Conclusion allowed: if groups differ reliably, caffeine may cause the memory change
Example 4: Quasi-experiment (common AP trap)
Researcher compares anxiety scores between teenagers who use social media heavily and those who use it rarely, but teens choose their own usage.
- This is not a true experiment (no random assignment).
- Best label: quasi-experiment or correlational (depending on framing).
- Correct conclusion: association/differences, not causation.
Common Mistakes & Traps
Confusing random sampling with random assignment
- Wrong: “They randomly assigned participants, so it generalizes to everyone.”
- Why wrong: random assignment helps internal validity, not representativeness.
- Fix: sampling → generalize; assignment → causation.
Claiming correlation proves causation
- Wrong: “Because r is strong, X causes Y.”
- Why wrong: directionality + third-variable problems always exist.
- Fix: say “predicts,” “is associated with,” “relates to.”
Mislabeling IV and DV
- Wrong: calling the outcome the IV.
- Why wrong: IV is what you manipulate; DV is what you measure.
- Fix: ask “What did the researcher change?” (IV) and “What did they record?” (DV).
Forgetting operational definitions
- Wrong: “They measured intelligence.”
- Why wrong: AP expects how (IQ test score? GPA? puzzle performance?).
- Fix: always restate variables in measurable terms.
Ignoring confounds in experiments
- Wrong: assuming random assignment magically eliminates all confounds.
- Why wrong: confounds can still occur via procedure differences, attrition, demand characteristics.
- Fix: look for any variable that differs between groups besides the IV.
Thinking r = 0 means “no relationship at all”
- Wrong: “No connection.”
- Why wrong: it means no linear relationship; could be curvilinear.
- Fix: phrase it as “no linear association found.”
Mixing up reliability and validity
- Wrong: “It’s reliable, so it’s valid.”
- Why wrong: a measure can be consistent but consistently wrong.
- Fix: reliability = consistent; validity = accurate.
Overstating what surveys show
- Wrong: “The survey proves teens are more depressed.”
- Why wrong: surveys are self-report and often biased by wording and social desirability.
- Fix: say “reports suggest…” and note possible biases.
Memory Aids & Quick Tricks
| Trick / mnemonic | Helps you remember | When to use |
|---|---|---|
| “Sample = Select; Assign = Allocate” | Random sampling selects people; random assignment allocates to groups | Any question mixing generalization vs causation |
| “IV is what I Vary” | IV is the manipulated factor | Labeling variables quickly |
| “DV is what you Detect/Describe (data)” | DV is the measured outcome | Labeling variables quickly |
| “Correlation: 2 problems = D + T” | Directionality and Third variable | Explaining why correlation ≠ causation |
| “Reliable = Repeatable; Valid = Verifies truth” | Reliability vs validity | Test/measure questions |
| “Blind blocks bias” | Blinding reduces expectancy effects | Placebo/experimenter bias scenarios |
Quick Review Checklist
- You can name: case study, naturalistic observation, survey, correlational study, experiment, quasi-experiment.
- You can state what each method can conclude (describe, predict, or cause).
- You can identify population vs sample and explain sampling bias.
- You can distinguish random sampling (generalize) from random assignment (causation).
- You can label IV, DV, and at least one plausible confounding variable.
- You can write an operational definition for vague variables.
- You can interpret r (direction + strength) and say why it’s not causation.
- You can explain placebo effect, demand characteristics, experimenter bias, and why double-blind helps.
- You can define reliability, validity, internal validity, external validity.
- You can list core ethics: informed consent, protect from harm, confidentiality, right to withdraw, debriefing.
You’re closest to full points when you automatically ask: “What method is this, what can it prove, and what’s the biggest flaw?”