AP Psychology Unit 1 Notes: Scientific Foundations, Research Design, and Data Reasoning
History and Approaches to Psychology
Psychology is the scientific study of behavior (what organisms do) and mental processes (how organisms think, feel, perceive, and remember). Calling it “scientific” is not just a label—it means psychologists try to build knowledge using systematic observation, careful measurement, and reasoning that can be checked by other researchers. That scientific mindset is especially important in topics you’ll later see in biological psychology (like brain imaging or genetics), where it’s easy to confuse “cool-sounding” explanations with well-supported evidence.
From philosophy to science: what changed?
For most of human history, questions about the mind were mainly philosophical: What is consciousness? Do we have free will? Those questions matter, but philosophy alone does not give you a reliable method for settling disagreements with evidence.
Psychology became a distinct scientific field when researchers began using controlled observation and experimentation to study mental life and behavior.
- Wilhelm Wundt is commonly credited with establishing the first psychology laboratory (in Leipzig, Germany, in 1879). His broader impact is that he treated the mind as something you could study systematically.
- Early researchers used introspection—a method where trained participants reported their conscious experiences (for example, describing the sensations and feelings they had while hearing a sound). Introspection helped launch the field, but it had a major problem: people’s reports are subjective and hard to verify.
The core lesson from this history is not “old methods were bad.” It’s that psychology kept improving its methods because it needed results that were reliable, replicable, and not dependent on a single person’s interpretation.
Major historical perspectives and what they emphasized
Different early “schools” of thought asked different questions. You should think of them as lenses—each highlights certain phenomena and can ignore others.
Structuralism and functionalism
- Structuralism aimed to describe the basic “elements” of conscious experience (like the building blocks of perception). It relied heavily on introspection.
- Functionalism focused on what mental processes and behaviors do—how they help an organism adapt, survive, and flourish. This perspective fits naturally with biological psychology because it encourages you to ask how a brain process might serve a function.
A common misconception is that one of these perspectives is simply “right.” Instead, modern psychology borrows questions and methods from many perspectives, depending on the problem.
Behaviorism
Behaviorism argues that psychology should focus on observable behavior rather than unobservable mental states. Behaviorists emphasized learning through the environment.
Why it mattered: behaviorism pushed psychology toward measurable variables and stronger experimental designs. Even if psychologists now study cognition directly, the behaviorist demand for clear measurement still shapes good research.
What can go wrong: students sometimes assume behaviorism means “the mind doesn’t exist.” A more accurate summary is that behaviorists doubted whether mental states were measurable enough to be scientific at the time.
Psychoanalytic and humanistic perspectives
- The psychoanalytic perspective (associated with Freud) emphasized unconscious influences and early childhood experiences.
- The humanistic perspective emphasized growth, meaning, and personal agency.
These perspectives strongly influenced therapy and how psychologists think about motivation and emotion, even though some claims (especially psychoanalytic ones) are harder to test with controlled experiments.
Cognitive, biological, and sociocultural perspectives
Modern introductory psychology often highlights several broad approaches:
- The cognitive approach studies mental processes like attention, memory, language, and problem-solving.
- The biological approach explains behavior and mental processes in terms of the brain, nervous system, hormones, and genetics.
- The evolutionary approach asks how behaviors and mental traits might have been shaped by natural selection.
- The sociocultural approach emphasizes how social contexts and cultural norms shape behavior and thinking.
A helpful way to organize these is the biopsychosocial approach, which treats behavior as the product of interacting biological, psychological, and social-cultural influences. In the biological bases unit, you’ll often emphasize the biological layer—but AP questions commonly reward you for knowing that biology is not the only influence.
Approaches in action: one behavior, multiple explanations
Consider chronic insomnia.
- Biological: irregular melatonin regulation, stress hormones, or differences in neural arousal systems.
- Cognitive: racing thoughts and worry maintaining arousal.
- Behavioral: inconsistent sleep schedule and reinforcement of daytime naps.
- Sociocultural: shift work, family expectations, or technology use norms.
None of these explanations automatically cancels the others. A strong AP-style response often combines levels of analysis.
Exam Focus
- Typical question patterns
- Identify which psychological perspective best explains a scenario (for example, a behaviorist vs cognitive explanation).
- Explain how a historical shift (like behaviorism) influenced research methods (emphasis on observable measures).
- Apply the biopsychosocial framework to a behavior with one example per level.
- Common mistakes
- Treating perspectives as mutually exclusive “teams” instead of complementary lenses.
- Confusing behaviorism (focus on observable behavior) with the biological approach (focus on physiology).
- Giving vague labels (“that’s cognitive”) without explaining the specific mechanism (memory bias, attention, interpretation, etc.).
Research Methods in Psychology
Research methods are the tools psychologists use to turn questions into evidence. To understand any claim in psychology—especially biological claims like “this brain region causes X”—you need to know what kind of method can support that claim and what the method cannot prove.
The research process: from question to conclusion
A typical scientific workflow looks like this:
- Theory: a broad explanation that organizes observations and predicts outcomes.
- Hypothesis: a testable prediction derived from a theory.
- Operational definitions: clear, measurable definitions of variables.
- Data collection: using a method (experiment, survey, observation, etc.).
- Analysis and interpretation: using statistics to decide what the data suggest.
- Replication and revision: repeating and refining to improve confidence.
A key idea is operationalization. If you say “stress causes memory problems,” you must define stress (cortisol level? self-report scale? number of stressful events?) and memory problems (recall score? reaction time? error rate?). Without operational definitions, two studies might “test the same idea” but actually measure different things.
Descriptive methods: observing without manipulating
Descriptive research aims to measure or describe behavior. It is excellent for generating hypotheses and understanding patterns, but it cannot establish cause-and-effect.
Case studies
A case study is an in-depth analysis of one individual (or a small group). In biological psychology, famous examples include people with localized brain damage, which can suggest what certain brain areas do.
Why it matters: case studies can reveal rare phenomena you can’t ethically or practically create in a lab.
What can go wrong: case studies often have limited generalizability—one person’s experience may not represent others.
Naturalistic observation
Naturalistic observation means watching behavior in real-world settings without interfering.
Why it matters: you can see how people actually behave, not just how they behave in a lab.
What can go wrong:
- Observer bias: seeing what you expect to see.
- Hawthorne effect (reactivity): people change behavior when they know they are observed.
Researchers reduce these issues using clear coding rules, training observers, and checking interrater reliability (how consistently different observers code the same behavior).
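Interrater reliability can be made concrete with a quick calculation. Below is a minimal Python sketch using hypothetical behavior codes; note that published studies often report Cohen's kappa, which also corrects for chance agreement, rather than this raw percent-agreement measure.

```python
# Percent agreement between two observers coding the same behaviors:
# a simple (chance-uncorrected) interrater-reliability check.

def percent_agreement(coder_a, coder_b):
    """Fraction of observations on which two coders gave the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes for 10 observed behaviors ("A" = aggressive, "P" = prosocial)
coder_1 = ["A", "P", "P", "A", "A", "P", "P", "P", "A", "P"]
coder_2 = ["A", "P", "A", "A", "A", "P", "P", "P", "A", "P"]

print(percent_agreement(coder_1, coder_2))  # 9 of 10 codes match -> 0.9
```

Low agreement would signal that the coding rules are ambiguous or the observers need more training.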
Surveys and interviews
A survey measures self-reported attitudes or behaviors through questions.
Why it matters: surveys can collect data from many people efficiently.
What can go wrong:
- Wording effects: small wording changes shift responses.
- Social desirability bias: people answer in a socially acceptable way.
- Sampling problems: if the sample is not representative, results can mislead.
Sampling: how you choose participants matters
A population is the entire group you care about (for example, all U.S. high school students). A sample is the subset you actually study.
- A random sample means every member of the population has an equal chance of being selected. This supports generalizing results to the population.
- A representative sample mirrors the population’s characteristics (sometimes achieved through random sampling, but not guaranteed).
Students often confuse random sampling with random assignment (used in experiments). Random sampling is about who is selected; random assignment is about which condition participants enter.
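The sampling-vs-assignment distinction is easy to see in code. A minimal Python sketch, using a hypothetical population of student IDs:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population of 1,000 student IDs
population = list(range(1000))

# Random SAMPLING: who gets selected for the study.
# Every member of the population has an equal chance of selection,
# which supports generalizing results to the population.
sample = random.sample(population, 50)

# Random ASSIGNMENT: which condition each selected participant enters.
# Shuffling and splitting makes the groups similar "on average."
shuffled = sample[:]
random.shuffle(shuffled)
treatment_group = shuffled[:25]
control_group = shuffled[25:]

print(len(treatment_group), len(control_group))  # 25 25
```

Notice that the two steps are independent: a study can have random assignment without a random sample (common in lab experiments), or a random sample without random assignment (common in surveys).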
Correlational research: measuring relationships
A correlation measures how strongly two variables vary together.
Why it matters: correlation is widely used when experiments are unethical or impractical (for example, you can’t randomly assign children to “high trauma” vs “low trauma”). In biological psychology, you often correlate brain activity measures with behavior.
What correlation can’t do: prove causation. If variable A correlates with variable B, several explanations are possible:
- A causes B
- B causes A
- A third variable C causes both (a confounding variable)
Example: If sleep loss correlates with anxiety, sleep loss might increase anxiety, anxiety might reduce sleep, or stress might increase both.
Experiments: testing cause-and-effect
An experiment is a research method in which the researcher manipulates an independent variable (IV) and measures its effect on a dependent variable (DV), while controlling other factors.
Why it matters: experiments are the best method for establishing causation because they aim to isolate the IV as the only systematic difference between groups.
Core features of experiments
- Independent variable (IV): what is manipulated (for example, caffeine vs placebo).
- Dependent variable (DV): what is measured (for example, reaction time).
- Experimental group: receives the treatment.
- Control group: does not receive the treatment (or receives a placebo).
- Random assignment: participants are placed into groups by chance, reducing preexisting differences.
Random assignment matters because it makes the groups similar “on average.” Without it, differences in outcomes might reflect who ended up in each group rather than the IV.
Confounds and control
A confounding variable is any factor that varies with the IV and could also influence the DV. Confounds threaten internal validity (your ability to conclude the IV caused the change in DV).
Example: If one group drinks coffee in a quiet room and the other drinks decaf in a noisy room, noise becomes a confound.
Researchers control confounds by standardizing procedures, using random assignment, and sometimes using counterbalancing (varying the order of conditions to reduce order effects).
Placebo effect and blinding
- A placebo effect occurs when participants’ expectations produce changes in the DV.
- A single-blind study means participants do not know which condition they are in.
- A double-blind study means neither participants nor the researchers interacting with them know who is in which condition.
Double-blind procedures reduce experimenter bias, where researchers (often unintentionally) influence participants or interpret behaviors in line with expectations.
Quasi-experiments and ethics
Sometimes researchers cannot randomly assign participants, either because assignment is impossible or because it would be unethical (for example, comparing people with vs without brain injury). These quasi-experimental designs can be informative but are weaker for causal claims because preexisting group differences may explain the results.
Research in biological psychology: special tools
In the biological bases unit, you’ll see methods like EEG, fMRI, PET scans, lesion studies, and drug studies. The same logic applies:
- If you manipulate something (a drug dose) and randomly assign, you’re closer to causal inference.
- If you only measure brain activity and correlate it with behavior, you must avoid causal language (no “the brain area causes behavior” without experimental evidence).
Example: designing a clean experiment
Suppose you want to test whether acute stress impairs memory.
- Operationalize stress: a standardized stress task (like timed mental arithmetic with evaluative feedback) vs a non-stressful task.
- Randomly assign participants to stress vs control.
- Operationalize memory: number of words recalled from a list after 10 minutes.
- Control confounds: same room, same instructions, same timing, same word list difficulty.
- Consider blinding: the person scoring recall should not know condition.
A common mistake is to call any study with two groups an “experiment.” Without manipulation of an IV and random assignment, it isn’t a true experiment.
Exam Focus
- Typical question patterns
- Identify IV, DV, control group, and confounding variables in a described study.
- Decide whether a scenario is a case study, survey, naturalistic observation, correlational study, or experiment.
- Explain why correlation does not imply causation using the third-variable problem.
- Common mistakes
- Mixing up random sampling (population selection) with random assignment (group placement).
- Calling correlational findings “effects” or “impacts” (causal wording) without an experiment.
- Forgetting to operationally define variables (using vague terms like “intelligence” without a measure).
Statistics, Tests, and Ethics
Psychology uses statistics because raw observations are noisy. Two people can look at the same dataset and reach different conclusions unless they use shared tools for summarizing data and deciding whether an effect is likely real or just random variation.
Describing data: central tendency and variability
Descriptive statistics summarize what your sample looks like.
Measures of central tendency
Central tendency tells you what a “typical” score is.
- Mean: the arithmetic average.
- Median: the middle score when ordered.
- Mode: the most frequent score.
Formula for the mean (where \bar{x} is the sample mean, x_i are individual scores, and n is sample size):
\bar{x} = \frac{\sum x_i}{n}
Why it matters: the mean uses all scores, but it is sensitive to outliers. The median is resistant to outliers, which makes it useful for skewed distributions.
A skewed distribution has a long tail on one side:
- Positive skew: tail to the right (a few very high scores pull the mean up, so mean > median).
- Negative skew: tail to the left (a few very low scores pull the mean down, so mean < median).
Students often memorize this incorrectly; a reliable check is: “Which direction does the tail go?” That direction names the skew.
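The mean's sensitivity to outliers (and the mean > median signature of positive skew) is easy to verify. A minimal Python sketch with hypothetical quiz scores:

```python
from statistics import mean, median

# Hypothetical quiz scores with one extreme high outlier
# (a positively skewed pattern: long tail to the right)
scores = [70, 72, 74, 75, 76, 78, 300]

print(mean(scores))    # pulled far above the typical score by the outlier
print(median(scores))  # resistant to the outlier: middle score is 75
# Positive skew check: mean > median, because the tail drags the mean rightward.
```

This is why reports of skewed quantities (income is the classic example) usually cite the median rather than the mean.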
Measures of variability
Variability tells you how spread out scores are.
- Range: highest minus lowest.
- Standard deviation: a typical distance from the mean.
Standard deviation matters because two groups can have the same mean but very different spreads, which changes how you interpret “typical.” For example, two classes might average 80% on a test, but one class might be tightly clustered around 80 while the other has many very high and very low scores.
A common AP-level formula set includes:
s^2 = \frac{\sum (x_i - \bar{x})^2}{n - 1}
s = \sqrt{s^2}
Here, s^2 is the sample variance and s is the sample standard deviation.
You do not usually need to compute standard deviation by hand on the AP exam, but you do need to interpret what “larger” or “smaller” standard deviation implies.
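The same-mean, different-spread idea can be checked directly with the formulas above. A minimal Python sketch using hypothetical class scores:

```python
from math import sqrt

def sample_sd(scores):
    """Sample standard deviation: sqrt of squared deviations over n - 1."""
    n = len(scores)
    xbar = sum(scores) / n
    variance = sum((x - xbar) ** 2 for x in scores) / (n - 1)
    return sqrt(variance)

# Two hypothetical classes with the same mean (80) but different spreads
tight = [78, 79, 80, 81, 82]
spread = [60, 70, 80, 90, 100]

print(sample_sd(tight))   # small SD: scores cluster near the mean
print(sample_sd(spread))  # large SD: same mean, much wider spread
```

On the exam, that second comparison is the skill being tested: a mean of 80 describes both classes, but "typical" means something very different in each.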
Visualizing distributions
Graphs help you see patterns quickly.
- A histogram shows frequency of scores across intervals.
- A bar graph compares categories.
- A scatterplot shows the relationship between two quantitative variables.
The most important scatterplot skill is recognizing direction and strength:
- Positive correlation: as one variable increases, the other tends to increase.
- Negative correlation: as one increases, the other tends to decrease.
- No correlation: no consistent pattern.
Correlation coefficient and interpretation
The correlation coefficient r ranges from -1 to +1 and indicates the strength and direction of a linear relationship.
- r = 1: perfect positive linear relationship.
- r = -1: perfect negative linear relationship.
- r = 0: no linear relationship.
What can go wrong: a correlation near 0 might hide a non-linear relationship (for example, a U-shaped pattern). Also, strong correlations can still be non-causal.
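Both points (reading strength and direction, and the hidden non-linear case) can be illustrated with a small Pearson r calculation on hypothetical data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: more sleep, fewer errors -> strong negative r (near -1)
hours_slept = [4, 5, 6, 7, 8, 9]
errors_made = [12, 10, 9, 7, 5, 3]
print(pearson_r(hours_slept, errors_made))

# A U-shaped relationship can hide behind r near 0:
x = [-2, -1, 0, 1, 2]
y = [4, 1, 0, 1, 4]  # y = x^2: a perfect pattern, but not a linear one
print(pearson_r(x, y))  # 0.0
```

The second example is the trap: r = 0 here even though y is perfectly predictable from x, because r only measures linear association.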
Standard scores (z-scores) and comparison
A z-score tells you how many standard deviations a score is from the mean.
z = \frac{x - \mu}{\sigma}
Here, x is an individual score, \mu is the population mean, and \sigma is the population standard deviation.
Why it matters: z-scores let you compare scores across different scales (for example, comparing performance on two different tests).
Example: If a student’s score is one standard deviation above the mean, then z = 1 regardless of the test’s raw points.
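A minimal Python sketch of the cross-scale comparison, using two hypothetical tests with very different scoring scales:

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mu) / sigma

# Hypothetical tests on different scales:
# Test A: mean 500, SD 100.  Test B: mean 20, SD 5.
print(z_score(600, 500, 100))  # 1.0 -- one SD above the mean
print(z_score(25, 20, 5))      # 1.0 -- same relative standing
```

Even though 600 and 25 are wildly different raw numbers, both students stand exactly one standard deviation above their test's mean, so their relative performance is the same.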
Inferential statistics: deciding whether results “count”
Inferential statistics help you judge whether an observed difference or relationship is likely due to chance.
Statistical significance and p-values
A result is statistically significant if it would be unlikely to occur by random chance alone, assuming there is no real effect.
A p-value is the probability of observing results at least as extreme as the data, assuming the null hypothesis is true.
- A common threshold in psychology is p < 0.05: under the null, you would expect a result this extreme by chance fewer than 5 times out of 100.
What this does not mean (common misconception): p < 0.05 does not prove the hypothesis is true, and it does not mean there is a 95% chance the results will replicate. It only addresses how surprising the data are under the null model.
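One way to make the p-value definition concrete is a permutation test: shuffle the group labels many times and ask how often chance alone produces a difference as extreme as the one observed. A minimal Python sketch with hypothetical recall scores:

```python
import random

random.seed(1)

# Hypothetical recall scores from a two-group study
treatment = [14, 15, 16, 17, 18]
control = [10, 11, 12, 13, 14]

observed_diff = sum(treatment) / 5 - sum(control) / 5  # 4.0

# Under the null hypothesis, group labels don't matter, so we can
# estimate the p-value by reshuffling labels and counting how often
# a difference at least as extreme as 4.0 appears by chance.
combined = treatment + control
count_extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(combined)
    diff = sum(combined[:5]) / 5 - sum(combined[5:]) / 5
    if abs(diff) >= abs(observed_diff):
        count_extreme += 1

p_value = count_extreme / n_shuffles
print(p_value)  # small: a difference this large is rare under the null
```

Note what the loop is literally doing: it estimates how surprising the data are if the null were true, which is exactly what a p-value claims and nothing more.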
Type I and Type II errors
When making decisions from data, you can make two classic errors:
- Type I error: false positive (concluding there is an effect when there isn’t).
- Type II error: false negative (missing a real effect).
Lowering the significance threshold (for example, requiring smaller p-values) reduces Type I errors but can increase Type II errors. This tradeoff matters when considering consequences: in medical contexts, you may weigh these errors differently.
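The tradeoff can be demonstrated by simulation: run many imaginary studies, count false positives when the null is really true, and count misses when the effect is really there. A minimal Python sketch with simulated data (it uses a z-test with a known sigma purely to keep the math simple):

```python
import random
from math import erf, sqrt

random.seed(2)

def p_value_z(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for a z-test of the mean (sigma assumed known)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / sqrt(n))
    normal_cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - normal_cdf)

def rejection_rate(true_mean, alpha, n_studies=2000, n=25):
    """Fraction of simulated studies that reject the null at this alpha."""
    rejections = 0
    for _ in range(n_studies):
        sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
        if p_value_z(sample) < alpha:
            rejections += 1
    return rejections / n_studies

# Null is TRUE (mean = 0): every rejection is a Type I error.
print(rejection_rate(0.0, alpha=0.05))  # near 0.05
print(rejection_rate(0.0, alpha=0.01))  # near 0.01 -- stricter threshold helps

# Effect is REAL (mean = 0.4): every failure to reject is a Type II error.
print(1 - rejection_rate(0.4, alpha=0.05))  # miss rate at alpha = 0.05
print(1 - rejection_rate(0.4, alpha=0.01))  # higher miss rate at alpha = 0.01
```

The last two lines show the cost of strictness: demanding p < 0.01 instead of p < 0.05 cuts false positives but lets more real effects slip through undetected.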
Psychological testing: reliability and validity
When psychologists use tests (like intelligence tests, memory tasks, or personality inventories), they care about measurement quality.
- Reliability means consistency. A reliable test gives similar results under consistent conditions (for example, test-retest reliability over time).
- Validity means the test measures what it claims to measure.
Why it matters: A test can be reliable but not valid. For example, a scale that always reads 5 pounds too heavy is consistent (reliable) but inaccurate (not valid).
Common validity ideas you may see:
- Construct validity: does the measure actually capture the theoretical construct (like “working memory”)?
- Content validity: does the test cover the full range of the domain?
- Predictive validity: does it predict relevant future outcomes?
A frequent student error is to treat “valid” as meaning “good” in a general sense. In psychology, validity is a specific technical claim about what is being measured.
Ethics: protecting participants and ensuring responsible science
Ethics in psychological research exists because human participants are vulnerable to harm, coercion, or misuse of their data. Ethical rules also protect the integrity of science: if participants are mistreated, data quality and public trust suffer.
Core ethical principles in human research
AP Psychology commonly emphasizes these safeguards:
- Informed consent: participants should understand the study’s purpose, procedures, risks, and their right to refuse or withdraw (within practical limits). For some designs involving deception, full details cannot be shared upfront, but consent still requires a reasonable understanding of what participation involves.
- Protection from harm: researchers should minimize risk and avoid causing significant physical or psychological harm.
- Confidentiality: personal data should be protected; identities should not be revealed without permission.
- Debriefing: after participation, researchers explain the study’s true purpose, especially if deception was used, and address questions or distress.
Deception: when (and how) it can be used
Deception may be used when it is necessary to prevent demand characteristics (participants changing behavior because they know the hypothesis) and when risks are minimal.
What can go wrong: students sometimes assume deception is automatically unethical. In reality, it depends on whether it is justified, minimized, approved by oversight, and followed by thorough debriefing.
Institutional oversight
Studies involving humans are typically reviewed by an Institutional Review Board (IRB), which evaluates risks, consent procedures, and protections.
Animal research ethics
Animal research can be important in biological psychology because some questions require invasive procedures that cannot be done ethically with humans.
Ethical expectations include:
- Clear scientific purpose
- Humane housing and care
- Minimizing pain and distress
A common misconception is that animal research is “unregulated.” In practice, ethical and legal standards require justification and welfare protections, though the details vary by context.
Example: spotting ethical issues in a study
Imagine a study that tests the effect of sleep deprivation on attention by keeping participants awake for 36 hours, without telling them the duration beforehand, and then publishing their names with performance scores.
Problems you should notice:
- Lack of adequate informed consent (withholding major risk/procedure details)
- Risk of harm (extended deprivation may be unsafe)
- Breach of confidentiality (publishing names)
A strong response would also propose fixes: informed consent with clear risks, medical screening, right to withdraw, monitoring, and anonymized data.
Exam Focus
- Typical question patterns
- Interpret a research result using mean/median/mode, standard deviation, or a skewed distribution.
- Explain what a p-value or “statistically significant” result implies (and what it does not imply).
- Identify ethical violations and propose appropriate safeguards (consent, debriefing, confidentiality).
- Common mistakes
- Saying “significant” means “important” rather than “unlikely due to chance.”
- Claiming p < 0.05 proves the hypothesis or guarantees replication.
- Confusing reliability with validity, or assuming reliability automatically implies validity.