ACT Science: Scientific Investigation (Research Summaries) — How to Reason Through Experiments
Understanding Experimental Tools and Procedures
Scientific Investigation questions on the ACT often come from Research Summaries passages—short descriptions of one or more experiments plus figures (tables/graphs). Your job rarely depends on outside science knowledge; it's to understand what the researchers did and what the tools and procedures mean.
What experimental tools and procedures are
Experimental tools are the instruments and materials used to measure or change something in a study (thermometers, balances, pH meters, voltmeters, spectrophotometers, graduated cylinders, microscopes, timers). Procedures are the step-by-step actions that define how the experiment was run (how samples were prepared, what was held constant, what was changed, what was measured, and when).
On the ACT, procedures are usually simplified, but they still define the logic of the experiment: if you misread the steps, you’ll misinterpret every graph and conclusion.
Why tools and procedures matter
Tools and procedures determine:
- What is actually being measured (temperature of the water vs temperature of the air; mass of a beaker vs mass of the sample)
- How trustworthy the measurements are (precision, resolution, and possible systematic errors)
- Whether results can be compared across trials or across experiments (same units, same timing, same sample preparation)
A common ACT trap is offering two answer choices that both sound scientifically "reasonable" when only one matches the specific procedure described.
How to read procedures like a scientist
When you read a methods paragraph, train yourself to extract a small set of anchors:
- Samples/groups: What is being tested (plants, metal rods, solutions, patients, model cars)? How many groups?
- Manipulation: What do they intentionally change between groups?
- Measurement: What do they record, with what tool, and in what units?
- Timing: When do they measure (immediately, every 5 minutes, after 24 hours)?
- Repetition: How many trials or replicates? Are results averaged?
If you can point to those five items, most questions become “lookup + logic.”
Measurement basics you’re expected to use
You don’t need advanced chemistry/physics, but you do need comfort with measurement ideas.
Accuracy vs. precision
- Accuracy: how close a measurement is to the true value.
- Precision: how consistent repeated measurements are with each other.
A tool can be precise but inaccurate (consistently off by the same amount), which matters when you’re comparing groups.
Resolution and significant digits (conceptually)
The resolution of an instrument is the smallest change it can reliably detect (a ruler marked in millimeters vs centimeters). ACT questions may hint at this by the scale on a graph axis or the labeled increments.
If a balance reads to 0.01 g, it supports finer comparisons than one that reads to 1 g. When interpreting results, don’t claim an effect smaller than the tool could detect.
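The resolution idea above can be sketched in a few lines of code; the readings and resolutions below are invented for illustration, not from any passage:

```python
# Illustrative sketch: only claim a difference when it exceeds the
# smallest change the instrument can reliably detect (its resolution).

def differs_beyond_resolution(a, b, resolution):
    """Return True only if two readings differ by more than the resolution."""
    return abs(a - b) > resolution

# A balance that reads to 0.01 g can support this comparison:
print(differs_beyond_resolution(5.03, 5.01, 0.01))  # True: 0.02 g > 0.01 g

# A balance that reads to 1 g cannot distinguish these readings:
print(differs_beyond_resolution(5.0, 5.4, 1.0))     # False: within resolution
```

The same logic applies to graph axes: if the axis increments are coarse, small apparent differences between bars or points may not be meaningful.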
Common lab tools and what they measure
| Tool | What it measures | Common ACT use |
|---|---|---|
| Graduated cylinder | Volume of liquid | Comparing volumes; computing density when paired with mass |
| Balance | Mass | Mass changes; density |
| Thermometer/temperature probe | Temperature | Reaction rate vs temperature; heating/cooling curves |
| Stopwatch/timer | Time | Rates; time to completion |
| pH paper/pH meter | Acidity/basicity | Enzyme activity; solubility; reaction conditions |
| Light sensor/spectrophotometer | Light intensity/absorbance | Concentration proxies; photosynthesis-type setups |
| Voltmeter/ammeter | Voltage/current | Circuits; electrical relationships |
You don’t need to know the internal physics of these devices; you need to match the device to the measured quantity.
“Show it in action” example
Scenario: A procedure states: “Each solution was placed in a water bath at 30°C for 10 minutes. Then absorbance was recorded using a colorimeter.”
What you should infer:
- The water bath is controlling temperature.
- The 10 minutes is a standardized waiting period (helps fair comparison).
- Absorbance is the measured outcome (dependent measure), and the tool is the colorimeter.
If a question asks why the water bath was used, the best answer is about controlling temperature—not “to heat the solution” unless the procedure explicitly says temperature was changed.
What goes wrong (common misunderstandings)
Students often:
- Confuse the setup (water bath, incubator) with the measurement tool (colorimeter, balance).
- Assume a tool is used to change something when it’s actually used to keep something constant (a water bath can maintain temperature, not just increase it).
- Ignore timing details, even though “measured after 1 minute” vs “after 10 minutes” can completely change a trend.
Exam Focus
- Typical question patterns:
  - “Which instrument would be most appropriate to measure ___?” based on units or context.
  - “According to the procedure, what happened immediately before/after ___?” (sequence questions).
  - “Why was step X included?” (purpose of a procedural step).
- Common mistakes:
  - Answering based on real-world intuition instead of the stated steps—always anchor to the text.
  - Missing that a device is used to control conditions (constant temperature/light) rather than measure outcomes.
  - Over-interpreting precision (claiming tiny differences matter when the scale is coarse).
Identifying Controls and Variables
Nearly every Scientific Investigation question becomes easier once you correctly label variables. Think of an experiment as a “cause-and-effect test”: researchers change a cause on purpose and watch what effect follows.
What variables and controls are
A variable is any factor that can change in an experiment.
- The independent variable is what the experimenter changes on purpose (the input, treatment, or condition).
- The dependent variable is what the experimenter measures as the outcome (the response).
- Controlled variables (also called constants) are factors kept the same across conditions so they don’t confound the results.
A control can mean two related ideas:
- A control condition/group: a baseline used for comparison (often “no treatment” or “standard condition”).
- The act of controlling variables: keeping other factors constant.
ACT questions may use “control” in either sense, so always read the phrasing carefully.
Why this matters
If you mix up independent and dependent variables, you’ll misread graphs (especially which axis should show which variable). If you ignore controlled variables, you may accept a conclusion that isn’t actually supported.
A strong experiment aims to change one main factor at a time while holding others constant. That’s what makes “cause and effect” reasoning believable.
How to identify variables step by step
- Find what differs between experimental groups (different temperatures, different concentrations, different materials). That is usually the independent variable.
- Find what is recorded in a table/graph (growth, time, absorbance, voltage). That is usually the dependent variable.
- Scan for phrases like “kept constant,” “all samples were,” “same volume,” “same mass,” “maintained at.” Those are controlled variables.
A fast check: if you can phrase it as “Does changing ____ cause a change in ____?” the first blank is independent and the second is dependent.
Control groups vs. “normal” conditions
A control group is not automatically “nothing happened.” It’s the condition designed to represent a baseline. Examples:
- If testing a drug, the control might receive a placebo.
- If testing fertilizer, the control might be soil with no fertilizer.
- If testing light color on plant growth, the control might be white light (a standard) or darkness, depending on the question.
The key is comparison: the control group gives meaning to “increase” or “decrease.”
Confounding variables (the hidden problem)
A confounding variable is an extra factor that changes along with the independent variable, making it unclear which factor caused the observed effect.
Example: Suppose Experiment 1 uses 10 mL of acid at 20°C, and Experiment 2 uses 20 mL of acid at 40°C. If the reaction rate differs, you can’t tell whether volume or temperature caused the change.
On the ACT, confounds often show up when:
- multiple factors change between setups,
- groups are treated differently in a way unrelated to the intended variable,
- or measurement timing differs.
“Show it in action” example
Scenario: Students test how salt concentration affects the time for ice to melt.
- They prepare solutions of 0%, 5%, 10% salt.
- They place identical ice cubes into each solution.
- They record time to fully melt.
Identify:
- Independent variable: salt concentration.
- Dependent variable: time to melt.
- Controlled variables: cube size/mass, starting temperature, volume of solution, container type, and measurement method.
- A good control condition: 0% salt (pure water).
What goes wrong (common misunderstandings)
- Calling the largest group the “control” because it sounds important—control is defined by purpose, not size.
- Thinking the dependent variable is “the thing on the y-axis” without checking the graph—most of the time it is, but the safest approach is to read axis labels.
- Treating “controlled variables” as “the control group”—they’re related but not identical.
Exam Focus
- Typical question patterns:
  - “What is the independent/dependent variable in Experiment 2?”
  - “Which factor was held constant?” (often lifted directly from the procedure).
  - “Which setup serves as the control?”
- Common mistakes:
  - Swapping independent and dependent variables—use the “change vs measure” test.
  - Choosing a factor that changes incidentally (like time) as the independent variable when it’s actually just part of the procedure.
  - Missing confounds when two features change at once.
Analyzing Experimental Design
Analyzing design means judging whether the experiment’s structure can actually answer the question it claims to answer. On the ACT, you’re not asked to write a full critique like a research scientist, but you are expected to notice major design features: fairness, replication, and whether conclusions match the data.
What “experimental design” includes
An experiment’s design is the planned structure connecting:
- the research question (what they want to find out),
- the hypothesis (what they predict),
- the independent variable (what they manipulate),
- the dependent variable (what they measure),
- the controls (baseline and constants),
- and the data collection plan (timing, number of trials, how results are summarized).
A well-designed experiment isolates the effect of the independent variable while minimizing noise from other factors.
Why design analysis matters on ACT Science
Many questions are not “compute” questions; they are “Does this conclusion follow?” questions.
You might be asked:
- whether a result supports a claim,
- how a study should be modified to test a new idea,
- or why a certain step was needed.
These all depend on understanding what makes evidence strong.
Key design features the ACT tends to test
Replication and sample size
Replication means repeating measurements or trials. More trials reduce the chance that an unusual outcome (random variation) misleads you.
If a table shows “Trial 1, Trial 2, Trial 3,” the ACT may ask you to average them or identify which trial is an outlier. Sometimes the passage will report “mean values,” which implies replication happened.
Random error vs. systematic error
- Random error varies unpredictably (small differences in timing, tiny measurement fluctuations). Replication helps reveal and reduce its influence.
- Systematic error pushes results consistently in one direction (a miscalibrated balance, consistently reading 0.5 g too high). Replication does not fix systematic error; calibration or redesign does.
You usually infer these from descriptions like “the scale was not zeroed” (systematic) or “measurements varied between trials” (random).
Fair tests: changing one factor at a time
A “fair test” keeps all variables constant except the independent variable. This is why procedures repeat phrases like “same volume,” “identical containers,” “maintained at 25°C.”
If an experiment changes two things at once, the correct ACT conclusion is often limited: “The difference could be due to either factor.”
Validity: does it measure what it claims?
Validity is about whether the measurement actually represents the concept.
Example: If an experiment claims to measure “bacterial growth” but only measures cloudiness, that can be valid if cloudiness is a known proxy in that setup. On the ACT, validity issues are usually simpler: wrong instrument, wrong units, or measuring at the wrong time.
Interpreting design through graphs and tables
Many design clues are embedded in figures.
- If the x-axis lists categories (A, B, C) or conditions (0%, 5%, 10%), those are levels of the independent variable.
- If separate lines are labeled “Trial 1” and “Trial 2,” the design includes replication.
- If error bars appear (not always), they represent variability; larger bars suggest more spread.
“Show it in action” worked example: spotting a design flaw
Scenario: Researchers test whether fertilizer increases plant height.
- Group 1 (control): plants receive 100 mL water daily.
- Group 2: plants receive 100 mL fertilizer solution daily.
- Plant height is measured weekly.
Now suppose the procedure also says Group 2 plants are kept in a sunny window, while Group 1 plants are kept under a shelf.
Step-by-step reasoning:
- Intended independent variable: fertilizer vs no fertilizer.
- Dependent variable: height.
- Confound: light exposure differs between groups.
- Conclusion: If Group 2 grows taller, you cannot attribute the difference only to fertilizer.
A better design would keep light conditions identical and only vary fertilizer.
Useful calculations tied to design questions
Sometimes you’re asked to quantify change or error.
Percent change is common when comparing before/after or control/treatment:
\text{percent change} = \frac{\text{new} - \text{old}}{\text{old}} \times 100\%
- “Old” is the baseline (often the control or initial value).
- A positive result means an increase; negative means a decrease.
Percent error can appear when comparing a measured value to an accepted value:
\text{percent error} = \left|\frac{\text{measured} - \text{accepted}}{\text{accepted}}\right| \times 100\%
On ACT Science, you usually won’t need to memorize specialized formulas beyond these; the key is careful substitution and interpreting what the result means.
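As a quick sketch, both formulas come down to careful substitution; the sample values below are invented for illustration:

```python
# Sketch of the two formulas above with hypothetical numbers.

def percent_change(old, new):
    """Percent change relative to the baseline (old) value."""
    return (new - old) / old * 100

def percent_error(measured, accepted):
    """Absolute percent error relative to the accepted value."""
    return abs(measured - accepted) / accepted * 100

# A control plant grew 4.0 cm; a treated plant grew 5.0 cm:
print(percent_change(4.0, 5.0))               # 25.0, a 25% increase

# A measured density of 0.95 g/mL vs an accepted 1.00 g/mL:
print(round(percent_error(0.95, 1.00), 2))    # 5.0, a 5% error
```

Note that the baseline goes in the denominator in both formulas; putting the new or measured value there is the classic substitution mistake.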
What goes wrong (common misunderstandings)
- Assuming correlation proves causation: if two variables change together in a graph, that does not automatically mean one caused the other unless the design actually manipulated one variable while controlling others.
- Overgeneralizing from a narrow design: if an experiment tested one species, one temperature range, or one concentration range, conclusions should stay within that scope.
- Ignoring “operational definitions”: if the passage defines “reaction rate” as “volume of gas produced in 30 s,” that is the rate measure for this problem, even if you know other definitions.
Exam Focus
- Typical question patterns:
  - “Which change would improve the design?” (remove a confound, add a control, increase trials).
  - “Is the conclusion supported by the data?” (must match trends and scope).
  - “Which variable was not controlled?” (often implied rather than stated).
- Common mistakes:
  - Picking an improvement that changes the research question (adding a new variable rather than controlling one).
  - Treating a single trial as definitive—ACT often rewards skepticism when replication is missing.
  - Extending conclusions beyond the tested range (especially with extrapolation).
Comparing and Extending Experiments
Many Research Summaries passages contain Experiment 1, Experiment 2, Experiment 3. Your job is to compare them: what stayed the same, what changed, and how those differences explain different results.
What it means to compare and extend experiments
To compare experiments is to identify similarities and differences in:
- independent variables (what they changed),
- dependent variables (what they measured),
- controls/constants (what they held fixed),
- and procedures (timing, instruments, sample prep).
To extend an experiment is to propose a logical next step—often a new condition, a wider range, or a different measurement—that still follows the experimental logic.
Why it matters on the ACT
Comparison is a high-yield reasoning skill because it doesn’t require outside knowledge. The test can ask you about Experiment 2 using information from Experiment 1, or it can ask what you’d expect if Experiment 1 were modified to match Experiment 3.
These questions reward careful reading: the most important differences are often small (a different temperature, a different wavelength of light, a different starting concentration, or a different measurement time).
How to compare experiments systematically
When two experiments look similar, don’t rely on memory—build a quick mental “compare grid”:
- Same dependent variable?
- Same organism/material?
- Same units and measurement tool?
- Same range and step size for the independent variable?
- Same control group?
Even one mismatch can explain why results cannot be directly combined.
Extending experiments without breaking them
A good extension:
- changes one element at a time,
- keeps measurement consistent,
- and tests a clear, reasoned prediction.
Common extension types:
- Add levels of the independent variable (test additional concentrations).
- Expand the range (go beyond 10°C to include 5°C and 15°C).
- Test robustness (repeat with a different material/species while keeping the method the same).
- Add a control (include a baseline condition missing before).
- Change the dependent variable only if you are intentionally measuring a different outcome (and then you must reinterpret conclusions).
“Show it in action” example: comparing two designs
Scenario:
- Experiment 1 measures how temperature affects enzyme activity by recording product formed in 1 minute.
- Experiment 2 measures how pH affects enzyme activity by recording product formed in 5 minutes.
Comparison:
- Same general dependent concept (enzyme activity), but operational definitions differ (1 minute vs 5 minutes).
- Independent variables differ (temperature vs pH).
What you can and cannot conclude:
- You can compare trends within each experiment.
- You should be cautious about comparing absolute values across experiments because measurement times differ.
A strong extension would be to repeat Experiment 2 with a 1-minute measurement window to match Experiment 1 if the goal is direct comparison.
Interpreting results when experiments disagree
Sometimes different experiments produce different trends. Before concluding “the data contradict,” check:
- Were starting conditions the same?
- Were units the same?
- Was the dependent variable measured the same way?
- Did they test the same range?
Disagreement often disappears once you notice that experiments tested different ranges. For example, one experiment might show a variable increasing from 0 to 10, while another shows it decreasing from 10 to 20. Both can be true if the real relationship has a peak.
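This "peak" reconciliation can be demonstrated with a made-up peaked relationship; the function below is an assumption chosen only to show how both trends can be true at once:

```python
# Hedged illustration: a hypothetical relationship that peaks at x = 10.

def response(x):
    """An invented peaked relationship with its maximum at x = 10."""
    return 100 - (x - 10) ** 2

low_range  = [response(x) for x in range(0, 11)]    # x = 0..10
high_range = [response(x) for x in range(10, 21)]   # x = 10..20

# "Experiment A" (testing 0 to 10) sees a purely increasing trend:
print(low_range == sorted(low_range))                   # True

# "Experiment B" (testing 10 to 20) sees a purely decreasing trend:
print(high_range == sorted(high_range, reverse=True))   # True
```

Neither experiment is wrong; each sampled one side of the peak. That is why checking the tested ranges comes before declaring a contradiction.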
What goes wrong (common misunderstandings)
- Treating “Experiment 1 vs Experiment 2” as if they differ in only one way—ACT writers often include two differences, and you must decide which difference matters for the question asked.
- Assuming you can combine data sets that used different measurement methods.
- Proposing an extension that adds multiple changes at once, making outcomes ambiguous.
Exam Focus
- Typical question patterns:
  - “How did Experiment 2 differ from Experiment 1?” (look for a single procedural change).
  - “Which experiment tested ___?” (match variable and measurement).
  - “If Experiment 1 were repeated at ___, what would you predict?” (extension + trend).
- Common mistakes:
  - Missing subtle differences like timing, sample size, or units.
  - Confusing “different independent variable” with “different levels of the same independent variable.”
  - Ignoring the control condition when comparing outcomes.
Predicting Results of Additional Trials
Prediction is where the ACT checks whether you can read patterns, respect the limits of data, and think statistically in a simple, practical way.
What predicting additional trials means
To predict results means using the relationship shown in existing data to estimate what would happen:
- in another trial under the same conditions,
- at a new value of the independent variable,
- or after more repetitions (more data points).
This is not guessing; it’s evidence-based extrapolation or interpolation, always constrained by the experiment’s design.
Why it matters
ACT Science often asks: “What would most likely happen if…” These questions test whether you:
- understood the trend,
- noticed variability,
- and recognized whether the prediction is within the tested range.
How to make solid predictions from graphs and tables
Step 1: Identify the trend type
Common patterns include:
- Increasing (as one variable increases, the other increases)
- Decreasing
- Plateau (increases then levels off)
- Peak/optimum (increases then decreases)
- No relationship (values scattered with no pattern)
Your prediction should match the pattern. If the graph plateaus, you should not predict continued steep increases.
Step 2: Decide whether it’s interpolation or extrapolation
- Interpolation: predicting within the range of the data (generally safer).
- Extrapolation: predicting beyond the range (riskier; ACT often expects “cannot be determined” if the pattern might change).
If the data only cover 0–10 units, a prediction at 100 units is usually not justified unless the passage explicitly suggests the trend continues.
Step 3: Account for variability and replication
If multiple trials show slightly different results, the best prediction for a new trial is usually “near the average of previous trials,” not exactly equal to one trial.
The mean (average) of measurements x_1, x_2, \dots, x_n is:
\bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n}
On the ACT, you might not be asked to compute the mean with many points, but you should understand the logic: replication makes you expect a cluster of values, not a single perfect repeat.
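As a minimal sketch of that logic (the trial values are hypothetical):

```python
# Averaging replicate trials; the measurements are invented for illustration.

def mean(values):
    """Arithmetic mean of a list of measurements."""
    return sum(values) / len(values)

trials = [4.1, 3.9, 4.0]  # three replicate measurements of the same condition
print(round(mean(trials), 2))  # 4.0, the best single-number prediction for a new trial
```

Notice that no individual trial equals the mean exactly; a new trial is expected to land near 4.0, not to repeat any one prior value.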
Predicting the effect of “more trials”
A subtle but common idea: doing more trials usually changes your confidence, not the underlying effect.
- If the effect is real, more trials tend to make the average more stable.
- If results vary widely, more trials help reveal that the relationship may be weak or that measurement noise is high.
ACT questions may phrase this as: “If the experiment were repeated many more times, what would most likely happen to the average?” The best reasoning is that the average would likely be similar, but the estimate would be more reliable.
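This idea can be simulated; the noise model and values below are assumptions, chosen only to illustrate that averages stabilize as trials accumulate:

```python
import random

# Hedged simulation: noisy measurements scattered around a true value.
# More trials do not change the underlying value; they steady the average.

random.seed(0)
TRUE_VALUE = 4.0

def run_trials(n):
    """Average of n noisy measurements around the true value."""
    return sum(TRUE_VALUE + random.uniform(-0.5, 0.5) for _ in range(n)) / n

print(round(run_trials(3), 2))     # a few trials: the average can wander noticeably
print(round(run_trials(1000), 2))  # many trials: the average sits close to 4.0
```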
“Show it in action” worked example: predicting a new data point
Scenario: A table shows plant growth (cm) after 7 days at different light intensities.
| Light intensity (units) | Growth (cm) |
|---|---|
| 10 | 2.0 |
| 20 | 3.1 |
| 30 | 4.0 |
| 40 | 4.2 |
Prediction for 35 units:
- This is interpolation between 30 and 40.
- Growth increases from 4.0 to 4.2 across that interval, a small change.
- A reasonable prediction is slightly above 4.0, around 4.1 cm.
Prediction for 80 units:
- This is extrapolation.
- The pattern shows a plateau (growth barely increases from 30 to 40).
- A cautious prediction would be that growth might remain near the plateau, but the most defensible ACT-style answer is often that the exact growth at 80 units cannot be determined from the data alone.
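The interpolation step above can be sketched directly from the table's values; the linear model between neighboring points is an assumption, reasonable only because the interval is small:

```python
# Linear interpolation over the (light intensity, growth) table above.

data = {10: 2.0, 20: 3.1, 30: 4.0, 40: 4.2}

def interpolate(x, x0, x1):
    """Linearly interpolate growth at x between two measured intensities."""
    y0, y1 = data[x0], data[x1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# 35 units lies between the measured points 30 and 40, so this is interpolation:
print(round(interpolate(35, 30, 40), 2))  # 4.1

# 80 units lies outside the tested range (10 to 40), so any prediction there
# is extrapolation; "cannot be determined" is the safer ACT-style answer:
print(80 in range(10, 41))  # False
```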
Predicting results when the relationship is non-linear
If the graph has a peak (optimum), predictions depend strongly on where you are:
- If you are left of the peak, increasing the independent variable may increase the dependent variable.
- If you are right of the peak, increasing it further may decrease the dependent variable.
Many students mistakenly extend the initial upward trend even after the graph has turned downward. Always look at the whole curve.
What goes wrong (common misunderstandings)
- Predicting beyond the data range as if it’s guaranteed—ACT often rewards cautious reasoning.
- Ignoring trial-to-trial variation and choosing an extreme value rather than a central tendency.
- Reading the wrong axis or mixing up which line corresponds to which condition.
Exam Focus
- Typical question patterns:
  - “Based on Figure 2, what would most likely be observed at ___?” (interpolation).
  - “If an additional trial were run under the same conditions, which result is most likely?” (choose near the average).
  - “What would happen if the independent variable were increased beyond the tested range?” (often “cannot be determined” unless a clear continuation is justified).
- Common mistakes:
  - Extending a trend linearly when the data show curvature or plateau.
  - Forgetting which condition a line/symbol represents in multi-line graphs.
  - Treating a single data point as a rule rather than part of a pattern.