AP CSP Big Idea 1 (Creative Development): Artifacts, Process, Requirements, Collaboration, Debugging, Documentation, and IP
What Makes a Good Computational Artifact?
A computational artifact is anything created with a computer (or with computing tools) that communicates something to an audience. In AP CSP, the term is intentionally broad: a program you wrote is a computational artifact, but so is a video, an animation, an infographic, a simulation, a visualization, a website, or an interactive story—so long as computing played an essential role in creating it.
Artifact vs. Program (and why AP CSP cares)
A program is a set of instructions that a computer can execute. A computational artifact might be a program, but it could also be media that was edited or generated using computing tools. AP CSP emphasizes artifacts because computing is not only about “making code run.” It is also about communicating ideas, solving problems for people, and expressing creativity through design, style, and interaction.
Purpose and audience: the two questions you must answer
Every strong artifact has a clear purpose and a specific audience.
Purpose is the goal: what you are trying to accomplish (for example, teach users about recycling, help someone practice vocabulary, or let users explore a data story). Audience is the intended user or viewer (for example, 3rd graders, busy commuters, people new to budgeting, or players who like puzzle games).
Purpose and audience drive nearly every decision you make: tone and visuals (playful vs. professional), complexity (simple controls vs. feature-rich menus), accessibility (color contrast, text size, captions), and platform (phone, web, desktop). A common misconception is thinking “the audience is everyone.” In practice, designing for “everyone” usually means you design well for no one because you cannot make good tradeoffs without a target.
Criteria and constraints: what “good” means for your project
When you build an artifact, you evaluate it using criteria and constraints.
Criteria are the measures of success (what the artifact should do well), such as “Users can create an account in under 30 seconds.” Constraints are limitations you must work within, such as “Must run in a browser,” “Must be finished in two weeks,” or “Only free assets allowed.” A helpful way to remember the difference is that criteria are the destination and constraints are the road conditions—both shape the design.
Example: defining purpose, audience, criteria, and constraints
Imagine you want to create a small program that helps students manage homework.
- Purpose: Help students track assignments and due dates.
- Audience: High school students who use phones.
- Criteria:
- Add an assignment in 3 taps or fewer.
- Sort by due date automatically.
- Remind users the day before a due date.
- Constraints:
- Must work offline.
- Must be usable with one hand.
- Must be completed by the end of the month.
Notice how those choices affect development: “offline” pushes you toward local storage; “one hand” affects button placement and screen layout; “3 taps or fewer” affects interface structure.
Exam Focus
- Typical question patterns:
- Identify the purpose and intended audience of a described artifact.
- Distinguish criteria vs. constraints in a project scenario.
- Choose design decisions that best match a given audience.
- Common mistakes:
- Describing features instead of purpose (purpose is the “why,” not the “what”).
- Calling time/budget/tool limits “criteria” when they’re constraints.
- Claiming the audience is “everyone,” which avoids the actual design reasoning.
The Creative Development Process (From Idea to Iteration)
Creative development is the process of turning an idea into a working artifact through planning, building, feedback, and revision. In AP CSP, you’re expected to understand that high-quality computing work is rarely produced in a single try. Instead, it evolves.
Why iterative development is the default in computing
Iterative development means you build something, test it (with yourself and others), learn from what happens, and improve it—repeating this cycle. Iteration matters because you cannot fully predict how users will react until they try it, bugs and design flaws often appear only when a real version exists, and requirements change as new ideas, constraints, or user needs emerge.
Stages you’ll commonly see (and what they accomplish)
Different teams name the stages differently, but most creative development includes:
- Ideation: generate ideas and choose a direction.
- Planning: decide what to build, for whom, and how you’ll measure success.
- Prototyping: make an early, simplified version (sometimes not fully functional).
- Implementation: build the artifact/program.
- Testing and feedback: evaluate against criteria; observe users.
- Refinement: revise based on evidence.
The key is that this is not a one-way pipeline. Iteration means you might prototype, discover a problem, and return to planning; or implement, test, and then redesign.
Prototypes: learning faster by building less
A prototype is a preliminary version of your artifact that you create to answer questions quickly. Prototypes reduce risk: if you are unsure whether users will understand your interface, it’s cheaper to test a prototype than to build a full product and then discover it’s confusing.
Prototypes can be low-fidelity (paper sketches, slide mockups, clickable wireframes) or high-fidelity (a partial but realistic version). A common mistake is thinking a prototype must be a “mini final product.” In reality, prototypes are allowed to be incomplete—they exist to learn.
Incorporating feedback: not all feedback is equally useful
Feedback is information from users, teammates, or stakeholders that helps you improve your artifact. The key skill is not just collecting feedback, but interpreting it. Look for patterns (if 5 out of 6 users struggle at the same step, that’s a real issue), separate symptoms from causes (“This is confusing” is a symptom; you must find why), and respect constraints (if feedback asks for an impossible feature, you may need an alternative).
Example: iteration in a simple interactive story
Suppose you create an interactive story where users choose paths.
- First version: Users click “Option A” or “Option B,” but they don’t understand the consequences.
- Feedback: “I didn’t know what I was choosing.”
- Revision: Add preview text like “Sneak into the castle (risky)” vs. “Wait until dawn (safer).”
- Second feedback: “Too much text on the screen.”
- Revision: Move detail text to a “More info” button.
This is iterative development: each change is driven by evidence.
Exam Focus
- Typical question patterns:
- Explain why iteration improves an artifact’s quality.
- Identify which stage (planning, prototyping, testing) a scenario describes.
- Choose the best next step given user feedback.
- Common mistakes:
- Treating development as linear (plan once, build once, done).
- Equating “more features” with “better,” instead of meeting criteria for the audience.
- Ignoring feedback because it conflicts with the creator’s preferences.
Requirements, Design Decisions, and Tradeoffs
When you build a program or artifact, you constantly make decisions: which features to include, what to simplify, and what to postpone. Good developers make these decisions based on requirements and tradeoffs, not guesses.
Requirements: what the artifact must do
A requirement is a specific statement about what your artifact should do or be. Requirements connect purpose to implementation. Good requirements are specific, testable, and relevant to the purpose and audience.
Vague requirements like “make it user-friendly” are not useless, but they’re incomplete. You usually need to translate them into testable expectations (for example, “New users can complete task X without instructions”).
User stories: a simple format that clarifies audience and purpose
A helpful way to write requirements is a user story format:
- “As a [type of user], I want [goal], so that [reason].”
Example: “As a student who forgets deadlines, I want a reminder the day before, so that I can plan my evening.”
User stories keep you honest about who you’re designing for. If you can’t fill in the blanks, you may not understand your audience yet.
Constraints create tradeoffs (and tradeoffs are unavoidable)
A tradeoff is a compromise where improving one aspect of a design often reduces another. Constraints make tradeoffs unavoidable. Common tradeoffs in computing projects include speed vs. accuracy, simplicity vs. features, customization vs. consistency, and security/privacy vs. convenience.
A common misconception is that tradeoffs mean you “did something wrong.” In reality, tradeoffs mean you’re making informed decisions within constraints.
Design decisions should be justified
In AP CSP, you’re often asked to explain or justify choices. A strong justification links (1) audience need, (2) criteria/constraints, and (3) your decision.
Example: “Because the audience uses phones and may be in a hurry (audience), and because one criterion is adding an assignment quickly (criteria), I used a single-screen form with only two required fields (decision).”
Example: turning a broad goal into requirements
Goal: “Help users learn vocabulary.”
Possible requirements:
- The app shows 10 words per session.
- Each word includes a definition and an example sentence.
- The user can self-report whether they knew the word.
- The app repeats missed words more frequently than mastered words.
These are testable; you can verify each one.
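The "repeats missed words more frequently" requirement can be made concrete with a small sketch. This is one illustrative approach (the `next_word` function, the triple weighting, and the word lists are assumptions for this example, not a real app's design):

```python
import random

def next_word(words, missed):
    """Pick the next word to show, weighting missed words more heavily.

    words: all vocabulary words; missed: words the user self-reported
    not knowing. The 3-to-1 weighting is an illustrative choice.
    """
    weights = [3 if w in missed else 1 for w in words]
    return random.choices(words, weights=weights, k=1)[0]

words = ["ubiquitous", "ephemeral", "candor"]
missed = {"ephemeral"}
# Over many sessions, "ephemeral" appears about three times as often
# as each mastered word, satisfying the repetition requirement.
```

Because the requirement is testable, you can verify it by drawing many words and checking that missed words actually come up more often.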
Exam Focus
- Typical question patterns:
- Identify whether a statement is a requirement, constraint, or vague goal.
- Explain a tradeoff and justify why a decision fits the audience.
- Determine which requirement best supports the stated purpose.
- Common mistakes:
- Writing requirements that are not testable (“make it cool”).
- Adding requirements that don’t connect to the purpose (feature creep).
- Explaining a choice without referencing audience/criteria (no justification chain).
Collaboration: Building Better Artifacts With Other People
Programming is a collaborative and creative process that brings ideas to life through the development of software. Many software development processes used in industry require people to work together in teams, and AP CSP expects you to recognize that collaboration can happen throughout the lifecycle rather than only at the end.
One important concept connected to collaboration (and discussed across the course) is the idea of computing innovations. A computing innovation uses a computer program to take in data, transform data, and output data. Because innovations often combine many parts (data sources, algorithms, user interaction, testing, communication), teamwork and coordination matter.
Why collaboration improves results (when done well)
Collaboration can combine different strengths (one person designs UI, another writes code, another tests), increase creativity through multiple perspectives, reduce errors through peer review, and improve inclusivity by making it more likely that different users’ needs are considered. Collaboration can occur in the planning, designing, or testing/debugging parts of the development process, and strong teams intentionally decide when to collaborate versus when to divide tasks.
Collaboration tools and collaborative learning
Collaboration tools let students and teachers exchange resources in whatever way suits a particular task (for example, sharing drafts, collecting feedback, or coordinating versions).
Collaborative learning can occur peer-to-peer or in larger groups. Peer instruction involves students working in pairs or small groups to discuss concepts and find solutions to problems; this kind of structured discussion often catches misunderstandings earlier than working alone.
Roles and responsibilities: making teamwork concrete
Teams often work better when responsibilities are explicit. Common roles in small student projects include a project manager (tracks tasks and deadlines), developer(s) (implement features), designer (layout, visuals, usability), tester/QA (finds issues and verifies fixes), and documenter (maintains notes, README, citations). You do not need formal titles, but you do need clarity, and you should revisit roles as the project changes.
Communication strategies that prevent “team bugs”
Many team failures are communication failures. Useful habits include agreeing on a shared vocabulary (including what counts as “done”), holding regular check-ins to report progress and blockers, maintaining a single source of truth (one task board, one shared doc for requirements, one place for the latest version), and recording decisions in writing so the team does not re-litigate them later.
Versioning and merging: why “finalfinalv3” is a warning sign
When multiple people work on files, you need a way to manage changes. In professional settings, teams use version control systems; in student settings, you may use shared drives or platform tools. The key idea is versioning: keeping track of which version is current and being able to recover older versions if something breaks.
If your project files are named “final,” “final2,” “finalfinal,” “finalfinal_really,” that is a sign you lack a versioning strategy. Even without professional tools, you can reduce chaos by agreeing on naming conventions, recording who is editing what, and backing up working versions before major changes.
Example: splitting a project without breaking it
Suppose your team is making a small quiz game.
A risky split is “You do the quiz, I do the UI,” because UI and quiz logic depend on each other—what happens when the user clicks “Submit”? Who owns the score variable?
A better split is to have Person A implement the question bank and scoring functions, Person B implement screens and buttons, and then agree on an interface (for example, a function that returns the next question and accepts an answer). This reduces integration problems because you defined how parts connect.
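That agreed-on interface might look like the sketch below. The function names and question data are hypothetical; the point is that Person B only ever calls the two functions, and never touches Person A's variables directly:

```python
# Person A's side: question bank and scoring, exposed through two
# agreed-on functions (an illustrative contract, not a real API).
QUESTIONS = [
    {"prompt": "2 + 2 = ?", "answer": "4"},
    {"prompt": "Capital of France?", "answer": "Paris"},
]
score = 0
index = 0

def get_next_question():
    """Return the next prompt, or None when the quiz is over."""
    if index >= len(QUESTIONS):
        return None
    return QUESTIONS[index]["prompt"]

def submit_answer(answer):
    """Check the answer, update the score, and advance the quiz."""
    global score, index
    correct = QUESTIONS[index]["answer"] == answer
    if correct:
        score += 1
    index += 1
    return correct

# Person B's side: screens and buttons call only the functions above.
prompt = get_next_question()   # "2 + 2 = ?"
submit_answer("4")             # True; score is now 1
```

Defining who owns `score` (Person A, behind `submit_answer`) is exactly what prevents the "who owns the score variable?" integration problem.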
Exam Focus
- Typical question patterns:
- Explain how collaboration can improve program quality or creativity.
- Identify a collaboration practice that would prevent a described issue.
- Evaluate a team workflow and suggest an improvement.
- Common mistakes:
- Splitting work without defining how pieces will integrate.
- Failing to communicate changes, causing teammates to overwrite work.
- Confusing “working together” with “everyone coding the same file at once.”
Testing, Debugging, and Refinement
Even a well-planned project will have problems when you build it. Testing and debugging are how you turn a “mostly works” artifact into a reliable one.
Testing: checking behavior against expectations
Testing is the process of running an artifact or program to see whether it behaves as intended. Testing matters because computers do exactly what you tell them to do—not what you meant. Good testing connects back to requirements and criteria. If a requirement says “sort by due date,” your tests must include multiple due dates. If a criterion says “add an assignment quickly,” you might time how long it takes new users.
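The "sort by due date" example can be turned into a concrete test. This sketch (function and data names are illustrative) deliberately lists assignments out of order, so a program that never sorts would fail:

```python
from datetime import date

def sort_assignments(assignments):
    """Return (title, due_date) pairs ordered soonest-due first."""
    return sorted(assignments, key=lambda a: a[1])

# Multiple due dates, deliberately out of order:
todo = [
    ("essay", date(2024, 5, 20)),
    ("lab report", date(2024, 5, 12)),
    ("quiz prep", date(2024, 5, 15)),
]
ordered = sort_assignments(todo)
# ordered[0] is the lab report, the assignment due soonest.
```

A test with only one due date could never distinguish "sorted" from "unsorted," which is why the requirement dictates the test data.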
A common misconception is “testing is something you do at the end.” In reality, testing throughout development catches problems early, when they are cheaper to fix.
Test cases: making testing systematic
A test case is a specific input and situation you use to check a specific behavior. Strong test cases include normal cases (typical expected use), edge cases (unusual but possible situations), and error cases (invalid input or unexpected actions).
For example, if users enter their age, you might test:
- Normal: 16
- Edge: 0, 120
- Error: -5, “sixteen”, blank
The purpose of edge/error cases is not to “trick the program.” It’s to make sure your program handles reality.
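The age example can be sketched as a small validation function plus its test cases. The accepted range (0 to 120) and the function name are assumptions for this sketch:

```python
def parse_age(text):
    """Validate an age entry; return an int, or None if invalid."""
    try:
        age = int(text)
    except (ValueError, TypeError):
        return None          # error case: "sixteen", blank, etc.
    if age < 0 or age > 120:
        return None          # error case: out of range, e.g. -5
    return age

# Normal case
assert parse_age("16") == 16
# Edge cases: boundary values should still be accepted
assert parse_age("0") == 0
assert parse_age("120") == 120
# Error cases: rejected cleanly, without crashing
assert parse_age("-5") is None
assert parse_age("sixteen") is None
assert parse_age("") is None
```

Notice that the error cases return `None` instead of raising an exception: handling reality means the program keeps working when input is bad.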
Identifying and correcting errors: common types you should recognize
In many AP CSP contexts, you will be asked to classify errors. A common breakdown identifies four main types of errors in programming:
- Syntax error: A mistake in which the rules of the programming language are not followed.
- Runtime error: A mistake that occurs while the program is executing and causes execution to halt unexpectedly.
- Logic error: A mistake in the algorithm or program that causes it to behave incorrectly or unexpectedly.
- Overflow error: A mistake that occurs when a computer attempts to handle a number outside the defined range of values it can represent.
In Creative Development, you also need to watch for design flaws: situations where the program works “as coded,” but does not meet user needs. Design flaws are not always categorized as “programming errors,” but they matter because they still prevent the artifact from meeting its purpose and criteria.
Examples of the four error types
Syntax error example (case-sensitive variable name):
a ← expression
DISPLAY(A)
A syntax error occurs here because the second statement attempts to display the variable A, but only a was defined. In many languages and environments, variable names are case sensitive, so defining a does not define A; the second statement therefore refers to an identifier that does not exist and violates the language's rules.
Runtime error example (division by zero):
DISPLAY(5/0)
There is no syntax error here because the language is used correctly, but it causes a runtime error because you cannot divide by zero. Execution halts at this line.
Logic error example (multiple grades printed):
a ← 95
IF (a > 90)
DISPLAY("You got an A.")
IF (a > 80)
DISPLAY("You got a B.")
IF (a > 70)
DISPLAY("You got a C.")
The code is intended to print the correct grade for a score of 95, so it should display only “You got an A.” However, it actually prints:
You got an A.
You got a B.
You got a C.
This logic error occurs because the student’s score is also greater than 80 and 70, and there is no restriction that prevents multiple grades from being printed.
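One common fix is to chain the conditions (IF/ELSE in AP pseudocode, `elif` in Python) so that only the first true branch runs. A sketch of the corrected logic, with a fallback message added for scores of 70 or below as an assumption:

```python
def grade_message(score):
    """Return exactly one grade message.

    Chaining with elif means only the first true branch runs,
    which fixes the multiple-messages bug.
    """
    if score > 90:
        return "You got an A."
    elif score > 80:
        return "You got a B."
    elif score > 70:
        return "You got a C."
    else:
        return "Keep studying."

print(grade_message(95))   # prints only: You got an A.
```

With independent `if` statements, a score of 95 satisfies all three conditions; with `elif`, evaluation stops at the first match.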
Overflow error example (number too large for a data type):
x ← 2000*365
DISPLAY(x)
The product is 730,000. That value fits easily in a 32-bit integer, but if x is stored in a smaller type, such as a 16-bit signed integer whose maximum value is 32,767, the result is outside the representable range and the computation causes an overflow error.
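To see the wrap-around concretely: Python's own integers never overflow, but the standard `ctypes` module can simulate storing a value in a fixed-width 16-bit slot, which has a range of -32,768 to 32,767:

```python
import ctypes

product = 2000 * 365                    # Python ints never overflow: 730000
# Simulate storing the result in a 16-bit signed integer.
# 730000 does not fit, so the stored value wraps around.
stored = ctypes.c_int16(product).value

print(product)   # 730000
print(stored)    # 9104 -- not the true product: an overflow occurred
```

The stored value (9,104) is simply 730,000 reduced modulo 2^16, which is why overflow bugs often produce answers that look plausible but are silently wrong.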
Debugging: finding and fixing the cause
Debugging is the process of finding and fixing errors (bugs). Debugging is not guessing—it’s a method.
A practical debugging strategy is to (1) reproduce the bug consistently, (2) describe expected vs. actual behavior clearly, (3) narrow down the location (which feature/function/event triggers it), (4) inspect state by checking key variable values at key moments, and (5) change one thing at a time and retest.
You can use test cases, extra output statements (for example, temporary displays/logs of variable values), careful examination for syntax errors, and other debugging tools to find and fix problems.
Refinement: improvement based on evidence
Refinement is the broader process of improving an artifact after testing and feedback. Debugging is one type of refinement, but refinement can also include simplifying UI, improving instructions, adjusting visuals, optimizing performance, or reworking a feature to better match criteria.
Example: debugging a simple scoring bug
Suppose your quiz game always shows a score of 0.
A systematic approach:
- Reproduce: answer a question correctly; score stays 0.
- Expected vs. actual: expected score increments; actual score unchanged.
- Narrow down: check the event handler for “Submit.”
- Inspect state: print/log score before and after increment.
- Likely cause: score variable resets inside the submit handler each time.
- Fix: initialize score once at the start, not inside the handler.
Even if your course environment differs, the reasoning pattern is what AP CSP cares about.
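The buggy pattern and its fix can be sketched as follows (the handler functions and state dictionary are illustrative stand-ins for whatever your environment uses):

```python
# Buggy pattern: the score is re-initialized inside the handler,
# so it can never accumulate across submissions.
def submit_buggy(state, correct):
    state["score"] = 0          # bug: resets on every call
    if correct:
        state["score"] += 1
    return state["score"]

# Fixed pattern: initialize once, at program start, outside the handler.
def make_state():
    return {"score": 0}         # runs exactly once

def submit_fixed(state, correct):
    if correct:
        state["score"] += 1
    return state["score"]

s = make_state()
submit_fixed(s, True)
submit_fixed(s, True)           # s["score"] is now 2, not stuck near 0
```

Printing the score before and after the increment (step 4, inspect state) is what reveals that the buggy version keeps starting over from zero.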
Exam Focus
- Typical question patterns:
- Choose the best test case to verify a requirement.
- Distinguish syntax errors, runtime errors, logic errors, overflow errors, and usability/design issues.
- Identify a debugging step that would isolate a bug.
- Common mistakes:
- Only testing “happy path” inputs and missing edge cases.
- Debugging by random changes instead of isolating variables.
- Treating user confusion as “user error” instead of a design issue.
Documenting and Communicating Your Work
In AP CSP, you are expected not only to build an artifact, but also to explain it. Communication is part of computing because programs are read by people—teammates, future maintainers, and sometimes your future self.
Documentation: making your thinking visible
Documentation is written information that explains how something works and why it was designed that way. Documentation matters because it supports collaboration, supports debugging and refinement (you can retrace decisions), and helps evaluate whether requirements were met.
Documentation can include comments in code (what a section does, assumptions), a README (what the project is, how to run it), design notes (requirements, criteria, constraints, decisions), and citations for external resources.
A common misconception is that comments should restate code line-by-line. Good comments explain intent and reasoning, not obvious syntax.
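The difference can be shown in a short sketch (the function, the 10% rate, and the rounding choice are assumptions for illustration):

```python
# Weak comment: restates the code line-by-line.
#   total = total + price      # add price to total

# Stronger comments explain intent, assumptions, and reasoning:
def apply_discount(total, is_member):
    """Members get 10% off (an assumed store policy for this sketch)."""
    if is_member:
        # Multiply rather than subtract a flat amount so the discount
        # scales with the order size; round to whole cents for display.
        return round(total * 0.9, 2)
    return total

print(apply_discount(20.0, True))    # 18.0
print(apply_discount(20.0, False))   # 20.0
```

A reader can see *what* the code does from the code itself; the comments earn their place by recording *why*.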
Explaining algorithms and behavior in plain language
AP CSP often asks you to describe how a program works. The best explanations name the goal (“This part calculates the total cost”), describe steps in order, mention key decisions (conditions) and repeated processes (loops) if relevant, and use a small example input to illustrate what happens. Avoid vague statements like “it does stuff with variables.” Precision is what shows understanding.
Communicating design: justify choices using audience and criteria
When you explain why you designed something a certain way, connect the user need (audience), goal (purpose), evidence (feedback/tests), and choice (design decision).
Example: “Users said the buttons were hard to read in bright light (feedback), so I increased contrast and font size (choice) to improve readability for phone users (audience), which supports the goal of quick task entry (purpose).”
Example: a short, strong project description
Weak: “My app helps with homework and has buttons.”
Stronger: “My program is a homework tracker for high school students who frequently forget deadlines. It lets users add assignments with a title and due date, automatically sorts by the soonest due date, and shows a reminder the day before. I designed the interface for one-handed use on phones, based on peer feedback that the original layout required too many taps.”
Exam Focus
- Typical question patterns:
- Explain what a part of a program/artifact does in clear, ordered steps.
- Identify what kind of documentation would help in a scenario (collaboration, maintenance).
- Justify a design decision using purpose/audience and feedback.
- Common mistakes:
- Writing documentation that describes what is obvious but not what is intended.
- Explaining features without linking to purpose or audience.
- Forgetting to mention how feedback/testing changed the artifact (missing iteration).
Intellectual Property, Licensing, and Using Others’ Work Responsibly
Creative development in computing almost always involves using existing resources: images, sound effects, code libraries, tutorials, datasets, templates, and more. The key is to use them legally and ethically—and to give proper credit.
Intellectual property: who owns creative work?
Intellectual property refers to creations of the mind—such as music, writing, images, and code—that the law may protect. In computing projects, intellectual property shows up constantly because digital work is easy to copy and share. A common misconception is: “If it’s online, it’s free to use.” Online availability does not automatically grant permission.
Copyright: default protection for original works
Copyright is a legal protection that gives creators certain exclusive rights over their original work (for example, copying and distributing). In many contexts, copyright applies automatically when a work is created.
For AP CSP purposes: you should not assume you can reuse copyrighted media or code however you want; you should follow license terms or seek resources that explicitly allow reuse; and you should provide attribution when required (and often even when not strictly required, as an ethical practice).
Licenses: permissions and conditions
A license is the set of rules that tells you how you are allowed to use a resource. Two common licensing categories in student projects are Creative Commons (CC) licenses (often used for media) and open-source software licenses (often used for code). The big idea is that licenses let creators share work while setting conditions (like requiring attribution or limiting commercial use).
Creative Commons: common conditions you must recognize
Creative Commons licenses are standardized. The parts you commonly see:
- BY (Attribution): you must credit the creator
- SA (ShareAlike): adaptations must use the same license
- NC (NonCommercial): no commercial use
- ND (NoDerivatives): you can share unchanged copies, but not modified versions
Here’s a practical comparison table (always verify the specific resource’s license details when you use it):
| License element | What it allows/requires | Common pitfall |
|---|---|---|
| CC BY | Use and adapt, but must credit | Forgetting attribution in the final artifact |
| CC BY-SA | Use/adapt, must credit, and adaptations must keep same license | Remixing but failing to apply ShareAlike to your version |
| CC BY-NC | Use/adapt with credit for noncommercial use | Using it in a context that counts as commercial |
| CC BY-ND | Share with credit, no changes allowed | Cropping/editing an image (counts as a derivative) |
If you’re ever unsure whether your change counts as a derivative (for ND licenses), treat it as not allowed or find a different resource.
Open-source and remix culture: building on existing work
Open source generally refers to software whose source code is made available under a license that allows others to view, use, modify, and share it under certain conditions.
Remixing is common in computing: you might build on sample code, use libraries, or adapt an algorithm. That’s normal and often encouraged—but you must follow license terms and give credit where required. A useful ethical guideline is that even when copying is technically allowed, you should be transparent about what you wrote vs. what you reused.
Plagiarism vs. legitimate reuse
Plagiarism is presenting someone else’s work as your own. In programming, plagiarism can look like copying code from a tutorial without attribution, slightly changing variable names to hide copying, or using someone’s images/music without credit when credit is required.
Legitimate reuse includes using properly licensed resources, following license conditions, and giving clear attribution.
How to cite and attribute (in a way that’s actually useful)
Attribution should allow someone else to find the original source and understand what license applies. A strong attribution typically includes creator name (if available), title of work (if available), link to the source, license name and link, and a note about changes.
Example attribution:
- “City Skyline” by Jane Doe, https://example.com, licensed under CC BY 4.0, changes: cropped and color-adjusted.
For code, you can also attribute in comments and in a README.
Why this matters beyond “not getting in trouble”
Respecting intellectual property is not just about avoiding penalties. It supports a healthy creative ecosystem: creators are more willing to share when credit and conditions are respected, users can trust your work when sources are transparent, and you model ethical behavior in a field where copying is easy.
Exam Focus
- Typical question patterns:
- Identify what a license allows (reuse, modification, sharing) given conditions.
- Decide what attribution is needed for a described use of media/code.
- Distinguish plagiarism from permitted reuse.
- Common mistakes:
- Assuming “educational use” automatically makes any use legal or allowed.
- Forgetting that “NoDerivatives” forbids editing/remixing.
- Providing incomplete citations (no link, no license, unclear what was reused).
Putting It All Together: A Worked Creative Development Walkthrough
This section ties the major ideas together in one coherent development story: purpose and audience, requirements, iteration, collaboration, testing, documentation, and responsible reuse.
Scenario: designing a simple habit-tracker app
You want to build an artifact (a small app or interactive program) that helps users build a daily habit like drinking water.
Step 1: Define purpose and audience clearly
- Purpose: Help users remember and track daily water intake.
- Audience: Students who are busy and often forget.
Why this matters: “Students who are busy” suggests quick interactions, minimal typing, and reminders timed to daily schedules.
Step 2: Establish criteria and constraints
Criteria (success measures):
- User can log a drink in under 5 seconds.
- User can view today’s progress at a glance.
- Reminder system is easy to enable/disable.
Constraints (limitations):
- Two-week build timeline.
- Must use only free media assets.
- Must work on a school-approved platform.
These constraints immediately create tradeoffs: you might skip advanced analytics and focus on a simple, fast log.
Step 3: Write requirements (testable)
Possible requirements:
- The program stores a daily count.
- A “+1” button increments the count.
- A reset occurs automatically each day or via a reset button (depending on platform).
- A progress display shows count relative to a user goal.
Each can be tested.
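A minimal sketch of those requirements, assuming illustrative class and method names rather than any real app's API:

```python
class WaterLog:
    """Daily-count sketch matching the requirements above."""

    def __init__(self, goal=8):
        self.goal = goal
        self.count = 0          # requirement: store a daily count

    def log_drink(self):        # requirement: "+1" button increments
        self.count += 1

    def reset(self):            # requirement: daily or manual reset
        self.count = 0

    def progress(self):         # requirement: count relative to goal
        return f"{self.count}/{self.goal}"

log = WaterLog(goal=8)
log.log_drink()
log.log_drink()
log.log_drink()
print(log.progress())   # 3/8
log.reset()
print(log.progress())   # 0/8
```

Each requirement maps to one method, which makes every requirement directly testable with a few assertions.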
Step 4: Prototype the interface
Before coding, you sketch a home screen with a big “+1” button, a goal setting screen, and a simple progress bar or numeric display. You test the sketch with classmates by asking, “Where would you tap to log water?” If they hesitate, your UI isn’t obvious.
Step 5: Implement and collaborate
If working in a team, you define integration points:
- One person implements data storage and daily count logic.
- Another implements screens and button actions.
- The team agrees on function names and shared variables (or a documented interface) so parts connect.
Step 6: Test with normal, edge, and error cases
Examples:
- Normal: goal = 8, log 3 drinks, verify display shows 3/8.
- Edge: goal = 1, log 2 drinks (should it cap at goal or exceed it? Decide and document.)
- Error: goal left blank or set to 0 (should show a message or use a default).
The “goal = 0” case is a classic place where logic errors happen (for example, progress calculations that assume a nonzero goal). Even if you don’t use math-heavy features, you must handle invalid inputs.
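A sketch of a progress calculation that guards the goal = 0 case. Falling back to a default goal of 8 is an assumption for this example; a real app might show an error message instead:

```python
def progress_percent(count, goal):
    """Percent of the daily goal completed, capped at 100."""
    if goal is None or goal <= 0:
        goal = 8               # invalid or missing goal: use a default
    return min(100, round(100 * count / goal))

print(progress_percent(3, 8))    # 38
print(progress_percent(2, 1))    # 100 -- exceeding the goal is capped
print(progress_percent(3, 0))    # 38 -- default goal avoids dividing by zero
```

Without the guard, `progress_percent(3, 0)` would crash with a division-by-zero runtime error, which is exactly the kind of bug the error-case test is designed to catch.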
Step 7: Refine based on feedback
User feedback: “I don’t want the app to judge me if I miss a day.”
Refinement: change messaging from “You failed yesterday” to neutral language like “Start fresh today.” This is creative development, not just bug-fixing: you’re refining the user experience to match audience needs.
Step 8: Document and cite resources
You add:
- A README describing purpose, audience, and how to use the app.
- Comments explaining why the reset logic works the way it does.
- Citations for icons or sound effects with license info.
If you used an icon under a Creative Commons license requiring attribution, you include that attribution in your documentation (and wherever your project guidelines expect it).
What commonly goes wrong in projects like this
Feature creep (charts, social sharing, streaks, themes—and you finish none), unclear success criteria (so you cannot prioritize fixes), testing too late (discovering a major design flaw after everything is built), and uncredited assets (using an image from the internet without checking license/attribution) are some of the most common failure modes.
A helpful memory aid for development priorities is “PAT”:
- Purpose and audience first
- Acceptable requirements (specific and testable)
- Test and iterate early
Exam Focus
- Typical question patterns:
- Given a project scenario, identify which actions reflect iterative development.
- Choose which test cases best verify a requirement or criterion.
- Determine what documentation or attribution is needed for reused components.
- Common mistakes:
- Confusing criteria (success measures) with requirements (must-have behaviors).
- Treating “works on my computer” as proof it meets the audience’s needs.
- Forgetting to connect changes to feedback (iteration without justification).