Big Idea 1: Creative Development

What “Creative Development” Means in Computing

When you hear “creative,” you might think of art, music, or writing. In AP Computer Science Principles, creative development means something broader: the process of using imagination and intentional decision-making to design and build a computing artifact that serves a purpose for an audience.

A computing artifact is anything created with a computer (or by programming) that has meaning or function. That can include a program, an app, a game, a simulation, a visualization, a website, a piece of digital art, a robot behavior, or even a short video you edited with software. The key idea is that you are not just “coding”—you are designing something that communicates, solves a problem, entertains, teaches, persuades, or helps someone accomplish a task.

A closely related course-wide idea is the computing innovation: a product or development that uses a computer program to take in data, transform data, and output data. Many computing artifacts are computing innovations, and thinking in terms of input–process–output helps you explain what your creation does and why it matters.

Creativity is purposeful, not random

In computing, creativity shows up in the choices you make: what problem you’re trying to solve (or experience you’re trying to create), who your audience is and what they need, which features matter most given limited time and tools, what data you’ll use and how you’ll represent it, and how users will interact with the artifact.

Even small programs involve creative decision-making. For example, if you’re making a quiz game, you can make choices about question format, feedback style, difficulty progression, visuals, accessibility, and pacing. Two students can build a “quiz app” that feels completely different because creative development is about designing an experience, not just meeting a technical requirement.

Constraints are part of creative work

A common misunderstanding is that creativity means you can do “anything.” In real development, you always have constraints such as time (deadlines), tools (language/platform/libraries), requirements (what the artifact must do), resources (data availability and allowed media), and user needs (simplicity, accessibility, device limitations). Constraints don’t kill creativity—they shape it. Many great designs come from choosing the best solution within constraints.

The “artifact” has both a purpose and an audience

A strong computing artifact is not judged only by whether it “runs.” It’s also judged by whether it effectively serves its intended purpose for its intended audience.

  • Purpose answers: What is this for? What problem does it solve or what experience does it provide?
  • Audience answers: Who is it for? What do they know already? What do they care about? What device will they use?

If you build a budgeting tool for teenagers but you use financial jargon without explanation, the artifact may be technically correct but poorly designed for its audience. Creative development means aligning your design decisions with user needs.

Computing innovations are developed by people (often teams)

Another major theme is that computing innovations typically come from collaborative processes. Programming is a collaborative and creative process that brings ideas to life through software development, and many industry development processes require people to work together in teams.

Collaboration can happen at many points in a project (planning, designing, testing, and debugging). People also rely on code patterns, libraries, frameworks, and media created by others, which makes two professional habits essential:

  • Collaboration: working with others effectively (peer-to-peer or in larger groups).
  • Attribution: giving credit for code, media, or ideas you did not create.

Exam Focus
  • Typical question patterns
    • Identify which parts of building a program reflect creative choices (user interface, features, data representation, audience needs).
    • Explain why constraints and trade-offs are normal in development.
    • Distinguish a computing artifact’s purpose from its functionality (what it does) and audience.
  • Common mistakes
    • Treating creativity as “decoration only” (colors, images) rather than problem framing and design decisions.
    • Describing an artifact without connecting it to an audience and purpose.
    • Assuming “if it runs, it’s done” and ignoring usability or user needs.

The Program Development Process (From Idea to Working Artifact)

A program development process is a structured way to go from a vague idea to a tested, improved computing artifact. You’re expected to understand development as a cycle of planning, building, testing, and refining—not a one-time linear path.

Why have a process at all?

Without a process, you tend to start coding before you understand the problem, build features you don’t need, miss important edge cases, and end up with code that’s difficult to debug or improve. A process helps you make progress intentionally, communicate with teammates, and justify your design decisions.

Common stages of development

Different organizations label stages differently, but the underlying activities are consistent.

1) Investigate / define the problem

You begin by figuring out what you’re trying to accomplish: the goal, what success looks like, who the users are, and what constraints exist (time, platform, allowed resources). This often includes asking questions, researching existing solutions, and clarifying scope.

2) Determine requirements

A requirement is a statement of what the artifact should do (or how it should behave). Requirements prevent “feature creep.” Two useful categories are:

  • Functional requirements: specific behaviors or features (for example, “the user can save their score”).
  • Non-functional requirements: qualities of the system (for example, “the app should respond within 1 second,” “text must be readable on a phone,” “must be accessible”).

A beginner mistake is writing requirements as vague goals like “make it fun.” That can be a design goal, but you need requirements you can actually test.
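One way to keep requirements testable is to phrase them so a program can check them. A minimal Python sketch (the save_score function and the dictionary storage are hypothetical, invented for illustration):

```python
# Functional requirement: "the user can save their score."
# Testable form: after saving, the same score can be read back.

def save_score(scores, player, score):
    """Record a player's score in the saved-scores dictionary."""
    scores[player] = score

saved = {}
save_score(saved, "Sam", 120)
print(saved["Sam"] == 120)  # → True, so the requirement is met for this case
```

A vague goal like “make it fun” has no check like this, which is exactly why it is a design goal rather than a requirement.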

3) Design a solution

Design is where you plan how the artifact will work before you implement it. This might include wireframes, storyboards, data planning, pseudocode, and deciding what to build first (a minimum viable version). Good design reduces rework and forces you to think about user experience early.

4) Implement (build the program)

Implementation is the coding stage: writing the program, creating UI elements, connecting input/output, and integrating libraries or media. Implementation is not “the end”—it’s part of an iterative cycle.

5) Test and debug

Testing asks “Does it work as intended?” Debugging asks “If not, why not?” Testing includes typical inputs, unusual inputs (edge cases), and checking whether outputs match expected behavior. Debugging includes reproducing the problem reliably, narrowing down where it occurs, and fixing it in a way that doesn’t break other features.

6) Refine based on feedback

Refinement is improvement after testing or user feedback: fixing confusing interactions, clarifying the interface, improving performance, and adding small features that users actually need. A professional mindset is that the first working version is a beginning, not a finish line.

Iterative development: build, learn, improve

Iterative development means repeating the cycle multiple times, using what you learn from testing or feedback to make the next version better. It’s common because you can’t predict all user needs up front, prototypes reveal hidden problems, and requirements often change once people see a working version.

Prototyping: making ideas testable

A prototype is an early, simplified version of an artifact used to explore an idea and gather feedback. Prototypes can be paper sketches of screens, clickable mockups, or a small program that tests one feature (like input validation). A prototype does not have to be code—the point is to make the idea concrete enough to evaluate.
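For example, a code prototype for a timer feature might test only the “count down, then alert” logic and skip everything else, including real clock delays (the function below is an invented sketch, not required course code):

```python
def countdown(seconds, tick=None):
    """Prototype of one feature: count down, then report that time is up."""
    remaining = seconds
    while remaining > 0:
        if tick:
            tick(remaining)   # show the remaining time
        remaining -= 1        # a real version would also wait one second here
    return "Time is up!"

print(countdown(3, tick=print))
# → 3
# → 2
# → 1
# → Time is up!
```

The point of a prototype like this is to make one idea concrete enough to evaluate before investing in the full feature.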

Trade-offs are unavoidable

A trade-off is a decision where improving one aspect may worsen another, such as more features vs. finishing on time, more detailed graphics vs. performance, or more security checks vs. convenience. Trade-offs show up when you justify design choices under constraints.

Example: developing a habit tracker app (process in action)

Imagine you want to build a simple habit tracker.

1) Investigate: Users want to track habits daily and see streaks.

2) Requirements:

  • Functional: add a habit, mark habit complete for the day, display current streak.
  • Non-functional: simple interface, works on a phone-sized screen.

3) Design:

  • Two screens: “Habits list” and “Habit detail.”
  • Data: list of habits with name and streak count.

4) Implement: Build the list screen and the ability to add a habit.

5) Test:

  • Add multiple habits.
  • Try empty names.
  • Confirm streak increments only once per day.

6) Refine:

  • Users say they forget what “streak” means—add explanation text.
  • Add a confirmation message after marking complete.

Notice how refinement depends on real feedback and testing, not guesswork.
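The “streak increments only once per day” rule from the test list is small but easy to get wrong, which is why it is worth testing. One possible sketch (the habit dictionary layout is an assumption):

```python
def mark_complete(habit, today):
    """Increase the streak only the first time the habit is marked today."""
    if habit["last_done"] == today:
        return                      # already counted today; do nothing
    habit["streak"] += 1
    habit["last_done"] = today

reading = {"name": "Read 20 min", "streak": 0, "last_done": None}
mark_complete(reading, "2024-05-01")
mark_complete(reading, "2024-05-01")  # second tap on the same day
print(reading["streak"])  # → 1, not 2
```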

Exam Focus
  • Typical question patterns
    • Identify which step of a development process is being described (requirements vs. testing vs. refinement).
    • Explain why iterative development is beneficial (especially when requirements change).
    • Recognize examples of trade-offs and constraints in design decisions.
  • Common mistakes
    • Treating development as strictly linear (“plan once, code once, done”).
    • Confusing testing with debugging (testing finds issues; debugging fixes them).
    • Writing “requirements” that are actually implementation details (like “use a loop”) instead of describing what the program must do.

Human-Centered Design: Building for Real Users

A computing artifact is successful when it works for the people who use it. Human-centered design (also called user-centered design) is an approach that focuses on the user’s goals, experiences, and limitations throughout development.

Why user-centered design matters

It’s easy to design for yourself because you already know what you meant. Users don’t. They bring different expectations, backgrounds, and needs. User-centered design helps prevent failures like a technically correct but confusing interface, features that exist because the developer thought they were cool (not because users needed them), and unintended exclusion (for example, low-contrast text).

Understanding the user: empathy and assumptions

A good starting point is to describe your users concretely: age range, context, device, what they’re trying to accomplish, what might confuse them, and accessibility needs (vision, motor control, attention limitations). You don’t need professional research to do this for student projects; lightweight methods like quick interviews, observing someone use a prototype, or simple surveys can produce useful insights. The key is being honest about what you learned and how it influenced your design.

Usability: making the artifact easy to use

Usability is how easy it is for a user to accomplish their goals. It’s not the same as visual appeal. Practical usability ideas include clear labels/instructions, consistent controls, immediate feedback, error prevention (validation and sensible defaults), and making important actions easy while making rare actions harder to do accidentally.

Accessibility: making the artifact usable by more people

Accessibility means designing so that people with disabilities can use the artifact. Common considerations include color contrast and readable font sizes, not relying on color alone, captions or text alternatives for audio/video where possible, and simple navigation with clear focus indicators. Even if you don’t fully implement accessibility features, you should be able to reason about barriers and design choices that reduce them.
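Color contrast is one accessibility consideration you can actually compute. The WCAG 2 guidelines define a contrast ratio between two colors; the sketch below follows that published formula (4.5:1 is the WCAG threshold for normal-size text):

```python
def relative_luminance(r, g, b):
    """Relative luminance of an sRGB color, channels 0-255 (WCAG 2 formula)."""
    def linear(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b)

def contrast_ratio(color1, color2):
    """Ratio from 1:1 (identical colors) to 21:1 (black on white)."""
    l1, l2 = relative_luminance(*color1), relative_luminance(*color2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
# Medium gray on white fails the 4.5:1 threshold for body text:
print(contrast_ratio((128, 128, 128), (255, 255, 255)) >= 4.5)  # → False
```

A check like this turns “the text looks readable to me” into a design decision you can justify with evidence.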

Example: redesigning a weather app screen

Suppose your first weather app screen uses small gray text on a light background and shows icons without labels. Feedback might reveal that users can’t quickly identify today’s temperature, icons are unclear (sun vs. partly cloudy), and low contrast text is hard to read. Refinements could include making the current temperature the largest element, adding text labels next to icons, and increasing contrast and font size. This is creative development because you’re shaping the artifact around user experience, not just adding code.

Exam Focus
  • Typical question patterns
    • Explain how a design choice supports a particular audience.
    • Identify improvements that would increase usability or accessibility.
    • Connect user feedback to iterative refinement.
  • Common mistakes
    • Describing the audience too vaguely (“everyone”) rather than specifying likely users.
    • Treating “looks good” as the same thing as “usable.”
    • Ignoring accessibility until the end, when it’s harder to fix.

Design Techniques: Brainstorming, Planning, and Making Trade-offs

Creative development involves turning a goal into a plan you can build. Design techniques help you think clearly before you code, which usually saves time and leads to better outcomes.

Brainstorming: generating options without judging too early

Brainstorming generates many ideas quickly; the goal is variety, not perfection. Methods include “How might we…?” questions, feature lists (must-have vs. nice-to-have), and remixing (combining existing ideas in a new way). A common pitfall is brainstorming only features (“add chat,” “add scores”) instead of user goals (“help users practice,” “help users compare progress”). Features should serve goals.

Storyboarding: planning user experience as a sequence

A storyboard is a sequence of sketches or descriptions showing how a user interacts with your artifact step-by-step. It forces you to think about flow and transitions.

Example storyboard for a flashcard app:

1) User opens app and sees deck list.
2) User selects “Biology Deck.”
3) App shows a card question.
4) User taps “Reveal Answer.”
5) User marks “Got it” or “Need practice.”
6) App tracks progress and shows summary.

Wireframes and UI sketches: designing screens before building them

A wireframe is a simple layout sketch showing where elements go (buttons, text, images) without final design polish. Wireframes help avoid clutter, prioritize information, and plan consistent navigation. A common beginner mistake is designing a screen by adding elements while coding, which often produces messy, inconsistent interfaces.

Pseudocode and flowcharts: planning logic

When a feature has multiple steps or conditions, it helps to plan the logic.

  • Pseudocode describes an algorithm in structured plain language.
  • Flowcharts visualize steps and branching decisions.

Example pseudocode for validating a username:

  • Ask user for username
  • If username is empty, show error message
  • Else if username contains spaces, show error message
  • Else accept username and continue
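The same logic in runnable form, as a direct Python translation of the pseudocode (the exact error messages are placeholders):

```python
def validate_username(username):
    """Return an error message, or None if the username is accepted."""
    if username == "":
        return "Error: username cannot be empty."
    elif " " in username:
        return "Error: username cannot contain spaces."
    else:
        return None                            # accept username and continue

print(validate_username(""))              # → Error: username cannot be empty.
print(validate_username("ada lovelace"))  # → Error: username cannot contain spaces.
print(validate_username("ada"))           # → None
```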

Decomposition: breaking a problem into manageable parts

Decomposition means splitting a large task into smaller tasks that are easier to design, implement, and test. For example, “Build a recipe recommendation app” can be decomposed into input (collect dietary preferences), data (store recipes/tags), processing (match recipes), output (display recommendations), and UI (navigation). Decomposition also supports collaboration because different teammates can work on different components.
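That decomposition maps naturally onto separate functions, which is also what lets teammates work in parallel. A minimal sketch (the recipe data and the “all preferences must match” rule are invented for illustration):

```python
RECIPES = [
    {"name": "Veggie Stir-Fry", "tags": {"vegetarian", "quick"}},
    {"name": "Beef Chili",      "tags": {"hearty"}},
    {"name": "Lentil Soup",     "tags": {"vegetarian", "hearty"}},
]

def collect_preferences():                 # input component
    return {"vegetarian"}                  # stand-in for real user input

def match_recipes(recipes, prefs):         # processing component
    return [r["name"] for r in recipes if prefs <= r["tags"]]

def display(names):                        # output component
    for name in names:
        print("Recommended:", name)

display(match_recipes(RECIPES, collect_preferences()))
# → Recommended: Veggie Stir-Fry
# → Recommended: Lentil Soup
```

Because each component has a clear job, one teammate can improve the matching rule while another works on the display, without stepping on each other.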

Making trade-offs explicit

Design often involves choosing between competing goals. Being able to explain trade-offs is part of communicating your creative development.

Example trade-off statement:

  • “We chose a text-based interface instead of graphics because it loads faster and is clearer on school devices, even though it looks less visually impressive.”

Exam Focus
  • Typical question patterns
    • Identify which planning technique fits a situation (wireframe for layout, storyboard for user flow, pseudocode for logic).
    • Explain how decomposition helps with managing complexity or collaboration.
    • Describe a trade-off and justify a decision given constraints.
  • Common mistakes
    • Confusing brainstorming (generate options) with deciding (choose one and justify).
    • Writing “pseudocode” that is actually full code syntax, or so vague it can’t guide implementation.
    • Ignoring user flow—designing screens/features without planning how users move between them.

Collaboration in Creative Development (Building Better Together)

Most real software is built by teams. Collaboration is not just dividing work; it’s coordinating ideas, merging contributions, and maintaining shared understanding. Collaboration can occur in planning, designing, or testing (including debugging) parts of the development process.

Why collaboration improves computing artifacts

Collaboration can improve quality because different people notice different problems and edge cases, ideas combine into better solutions than any one person’s first draft, and work can be divided so progress happens in parallel. It also reflects reality: many computing innovations depend on multiple roles (designers, programmers, testers, project managers).

Collaboration requires communication structures

Collaboration fails when teams assume “we’ll just figure it out.” Effective teams clarify goals and requirements together, decide roles (even informally), use shared documents for plans/assets/decisions, and check in frequently to avoid building incompatible pieces. Even in a small project, two people can write code that doesn’t fit together unless they agree on things like variable names, data formats, or how the UI should flow.

Collaboration tools and collaborative learning

Collaboration tools allow students and teachers to exchange resources in different ways depending on the task (for example, shared folders, platform collaboration features, shared docs, or version control). Collaborative learning can occur peer-to-peer or in larger groups. A common classroom model is peer instruction, where students work in pairs or small groups to discuss concepts and find solutions to problems.

Pair programming (a common collaboration model)

Pair programming is a technique where two people work at one computer:

  • The driver writes the code.
  • The navigator reviews, thinks ahead, catches mistakes, and suggests improvements.

They switch roles periodically. Pair programming works well because it builds continuous review into the process. A misconception is that pair programming means one person codes while the other watches; the navigator should be actively engaged.

Versioning and “source of truth”

When multiple people edit files, you need a way to avoid overwriting each other. In professional settings, teams use version control systems (like Git) to track changes and merge work. In student settings, you might use simpler systems (shared folders, naming conventions), but the concept is the same: you need a clear “source of truth” and a process for combining work.

Even without formal tools, you can reduce conflicts by agreeing who edits which file/section, setting a schedule for merging changes, and keeping backups of working versions.

Giving and receiving feedback

Creative development improves when teammates give specific, actionable feedback (for example, “The button labels are unclear—maybe change ‘Go’ to ‘Submit Answer’.”). Receiving feedback is also a skill: you don’t have to accept every suggestion, but you should evaluate suggestions against your goals and audience.

Ethical collaboration: credit and boundaries

Collaboration does not mean “anything goes.” You are expected to follow collaboration rules for required project work and clearly distinguish what you created, what you created with a partner, and what you used from someone else (with attribution). Credit matters in both academic and professional contexts.

Exam Focus
  • Typical question patterns
    • Explain how collaboration can improve a program’s correctness, usability, or creativity.
    • Identify strategies to coordinate work and avoid merge conflicts.
    • Describe how feedback supports iterative improvement.
  • Common mistakes
    • Dividing work without agreeing on interfaces/data formats, causing integration failures.
    • Assuming collaboration is automatically efficient—ignoring communication costs.
    • Failing to track contributions or sources, leading to unclear attribution.

Testing, Debugging, and Iteration: Improving a Program Over Time

Testing and debugging are not just “fixing mistakes.” They’re part of how developers learn what their program actually does (which can differ from what they intended). In creative development, iteration based on testing is how you turn a rough idea into a reliable artifact.

Testing: checking behavior against expectations

Testing is the process of running a program (or parts of it) with selected inputs to see whether the actual output matches the expected output. A strong testing mindset includes testing typical cases, boundary/edge cases (empty input, very large values, unusual sequences), and testing after changes to ensure you didn’t break something that used to work.

Test cases: making expectations explicit

A test case usually includes the input(s), the expected output or behavior, and the actual output you observed. Writing test cases forces you to define what “correct” means.
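A compact way to record test cases in code is a list of (input, expected) pairs checked in a loop. A sketch, assuming a simple discount function as the code under test (both are invented for illustration):

```python
def apply_discount(price, percent):
    """Code under test: reduce a price by a percentage, never below zero."""
    discounted = price - price * percent / 100
    return max(discounted, 0)

# Each test case records the inputs and the expected output.
test_cases = [
    ((100, 20),  80.0),   # typical case
    ((100, 0),  100.0),   # boundary: no discount at all
    ((100, 100),  0.0),   # boundary: full discount
    ((50, 150),   0),     # edge case: an over-100% coupon clamps at zero
]

for (price, percent), expected in test_cases:
    actual = apply_discount(price, percent)
    status = "OK" if actual == expected else "FAIL"
    print(price, percent, "expected", expected, "got", actual, status)
```

Rerunning this loop after every change is an easy way to confirm you did not break something that used to work.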

Identifying and correcting errors: four main error types

Understanding common error types makes it easier to choose good tests and debug systematically.

Syntax error

A syntax error happens when the rules of the programming language are not followed.

Example:

a ← expression
DISPLAY (A)

A syntax error occurs because variable names are case sensitive: the first statement defines a (lowercase), but the second statement attempts to display A (uppercase), which was never defined, so the language's rules are violated.

Runtime error

A runtime error occurs during program execution and can cause the program to stop running.

Example:

DISPLAY (5/0)

There is no syntax error here, but dividing by zero causes a runtime error, and execution will halt at that line.

Logic error

A logic error is a mistake in the algorithm or program that causes it to behave incorrectly or unexpectedly.

Example:

a ← 95
IF (a > 90)
  DISPLAY("You got an A.")
IF (a > 80)
  DISPLAY("You got a B.")
IF (a > 70)
  DISPLAY("You got a C.")

This code is intended to print the correct letter grade. Since a is 95, it should display only:

You got an A.

But it actually prints:

You got an A.
You got a B.
You got a C.

The logic error happens because 95 is also greater than 80 and 70, and there is no restriction preventing multiple grades from being printed.
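One standard fix is to chain the conditions so only the first true branch runs; in AP pseudocode that is IF/ELSE IF, and in Python it is if/elif. A sketch (rewritten as a function so it is easy to test; the fallback message is an addition for completeness):

```python
def grade_message(a):
    """Chained conditions: only the first true branch runs."""
    if a > 90:
        return "You got an A."
    elif a > 80:          # checked only when the branch above was false
        return "You got a B."
    elif a > 70:
        return "You got a C."
    return "Keep practicing."

print(grade_message(95))  # → You got an A.  (and nothing else)
```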

Overflow error

An overflow error occurs when a computer attempts to handle a number that is outside the defined range of values.

Example:

x ← 2000*365
DISPLAY (x)

The product, 730,000, is a large number. In many languages, integers are stored in a fixed number of bits, and 730,000 is outside the range of some integer types (a 16-bit signed integer, for example, tops out at 32,767). If x uses one of those types, the multiplication causes an overflow error.
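Python's own integers grow as needed, so this multiplication does not overflow in Python; to see what a fixed-width type would do, you can simulate one. A sketch modeling a 16-bit signed integer (range -32,768 to 32,767):

```python
def to_int16(n):
    """Simulate storing n in a 16-bit signed integer (wraps on overflow)."""
    n = n % 65536                         # keep only the low 16 bits
    return n - 65536 if n >= 32768 else n

x = 2000 * 365
print(x)            # → 730000 (fine as an arbitrary-precision integer)
print(to_int16(x))  # → 9104 (a 16-bit variable silently wraps to a wrong value)
```

Depending on the language, overflow may halt the program or silently produce a wrong value as simulated here, which is part of why overflow bugs can be hard to notice.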

Debugging: systematically finding and fixing the cause

Debugging is the process of finding and fixing errors. Effective debugging is systematic:

1) Reproduce the bug consistently.
2) Narrow down where it happens (which input, which feature).
3) Use tools/strategies such as extra output (print) statements, tracing variables, checking conditions, and examining code for syntax errors.
4) Fix the root cause (not just the symptom).
5) Retest to confirm the fix and check for side effects.

A common beginner mistake is changing multiple things at once, which makes it hard to know which change fixed (or worsened) the problem.

User testing: when “correct” isn’t the same as “good”

A program can be logically correct but still fail for users: they don’t understand how to start, labels are confusing, or responses feel unpredictable. Feedback from real users reveals usability issues you won’t notice when you already know how the program works.

Iteration: turning results into improvements

Iteration becomes meaningful when you connect evidence to changes, such as “Users didn’t notice the reset button, so we moved it and changed its label,” or “Edge-case testing showed empty input caused errors, so we added validation.”

Example: debugging a score counter in a game

Suppose you have a game where the score sometimes increases twice for one action. A systematic approach:

  • Reproduce: it happens when the user clicks quickly.
  • Narrow down: the click event triggers two handlers (or the handler runs twice).
  • Inspect: add temporary output statements to confirm the event count.
  • Fix: ensure the event listener is added only once, or disable input briefly after a click.
  • Retest: try rapid clicks, normal clicks, and long sessions.

This is evidence-based debugging, not guessing.
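The “disable input briefly” fix can be sketched without any game framework by guarding the handler with a flag (the ScoreKeeper class and per-frame reset are hypothetical):

```python
class ScoreKeeper:
    """Guards against one action triggering the score handler twice."""
    def __init__(self):
        self.score = 0
        self.accepting_clicks = True

    def on_click(self):
        if not self.accepting_clicks:
            return                        # ignore the duplicate event
        self.accepting_clicks = False     # briefly disable input
        self.score += 1

    def next_frame(self):
        self.accepting_clicks = True      # re-enable on the next update

game = ScoreKeeper()
game.on_click()
game.on_click()          # rapid second click before the next frame: ignored
print(game.score)        # → 1
game.next_frame()
game.on_click()
print(game.score)        # → 2
```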

Exam Focus
  • Typical question patterns
    • Distinguish testing from debugging and explain why both matter.
    • Identify which test cases would best reveal a particular kind of error.
    • Explain how feedback leads to iteration and refinement.
  • Common mistakes
    • Only testing with one or two “normal” inputs.
    • Debugging by random changes instead of isolating the cause.
    • Declaring the program finished after it works once, without retesting after modifications.

Using Existing Code, Media, and Ideas Responsibly (Attribution and Intellectual Property)

Modern computing is built on reuse. Developers routinely use libraries, sample code, APIs, frameworks, icons, sound effects, and design patterns created by others. Reuse is normal—and it comes with responsibilities.

Why reuse is valuable

Reuse allows you to build more complex artifacts faster, use well-tested components instead of reinventing them, and focus your creativity on the parts unique to your purpose. For example, using a graphics library lets you spend time on game mechanics instead of low-level drawing code.

The ethical responsibility: attribution

Attribution means giving credit to the original creator when you use code, media, or other content you didn’t create. In school, missing attribution can be plagiarism; professionally, it can violate licenses or contracts. A good rule is: if you did not create it entirely yourself, document where it came from.

What should you attribute?

Common items that require attribution include copied or adapted code from tutorials/examples/forums/repositories, images/icons/fonts/music/sound effects/videos you did not create, and (when appropriate) large ideas or designs that you substantially follow.

Licenses: permissions and restrictions

Reusable resources often come with licenses that specify what you’re allowed to do. Some licenses allow reuse with attribution, some restrict to noncommercial use, some allow modification while others do not, and some require derivative work to be shared under similar terms. The key principle is that “available online” does not automatically mean “free to use without conditions.”

Typical permissions and responsibilities by resource type:

  • Open-source code library: often reusable and modifiable. Follow license terms, keep notices, and sometimes share your changes.
  • Creative Commons media: often reusable with conditions. Attribute, and follow any restrictions (noncommercial, no-derivatives, share-alike).
  • Stock media (paid or free): use depends on the license. Follow usage limits; attribution may or may not be required.
  • “Random image from Google”: permissions unknown or varied. Not safe to assume permission; find licensed sources instead.

Adapting vs. copying: what changes the responsibility?

Even if you modify code or media, you still typically need to attribute the original source if your work is based on it. Changing a few lines does not remove the need for credit.

Documenting sources clearly

Good attribution is specific (link/source name/author if available), connected to the part you used (comment near code, credits section for media), and honest about what you changed.

For code, a common practice is to include a comment like:

# Adapted from: [source]
# Changes: [brief description]

For media, a simple “Credits” section in documentation often works well.

Using APIs and libraries responsibly

Responsible use includes reading documentation to understand inputs/outputs, respecting usage limits (rate limits), handling errors (like network failures), and not collecting or exposing sensitive user data unnecessarily.
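Error handling for an unreliable call can be sketched with a retry wrapper; the fetch function below is a stand-in that fails twice and then succeeds, not a real API:

```python
import time

def fetch_with_retry(fetch, retries=3, delay=0.01):
    """Call fetch(); on a connection failure, wait briefly and try again."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise                     # out of retries: report the failure
            time.sleep(delay)             # back off before the next attempt

attempts = []
def flaky_fetch():
    """Stand-in for a network call that fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("network unavailable")
    return {"status": "ok"}

print(fetch_with_retry(flaky_fetch))  # → {'status': 'ok'} on the third attempt
```

A real client would also respect the service's documented rate limits rather than retrying aggressively.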

Exam Focus
  • Typical question patterns
    • Identify when attribution is needed and what good attribution looks like.
    • Reason about why using existing components can speed development.
    • Recognize that online availability does not equal permission-free use.
  • Common mistakes
    • Forgetting to cite code snippets or media assets used in a project.
    • Believing that changing small parts removes the need for attribution.
    • Using unlicensed images/sounds because they were “easy to find.”

Communicating Your Creative Process and Explaining Your Artifact

In AP CSP, you’re not only building artifacts—you’re also expected to explain what you built, why you built it that way, and how it works. Communication is part of creative development because your artifact exists for an audience, and your development process often involves other people.

Why documentation and explanation matter

Clear explanation helps others understand the artifact’s purpose and features, helps teammates maintain and extend the project, helps you evaluate whether your solution meets requirements, and helps you justify design decisions and trade-offs. In real software development, poor documentation makes even good code difficult to reuse or improve.

Describing purpose vs. describing function

Students often mix these up.

  • Purpose: the real-world goal (for example, “help users practice vocabulary with spaced repetition”).
  • Function: what the program does (for example, “shows a word, waits for input, checks correctness, updates a score”).

When you communicate your artifact, begin with purpose and audience, then explain functionality.

Explaining development: showing iteration and learning

When you describe your creative process, focus on decisions and evidence: what you tried first, what feedback/testing results you got, what changed because of that feedback, and what trade-offs you made. Specific changes demonstrate real iterative development.

Communicating algorithms and program behavior (conceptually)

Even in Big Idea 1, you should be able to explain how key parts of your artifact behave using input–process–output:

  • Inputs: what information comes in
  • Processing: how the program decides what to do
  • Outputs: what changes the user sees

Example explanation for a recommendation feature:

  • Input: user selects preferred genres.
  • Processing: program checks which items match the genres and ranks them.
  • Output: program displays the top matches.
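That input–process–output explanation can be made concrete in a few lines. A sketch (the catalog, genres, and ranking rule are all invented for illustration):

```python
ITEMS = [
    {"title": "Star Quest",    "genres": {"sci-fi", "action"}},
    {"title": "Lone Galaxy",   "genres": {"sci-fi"}},
    {"title": "Bake-Off Duel", "genres": {"comedy"}},
]

def recommend(items, preferred, top=2):
    # Processing: score each item by how many preferred genres it matches
    scored = [(len(item["genres"] & preferred), item["title"]) for item in items]
    scored.sort(reverse=True)                       # highest score first
    return [title for score, title in scored if score > 0][:top]

preferred = {"sci-fi", "action"}   # Input: hard-coded stand-in for user choice

for title in recommend(ITEMS, preferred):   # Output: display the top matches
    print("Match:", title)
# → Match: Star Quest
# → Match: Lone Galaxy
```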

Visual communication: demos, screenshots, and videos

Often the fastest way to communicate what you built is to demonstrate it. A short demo video or sequence of screenshots can show user flow, key features in action, and evidence the artifact works. The key is selecting examples that represent the program’s purpose—not random clicking.

Keeping a development journal (a practical strategy)

A simple way to document your process is to keep brief entries: what you worked on, bugs you found and how you fixed them, feedback you received, and what you plan next. This makes it much easier later to explain your iterative development clearly.

Connecting communication to academic integrity

When you communicate your artifact, you also communicate what is yours. Cite sources for reused code and media, make clear what parts were created collaboratively, and describe your own contributions accurately.

Exam Focus
  • Typical question patterns
    • Write or choose a clear description of a program’s purpose and intended audience.
    • Explain how feedback/testing led to changes (iteration).
    • Distinguish purpose statements from feature lists or implementation details.
  • Common mistakes
    • Describing only what the program does without explaining why it exists for the user.
    • Claiming “iteration” without specifying what changed and what evidence caused the change.
    • Leaving out citations/credits in documentation or presentations.

Putting It All Together: A Worked Creative Development Scenario

To see how Big Idea 1 concepts connect, it helps to walk through a realistic scenario and explicitly call out where each idea shows up.

Scenario: building a “Study Session Timer” tool

Goal: Create a simple tool that helps students do focused study sessions.

1) Purpose and audience

  • Purpose: help students structure study time using timed intervals.
  • Audience: middle/high school students studying on a laptop or phone.

You start with people and goals, not an implementation detail like “I will use a loop.”

2) Requirements (clear and testable)

Functional requirements:

  • User can start a study timer for a chosen number of minutes.
  • User can pause/resume.
  • Program alerts the user when time is up.
  • Program tracks how many sessions were completed today.

Non-functional requirements:

  • Interface must be readable on small screens.
  • Controls must be simple (no more than 3 main buttons).

3) Design artifacts (planning techniques)

  • Wireframe: one main screen with time display, start/pause, reset.
  • Storyboard: choose duration → start → countdown → alert → session count increments.
  • Pseudocode: rules for session count (only increments when timer reaches zero).
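The pseudocode rule in the last bullet (session count increments only when the timer reaches zero) could be sketched as follows. This is a simplified, hypothetical simulation: a real app would use clock events and UI callbacks rather than a loop, and the function name and parameters are invented for illustration.

```python
# Sketch of the planning pseudocode: the session count increments only
# when the countdown reaches zero naturally, never on an early reset.
# `reset_at` simulates the user pressing reset at a given remaining time.

def run_session(seconds, reset_at=None):
    """Simulate one countdown; return (remaining, sessions_completed)."""
    remaining = seconds
    sessions_completed = 0
    while remaining > 0:
        if reset_at is not None and remaining == reset_at:
            remaining = seconds          # user pressed reset: no credit
            return remaining, sessions_completed
        remaining -= 1                   # one simulated second passes
    sessions_completed += 1              # reached zero naturally: count it
    return remaining, sessions_completed

print(run_session(5))               # (0, 1): completed naturally
print(run_session(5, reset_at=2))   # (5, 0): reset early, no credit
```

Writing the rule this precisely at the planning stage is what makes the later debugging step (distinguishing a natural finish from a reset) straightforward.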

4) Implementation strategy (iterative)

Iteration 1 (minimum viable version):

  • A timer that counts down and displays remaining time.

Iteration 2:

  • Add pause/resume.

Iteration 3:

  • Add session counter and daily reset.

This sequence reduces risk because you get something working early, then build complexity.

5) Testing and debugging

Testing examples:

  • Typical: set 25 minutes, start, confirm countdown.
  • Edge: set 0 minutes, confirm program handles it gracefully.
  • Edge: pause at 1 second left, resume, confirm it still alerts once.

Debugging example:

  • Bug: session count increments when user presses reset.
  • Fix: only increment when countdown reaches 0 naturally.
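The bug and its fix can be shown in a small sketch. The `SessionTimer` class here is hypothetical and event-style: `tick` stands in for one second passing, and the fix is that `reset` restores the countdown without awarding session credit.

```python
# Hypothetical sketch of the fix: session credit is awarded only when
# the countdown reaches zero via tick(), never inside reset().

class SessionTimer:
    def __init__(self, seconds):
        self.seconds = seconds
        self.remaining = seconds
        self.sessions_completed = 0

    def tick(self):
        # Called once per second while the timer is running.
        if self.remaining > 0:
            self.remaining -= 1
            if self.remaining == 0:
                # Countdown reached zero naturally: count the session.
                self.sessions_completed += 1

    def reset(self):
        # Buggy version incremented sessions_completed here; the fixed
        # version just restores the countdown without giving credit.
        self.remaining = self.seconds

timer = SessionTimer(3)
timer.tick()
timer.reset()                        # reset mid-session: no credit
for _ in range(3):
    timer.tick()                     # runs to zero: one session counted
print(timer.sessions_completed)      # 1
```

This also illustrates why the edge-case tests above matter: pressing reset mid-session and letting the timer run out exercise two different paths through the same counter logic.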

6) Feedback and refinement (human-centered)

User feedback:

  • “I can’t tell whether it’s paused.”
  • “The alert is easy to miss when I’m in another tab.”

Refinement:

  • Add clear paused indicator.
  • Add stronger alert message and optional sound (with licensed sound + attribution).

7) Responsible reuse and attribution

You might reuse:

  • A beep sound under an allowed license
  • A UI icon set with attribution
  • A timer code pattern from documentation (with citation if you copy/adapt)

A strong habit is recording sources as you add them, not at the end when you’ve forgotten.

8) Communicating the final artifact

A strong explanation would include:

  • Purpose and audience
  • Key features (mapped to requirements)
  • What changed across iterations and why
  • Any trade-offs (for example, “we kept a single-screen interface to reduce complexity”)
  • Credits for reused content

This is the full Big Idea 1 loop: creative idea → purposeful design → iterative build → responsible reuse → clear communication.

Exam Focus
  • Typical question patterns
    • Given a scenario, identify appropriate requirements, tests, or iterations.
    • Explain which changes best address user feedback.
    • Recognize where attribution is needed when resources are reused.
  • Common mistakes
    • Jumping to implementation details instead of describing goals, requirements, and audience.
    • Calling something “iterative” when it’s really just “added random features.”
    • Neglecting to connect tests to requirements (testing should prove requirements are met).