Week 1 tutorial notes
Design Judgment in the AI-Native Product Era
Learning objectives: explain design as evidence-based help for people using systems; read one everyday object and one product screen; compare vague and structured AI critique; curate AI output with accept/adapt/reject notes; and produce a small product decision artifact grounded in user, task, context, friction, and evidence.
Lesson thesis
Design judgment is the discipline of making decisions that help people use systems well, especially when AI can generate options quickly. It starts with user, task, context, interface, friction, evidence, and decision. Product ownership begins when those observations become a clear choice about what to improve, defer, test, or explain.
Preparation status
Prepared with study notes, references, and web examples.
1. Lecture spine
This session is the entry point for the whole course. The visible topic is simple: what is design? The deeper purpose is more serious: learning how to observe a situation, name what matters, and turn that observation into a product decision.
A university-grade version of this lesson should not treat design as a list of tips. It should introduce a disciplined way of thinking: person, task, context, interface, friction, evidence, decision. These words are simple, but they can support sophisticated product conversations later.
The teaching rhythm is a loop. First the learner observes an object or screen. Then she names what helps or blocks the user. Then she asks AI or another person for another view. Finally, she chooses one improvement and explains why it matters.
- 01 Observe: What is happening here? Look at the object or screen before naming the problem. The first move is evidence, not opinion.
- 02 Name: Make the observation discussable. Use a small shared vocabulary: user, task, interface, friction, feedback, evidence, decision.
- 03 Compare: Do not rely on one view. Ask AI, the tutor, or a teammate for another interpretation, then compare it with what you can see.
- 04 Decide: Turn noticing into product ownership. Choose one improvement and explain why it matters for the user task and product outcome.
2. Product + Design + AI operating model
This course is Product + Design + AI. Product is the field where the work matters. Design is the specialist capability that helps the learner see quality and shape user experience. AI is the accelerator that makes options, drafts, critique, and prototypes appear faster.
The first lesson should already feel AI-native, but not AI-dependent. The learner should use AI to think faster while also learning that generated output is not automatically good output. The human job is to give direction, inspect evidence, reject weak assumptions, and make the product decision.
The tool bench starts deliberately small. Use one LLM and one visual workspace. The learner can open a Figma or FigJam file, create five sticky notes, paste the AI critique, and write the final decision. This is enough to make the lesson tactile without turning it into a click-by-click software class.
- ChatGPT or another LLM: use it for explanation, critique, options, uncertainty, and clearer decision sentences.
- Figma or FigJam: create the first observation board with sticky notes for user, task, context, friction, evidence, AI feedback, and final decision.
- Later AI production tools: Figma Make, Google Stitch, v0, Lovable, Bolt.new, Uizard, Adobe Firefly, and Canva Magic Studio should be introduced only when they serve a specific product/design task.
- Tool judgment rule: ask what the tool produced quickly, what it missed, what design principle reveals the weakness, and what artifact should be saved.
- 01 Open: Start with one AI critique tool. Create or open a ChatGPT account, or use another LLM already available to the learner. This is the critique and wording partner, not the final judge.
- 02 Create: Start with one visual workspace. Go to Figma, create or sign into an account, and open FigJam or a blank Figma Design file.
- 03 Set up: Make the board match the learning loop. Create five areas: human observation, vague AI output, structured AI output, accept/adapt/reject, and final decision.
- 04 Capture: Keep human evidence and AI output separate. Paste or type one observation from the object or screen, then paste the two AI responses beside it.
- 05 Decide: Finish with ownership, not copying. Mark one AI point to accept, one to adapt, and one to reject, then write the final product decision sentence.
- 06 Save: Keep the first artifact. Save the board link or screenshot, the prompt used, the AI output, and the final decision sentence in the learner artifact folder.
3. What design means professionally
In everyday conversation, people often use "design" to mean appearance. In professional product work, design means more than appearance. It is the shaping of a product, service, object, or system so people can understand it, use it, and reach an outcome.
Human-centred design gives this lesson its academic foundation. ISO 9241-210 frames human-centred design around interactive systems across their life cycle. NIST summarizes the goal in practical terms: systems should be usable and useful by focusing on users, needs, requirements, human factors, usability, accessibility, and evaluation.
This does not mean a first-week learner must read standards documents. It means the tutor can teach a simple definition with serious backing: design is the practice of making decisions that help people use systems well.
- Design is concerned with people and systems interacting, not only with visual surface.
- A useful design helps the right person complete a meaningful task in a real context.
- A usable design reduces avoidable confusion, effort, risk, and delay.
- A responsible design considers accessibility and evaluation before and after launch.
4. Decoration, usability, and product value
Decoration can matter. Color, imagery, mood, typography, brand personality, and visual polish all influence how a product feels. The mistake is treating decoration as the whole of design.
A button can be beautiful and still unclear. A form can be stylish and still too difficult. A screen can look modern while hiding the most important information. Good design asks whether the person can complete the task with confidence.
For a product owner, this distinction is vital. If the problem is decoration, the next decision might be about brand expression. If the problem is usability, the next decision might be about labels, sequence, information hierarchy, accessibility, or error prevention.
5. First vocabulary set
The learner does not need a large vocabulary in Session 1. She needs a small vocabulary that can be used immediately. These six words make design talk more precise without turning the lesson into jargon.
The most important move is from opinion to evidence. "This is confusing" is a start. "This is confusing because the delivery cost appears after the user has already entered payment details" is a stronger design statement.
User
The person using the object, screen, service, or system.
Example: In a food delivery app, the user may be a hungry person ordering dinner.
Task
The action the user is trying to complete.
Example: Choosing a restaurant, paying for an order, booking a time, or sending a message.
Interface
The part of a product or object the user sees, reads, touches, clicks, hears, or controls.
Example: A screen, button, form, menu, handle, switch, label, sound, or warning.
Friction
Anything that makes progress harder, slower, less clear, less safe, or less trustworthy.
Example: A hidden fee, unclear button, tiny label, missing confirmation, or confusing error.
Feedback
A signal that tells the user what happened, what state they are in, or what to do next.
Example: A loading spinner, success message, disabled button, error text, light, sound, or vibration.
Evidence
A visible detail or observed behavior that supports a design judgment.
Example: The total cost appears only at the final step, so users may feel surprised before payment.
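For learners who also write code, the six-word vocabulary can be captured as a tiny record. This is purely illustrative and not part of the course toolchain; the class name and field values are hypothetical examples drawn from the checkout scenario.

```python
from dataclasses import dataclass


@dataclass
class ObservationNote:
    """One design observation, captured with the Session 1 vocabulary."""
    user: str       # the person using the object, screen, or system
    task: str       # the action the user is trying to complete
    interface: str  # the part the user sees, touches, or controls
    friction: str   # what makes progress harder or less clear
    feedback: str   # the signal that tells the user what happened
    evidence: str   # the visible detail that supports the judgment


# Hypothetical example based on the food-delivery checkout scenario.
note = ObservationNote(
    user="a hungry person ordering dinner",
    task="pay for an order",
    interface="the checkout screen",
    friction="delivery cost is hidden until the final step",
    feedback="no running total is shown",
    evidence="the total cost appears only at the final step",
)
```

The point of the structure is the discipline it enforces: every field must be filled before the observation counts as complete.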
6. How to read an everyday object
Everyday objects are excellent teaching material because they make design physical. A kettle, door handle, microwave, medicine bottle, traffic sign, remote control, or washing machine already contains decisions about shape, labels, order, feedback, and safety.
The point is not to find a perfect object. The point is to slow down and see how the object teaches the user. A handle suggests where to hold. A switch suggests what changes. A light gives feedback. A label explains a state. A badly placed label or tiny measurement line creates friction.
This exercise protects confidence. The learner does not need drawing skill or software knowledge to begin. She only needs to look carefully and explain what the object is doing for the person.
- What is the object helping a person do?
- Which part invites the correct action?
- Which part shows state, safety, quantity, direction, or consequence?
- What could the user misunderstand if they were tired, rushed, distracted, or new?
- What one change would improve the task without redesigning everything?
7. How to read a product screen
A product screen is also an interface. It teaches the user what matters, what can be done, and what happens next. A screen can fail even if it looks polished, because polish does not guarantee clarity.
Professional screen reading starts with purpose. What is the user trying to do at this moment? Then it looks for the primary action, information order, language, feedback, and accessibility. These areas connect directly to sources such as Apple interface guidance, USWDS accessibility guidance, and Nielsen Norman Group usability heuristics.
For Session 1, the learner should analyze only one screen at a time. Complexity will come later. A single checkout screen, calendar event screen, food ordering screen, map screen, or public service start page is enough.
- Purpose: what moment is this screen for?
- Primary action: what should the user do next?
- Information order: what must be understood first, second, and third?
- Language: are labels and messages familiar and specific?
- Feedback: does the screen show status, success, error, loading, or next step?
- Accessibility: can different users perceive, operate, and understand the interface?
8. From observation to product ownership
Product ownership begins when design observation becomes a decision. A product owner does not only collect opinions. They help the team understand what matters, why it matters, and what should happen next.
This is why the artifact for Session 1 is a design observation and product decision sheet. The learner practices moving from "I noticed this" to "I would improve this first because it affects this user task."
This is also where startup relevance appears. In a fast startup, the team rarely has complete certainty. The product owner must make limited but thoughtful decisions: improve this, defer that, test this assumption, clarify this acceptance criterion, or ask for more evidence.
- 01 Observation: The delivery cost appears only near the end. Describe what you can see without solving it yet.
- 02 Evidence: The cart shows item price, but not tax or delivery. Point to the exact screen area, label, sequence, message, or behavior.
- 03 Judgment: Surprise cost can reduce trust and stop checkout. Explain why the evidence matters to the user and business.
- 04 Decision: Show estimated total earlier in the cart. Choose a practical next step that a team could build, test, or defer.
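The four steps above can be sketched as a small helper that assembles the "I would improve ___ first because ___" sentence the lesson asks for. The function name and wording are illustrative assumptions, not part of the lesson materials.

```python
def decision_sentence(improvement: str, user_task: str, reason: str, tradeoff: str) -> str:
    """Compose a one-sentence product decision from the four named parts."""
    return (
        f"I would improve {improvement} first because it affects {user_task} "
        f"and {reason}; I am not redesigning everything because {tradeoff}."
    )


# Hypothetical example using the checkout scenario from the lesson.
sentence = decision_sentence(
    improvement="the visibility of the estimated total in the cart",
    user_task="confirming the full cost before payment",
    reason="surprise cost can reduce trust and stop checkout",
    tradeoff="the highest-risk issue is trust, not visual polish",
)
```

Forcing all four arguments mirrors the rubric: a decision without a named user task or tradeoff is not yet a product decision.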
9. AI as a study partner
AI appears in this lesson as a study partner, not as a replacement designer. It can explain vocabulary, offer alternative readings, generate examples, and help the learner write clearer decisions. This is especially useful when the learner is new to design language or learning in a second language.
AI also creates risk. Figma's own AI guidance warns that AI outputs may be misleading or wrong and should not replace expert advice or research. NIST's AI risk work similarly reminds teams that trustworthiness must be managed. In a teaching context, the simple rule is: AI can help us think, but we still check.
Good prompting is not a magic phrase. OpenAI's prompting guidance emphasizes clear instructions, audience, purpose, and iteration. For product design, that means providing context, naming the user and task, asking for evidence, and requiring uncertainty.
I am studying beginner product design.
Object or screen I am looking at: [describe it here]
Please help me analyze it in simple English. Return:
1. User: who is this for?
2. Task: what is the person trying to do?
3. Context: where or when might they use it?
4. Helpful decisions: what makes the task easier?
5. Friction: what could confuse, slow, or worry the user?
6. Evidence: what can I point to in the object or screen?
7. Product decision: one improvement I would choose first, and why it matters.
Rules:
- Do not redesign the whole thing.
- Give two possible improvements, then recommend one.
- Tell me what you are uncertain about.
- Use plain language suitable for a new learner.
- Does the answer refer to visible evidence from the object or screen?
- Does it name the user and task, not only the visual style?
- Does it recommend one realistic improvement instead of redesigning everything?
- Does it admit uncertainty where context is missing?
- The answer says "modern", "clean", or "intuitive" without proof.
- The answer invents user research or business facts we did not provide.
- The recommendation is too broad for a first improvement.
- The AI ignores accessibility, context, or the real task.
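For tutors who prefer to keep the structured prompt in version control, it can be assembled programmatically from the learner's description. This is a sketch only; the helper name is hypothetical, and sending the prompt to an LLM happens elsewhere.

```python
def build_critique_prompt(description: str) -> str:
    """Assemble the Session 1 structured critique prompt for one object or screen."""
    sections = [
        "I am studying beginner product design.",
        f"Object or screen I am looking at: {description}",
        "Please help me analyze it in simple English. Return:",
        "1. User: who is this for?",
        "2. Task: what is the person trying to do?",
        "3. Context: where or when might they use it?",
        "4. Helpful decisions: what makes the task easier?",
        "5. Friction: what could confuse, slow, or worry the user?",
        "6. Evidence: what can I point to in the object or screen?",
        "7. Product decision: one improvement I would choose first, and why it matters.",
        "Rules:",
        "- Do not redesign the whole thing.",
        "- Give two possible improvements, then recommend one.",
        "- Tell me what you are uncertain about.",
        "- Use plain language suitable for a new learner.",
    ]
    return "\n".join(sections)


prompt = build_critique_prompt("a food-delivery checkout screen")
```

Keeping the prompt as one function makes the vague-versus-structured comparison repeatable: only the description changes between runs.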
I am practicing product/design judgment.
Observation: [paste my observation]
Give me:
1. Three possible interpretations of what is happening.
2. The visible evidence that would support each interpretation.
3. One thing I may be assuming without enough evidence.
4. One question I should ask before recommending a change.
- Does AI challenge your assumptions instead of only improving your wording?
- Does it separate evidence from interpretation?
- Does it name what would need user research or more context?
- AI treats every interpretation as equally likely.
- AI invents user behavior that is not visible.
- AI gives a confident answer without naming missing evidence.
Act as a product design tutor.
My draft decision sentence: [paste sentence]
Improve it so it is clear enough for a product team. Keep it to one sentence. It must include:
- what I would improve first
- the user task it affects
- the product reason it matters
- one tradeoff or reason I am not redesigning everything
- Does the sentence name the change, user task, product reason, and tradeoff?
- Can a designer, engineer, or founder understand the next move?
- Is it one decision rather than a pile of unrelated suggestions?
- The sentence becomes generic product language.
- The recommendation is too large for one first improvement.
- The tradeoff disappears.
Turn this observation into a simple FigJam board structure.
Observation: [paste observation]
Return:
1. Board title.
2. Five sticky-note headings.
3. The exact text for each sticky note.
4. One "accept/adapt/reject" section for AI feedback.
5. One final product decision sentence.
Keep the board simple enough for a beginner to recreate in 10 minutes.
- Can the learner recreate the board in 10 minutes?
- Does the board separate human observation from AI critique?
- Does the final note preserve the learner’s product decision?
- The board becomes a complex design workshop template.
- AI creates too many sections for Session 1.
- The tool output hides the original evidence.
10. Model artifact and quality examples
A strong answer does not need advanced vocabulary. It needs specificity. It names the user, task, helpful decision, friction, evidence, AI curation, first improvement, and tradeoff.
The model artifact uses the checkout example to make the quality bar visible. The weak answer is not weak because it is short. It is weak because it cannot support action. The better answer starts to use evidence. The strong answer shows enough reasoning for a product team, tutor, or external reviewer to understand the decision.
The full sample artifact and public handout are stored in docs/accelerator/assets/session-01 so the standard can be reused outside the lesson page.
Weak
This checkout looks good and simple. AI said it is clean, so I would make the colors nicer and add icons.
Evidence: The answer uses words such as good, clean, nicer, and icons without pointing to what the user is trying to do or what the screen shows.
Assessment: This is not yet assessment-ready because it treats AI as the authority, stays at the level of taste, and gives no user task, visible evidence, scope, or tradeoff.
Better
This checkout helps a shopper pay for two items. The Pay now button is clear, but the delivery cost is not visible yet. I would show delivery cost earlier because the user may want to know the total before paying.
Evidence: The answer points to delivery cost and payment confidence, so it is moving from taste toward user-task reasoning.
Assessment: This is useful but not fully strong. It names the task and one friction point, but it does not preserve the AI trail, state uncertainty, or explain why this change should be chosen instead of a broader redesign.
Strong
A shopper at checkout is trying to confirm the full cost before payment. The screen shows item price and a Pay now action, but delivery says "Next step", so the cost commitment is unclear. I would show estimated delivery and total before Pay now if the system can calculate them, and I would leave the rest of the layout unchanged because the highest-risk issue is trust, not visual polish.
Evidence: The answer names user, task, visible evidence, friction, condition, decision, and tradeoff without inventing research or metrics.
Assessment: This is strong for Lesson 1 because a product team could discuss, test, or scope the decision immediately. It also keeps AI in a supporting role instead of letting AI own the judgment.
- 01 Human observation: A shopper may not know the full cost before Pay now. The learner first names the user, task, context, helpful decisions, friction, visible evidence, and assumption before asking AI for help.
- 02 Vague prompt: The prompt gives AI too little context. The learner saves the weak prompt as a contrast exercise: "Is this design good?"
- 03 Vague output: The output sounds polished but is not decision-grade. The likely answer praises cleanliness, suggests colors or icons, and misses priority because the prompt did not name user, task, evidence, or criteria.
- 04 Structured prompt: The prompt becomes a tiny product/design brief. The learner then provides the screen description, user task, output fields, recommendation limit, and uncertainty requirement.
- 05 Stronger output: AI becomes useful because the criteria are clearer. The stronger answer identifies late delivery cost as the main trust risk, suggests two options, recommends one, and admits missing delivery and tax rules.
- 06 Accept/adapt/reject: The learner curates the AI output. Accept the missing-cost insight, adapt the estimate into a conditional decision, and reject decorative polish or full redesign advice.
- 07 Revised decision: Human judgment owns the final product move. Show estimated delivery and total before Pay now if the system can calculate them, while keeping the rest of the layout stable to avoid unnecessary scope.
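The accept/adapt/reject curation step can also be expressed as a small grouping function, which a tutor might use to tally curation notes across many learners. Everything here, including the label names, is an illustrative sketch rather than part of the lesson toolchain.

```python
def curate(suggestions: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group AI suggestions by the learner's accept/adapt/reject label."""
    groups: dict[str, list[str]] = {"accept": [], "adapt": [], "reject": []}
    for text, label in suggestions:
        if label not in groups:
            raise ValueError(f"unknown label: {label}")
        groups[label].append(text)
    return groups


# Hypothetical curation notes from the checkout example.
curated = curate([
    ("Delivery cost appears too late", "accept"),
    ("Show an estimated total", "adapt"),
    ("Add decorative icons and new colors", "reject"),
])
```

Rejecting unknown labels mirrors the teaching rule: every AI suggestion must receive an explicit verdict, not be left unlabeled.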
11. Studio exercise and rubric
The studio exercise should feel practical, not like a quiz. The tutor and learner complete the first example together, then the learner attempts a second example independently. The tutor should ask questions rather than rescue too quickly.
The target artifact is small but serious: one object observation, one screen observation, one AI-output critique, one prompt comparison, one FigJam or Figma observation board, accept/adapt/reject notes, and one product decision sentence. Small does not mean shallow. This artifact trains the exact habits used later in briefs, critique sheets, acceptance criteria, prototypes, and case-study writing.
The public handout should be enough for a motivated learner to complete the artifact without guessing what strong looks like. The tutor should point back to the model artifact whenever the learner drifts into taste, copying AI, or redesigning too broadly.
- 01 Part A, Everyday object reading: Choose one physical object and complete the observation fields.
- 02 Part B, Product screen reading: Choose one safe, non-private app or web screen and complete the same fields.
- 03 Part C, AI-generated output critique: Generate or paste one simple AI design critique or interface idea, then identify what is specific, what is vague, and what is unsupported.
- 04 Part D, Prompt comparison: Run the vague prompt and the structured prompt. Mark one AI point to accept, one to adapt, and one to reject.
- 05 Part E, Product decision: Open Figma or FigJam, make a simple observation board, and write one decision sentence: I would improve ___ first because ___.
- Names a plausible user and task.
- Uses visible evidence instead of taste alone.
- Identifies at least one helpful design decision.
- Identifies one friction point without exaggerating.
- Uses AI critically: accept, adapt, or reject.
- Compares vague AI output with structured AI output.
- Keeps human observation separate from AI output.
- Explains at least one accepted, adapted, and rejected AI suggestion.
- Creates or sketches a simple visual observation board.
- Finishes with one clear product decision sentence that names a tradeoff or uncertainty.
12. Home study and follow-up reading
For home study, the learner should find three designed things: at least one everyday object, one app or web screen, and one service or sign. For each, answer: who is the user, what task are they trying to complete, what helps, what creates friction, what evidence supports the observation, and what would I improve first?
The lecturer can use the readings below to deepen future versions of the course. The learner does not need to read all of them after Session 1. The purpose is to keep the teaching standard grounded in professional sources while preserving accessible language.
Follow-up reading for the lecturer
References
- ISO 9241-210: The current ISO standard for human-centred design of interactive systems. Use it to anchor the lesson in professional practice: design work continues across the product life cycle and is concerned with human-system interaction.
- NIST Human Centered Design: NIST summarizes human-centered design as making systems usable and useful by focusing on users, needs, requirements, human factors, usability, accessibility, and evaluation.
- Design Council Double Diamond: A visual model for moving from understanding a problem to testing possible solutions. It helps the learner see design as inquiry and decision-making, not decoration.
- Nielsen Norman Group, 10 usability heuristics: Jakob Nielsen's enduring interaction-design heuristics. Session 1 uses only a beginner subset: visibility, familiar language, recognition over memory, focus, and error recovery.
- Apple UI Design Dos and Don'ts: Practical interface guidance for layout, hit targets, readable text, contrast, organization, and alignment. Useful for translating design ideas into visible interface checks.
- U.S. Web Design System accessibility guidance: A clear accessibility reference for perceivable, operable, understandable, and robust interfaces, plus concrete reminders that accessibility requires testing and iteration.
- Figma, AI tools in Figma Design: Current Figma guidance for AI design tools, including the reminder that AI output can be misleading or wrong and should be checked against research and expert judgment.
- OpenAI Academy, Prompting fundamentals: OpenAI Academy guidance on writing clear prompts, setting the task and audience, asking for options, and iterating toward more useful responses.
- Figma Make: Figma Make is a current signal that product/design learners need prompt-to-prototype judgment, not only static screen awareness.
- Google Labs, Stitch AI UI Design: Google Stitch is an AI-native UI design canvas that shows where prompt-to-interface workflows are heading.
- v0 documentation: v0 shows how prompts can become product interface prototypes and code, which raises the value of clear product/design direction.
- Lovable quick start: Lovable is a practical example of natural-language full-stack product building, useful later when design judgment moves into prototype critique.
- Bolt.new: Bolt.new connects prompting with an in-browser development environment, reinforcing that AI acceleration still needs human product criteria.
- IBM Enterprise Design Thinking: IBM's design-thinking approach is useful for connecting beginner observation to user outcomes, rapid making, reflection, and alignment across a team.
Web examples
- Apple, UI Design Dos and Don'ts: Use this as the first professional contrast. Apple frames good interface work through readable content, touch targets, contrast, spacing, organization, and alignment. That lets the learner see design as concrete decisions, not taste.
- GOV.UK Design System, Button component and start buttons: The GOV.UK start button is a simple, teachable example of a main action. It shows how one visible action can reduce uncertainty at the beginning of a service journey.
- Baymard Institute, Cart and checkout usability research: Baymard's checkout research gives the product-owner bridge: small design friction in checkout can become lost revenue, support load, and user abandonment.
- Figma Learn, Use AI tools in Figma Design: Use this source to discuss AI honestly. Figma positions AI as useful for drafting and iteration, while warning that outputs still require human checking.
- Figma, Figma Make: Use this as the first tool-landscape signal: AI can now move from prompt to interactive product draft, so the learner must become good at direction and critique.
- Vercel, v0 documentation: Use this to show that prompt-to-code tools are entering product workflows. The lesson should not teach coding yet; it should teach the judgment needed to steer generated interfaces.
- Nielsen Norman Group, 10 usability heuristics for user interface design: Use the first few heuristics as a starter critique language: visibility, familiar language, recognition rather than recall, focus, and recoverable errors.
- U.S. Web Design System, Accessibility guidance: Use USWDS to show that accessibility is not a later polish step. A useful design must be perceivable, operable, understandable, and robust for different people and contexts.
Complete a three-loop studio exercise: read one everyday object, read one safe app or web screen, critique one AI-generated design suggestion, compare a vague prompt with a structured prompt, then capture the result in a simple Figma/FigJam observation board before writing one product decision sentence.
- Can the learner separate decoration from design without dismissing visual quality?
- Can the learner name a plausible user, task, context, interface, friction point, and evidence?
- Can the learner explain one helpful design decision in an everyday object?
- Can the learner read a product screen by purpose, information order, primary action, feedback, and accessibility?
- Can the learner use AI for explanation or critique while identifying what to accept, adapt, and reject?
- Can the learner compare vague AI output with structured AI output and explain why one is more useful?
- Can the learner explain why a weak, better, and strong answer differ in evidence quality?
- Can the learner create a simple visual observation board in Figma or FigJam?
- Can the learner finish with one product decision sentence that a team could act on?
Home study
Find three designed things: one everyday object, one app or web screen, and one AI-generated interface idea or critique. For each, write the user, task, context, one helpful decision, one friction point, visible evidence, and one product decision sentence. Save the strongest example into a simple Figma/FigJam board with accept/adapt/reject notes for the AI output.