KLD Institute

Week 1 tutorial notes

Screen Reading in the AI-Native Product Era

Read one product screen as a user moment; identify the screen job, hierarchy, primary action, supporting information, status, friction, and accessibility concerns; use AI critique critically; then create one evidence-backed product decision.

Lesson thesis

A screen is a product moment. It should orient the user, show what matters, make the main action visible, explain status or feedback, reduce avoidable friction, and give AI enough evidence to support critique without replacing judgment.

Preparation status

Prepared with study notes, references, and web examples.

Study notes

1. Lecture spine

Session 2 moves the program from ordinary design noticing into product-screen judgment. In Session 1, the learner saw that design helps people act and that AI output must be checked. Now she applies that logic to one screen and asks: what user moment is this, what is the screen job, what does the user notice, what can they do next, what feedback do they receive, and what might block confidence?

A screen is not a poster. A poster may mainly communicate a message. A product screen usually supports a task. It shows information, offers controls, responds to actions, gives feedback, and carries risk. If the learner can read those pieces clearly, she can later write better briefs, critique AI output, and explain product decisions.

The standard for this session is evidence-based screen reading with AI acceleration. The learner should not say only "it looks good" or "it is confusing." She should point to visible details: the title, button label, order of information, missing status, hidden cost, low contrast, crowded choices, unclear next step, or unsupported AI suggestion.

Diagram showing a five-step loop for reading a product screen before improving it.
Screen reading loop: The Session 2 loop: frame, scan, locate, check, decide. AI enters after evidence has been named. Original KLD Institute teaching diagram.
  1. Frame. Who is here, and what are they trying to finish? Name the user moment before judging the screen. A screen is good or weak only in relation to what the user is trying to do.
  2. Scan. What is noticed first, second, and third? Write what the screen makes most visible. This includes title, biggest content, strongest contrast, primary action, and anything repeated.
  3. Locate. What is the screen asking the user to do? Find the action, supporting information, status, and possible friction before suggesting a fix.
  4. Check. What might break confidence? Look for visible feedback, hidden risk, accessibility concerns, and missing reassurance.
  5. Decide. What should improve first? Use human evidence and AI critique to write one practical improvement with a product reason.

2. What makes a screen professional

Professional interface guidance gives us practical ways to inspect screens. Apple guidance points to readable content, adequate contrast, tappable controls, organization, and alignment. GOV.UK templates show how consistent page structure can help users understand where they are and what the service is for. Nielsen Norman Group heuristics help us ask whether the screen shows status, uses familiar language, and avoids making users remember too much.

Accessibility guidance adds a non-negotiable layer. A screen should be perceivable, operable, understandable, and robust. For a beginner, that can start with plain questions: can people read it, tap it, understand it, and recover if something goes wrong?

The learner does not need to memorize all the standards. She needs to learn that screen quality is not personal taste. It can be inspected with repeatable questions. This also prepares her for AI tools: a generated screen must pass the same screen-quality checks as a human-made screen.

Screen quality anchors
  • A screen should orient the user: where am I, and what is this for?
  • A screen should prioritize attention: what matters first, second, and third?
  • A screen should offer action: what can I do next?
  • A screen should provide feedback: what happened, what is happening, or what will happen next?
  • A screen should reduce avoidable risk: what could confuse, worry, or block the user?

3. Vocabulary for reading screens

Session 2 adds vocabulary for screens. These words let the learner say more than "this page is confusing." They help her explain which part of the screen is doing the work and which part is creating friction.

The most important new phrase is screen job. If the screen job is unclear, every other decision becomes harder. A screen cannot have good hierarchy if nobody knows what task it is supporting.

Screen job

The reason this screen exists in the user journey.

Example: A checkout payment screen helps the user confirm details and pay.

Hierarchy

The order in which the screen asks for attention.

Example: The page title is first, the delivery total is second, and the Pay now button is third.

Primary action

The main thing the user can do next.

Example: Start now, Continue, Pay now, Save changes, Send message.

Supporting information

Information that helps the user decide safely.

Example: Total cost, delivery time, eligibility, file size limit, warning, or next step.

Status

A signal about what is happening, what happened, or what state the system is in.

Example: Loading, saved, failed, empty, disabled, selected, completed.

Screen friction

A detail that makes the task harder, slower, less clear, or less trustworthy.

Example: A hidden fee, vague button, missing error message, crowded layout, or tiny tap target.

4. Screen anatomy

A useful screen reading starts by separating parts of the screen. The learner should not try to judge the whole thing at once. First she should find orientation, decision content, controls, feedback, and support.

Orientation answers "where am I?" Decision content answers "what do I need to know?" Controls answer "what can I do?" Feedback answers "what happened?" Support answers "is it safe or sensible to continue?"

This anatomy is not a rigid wireframe. Some screens are dashboards, some are forms, some are content pages, some are modals, some are empty states. The same questions still help. The learner can also use this anatomy as a prompt checklist when asking AI for critique.

Annotated fictional booking screen showing orientation, decision content, support, action, status, and friction.
Screen anatomy annotation: A screen becomes easier to critique when the learner separates orientation, decision content, support, action, feedback, and risk. Original KLD Institute teaching diagram.
Screen anatomy checklist
  • Orientation: title, location, navigation, progress, or service name.
  • Decision content: the information the user needs before acting.
  • Controls: buttons, links, fields, menus, choices, toggles, and inputs.
  • Feedback and status: loading, saved, success, error, empty, disabled, selected.
  • Support and reassurance: cost, delivery, eligibility, safety, consequence, recovery.

5. User, task, and context inside a screen

A screen does not exist in isolation. The same screen may feel clear or unclear depending on the user moment. A new user may need orientation. A returning user may need speed. A worried user may need reassurance. A rushed user may need fewer decisions.

This is why the later user, task, context, and outcome lesson will add deeper product framing. Session 2 prepares for that by asking: what situation is the user in when this screen appears?

Context changes the screen job
New user
Weak: The screen uses internal language and expects the user to already know the system.
Strong: The screen explains the purpose and next action without assuming previous knowledge.
Returning user
Weak: The screen hides common actions behind new labels or changing layouts.
Strong: The screen makes familiar actions easy to find again.
Worried user
Weak: The screen asks for action before answering the user's risk question.
Strong: The screen shows cost, consequence, privacy, recovery, or confirmation before commitment.

6. Hierarchy and scanning

Hierarchy is the order of attention. It is one of the first design-specialist skills because it turns visual layout into user clarity. The learner should ask: what do I notice first, second, and third? Is that the same order the user needs?

People often scan screens rather than reading every word carefully. This makes hierarchy especially important. Titles, headings, button labels, form labels, grouping, spacing, and contrast should help users find the next useful thing.

A common beginner mistake is making everything important. If every element is visually loud, the user has to do the prioritization work. Good hierarchy does some of that work for them. AI-generated screens often fail here because they can look polished while giving equal weight to everything.

Fictional interface with numbered attention markers showing first, second, and third attention order.
Attention order map: Attention order should support the task. If the first thing noticed is not useful, the screen may be forcing the user to work too hard. Original KLD Institute teaching diagram.
Attention-order checklist
  • Look for the largest text or strongest visual element.
  • Look for the strongest action button or input.
  • Look for the first piece of information that helps the user decide.
  • Look for what is visually loud but not important.
  • Look for what is important but visually buried.
Weak and strong hierarchy
Good hierarchy
Weak: Everything competes equally, so the user must search for meaning.
Strong: The user can tell the screen job, important information, and next action quickly.
Good grouping
Weak: Related information is scattered, forcing the user to remember it.
Strong: Information sits near the decision or control it affects.
Good contrast
Weak: Low contrast or excessive emphasis makes the screen harder to scan.
Strong: Important content is readable and distinct without shouting.

7. Action, support, feedback, and status

The primary action is the main next step. It is a promise. A button labelled "Pay now" promises payment. A button labelled "Save changes" promises that the changes will be stored. A vague label such as "Submit" often forces the user to guess what will happen.

Status and feedback protect confidence. If the user clicks and nothing visible happens, they may click again, leave, or worry. Loading, success, error, empty, disabled, selected, and saved states all help the user understand the system.

Support is the information that makes the action safe. A product owner should ask what the user needs before acting: cost, consequence, eligibility, privacy, recovery, or next step. This session introduces states lightly. The actions lesson will go deeper into buttons, states, errors, and recovery.

Action and status checklist
  • What is the primary action?
  • Does the label say what will happen?
  • Is the action placed after enough information to decide?
  • Is there a clear way to go back, cancel, edit, or recover if needed?
  • Does the screen show status after the action?
Support checklist
  • Useful support: cost, delivery time, eligibility, availability, limits, requirements, privacy, next step.
  • Weak support: repeated labels, vague marketing copy, decorative explanation, hidden conditions.
  • Missing support: the user cannot tell whether the action is final, paid, reversible, safe, or editable.

8. Trust and risk

Trust is built from small screen details. In checkout, trust may depend on total cost, delivery timing, return policy, payment security, and whether the user can edit mistakes. In a government service, trust may depend on eligibility, document requirements, privacy, and whether the user can save progress.

Baymard checkout research is useful here because it shows the product impact of screen friction. If users meet hidden costs, too many fields, or confusing checkout steps, the design problem can become a business problem.

A product owner should learn to spot trust questions early. The practical question is: what does the user need to know before acting confidently?

Trust questions on screens
Cost
Weak: Costs appear late, after the user has invested effort.
Strong: The user sees total cost, delivery, taxes, and conditions before payment.
Consequence
Weak: The user cannot tell what happens after pressing the button.
Strong: The screen explains whether the action is final, editable, reversible, or shared.
Recovery
Weak: The error appears without plain explanation or next step.
Strong: The screen offers a clear way to fix missing information or leave safely.

9. Accessibility as screen quality

Accessibility belongs inside screen reading from the beginning. The learner does not need to master accessibility law in Session 2, but she should learn that screen quality must work for different bodies, contexts, devices, and abilities.

A screen with low-contrast text, tiny tap targets, color-only errors, missing labels, unclear focus order, or dense language may block people even if the visual style looks polished.

The beginner rule is simple: if users cannot perceive, operate, understand, or recover, the screen has a design problem.

Beginner accessibility screen check
  • Perceivable: can users notice and read the content?
  • Operable: can users reach and use controls by touch, keyboard, or assistive technology?
  • Understandable: are words, order, feedback, and errors clear?
  • Robust: will the screen still work across devices, browsers, zoom, and assistive tools?
  • Not color alone: does the screen use labels, icons, position, or text as well as color?
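The contrast part of the perceivable check can be made concrete. The sketch below implements the WCAG 2.x contrast-ratio formula (relative luminance of sRGB colors, then a ratio between 1:1 and 21:1); the hex colors are made-up examples, and 4.5:1 is the WCAG AA threshold for normal-size body text.

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color like '#767676'."""
    hex_color = hex_color.lstrip("#")
    channels = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each sRGB channel before applying the luminance weights.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]


def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))

# A light gray label on a white card fails the 4.5:1 AA threshold.
print(contrast_ratio("#999999", "#ffffff") < 4.5)  # True
```

A learner can pick the foreground and background colors straight from a screenshot with a color picker, which turns "this text looks faint" into a checkable claim.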

10. AI tool bench and walkthrough

Lesson 2 must feel like a studio, not a reading assignment. The learner should touch real tools early, but the tool workflow stays narrow. First she captures or recreates one safe screen. Then she annotates it in FigJam or Figma. Then she asks AI for critique. Only after that should she try a small generated variant in a prompt-to-interface tool.

This order matters. If the learner starts by generating a new screen, she may confuse output speed with design skill. If she starts by reading the current screen, AI tools become accelerators for a clear design task.

Current tools change quickly, so the named tools are examples, not permanent dependencies. The transferable skill is tool judgment: what does this tool produce quickly, what does it miss, and what design fundamentals must I use to check it?

Four-step workflow showing capture, FigJam annotation, AI critique, and comparison or prototyping.
Lesson 2 tool workflow: The tool stack stays small: capture a safe screen, annotate in FigJam or Figma, ask AI for critique, then compare or generate one small variant. Original KLD Institute teaching diagram.
  1. Step 1: Open the AI critique tool. Create or open a ChatGPT account, or use another LLM already available to the learner. The tool is used for critique, uncertainty, and rewriting, not final judgment.
  2. Step 2: Create the annotation board. Create or open a Figma account and open FigJam. Start with a blank board, add the screen image or a simple recreated rectangle, then add sticky notes for screen job, attention order, action, support, status, friction, accessibility, and decision.
  3. Step 3: Protect private information. Use a safe public screen or a redacted screenshot. Remove names, payment details, addresses, health details, private messages, and internal company data before using AI or cloud tools.
  4. Step 4: Generate only after criteria are clear. Optionally open Figma Make, Google Stitch, or Uizard after the critique is written. Ask for one small variant, then compare it against the original screen-reading criteria.
  5. Step 5: Save the learning artifact. Save the board link or screenshot, prompt, AI critique, optional variant, accept/adapt/reject notes, and final decision sentence.
Tool walkthrough rules
  • Expected output: one FigJam or Figma board with the screen, six to eight evidence notes, one AI critique section, and one final decision sentence.
  • Use ChatGPT or another LLM for structured critique and clearer wording.
  • Use FigJam or Figma for visual annotation and evidence organization.
  • Use Figma Make, Stitch, Uizard, or a current equivalent only for one small variant after the screen criteria are written.
  • Reject any tool output that invents user research, hides accessibility issues, changes the screen job, or treats polish as proof.

11. AI prompt lab library

AI can be useful in Session 2 because it can help the learner notice possible issues, improve wording, challenge assumptions, and turn observations into a board or small variant prompt. It can also produce generic critique if the prompt is vague. The learner should not ask "Is this good?" and accept the answer.

A stronger prompt gives AI a screen description, product context, structured fields, visible evidence, and uncertainty requirements. This turns AI into a screen-reading assistant rather than a judge.

The learner should practice three responses to AI: accept, adapt, reject. Accept specific evidence-backed points. Adapt useful but broad suggestions. Reject claims that are unsupported, invented, outside scope, or not relevant to the user moment.

Prompt contrast
Vague prompt: "Is this screen good? [Paste a safe screenshot description or describe the screen.]" Use this as a contrast exercise; it usually produces generic comments about simplicity, modernity, or color.
Structured prompt: Use the full Session 2 prompt when the learner needs screen job, attention order, action, support, status, friction, accessibility, and one decision. Do not ask AI to redesign everything before the learner has read the current screen.
Session 2 prompt lab
I am reading one product screen as an AI-native product/design learner.

Screen I am looking at:
[describe the screen, paste a safe screenshot description, or describe a redacted screenshot]

Product context I know:
[who the product is for, what the user is trying to do, and anything I know about business risk or constraints]

Please analyze it in simple English.
Return:
1. Screen job: what is this screen trying to help the user do?
2. User, task, context: who is using it, what are they trying to do, and what situation are they in?
3. Attention order: what do users probably notice first, second, and third?
4. Primary action: what is the main thing the user can do next?
5. Supporting information: what information helps the user decide?
6. Feedback or status: does the screen show where the user is, what happened, or what will happen next?
7. Friction or risk: what could confuse, slow, worry, or block the user?
8. Accessibility check: what might be hard to read, tap, understand, or operate?
9. Product decision: recommend one improvement first and explain why.
10. AI uncertainty: what are you unsure about because the screen or context is missing evidence?

Rules:
- Do not redesign the whole screen.
- Give two possible improvements, then recommend one.
- Use visible evidence from the screen.
- Separate visible evidence from assumptions.
- Tell me what you would accept, adapt, or reject if this were an AI-generated critique.
Check before accepting
  • Does the answer name a screen job?
  • Does it refer to visible evidence?
  • Does it identify attention order and primary action?
  • Does it name uncertainty instead of pretending to know the user?
  • Does it recommend one realistic improvement first?
Reject or revise when
  • The answer redesigns the whole screen without reading it first.
  • The answer invents user research, metrics, or business facts.
  • The answer only talks about style.
  • The answer ignores accessibility, status, or user risk.
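For learners who end up running the structured prompt many times, it can help to keep it as a template so every critique request has the same shape. A minimal sketch, assuming plain string templating and no API calls; the abbreviated `Return:` section stands in for the full ten-point list above, and the function and field names are illustrative, not part of any tool:

```python
# Minimal sketch: assemble the Session 2 structured prompt from notes.
# The template condenses the full prompt above; nothing here calls an API.
SESSION2_TEMPLATE = """\
I am reading one product screen as an AI-native product/design learner.

Screen I am looking at:
{screen}

Product context I know:
{context}

Please analyze it in simple English.
Return: screen job; user, task, and context; attention order; primary action;
supporting information; feedback or status; friction or risk;
accessibility check; one product decision; and your uncertainty.

Rules:
- Do not redesign the whole screen.
- Give two possible improvements, then recommend one.
- Use visible evidence from the screen.
- Separate visible evidence from assumptions.
"""


def build_session2_prompt(screen: str, context: str) -> str:
    """Fill the template so every critique request keeps the same structure."""
    return SESSION2_TEMPLATE.format(screen=screen.strip(), context=context.strip())


prompt = build_session2_prompt(
    "Fictional checkout screen with a Pay now button and no visible delivery cost",
    "Online shop; the shopper wants to confirm total cost before paying",
)
print(prompt.splitlines()[0])  # prints the first template line
```

The point of scripting it is discipline, not automation: the screen description and context fields must be filled from visible evidence before the prompt is ever sent.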
Attention-order prompt
I am practicing screen hierarchy.

Screen description:
[paste a safe screen description]

Return:
1. What the user probably notices first, second, and third.
2. The visible evidence for each attention claim.
3. Whether that attention order matches the user task.
4. One low-scope improvement to make the attention order more useful.

Rules:
- Do not redesign the whole screen.
- Do not comment on style unless it affects clarity, action, trust, or accessibility.
Check before accepting
  • Does the answer identify first, second, and third attention claims?
  • Does it point to visible evidence for each claim?
  • Does it ask whether the attention order matches the user task?
Reject or revise when
  • AI comments on beauty without naming attention order.
  • AI recommends a style change without visible evidence.
  • AI ignores the user task and only describes layout.
Evidence challenge prompt
Challenge my screen critique like a senior product designer.

My critique:
[paste critique]

Return:
1. Which statements are visible evidence.
2. Which statements are interpretations.
3. Which statements are assumptions that need research, analytics, or product context.
4. One stronger version of my critique that is more precise and less overconfident.
5. One question I should ask before recommending a change.
Check before accepting
  • Does AI separate evidence, interpretation, and assumption?
  • Does it make the critique more precise rather than more dramatic?
  • Does it name a question that would improve confidence before changing the screen?
Reject or revise when
  • AI treats assumptions as facts.
  • AI invents analytics, research, or user behavior.
  • AI turns a focused critique into a full redesign.
Decision sentence prompt
Act as a product/design tutor.

My draft screen decision:
[paste sentence]

Rewrite it as one product-ready decision sentence.
It must include:
- what I would improve first
- which user task or risk it affects
- the visible evidence from the screen
- the product benefit
- why I am not redesigning everything yet
Check before accepting
  • Does the sentence name the improvement, user task or risk, visible evidence, product benefit, and scope limit?
  • Could a founder, designer, or engineer understand the next move?
  • Is it one decision rather than a bundle of unrelated fixes?
Reject or revise when
  • The sentence becomes generic product language.
  • The recommendation is too large for one screen pass.
  • The product benefit disappears.
FigJam board prompt
Turn this screen reading into a FigJam annotation board.

Screen reading:
[paste my notes]

Return:
1. Board title.
2. Six sticky-note headings.
3. Exact text for each sticky note.
4. One "AI critique" section with accept, adapt, reject columns.
5. One final product decision sentence.

Keep the board simple enough for a beginner to recreate in 15 minutes.
Check before accepting
  • Can the learner recreate the board in 15 minutes?
  • Does the board separate human evidence from AI critique?
  • Does the final note preserve one product decision?
Reject or revise when
  • The board becomes a complex workshop template.
  • AI creates too many sections for Session 2.
  • The tool output hides the visible screen evidence.
AI UI variant prompt
I want to explore a small AI-generated screen variant after reading the current screen.

Current screen job:
[paste screen job]

User task:
[paste user task]

Visible friction:
[paste friction]

Constraints:
[paste product, brand, accessibility, or technical constraints]

Create a prompt I can use in Figma Make, Stitch, Uizard, or another AI UI tool.
The prompt should ask for:
1. One small variant, not a full redesign.
2. Preserved screen job and primary action.
3. Clear attention order.
4. Accessible labels, readable text, and visible status.
5. A short explanation of what changed and why.
Check before accepting
  • Does the prompt ask for one small variant rather than a full redesign?
  • Does it preserve the screen job and primary action?
  • Does it require accessibility, status, and explanation of changes?
Reject or revise when
  • The prompt asks the tool to decide the product strategy.
  • The variant changes the screen job without reason.
  • The output looks polished but removes useful support or feedback.

12. Model boards and quality examples

A strong screen-reading answer names the screen job, attention order, primary action, supporting information, status, friction, accessibility concern, AI curation, and first decision. It does not need advanced design language. It needs visible evidence and a practical recommendation.

The deep model artifact uses a fictional laptop recommendation screen to show the full reasoning trail: human read, vague AI output, structured AI output, accept/adapt/reject notes, and final product decision. This prevents the lesson from feeling like a checklist and gives the learner a concrete artifact standard to imitate.

The upgraded transfer set then uses three screen types so the learner can see that the method travels. Checkout tests trust before commitment. Booking tests selected state and time confidence. Empty state tests first-use orientation and action clarity.

The full sample artifact and public handout are stored in docs/accelerator/assets/session-02 so the benchmark can be used in teaching, public previews, or partner review.

Completed AI-native screen reading board for a fictional laptop recommendation screen.
Completed screen reading board: The deep model artifact shows human screen evidence, vague AI contrast, structured AI critique, accept/adapt/reject curation, and one scoped product decision. Original KLD Institute teaching diagram.
Board comparing human screen evidence, AI critique, and accept, adapt, reject decisions.
AI screen critique board: A strong learner can explain which AI points to accept, adapt, or reject, and why. Original KLD Institute teaching diagram.
Three completed sample screen noticing boards for checkout, booking, and empty-state dashboard screens.
Screen noticing sample boards: A public-ready Lesson 2 sample should show transferability across different screen jobs, not only one checkout example. Original KLD Institute teaching diagram.
Weak, better, and strong Session 2 answers
Score 1: Not yet

Weak

This screen looks clean and modern. AI said it is intuitive, so I would make the colors better and simplify it.

Evidence: The answer uses clean, modern, intuitive, and better colors without pointing to any specific screen element or user task.

Assessment: This is weak because it does not name the screen job, user moment, attention order, primary action, visible evidence, or product risk. It also accepts AI confidence without checking what evidence AI used.

Score 2-3: Emerging to capable

Better

This checkout screen helps a shopper pay. The Pay now button is clear, but delivery cost is not visible. I would show the cost earlier because the user needs confidence before paying.

Evidence: The answer points to delivery cost and user confidence, so the critique is grounded in a visible support gap.

Assessment: This is better because it names a screen job, a primary action, one friction point, and a product reason. It is still incomplete because it does not map attention order, check status or accessibility, or show what AI advice was accepted or rejected.

Score 4: Strong

Strong

The checkout screen is trying to help a shopper confirm cost and pay. Attention goes first to Pay now, then item price, while delivery says "Next step", so the action appears before full cost confidence. I would show estimated delivery and total before Pay now if the system can calculate them; if not, I would change the action label so the screen does not imply immediate payment.

Evidence: The answer ties Pay now, item price, and delivery Next step to a clear confidence risk and a conditional product decision.

Assessment: This is strong because it names screen job, user task, attention order, primary action, support gap, visible evidence, uncertainty, scoped decision, and tradeoff. A product team could use this as the start of a review conversation.

Transfer across screen types
Checkout
Weak: Only comment on visual polish or button color.
Strong: Focus on cost confidence, commitment, recovery, payment status, and support before action.
Booking
Weak: Only say the time cards look easy to scan.
Strong: Focus on selected state, timezone, availability, location, confirmation, and what happens after Continue.
Empty state
Weak: Only suggest a nicer illustration.
Strong: Focus on first-use orientation, primary start action, next-step preview, and useful recovery from blankness.
  1. Human screen read: read the current screen before generating a new one. The learner names screen job, user, task, context, attention order, action, support, status, friction, accessibility concern, and evidence before prompting.
  2. Vague AI output: save vague critique as a contrast. The vague prompt usually produces clean, modern, intuitive, spacing, and color comments with little evidence.
  3. Structured AI output: use AI as a screen-reading assistant. The structured prompt asks for screen job, attention order, action, support, status, friction, accessibility, product decision, and uncertainty.
  4. Accept/adapt/reject: curate AI before deciding. Accept evidence-backed points, adapt useful but broad ideas, and reject unsupported claims, full redesigns, or polish-first advice.
  5. Final decision: finish with a product-ready screen decision. Write one decision that names the improvement, visible evidence, product benefit, and reason the scope stays narrow.

13. Studio exercise and rubric

The studio output for Session 2 is an AI-native screen noticing board and product decision sheet. The artifact should be practical enough to use in a real product review. The learner chooses one screen, maps the user moment, reads the screen anatomy, compares AI critique, optionally tests one generated variant, and writes one product decision.

The tutor should keep the scope narrow. One screen is enough. The learner should not redesign a full app or produce a polished wireframe. The goal is screen literacy, product judgment, and tool discipline.

For public-ready work, the learner should also compare the chosen screen with at least one other screen type from the model board set. This prevents the artifact from becoming a memorized checkout answer and shows that the learner can transfer the method.

  1. Part A: Select one screen. Choose one safe app or web screen. Avoid private banking, health, identity, or payment details unless personal data is hidden.
  2. Part B: Build the annotation board. Paste or recreate the screen in FigJam or Figma. Add sticky notes for screen job, user, task, context, attention order, primary action, support, status, friction, and accessibility concern.
  3. Part C: Compare AI critique. Run the vague prompt and the structured prompt. Mark one AI point to accept, one to adapt, and one to reject.
  4. Part D: Test one generated variant. Optional: use the AI UI variant prompt in Figma Make, Stitch, Uizard, or a current equivalent. Generate one small variant and critique it against the original screen reading.
  5. Part E: Make the product decision. Write one decision sentence: I would improve ___ first because the evidence is ___ and the product benefit is ___.
Rubric for a public-ready screen noticing sheet
  • Names the screen job in one sentence.
  • Identifies user, task, and context without over-inventing.
  • Marks attention order: first, second, third.
  • Finds the primary action and explains its promise.
  • Identifies supporting information and missing support.
  • Checks feedback/status and one accessibility concern.
  • Compares vague AI output with structured AI output.
  • Uses AI critically: accept, adapt, reject.
  • Keeps visible screen evidence separate from AI interpretation.
  • Explains one transfer insight: how the same method applies to another screen type.
  • Creates a simple visual annotation board in FigJam, Figma, or an equivalent canvas.
  • If using a generator, compares the generated variant against the screen criteria.
  • Finishes with one evidence-backed product decision that names product benefit and scope limit.
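The annotation fields the rubric asks for can be sketched as a simple completeness check. This is an illustrative sketch only: the field names below mirror the rubric wording but are not part of FigJam, Figma, or any real tool, and a tutor could adapt them freely.

```python
from dataclasses import dataclass, field, fields

@dataclass
class NoticingSheet:
    """One screen-noticing sheet; every field mirrors a rubric item."""
    screen_job: str = ""
    user: str = ""
    task: str = ""
    context: str = ""
    attention_order: list = field(default_factory=list)  # first, second, third
    primary_action: str = ""
    supporting_info: str = ""
    feedback_status: str = ""
    friction: str = ""
    accessibility_concern: str = ""
    ai_accept: str = ""
    ai_adapt: str = ""
    ai_reject: str = ""
    decision: str = ""

def missing_fields(sheet: NoticingSheet) -> list:
    """Return the names of rubric items the learner has not yet filled in."""
    gaps = [f.name for f in fields(sheet) if not getattr(sheet, f.name)]
    # Attention order needs at least three entries (first, second, third).
    if len(sheet.attention_order) < 3 and "attention_order" not in gaps:
        gaps.append("attention_order")
    return gaps

sheet = NoticingSheet(
    screen_job="Help the user confirm and pay for an order",
    primary_action="Pay now button at the bottom of the summary",
)
print(missing_fields(sheet))  # lists every rubric item still unfilled
```

A tutor could use something like this to show learners that "public-ready" means every rubric item is answered, not just the easy ones.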

14. Home study and follow-up reading

For home study, the learner should choose two screens from everyday apps or websites. For each screen, she should create a small annotation board, write the screen job, user, task, context, attention order, primary action, supporting information, feedback/status, friction, accessibility concern, AI accept/adapt/reject notes, and one improvement decision.

The lecturer can use the readings below to expand examples, especially for screen structure, usability heuristics, accessibility, checkout friction, AI-assisted critique, FigJam annotation, and current prompt-to-interface tools.


References

Apple UI Design Dos and Don'ts

A practical interface-design reference for screen-level checks: readable text, contrast, spacing, organization, alignment, and touch targets.

Nielsen Norman Group: 10 usability heuristics

A stable professional reference for reading screens through visibility, familiar language, recognition over recall, focused information, and error recovery.

GOV.UK Design System page template

A useful example of screen structure: page title, header, main content, constrained width, and consistent service layout.

GOV.UK Design System button component

A simple source for teaching the main action on a screen, including when a start button is appropriate for beginning a service journey.

U.S. Web Design System accessibility guidance

A strong accessibility anchor for screen reading: perceivable, operable, understandable, robust, plus reminders that teams must test their own services.

Baymard cart and checkout usability research

A product-owner bridge for why screen friction matters. Checkout screens show how information order, hidden costs, and form effort can affect completion.

OpenAI Help Center: Prompt engineering best practices for ChatGPT

Current ChatGPT prompting guidance for clear, specific requests, useful context, and iterative refinement. Session 2 uses this to turn a vague screen opinion into a screen-reading brief.

Figma Learn: Guide to FigJam

FigJam is the first practical tool surface for Session 2: learners can paste a screen, add sticky notes, group evidence, and run lightweight critique without needing polished design skills.

Figma Make

Figma Make shows the current prompt-to-interface direction: learners can start from a design or prompt and quickly make functional prototypes, which makes screen-reading criteria more valuable.

Google Labs: Stitch AI UI Design

A current 2026 tool-landscape signal from Google Labs: Stitch is positioned as an AI-native software design canvas for creating and iterating high-fidelity UI from natural language.

Uizard Screenshot Scanner

Uizard Screenshot Scanner is a practical example of a tool that can convert screenshots into editable mockups, useful for showing why learners must critique generated structure and not only admire speed.

Web examples

GOV.UK Design System: Page template

Use this to show that a screen has structure before it has decoration: title, main content, container width, header, footer, and service conventions.

GOV.UK Design System: Button component

Use the start button as a concrete main-action example. It teaches how a screen can reduce hesitation by naming the next step plainly.

Apple Developer: UI Design Dos and Don'ts

Use Apple guidance to make screen quality visible: content should fit, targets should be tappable, text should be legible, contrast should support reading, and alignment should show relationships.
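To make the contrast check concrete, the WCAG 2.x contrast ratio can be computed from two sRGB colors. The formulas below (relative luminance and the (L1 + 0.05)/(L2 + 0.05) ratio) come from the WCAG specification; the example colors are illustrative, and a real accessibility review should still use a tested checker and real rendered text.

```python
def srgb_to_linear(channel: int) -> float:
    """Convert one 0-255 sRGB channel to linear light, per WCAG 2.x."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """WCAG relative luminance of an (r, g, b) color."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio between two colors, always >= 1.0."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on white background: the maximum ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# Mid gray (#767676) on white sits just above the 4.5:1 AA threshold
# for normal-size text, which is why lighter grays often fail.
print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))
```

In Session 2 this gives learners a non-subjective way to back up a "low contrast" sticky note: name the two colors, state the ratio, and compare it to the 4.5:1 AA threshold.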

Baymard Institute: Cart and checkout usability research

Use Baymard for the startup/product layer. Checkout screens make it obvious that screen friction is not just aesthetic: it can affect revenue, trust, and support load.

Nielsen Norman Group: 10 usability heuristics for user interface design

Use only a starter subset in Session 2: visibility of status, familiar language, recognition over recall, focused information, and plain error recovery.

U.S. Web Design System: Accessibility guidance

Use this to keep accessibility inside screen reading from the beginning. The learner should ask whether people can perceive, operate, understand, and continue using the interface.

Figma Learn: Guide to FigJam

Use FigJam as the first hands-on tool surface. Learners paste a safe screen, create sticky notes for screen job, attention order, action, status, friction, and accessibility, then compare AI critique with their own evidence.

Figma: Figma Make

Use Figma Make as a later-track preview. It shows that AI can create functional interface drafts quickly, so screen-reading criteria become the learner’s steering wheel.

Google Labs: Stitch AI UI Design

Use Stitch as a current AI-native design-canvas example. The teaching point is not to chase novelty, but to compare generated screens against user moment, attention order, action, trust, and accessibility.

Uizard: Screenshot Scanner

Use Uizard Screenshot Scanner as a concrete screenshot-to-mockup workflow. Learners should practice redacting private data, importing a screen, and checking what the tool preserves, changes, or invents.

Guided practice

Choose one safe product screen, build a FigJam or Figma annotation board, run the vague and structured AI critique prompts, accept/adapt/reject AI feedback, optionally test one generated variant, then write one product decision sentence.

Artifact: AI-native screen noticing board and product decision sheet
Tutor review questions
  • Can the learner state the screen job in one sentence?
  • Can the learner name user, task, and context without inventing too much?
  • Can the learner identify what is noticed first, second, and third?
  • Can the learner find the primary action and explain its promise?
  • Can the learner identify supporting information, feedback/status, friction, and one accessibility concern?
  • Can the learner build a simple visual annotation board in FigJam, Figma, or an equivalent canvas?
  • Can the learner use AI critique critically, compare one optional generated variant, and finish with one evidence-backed product decision?
  • Can the learner explain why a weak, better, and strong screen read differ in evidence quality?
  • Can the learner transfer the screen-reading method across checkout, booking, empty state, form, search, or error screens?
AI prompt
I am reading one product screen as an AI-native product/design learner.

Screen I am looking at:
[describe the screen, paste a safe screenshot description, or describe a redacted screenshot]

Product context I know:
[who the product is for, what the user is trying to do, and anything I know about business risk or constraints]

Please analyze it in simple English.
Return:
1. Screen job: what is this screen trying to help the user do?
2. User, task, context: who is using it, what are they trying to do, and what situation are they in?
3. Attention order: what do users probably notice first, second, and third?
4. Primary action: what is the main thing the user can do next?
5. Supporting information: what information helps the user decide?
6. Feedback or status: does the screen show where the user is, what happened, or what will happen next?
7. Friction or risk: what could confuse, slow, worry, or block the user?
8. Accessibility check: what might be hard to read, tap, understand, or operate?
9. Product decision: recommend one improvement first and explain why.
10. AI uncertainty: what are you unsure about because the screen or context is missing evidence?

Rules:
- Do not redesign the whole screen.
- Give two possible improvements, then recommend one.
- Use visible evidence from the screen.
- Separate visible evidence from assumptions.
- Tell me what you would accept, adapt, or reject if this were an AI-generated critique.

Home study

Choose two safe app or web screens. For each one, create a small annotation board, write the screen job, user, task, context, attention order, primary action, supporting information, feedback/status, friction, accessibility concern, AI accept/adapt/reject notes, and one improvement decision.