Learner worksheet
Screen Noticing Board
A screen annotation board for mapping screen job, attention order, action, support, status, friction, accessibility, AI critique, transfer insight, and one product decision.
Output standard
One public-ready screen noticing board with evidence notes, AI curation, and a scoped product decision.
Use when
Use in Session 2 before generating variants, redesigning a screen, or writing a product decision.
Board fields
Use this board when the learner is reading a real or fictional screen as a product moment. The board should make visible evidence carry more weight than taste language such as "clean" or "intuitive".
- Safe or redacted screen
- Screen job
- User
- Task
- Context
- Attention order: first, second, third
- Primary action and action promise
- Supporting information
- Feedback or status
- Friction or risk
- Accessibility concern
- Vague AI prompt and output
- Structured AI prompt and output
- Accepted AI suggestion
- Adapted AI suggestion
- Rejected AI suggestion
- Optional generated variant critique
- Transfer note to another screen type
- Final product decision
Quality check
The board should show that the learner can read the screen before asking AI to judge or generate.
- Screen job is stated before critique.
- Attention order is tied to visible evidence.
- Primary action, support, and status are checked.
- Accessibility is included as screen quality.
- AI output is accepted, adapted, and rejected with reasons.
- Final decision names product benefit and scope limit.
- Transfer note shows how the method applies beyond this one screen.
Quality benchmark
Use this benchmark to calibrate the board before showing it publicly or submitting it for review.
- Weak: describes the screen as clean, modern, or intuitive without evidence.
- Better: names screen job, action, friction, and one useful improvement.
- Strong: includes screen job, attention order, visible evidence, AI curation, uncertainty, decision, and tradeoff.
Starter prompt
Use this prompt only after the learner has completed their own human read of the screen.
I am reading one product screen as an AI-native product/design learner.

Screen I am looking at: [describe the screen, paste a safe screenshot description, or describe a redacted screenshot]

Product context I know: [who the product is for, what the user is trying to do, and anything I know about business risk or constraints]

Please analyze it in simple English. Return:
1. Screen job: what is this screen trying to help the user do?
2. User, task, context: who is using it, what are they trying to do, and what situation are they in?
3. Attention order: what do users probably notice first, second, and third?
4. Primary action: what is the main thing the user can do next?
5. Supporting information: what information helps the user decide?
6. Feedback or status: does the screen show where the user is, what happened, or what will happen next?
7. Friction or risk: what could confuse, slow, worry, or block the user?
8. Accessibility check: what might be hard to read, tap, understand, or operate?
9. Product decision: recommend the one improvement to make first and explain why.
10. AI uncertainty: what are you unsure about because the screen or context is missing evidence?

Rules:
- Do not redesign the whole screen.
- Give two possible improvements, then recommend one.
- Use visible evidence from the screen.
- Separate visible evidence from assumptions.
- Tell me what you would accept, adapt, or reject if this were an AI-generated critique.