Week 3 tutorial notes
Options 101: Choices, Tradeoffs, and Judgment
Generate and compare three meaningfully different product or design options with AI, evaluate tradeoffs using consistent criteria, choose one option with rationale, and record evidence gaps and the next test.
Lesson thesis
Options make product judgment visible. A Product Owner uses AI to generate meaningfully different approaches, compares them on user value, business value, effort, risk, accessibility, confidence, and learning value, and then chooses, recording an explicit tradeoff in a decision record.
Preparation status
Prepared with study notes, references, and web examples.
1. Lecture spine
Session 9 teaches the learner to move from AI-assisted generation to product judgment. The output is not a pile of ideas. The output is a three-option tradeoff sheet that shows what was considered, why options are different, what each option gains and costs, and what decision should happen now.
This is a major Product Owner skill. Many weak decisions happen because the team silently commits to the first plausible answer. Stronger product work compares alternatives before commitment.
The session should make tradeoffs feel normal, not like failure. A product decision is rarely perfect. It is a choice made under constraints, with evidence, risk, effort, and user value in view.
- Part 1: Why options matter. Connect from Session 8: AI collaboration is useful because it can create multiple possible directions, but only product judgment turns those directions into decisions.
- Part 2: Option discipline. Teach divergent and convergent thinking, meaningful option difference, option anatomy, decision criteria, and tradeoff notes.
- Part 3: Decision tools. Introduce RICE, simple decision matrices, prototyping riskiest assumptions, and decision records.
- Part 4: Studio artifact. Learners generate three options for one course project problem, compare them, choose one, write a tradeoff note, and define the next prototype or test.
2. Core vocabulary
The learner should leave this session with a basic decision vocabulary: option, tradeoff, criteria, confidence, effort, risk, reversibility, prototype, and decision record.
The key distinction is between generating and deciding. Generating is about opening. Deciding is about narrowing. If the learner narrows too early, she may miss better approaches. If she never narrows, she creates noise rather than progress.
This vocabulary helps the learner talk like a Product Owner. Instead of saying "I like this one", she can say: this option improves user confidence, costs more effort, carries lower accessibility risk, and should be tested against one assumption.
Option
A possible way to solve the same problem or make the same product decision.
Example: A quiz-first recommendation flow, a comparison-first flow, or a shortlist-first flow.
Tradeoff
The cost or weakness accepted when choosing one option over another.
Example: A guided quiz may improve confidence but adds time before recommendations appear.
Divergent thinking
Opening the thinking space to create multiple possible answers.
Example: Generating five different ways to help students choose a laptop.
Convergent thinking
Narrowing possibilities by using criteria, evidence, constraints, and judgment.
Example: Choosing comparison-first because it best supports the current learning goal.
3. One answer is not enough
A single solution hides the comparison that should have happened. Without options, the team cannot see what it is giving up.
Options make judgment visible because they force questions: What matters most? What is the evidence? What is easier? What is riskier? Which option teaches us the most? Which user need does this serve?
AI can help create options quickly, but the learner must ask for meaningfully different options and reject fake variety.
4. Divergent and convergent thinking
Digital.gov describes design as cycles of divergent and convergent thinking. The learner should practice deliberately switching modes.
In divergent mode, the aim is range. In convergent mode, the aim is judgment. Both are necessary. A team that only diverges becomes scattered. A team that only converges becomes narrow.
The lecturer can teach this with a simple signal: in the first ten minutes, do not choose; in the next ten minutes, do not add new options unless the comparison proves the current options are false choices.
- Open (diverge): Generate options without judging them too early. Include unusual, simple, conservative, and learning-focused ideas.
- Clarify (organize): Cluster similar ideas, remove duplicates, and name the actual differences.
- Narrow (converge): Apply the criteria: user value, business value, effort, risk, accessibility, confidence, and learning value.
- Commit (decide): Choose what to do now, what to test, what to defer, and what to reject.
5. Start with the decision frame
Options must be compared against a shared problem. If the options solve different problems, the comparison becomes confused.
The learner should begin by naming the actual decision. For example: Which recommendation approach should we prototype first? This is clearer than: Which design is best?
A good problem frame includes user, task, context, outcome, constraints, and current uncertainty. This creates fair conditions for comparison.
6. What makes options meaningfully different
The learner should learn to identify fake variety. If three options only change labels, colors, or tone, they may not be product options.
A meaningful option changes the approach. It may change the flow, the order of information, the level of guidance, the degree of automation, the amount of user control, the risk exposure, or the learning strategy.
Small copy options can still matter, but this session is about product and design judgment. The learner should be asked: what decision would change if we chose this option?
7. Option anatomy
An option needs enough structure to compare. A name alone is not enough. A screenshot alone is not enough. The learner should explain what the option is trying to do and what must be true for it to work.
The failure condition is especially useful. It asks: under what circumstance would this option be wrong? For example, a quiz-first flow fails if students do not know enough about their needs to answer the quiz confidently.
This structure also makes AI output easier to review. If AI produces vague options, the learner can ask it to fill the option anatomy or regenerate options that are more distinct.
- Option name
- Core idea
- User value
- Business value
- Effort
- Risk
- Accessibility and inclusion concerns
- Evidence supporting the option
- Evidence missing
- Failure condition
8. Criteria for comparing options
Evaluation criteria turn preference into judgment. For this course, use a beginner-friendly set: user value, business value, effort, risk, accessibility, confidence, reversibility, and learning value.
Not every criterion has equal weight in every situation. In a high-risk flow, safety and accessibility may outweigh speed. In a discovery prototype, learning value may outweigh polish.
The learner should write at least one sentence per criterion. Scores without reasoning can create false precision.
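To make the scoring idea concrete, the criteria above can be sketched as a small weighted decision matrix. The weights and 1-to-5 scores below are invented for illustration (they are not real evaluations of the course project options); in class, every number should be backed by a written sentence.

```python
# Minimal sketch of a weighted decision matrix. All weights and
# scores are hypothetical examples, not real product data.

CRITERIA_WEIGHTS = {
    "user_value": 3,
    "business_value": 2,
    "effort": 2,        # scored so that a HIGHER number means LESS effort
    "risk": 2,          # scored so that a HIGHER number means LOWER risk
    "accessibility": 3,
    "learning_value": 1,
}

# Hypothetical 1-5 scores for three recommendation-flow options.
options = {
    "quiz_first":       {"user_value": 4, "business_value": 3, "effort": 2,
                         "risk": 3, "accessibility": 3, "learning_value": 4},
    "comparison_first": {"user_value": 4, "business_value": 3, "effort": 3,
                         "risk": 4, "accessibility": 4, "learning_value": 3},
    "shortlist_first":  {"user_value": 3, "business_value": 4, "effort": 4,
                         "risk": 3, "accessibility": 4, "learning_value": 2},
}

def weighted_score(scores: dict) -> int:
    """Sum of criterion score times criterion weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank the options by weighted score, highest first.
for name, scores in sorted(options.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores)}")
```

The point of the sketch is the warning from the paragraph above: the ranking changes as soon as the weights change, so the weights are themselves a product judgment and must be argued, not assumed.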
9. Tradeoff notes
The tradeoff note is the main writing pattern of the session: Choosing this means we gain ___ but accept ___. We choose it now because ___. We will test ___.
A tradeoff note prevents shallow certainty. It teaches the learner to name the cost of the choice. This is crucial because product teams rarely choose between good and bad. They choose between different kinds of good and different kinds of cost.
A strong tradeoff note should mention user value, effort or risk, and current evidence.
10. RICE and simple scoring
RICE can help compare product ideas by asking for reach, impact, confidence, and effort. It is especially useful because confidence reduces the pull of exciting but weakly evidenced ideas.
The lecturer should be careful not to present RICE as an automatic truth machine. The score is only as good as the estimates and evidence behind it.
For a beginner, the value of RICE is the conversation it creates: Who is affected? How much do we expect the outcome to improve? How sure are we? What will it cost?
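The arithmetic behind RICE is small: score = (reach * impact * confidence) / effort. A minimal sketch, with invented estimates for three hypothetical recommendation options:

```python
# RICE scoring as Intercom describes it:
#   score = (reach * impact * confidence) / effort
# Reach is people per period, impact is a rough multiplier,
# confidence is a fraction (0-1), effort is in person-months.
# All numbers below are invented estimates for illustration.

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    return (reach * impact * confidence) / effort

ideas = {
    "quiz_first":       rice(reach=500, impact=2.0, confidence=0.5, effort=2.0),
    "comparison_first": rice(reach=500, impact=1.0, confidence=0.8, effort=1.0),
    "shortlist_first":  rice(reach=300, impact=2.0, confidence=0.5, effort=3.0),
}

for name, score in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

Notice how the confidence term works in this example: the quiz idea claims the highest impact, but its weak evidence halves its score. That is the "pull of exciting but weakly evidenced ideas" being reduced, in numbers.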
11. Prototype to learn
Prototyping lets the team test an option before production investment. The learner should not prototype everything. She should prototype the uncertainty.
For example, if the risk is that students do not understand the recommendation reason, a low-fidelity screen with explanation copy may be enough to test comprehension. If the risk is data availability, a technical spike may be needed.
The prototype should answer a decision question. If the prototype does not change what the team would do next, it may be theater rather than learning.
- Step 1, Find the riskiest assumption: Name the assumption that would most damage the option if wrong.
- Step 2, Choose the prototype: Choose the smallest artifact that can test that assumption: sketch, clickthrough, copy test, prototype, or interview stimulus.
- Step 3, Define the signal: Define what evidence would increase, lower, or change confidence.
12. AI option generation lab
AI can be very helpful in Session 9 because it can quickly create multiple approaches, but it also tends to produce false variety unless the prompt demands real differences.
The prompt asks AI to restate the problem, name the decision, list assumptions, generate three meaningfully different options, compare them, write tradeoff notes, recommend one, and define the next test.
The learner should check whether the options actually differ in approach. If they do not, she should ask AI to regenerate around explicit difference dimensions such as flow, level of guidance, level of automation, risk, or effort.
I am comparing product or design options for a small product decision.

Context:
- Product or feature:
- User:
- User task:
- Desired outcome:
- Current problem statement:
- Evidence available:
- Evidence missing:
- Constraints:
- Risk level:
- Decision deadline:

Generate and compare options as a Product Owner with design judgment.

1. Restate the problem in one sentence.
2. Name the decision we are actually making.
3. List assumptions and missing evidence.
4. Generate 3 meaningfully different options. They must differ in approach, not just wording or visual style.
5. For each option, include: option name, core idea, user value, business value, effort, risk, accessibility/inclusion concerns, what evidence supports it, what evidence is missing, and what would make it fail.
6. Compare the options in a decision table using: user value, business value, effort, risk, reversibility, accessibility, confidence, and learning value.
7. Identify the strongest option, the safest option, and the riskiest option.
8. Write a tradeoff note for each option: choosing this means we gain ___ but accept ___.
9. Recommend one option for now and explain why it fits the current stage, constraints, and evidence.
10. Name one assumption to prototype or test before committing more effort.
11. Finish with a decision record:
- Decision:
- Why:
- Tradeoff accepted:
- Evidence used:
- Evidence still needed:
- Next test or prototype:

Rules:
- Do not invent research, metrics, or technical certainty.
- Do not choose only because something sounds polished.
- If the options are too similar, regenerate them with clearer differences.
- If evidence is weak, lower confidence and say what would need to be learned.
- Keep the final choice explicit and traceable.
Prompt self-check
- Does the prompt ask for meaningfully different approaches?
- Does it give the problem, user, task, outcome, constraints, and evidence?
- Does it require a decision table?
- Does it ask for evidence gaps and confidence?
- Does it ask what would make each option fail?
- Does it ask for tradeoff notes?
- Does it ask for one prototype or test before commitment?
Warning signs in the AI answer
- The answer gives three versions that differ only in wording or visual style.
- The answer chooses the most polished option without comparing tradeoffs.
- The answer invents research, metrics, or feasibility certainty.
- The answer does not name evidence gaps.
- The answer has no decision record or next test.
13. Decision records
A decision record turns judgment into team memory. Without it, future teammates only see the chosen solution, not why other options were rejected.
The decision record should be short enough to use. It does not need to be a long report. It needs the context, options considered, evidence, rationale, tradeoff, and next step.
This prepares the learner for product work where decisions are revisited. If new evidence appears, the team can update the decision rather than arguing from memory.
- Context and problem statement
- Decision being made
- Options considered
- Evidence used
- Assumptions and unknowns
- Decision criteria
- Decision and rationale
- Tradeoff accepted
- Next prototype, test, or review point
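The field list above can be held in a small data structure, which makes the record easy to store, diff, and revisit. The field names mirror this course's template (they are not an industry standard), and the example values are invented from the laptop-recommendation worked example:

```python
# A decision record as a small data structure, mirroring the field
# list in these notes. Example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    context: str
    decision: str
    options_considered: list[str]
    evidence_used: list[str]
    assumptions: list[str]
    criteria: list[str]
    rationale: str
    tradeoff_accepted: str
    next_step: str

record = DecisionRecord(
    context="First-time university students choosing a laptop on a limited budget.",
    decision="Which recommendation approach to prototype first.",
    options_considered=["quiz-first", "comparison-first", "shortlist-first"],
    evidence_used=["students say they want visible reasons for a recommendation"],
    assumptions=["students can absorb a comparison table without overload"],
    criteria=["user value", "effort", "risk", "accessibility", "confidence"],
    rationale="Comparison-first makes recommendation reasons visible quickly.",
    tradeoff_accepted="Higher information density may overwhelm low-confidence students.",
    next_step="Low-fidelity comparison screen; test comprehension with students.",
)
print(record.decision)
```

Because every field is required, the structure itself enforces the discipline: a record with no assumptions, no tradeoff, or no next step simply cannot be written.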
14. Worked example: recommendation options
The course project example is a laptop recommendation product. The decision is: which recommendation approach should be prototyped first for first-time university students with a limited budget?
The three options should be meaningfully different. Quiz-first changes the input strategy. Comparison-first changes the information priority. Shortlist-first changes the decision process over time.
A possible decision: choose comparison-first for the first prototype because it makes recommendation reasons visible quickly, but test whether the amount of comparison information overwhelms low-confidence students.
15. Worked example: decision record
After comparing options, the learner should write a decision record. This is where Product Owner judgment becomes visible.
The decision record should avoid certainty theater. It can say: We choose this now because it best supports the current learning goal. We accept this tradeoff. We still need to verify this assumption.
The next test is part of the decision, not an afterthought. A good choice includes what the team needs to learn next.
16. Studio exercise, rubric, and home study
The studio artifact is a three-option tradeoff sheet. It should include the decision frame, three options, option anatomy, decision table, tradeoff notes, selected option, evidence gaps, and next test.
For home study, the learner should repeat the exercise on a smaller decision, such as choosing between three empty-state messages or three onboarding question orders. The tutor should push her to say whether the options are truly different or only cosmetically different.
Close by connecting to Session 10. Once an option is chosen, the next professional skill is critique: deciding whether the chosen direction meets standards and what acceptance criteria make it ready.
- Part A, Choose the decision: Choose one course project problem or screen decision, such as recommendation approach, onboarding question order, comparison layout, or shortlist behavior.
- Part B, Generate options: Use the Section 12 prompt to create three meaningfully different options and fill in the option anatomy for each.
- Part C, Compare: Compare options using user value, business value, effort, risk, accessibility, confidence, reversibility, and learning value.
- Part D, Decide and log: Choose one, write the tradeoff note, complete the decision record, and name the next prototype or test.
Rubric
- Decision frame is clear and grounded in user, task, context, and outcome.
- Options are meaningfully different in approach, not just style.
- Each option includes user value, business value, effort, risk, accessibility, evidence, and failure condition.
- Decision table compares options using consistent criteria.
- Tradeoff notes name both gain and accepted cost.
- Selected option is justified by evidence, constraints, and current stage.
- Evidence gaps and confidence are stated honestly.
- Next prototype or test targets the riskiest assumption.
Follow-up reading for the lecturer
References
Design Council description of the Double Diamond, including discovering and defining the problem, developing different answers, testing at small scale, rejecting what does not work, and improving what does.
Digital.gov, Divergent and convergent thinking: guidance on exploring what is possible, then moving back into constraints, analysis, refinement, and decision making.
GOV.UK Service Manual, Deciding on priorities: guidance on prioritising product and service work using performance analysis, user research, stakeholder input, clear methods, and team involvement.
GOV.UK Service Manual, How the alpha phase works: guidance on trying different solutions, testing riskiest assumptions, exploring constraints, inclusion, accessibility, and deciding whether to move forward.
DEFRA Digital Service Manual, Prototyping: guidance on prototyping as a way to test and explore ideas early, build shared understanding, and reduce risk before production work.
GOV.UK Service Manual, Learning about users and their needs: guidance that product decisions should be linked to user needs, evidence, stories, complexity, dependencies, and traceability.
Intercom, RICE prioritization framework: explanation of RICE scoring, with reach, impact, confidence, and effort as a way to compare hard-to-compare product ideas while naming tradeoffs.
Atlassian, DACI method for better decisions: explanation of DACI decision-making, including background, relevant data, options considered, questions, outcome, and decision visibility.
OpenAI Academy, Prompting fundamentals: guidance on asking for options, setting priorities, splitting complex tasks, giving context, and describing the desired output.
OpenAI Help Center, Prompt engineering best practices: guidance on clear and specific prompts, enough context, and iterative refinement.
Microsoft HAX Toolkit, Guidelines for Human-AI Interaction: guidance on human-AI interaction, useful for keeping AI option generation transparent about capabilities, uncertainty, correction, and user control.
Web examples
Design Council, The Double Diamond: The Double Diamond gives the session its structure: open up beyond the first answer, then narrow with evidence, prototype learning, and explicit rejection of weak options.
Digital.gov, Divergent and convergent thinking: Digital.gov makes the cognitive shift explicit: divergent thinking explores possibility, while convergent thinking returns to constraints, practicalities, and decisions.
GOV.UK Service Manual, Prioritising product and service work: GOV.UK prioritisation guidance anchors decisions in performance analysis, user research, stakeholder input, clear prioritisation methods, and team involvement.
GOV.UK Service Manual, Testing riskiest assumptions in alpha: The alpha-phase guidance is strong for teaching options because alpha is where teams try different solutions and test riskiest assumptions before committing.
Intercom, Reach, impact, confidence, and effort: RICE gives learners a simple product comparison model. The lesson uses it as a conversation aid, not as automatic truth.
Atlassian, Decision records and options considered: DACI is useful because it makes the decision record visible: background, relevant data, options considered, questions, outcome, and who needs to be involved.
Frame one decision, use the Section 12 prompt to create options, reject fake variety, compare options by value, effort, risk, confidence, and accessibility, write tradeoff notes, and complete a decision record.
- Can the learner distinguish real options from superficial variations?
- Can the learner frame a decision with user, task, context, and outcome?
- Can the learner compare user value, business value, effort, risk, accessibility, confidence, and learning value?
- Can the learner write a tradeoff note naming both the gain and the accepted cost?
- Can the learner use RICE or a decision matrix without treating the score as automatic truth?
- Can the learner choose a next prototype or test for the riskiest assumption?
Home study
Choose one small product/design decision from the course project. Generate three meaningfully different options, fill option anatomy for each, compare with a decision table, choose one, write a tradeoff note, and define the next prototype or test.