Week 3 tutorial notes
AI Collaboration 101: Brief, Generate, Check, Revise
Use AI as a disciplined collaborator by briefing clearly, generating meaningfully different options, critiquing assumptions, verifying evidence and risk, revising one artifact, and recording the final human decision.
Lesson thesis
AI is a product and design collaborator, not an authority. It can expand options, draft artifacts, critique assumptions, and support revision, but useful output depends on clear context, constraints, verification, risk awareness, and explicit human judgment.
1. Lecture spine
Session 8 begins Week 3 by turning AI from a novelty into a disciplined work partner. The learner has now practiced design observation, screen reading, user-task-context framing, layout, actions, states, and interface copy. AI can accelerate all of those practices, but only if the learner brings structure and judgment.
The lecture should make one principle unmistakable: AI is useful because it expands and critiques thinking, not because it removes the need to think. The learner remains responsible for evidence, ethics, accuracy, prioritization, accessibility, and product decisions.
The artifact is an AI collaboration practice log. It should show the brief, prompt, AI output, human critique, verification checks, revision, accepted/adapted/rejected suggestions, and final decision.
- Part 1. Collaboration stance: Introduce AI as an operating habit for Product Owner and design specialist work, useful for drafting, option generation, critique, and revision, but not a substitute for accountability.
- Part 2. Collaboration loop: Teach the seven-step loop: brief, generate, critique, verify, revise, decide, and log.
- Part 3. Prompt and output control: Show how prompt structure, context quality, output format, options, critique, and verification improve the usefulness of AI output.
- Part 4. Studio artifact: Learners complete one practice log for a screen problem or feature idea, marking accepted, adapted, rejected, verified, and unresolved AI suggestions.
2. What AI is and is not
The learner does not need to understand model architecture deeply to use AI responsibly, but she does need a useful mental model. AI can generate fluent patterns from instructions and context. Fluency is not the same as truth. Polish is not the same as evidence.
This distinction should be repeated often. AI can help the learner draft a product brief, list edge cases, compare options, rewrite copy, or critique a flow. It cannot know the actual user unless evidence is provided. It cannot take responsibility for the final decision.
The correct beginner posture is neither blind trust nor blanket rejection. The goal is calibrated use: trust AI more for low-risk drafting and less for facts, policy, user evidence, or high-stakes decisions.
Generative AI
A system that can create text, images, code, or other content from patterns, instructions, and context.
Example: A chatbot drafts three onboarding screen options from a product brief.
Prompt
The input that guides what AI should do and how the answer should be shaped.
Example: Review this screen copy for a first-time user and return a table of risks and rewrites.
Context
The background information AI uses during the current request.
Example: User, task, constraints, existing copy, known evidence, and desired output.
Verification
The process of checking whether AI output is true, useful, safe, feasible, and appropriate for the user and product.
Example: Checking a suggested claim against source material before putting it in a brief.
3. Collaboration, not delegation
The first professional shift is moving from delegation to collaboration. Delegation says: do this for me. Collaboration says: help me think, show me options, critique assumptions, and help me improve a decision I still own.
This matters because AI output can feel complete. It may be grammatically clean, structured, and confident. That feeling can hide weak assumptions, invented facts, missing user context, or product risk.
A Product Owner with AI-native habits uses AI to make thinking visible: What are the options? What are the risks? What evidence is missing? What should be verified? What decision should be made now?
4. The AI collaboration loop
The seven-step loop is the spine of the session. It turns AI use into a professional habit that can be inspected and improved.
The loop prevents two common failures. The first failure is under-briefing: asking AI for something broad and receiving generic output. The second failure is over-trusting: accepting the first output because it sounds fluent.
The learner should write the loop at the top of the practice log and use it every time: brief, generate, critique, verify, revise, decide, log.
- Step 1. Brief: State the user, task, context, desired outcome, evidence, constraints, and risk level.
- Step 2. Generate: Ask for options, drafts, questions, assumptions, and alternatives.
- Step 3. Critique: Ask what is weak, missing, risky, biased, inaccessible, or unsupported.
- Step 4. Verify: Check facts, sources, feasibility, user evidence, accessibility, privacy, and product fit.
- Step 5. Revise: Improve the chosen output using the critique and human checks.
- Steps 6-7. Decide and log: Choose with a reason and record what was accepted, adapted, rejected, verified, and left unresolved.
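A compact pass through the loop on a small task might look like this (the task and details are illustrative, not course facts):

Brief: first-time user hits an unclear error message; goal is calm recovery; low risk.
Generate: ask for three rewrites with different levels of guidance.
Critique: ask what is unclear, alarming, or unsupported in each rewrite.
Verify: confirm the actual error cause and any policy wording with the team.
Revise: merge the clearest rewrite with the verified wording.
Decide: choose the rewrite that names the next action; note what still needs checking.
Log: record the prompt, output, checks, changes, and decision.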
5. Prompt anatomy
A prompt is a brief. The learner should stop thinking of prompts as secret wording and start thinking of them as structured work instructions.
A strong prompt usually contains the task, context, constraints, desired output, success criteria, and any evidence the AI should use. If facts matter, the prompt should include source material or ask for source-based checking.
The prompt should also name what AI should not do. For example: do not invent user research, do not remove legal meaning, do not make claims without evidence, and mark uncertainty instead of guessing.
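A sketch of a prompt with those parts made explicit (the screen, word limit, and evidence slot are placeholders, not course requirements):

Task: Rewrite the confirmation screen copy below for a first-time user.
Context: The user has just submitted an order and is unsure it worked; the tone should be calm and plain.
Constraints: Keep the legal sentence unchanged; stay under 40 words.
Desired output: Three rewrites in a table, each with a one-line rationale.
Success criteria: The user knows what happened and what happens next.
Evidence: [paste the current copy and any support notes here]
Do not: invent user research, remove legal meaning, or make unsupported claims; mark uncertainty instead of guessing.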
6. Context quality
Context is the main difference between generic AI output and useful product work. A learner who asks for a landing page will get a generic landing page. A learner who gives a user, task, context, outcome, and constraints can get useful alternatives.
The context package should be small but specific. It does not need every detail. It needs the details that shape the decision.
The lecturer should model this by rewriting a vague prompt live. For example, "make this screen better" becomes "critique this laptop recommendation screen for a first-time university student choosing within a fixed budget; focus on clarity, trust, and next action."
- Product or feature name
- User and user task
- Situation, stress level, or constraint
- Desired product outcome
- Current artifact, screen, flow, or text
- Known evidence and missing evidence
- Business, technical, time, brand, and accessibility constraints
- Risk level and what must be verified
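Filled in for the course project, the package might look like this (the situation and evidence lines are illustrative placeholders for the learner's real notes):

Product: laptop recommendation screen in the course project.
User: first-time university student.
User task: choose a laptop within a fixed budget.
Situation: limited technical vocabulary and time pressure before the semester starts.
Desired outcome: a confident selection and a clear next action.
Current artifact: [paste the screen text or describe the layout]
Evidence I have: [e.g., notes from student conversations]
Evidence I do not have: which comparison criteria students actually use.
Constraints: the budget must stay visible; copy must meet the course accessibility expectations.
Risk level: low to medium; verify any spec or price claims before use.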
7. Generate options, not just answers
AI is especially useful for expanding the option space. But options are only useful when they are meaningfully different. Three variations of the same answer do not create real choice.
The learner should ask for dimensions of difference. For product and design, useful differences include flow structure, information priority, amount of guidance, level of automation, risk reduction, and effort.
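One way to request difference along named dimensions rather than three lookalikes (a sketch; the option labels are illustrative):

Generate three meaningfully different options for this screen.
Option A: minimize user effort (short flow, high automation).
Option B: maximize user control (comparison-first, more information up front).
Option C: minimize the risk of a wrong choice (guided questions before any recommendation).
For each option, state what it assumes about the user and what evidence would confirm that assumption.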
This section prepares the learner for Session 9, where options and tradeoffs become the main topic.
8. Critique and revision
The first AI output is not the end. It is material for critique. A useful pattern is to ask AI to change roles: first drafter, then critic, then editor.
Critique should be specific. Asking "is this good?" usually invites shallow reassurance. Better critique asks against a rubric: user task, accessibility, risk, evidence, clarity, feasibility, and product outcome.
Human critique remains necessary because AI may critique according to generic norms. The learner should compare AI critique with the actual course principles and user context.
- Does this answer the actual user task?
- What assumptions could be wrong?
- What evidence is missing?
- What is too generic?
- What could confuse or exclude someone?
- What is hard to build, test, or maintain?
- What should be simplified?
- What should a human decide?
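One way to turn this checklist into a critique prompt, with the AI placed in the critic role (a sketch, not canonical course wording):

Act as a critic, not a drafter.
Critique the option below against: user task fit, questionable assumptions, missing evidence, generic filler, confusion or exclusion risks, and build or maintenance difficulty.
Return a table with columns: issue, severity, evidence status (supported, assumed, unknown), suggested fix.
End by naming the decisions a human should make, not the AI.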
9. Verification and evidence
Verification is separate from generation. Asking AI to produce a better answer is not the same as checking whether the answer is correct.
The learner should verify at different levels. Fact claims need reliable sources. Product claims need user or business evidence. Interface claims need task fit, accessibility, and interaction logic. Technical claims need developer or system verification.
The higher the risk, the more formal the verification. Low-risk brainstorming can move quickly. Anything involving money, safety, health, legal meaning, private data, or public claims needs stronger checking.
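A sketch of how that scaling might look in practice (the example claims are illustrative):

Fact claim ("this model ships with 16 GB of RAM"): check a manufacturer or retailer source before it enters a brief.
Product claim ("students prefer comparison tables"): treat as an assumption until user or business evidence exists.
Interface claim ("this flow is accessible"): check labels, contrast, focus order, and task fit directly.
Technical claim ("this filter is easy to build"): confirm with a developer before committing.
High-stakes content (money, safety, health, legal meaning, private data): require a named expert or source before use.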
10. Responsible use and risk
Responsible AI use can be taught simply. The learner does not need a full governance program to understand basic risk habits. She needs to ask: What could be wrong? Who could be harmed? What data am I sharing? What must be checked? Who decides?
NIST frames AI risk management as continuous, not a one-time checklist. For this course, that becomes a habit: map the task, measure output quality, manage risk, and keep the human decision visible.
This is also where privacy boundaries belong. The learner should not paste raw customer data, confidential strategy, credentials, health details, or other sensitive material unless the tool and organization policy explicitly allow it.
- Hallucination: plausible but false output.
- Over-reliance: accepting AI output without enough checking.
- Privacy leakage: sharing sensitive or confidential information with the wrong tool.
- Bias and homogenization: output that excludes, stereotypes, or smooths away important differences.
- Provenance risk: losing track of where claims, images, or ideas came from.
- Accountability gap: unclear human ownership of the final decision.
11. Human-AI interaction principles
The same principles that apply to AI products apply to personal AI workflows. People need to know what the AI can do, how well it can do it, what it is using, how to correct it, and when to take over.
Microsoft HAX and Google People + AI both emphasize expectation setting, trust calibration, correction, feedback, and control. These principles help the learner understand why an AI output should include uncertainty, assumptions, and verification steps.
The learner should also avoid over-automation. If a decision is important, AI can prepare the decision but should not quietly remove the moment of human choice.
12. AI for product and design work
This course is not teaching AI as a separate hobby. It is teaching AI as a multiplier for the Product Owner with design specialist capability.
For product work, AI can help turn messy thoughts into briefs, stories, acceptance criteria, edge cases, and tradeoff notes. For design work, AI can help critique screens, rewrite copy, map states, and generate alternatives.
The learner should keep the role boundary clear: AI helps produce and examine material; the human owns the product decision.
13. AI collaboration practice log
The practice log is the main artifact because it captures process. A polished final answer alone does not show whether the learner used AI well.
The log helps the tutor see the learner thinking: what context she provided, what she asked for, what AI produced, what she trusted, what she changed, what she checked, and what she decided.
This habit matters in teams. Product work needs traceability. If someone asks why a recommendation was chosen, the learner can point to the brief, tradeoff table, verification checklist, and decision sentence.
- Brief: user, task, context, outcome, constraints, evidence, risk level.
- Prompt: the actual prompt used.
- Output: the useful parts of the AI response.
- Checks: what was verified and how.
- Accept/adapt/reject: how human judgment changed the AI output.
- Revision: the improved artifact.
- Decision: final human choice and remaining uncertainty.
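A condensed example of a filled log, based on the course project (the entries are illustrative):

Brief: first-time student choosing a laptop in budget; outcome is a confident choice; evidence thin; risk low to medium.
Prompt: the Session 11 prompt with the context above.
Output: three approaches (quiz-first, comparison-first, shortlist-first) plus a tradeoff table.
Checks: comparison criteria unverified; spec claims flagged for a source check; no sensitive data shared.
Accept/adapt/reject: accepted the tradeoff table; adapted the comparison-first layout; rejected an unsupported "research shows" claim.
Revision: comparison-first screen with the budget filter kept visible.
Decision: comparison-first, pending verification of which criteria students actually use.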
14. Session 11 prompt lab
The Session 11 prompt is intentionally structured. It does not merely ask AI to be helpful. It asks AI to collaborate in a way that exposes assumptions, options, risk, uncertainty, verification needs, and human decision-making.
Learners should use the prompt on a small product or design problem, then inspect the output against the checks. The tutor should ask which AI suggestions were accepted, adapted, rejected, and why.
The most important rule in the prompt is that AI should not invent facts, research findings, metrics, policies, technical constraints, or user evidence. When it does, the learner should mark the issue and revise the brief or verification plan.
I am using AI as a product and design collaboration partner.

Context:
- Product or idea:
- User:
- User task:
- Situation or constraint:
- Desired outcome:
- Current artifact or screen idea:
- Evidence I have:
- Evidence I do not have:
- Risk level: low, medium, or high
- What I need from AI:

Work as a careful collaborator, not as an authority.
1. Restate the brief in simple language.
2. List assumptions, unknowns, and missing evidence.
3. If a missing detail blocks useful work, ask up to 3 clarifying questions. If not, proceed and label assumptions.
4. Generate 3 meaningfully different options.
5. Compare the options in a table with columns: user value, business value, effort, risk, accessibility, evidence needed, and confidence.
6. Critique the strongest option and the weakest option.
7. Revise the strongest option once, using the critique.
8. Create a verification checklist for a human Product Owner: facts to check, user assumptions to test, feasibility questions, accessibility questions, privacy or data risks, and bias or inclusion risks.
9. Name anything you are uncertain about instead of guessing.
10. Finish with a decision sentence: I would choose ___ because ___, but I would verify ___ before using it.

Rules:
- Do not invent facts, research findings, metrics, policies, technical constraints, or user evidence.
- Do not treat AI output as user research.
- Prefer clear structured tables when comparing options.
- Keep the human decision explicit.
- If the request involves sensitive data, legal, medical, financial, safety, or high-stakes advice, say what expert or source must verify it.
Checks:
- Does the prompt provide user, task, context, outcome, constraints, evidence, and risk level?
- Does the AI restate the brief and identify assumptions or unknowns?
- Does it generate meaningfully different options?
- Does it compare options using user value, business value, effort, risk, accessibility, evidence, and confidence?
- Does it critique and revise instead of stopping at the first answer?
- Does it create a human verification checklist?
- Does it avoid invented facts, fake research, and unsupported certainty?
- Does it finish with an explicit human decision sentence?
Warning signs in the output:
- The answer gives one polished solution with no options.
- The answer invents research, metrics, policy, or user evidence.
- The answer does not separate assumptions from known evidence.
- The answer treats AI simulation as real user research.
- The answer has no human verification or final decision.
15. Worked example: recommendation screen
Use the course project as the worked example. The learner wants to improve a laptop recommendation screen. A weak prompt asks AI to make it better. A strong prompt gives the user, task, budget constraint, screen goal, known evidence, and output format.
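One possible phrasing of the strong prompt (a sketch; the evidence slot is a placeholder):

Critique this laptop recommendation screen for a first-time university student choosing within a fixed budget. Goal: confident selection and a clear next action. Evidence: [paste notes here]. Return three meaningfully different approaches and compare them in a table: user value, effort, risk, and evidence needed.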
The output should include three approaches, not one answer. For example: quiz-first, comparison-first, and shortlist-first. The learner then compares tradeoffs.
The human decision might be: I would choose comparison-first because it supports confident selection, but I would verify which comparison criteria students actually use before building it.
16. Studio exercise, rubric, and home study
The studio output is one complete AI collaboration practice log. The log should show the work, not just the final result. It is acceptable if the first AI output is weak, as long as the learner identifies why and improves the brief or revision.
For home study, the learner should repeat the loop on one small task from the week: a UX copy rewrite, state map, layout critique, or user story. The goal is to build repeatable judgment, not collect impressive AI answers.
Close by previewing Session 9. Now that the learner can collaborate with AI safely, the next lesson will focus on generating and comparing options with tradeoff notes.
- Part A. Choose the problem: Choose one course project screen problem or feature idea small enough to complete in one sitting.
- Part B. Prepare the prompt: Write the brief with user, task, context, evidence, constraints, desired outcome, and risk level.
- Part C. Generate and critique: Run the Session 11 prompt and save the useful output, assumptions, options, critique, and verification checklist.
- Part D. Revise and decide: Verify key claims, revise one option, and record accepted, adapted, rejected, and unresolved suggestions, plus the final decision.
- Brief includes user, task, context, outcome, evidence, constraints, and risk level.
- Prompt asks for options, critique, revision, and verification.
- AI output includes assumptions and unknowns.
- Options are meaningfully different.
- Human checks facts, user fit, feasibility, accessibility, privacy, and bias.
- Log marks accepted, adapted, rejected, and unresolved suggestions.
- Revision improves one selected option with clear rationale.
- Final decision sentence names what was chosen and what still needs verification.
Follow-up reading for the lecturer
References
OpenAI Academy: Prompt fundamentals. Guidance to outline the task, provide useful context, describe the ideal output, iterate, split large tasks, and ask for options.
OpenAI Help Center: Prompt engineering best practices. Guidance on effective ChatGPT prompts, including clear instructions, specificity, enough context, iterative refinement, and tone direction.
OpenAI API: Prompting. Guidance on prompt structure, versions, examples, and evaluation as prompts evolve.
OpenAI API: Model optimization. Guidance that frames better AI output as a loop of context, instructions, examples, evals, feedback, and continuous measurement.
OpenAI Research: Why language models hallucinate. Research explaining why language models can produce plausible but false statements and why uncertainty, abstention, and verification matter.
Microsoft HAX Toolkit: Guidelines for Human-AI Interaction. Guidance for human-AI interaction, including making capabilities and limits clear, supporting correction, and planning for when AI is wrong.
Google People + AI Guidebook: Explainability and Trust. Guidance on explainability, trust calibration, data use, user control, uncertainty, and when people should apply their own judgment.
Google People + AI Guidebook: Feedback and Control. Guidance on feedback and control, including explicit feedback, editability, balancing control and automation, and user expectations.
NIST AI RMF Core. Guidance on govern, map, measure, and manage as continuous AI risk management functions.
NIST AI 600-1: Generative AI Profile. Guidance on generative AI risks, lifecycle risk management, provenance, testing, and human oversight roles.
IBM: Best practices for augmenting human intelligence with AI. Guidance that AI should augment human intelligence rather than replace it, while preserving human oversight, agency, accountability, and upskilling.
Web examples
OpenAI prompting guidance: a practical foundation. State the task, add context, describe the output, iterate, and ask for options. This becomes the operational core of the practice log.
OpenAI Research: why plausible AI answers still need checking. The hallucination research helps learners understand why confident AI output is not the same as verified evidence; it supports a verification-first habit.
Microsoft HAX Toolkit: human-AI interaction guidelines. HAX gives a product-design frame: make clear what the AI can do, make clear how well it can do it, support correction, and plan for wrong or uncertain outputs.
Google People + AI Guidebook: explainability and trust calibration. The guidebook emphasizes trust calibration: people need to know when to trust AI, when to apply judgment, what data is being used, and how to stay in control.
NIST: AI risk management as a continuous cycle. The NIST AI RMF provides the responsible-use structure behind the beginner workflow: map the context, measure output quality, manage risk, and keep governance visible.
IBM: AI as augmentation, not replacement. IBM frames the desired professional stance: AI augments human intelligence, but human responsibility, oversight, agency, and accountability remain in place.
Studio task
Brief AI on one screen or feature problem, ask for options and critique, verify key claims and assumptions, revise one option, and complete an AI collaboration practice log.
Assessment checks
- Can the learner explain why AI should augment rather than replace human judgment?
- Can the learner write a brief with user, task, context, outcome, evidence, constraints, and risk level?
- Can the learner ask for meaningfully different options and compare their tradeoffs?
- Can the learner identify assumptions, unknowns, hallucination risk, privacy risk, bias risk, and evidence gaps?
- Can the learner verify facts, user fit, feasibility, accessibility, and policy-sensitive claims?
- Can the learner record accepted, adapted, rejected, unresolved, and final human decision notes?
Home study
Run one complete AI collaboration loop on a small product or design problem. Save the brief, prompt, AI output, human checks, accepted/adapted/rejected suggestions, revised artifact, and final decision sentence.