Product Launch
A timeline built around the most preventable launch failures: scope changes that arrive too late, deployments with no rollback path, support teams blindsided on day one, and retrospectives that produce observations instead of process changes. For more background and examples, see the guidance below.
Checklist Items
Lock the launch date and enforce a no-changes-to-scope rule from this point forward.
Brief sales, customer success, and support with team-specific documentation — not a single shared overview.
Notify integration partners and API consumers if the launch changes any API contract, OAuth scope, or webhook payload.
🎯 The First Decision: What Kind of Launch Is This?
Not every feature warrants a coordinated, multi-channel launch. Matching the launch posture to the feature's scope prevents over-engineering small releases and under-investing in significant ones. Choosing the wrong posture is a hidden time tax: a coordinated posture applied to a minor UX tweak can consume 40–60 hours of cross-functional time for minimal return.
Dark Launch
Code ships to production, invisible to all users. Used for infrastructure changes, backend refactors, and A/B test scaffolding. No external communication. No support briefing required. The checklist items for content creation, briefings, and retrospectives do not apply.
Quiet Rollout
Feature enabled progressively, no announcement. Used for incremental UX improvements and minor additions. An in-app changelog entry is sufficient. Support is notified but no sales briefing is needed. Testing and monitoring items still apply in full.
Coordinated Launch
Blog post, email, in-app, social, and full cross-functional briefing. Reserved for features that change the value proposition, unlock new use cases, or justify a pricing conversation. This checklist is designed for this posture.
The posture decision should be made explicitly at the scope-lock meeting — not assumed. When in doubt, err toward the quieter posture and upgrade if adoption data from the first week warrants a follow-up announcement.
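All three postures rest on the same mechanism: the code is deployed behind a flag, and the posture determines who sees it and when. A minimal sketch of deterministic percentage bucketing (the function name and hashing scheme are illustrative, not from the original):

```python
import hashlib

def feature_enabled(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Deterministic percentage bucketing for a flagged feature.

    The same user always lands in the same bucket, so raising rollout_pct
    only adds users; it never flips a user who already has the feature off.
    A dark launch is simply rollout_pct = 0 with the code path deployed.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in 0..99
    return bucket < rollout_pct
```

Because bucketing is deterministic, a quiet rollout can move from 1% to 10% to 50% without churning the early cohort.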
🧮 The Gray Zone: When Launch-Day Metrics Are Ambiguous
Pre-defined thresholds handle the easy cases — green means proceed, red means roll back. The hard decisions happen in the gray zone: metrics that are worse than baseline but haven't crossed the rollback threshold. This is where most launch-day debates occur, and where the absence of a pre-agreed protocol is most costly.
Green: Error rate and latency are within baseline. Proceed normally. Advance the rollout to the next percentage tier on schedule.
Amber: Metrics elevated but below the rollback threshold. The correct response is to pause the rollout at the current percentage, investigate the source, and set a 30-minute re-evaluation window. Do not continue expanding. Do not immediately roll back. The pause gives you data without escalating the damage — which is exactly what you need in the gray zone.
Red: Metrics at or above the rollback threshold, or a user-reported critical functionality break. The named decision authority acts — no committee vote, no discussion of whether it is really that bad. The rollback plan executes as written.
The most costly launch-day mistake is not rolling back too quickly — it's continuing to expand rollout while metrics are amber because the team believes "it's probably fine." Pausing at amber costs 30 minutes. Continuing through amber and hitting red at 50% rollout costs an incident report, an all-hands, and three weeks of user trust recovery.
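The protocol above can be written down ahead of time so launch day is classification, not debate. A hedged sketch: the 20%-above-baseline amber trigger and the function names are assumptions for illustration; your team's pre-agreed thresholds go where these constants are.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    error_rate: float       # fraction of failed requests
    p95_latency_ms: float   # 95th-percentile latency

def launch_day_decision(current: Metrics, baseline: Metrics,
                        rollback_error_rate: float,
                        rollback_latency_ms: float) -> str:
    """Classify launch-day metrics as green / amber / red."""
    # Red: at or above a pre-agreed rollback threshold -> execute, no debate.
    if (current.error_rate >= rollback_error_rate
            or current.p95_latency_ms >= rollback_latency_ms):
        return "red: execute the rollback plan as written"
    # Amber (illustrative trigger: >20% above baseline but below rollback):
    # pause at the current percentage and set a 30-minute re-evaluation window.
    if (current.error_rate > baseline.error_rate * 1.2
            or current.p95_latency_ms > baseline.p95_latency_ms * 1.2):
        return "amber: pause rollout, investigate, re-evaluate in 30 minutes"
    # Green: within baseline -> advance on schedule.
    return "green: advance to the next rollout tier"
```

The value is not the code itself but that the constants are agreed on before launch day, when nobody is arguing under pressure.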
📖 The Pattern Behind Most Launch Failures
Post-mortems from launch failures cluster around a predictable sequence: a change was made too late in the process, which compressed testing time, which meant a critical issue was missed, which was only discovered when users hit it on launch day — at which point the team was already overextended managing the launch itself.
The compounding factor is almost always the same: the issue was visible in a test or bug report but classified as low-severity because someone on the team had a mental model of customer behavior that turned out to be wrong. The customer found the edge case the team didn't believe was a real usage path.
💡 Why the Developer Mental Model Is the Risk
Developers interact with a feature hundreds of times during its build. They know which flows work, which don't, and which edge cases exist. They build an accurate mental model — of how they use it. That model diverges from customer behavior in ways that are only visible once real customers are involved.
This is the structural reason that testing coverage — even extensive automated testing — does not prevent this failure mode. Automated tests verify the paths the developer wrote tests for. They cannot verify the paths customers take that no developer anticipated.
📊 Gradual Rollout: Tiers, Stability Windows, and Gates
A gradual rollout without explicit gates is just a slow launch. Each percentage tier should advance only when the previous tier's metrics have been stable for a defined window — not automatically, and not on a fixed schedule. The table below reflects a standard progression for a mid-complexity feature with meaningful infrastructure load.
| Tier | Audience | Stability Window | Gate Condition to Advance |
|---|---|---|---|
| Internal | Employees only | 24–48 hrs | No critical bugs; core flows work as expected |
| 1% | Random production users | 2–4 hrs | Error rate at or near baseline for all new endpoints |
| 10% | Early cohort | 4–8 hrs | Latency and error rate both stable; no spike in support volume |
| 50% | Broad rollout | 12–24 hrs | Support ticket volume within expected range; no new error patterns |
| 100% | All users | — | All prior tiers green across all gate conditions |
For features with significant infrastructure load — new data pipelines, video processing, or search indexing — extend the stability windows at the 10% and 50% tiers. For low-risk UI changes to well-tested existing flows, you can compress to three tiers: internal, 25%, and 100%. The principle is the same: each advance is a deliberate decision, not a default.
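The table above can be encoded directly, making the gate check explicit rather than a default. A minimal sketch (the data layout and function names are illustrative; windows use the lower bound of each range):

```python
# Tier progression from the table above: (audience, window in hours, gate).
TIERS = [
    ("internal", 24, "no critical bugs; core flows work as expected"),
    ("1%",        2, "error rate at or near baseline for all new endpoints"),
    ("10%",       4, "latency and error rate stable; no support spike"),
    ("50%",      12, "ticket volume within expected range; no new error patterns"),
    ("100%",      0, "all prior tiers green across all gate conditions"),
]

def may_advance(hours_stable: float, window_hours: float, gate_green: bool) -> bool:
    """Advance only when the stability window has elapsed AND the gate is green.

    Each advance is a deliberate decision, never a timer firing on its own.
    """
    return hours_stable >= window_hours and gate_green
```

Compressing to three tiers for low-risk UI changes is then just a shorter `TIERS` list; the gate logic stays the same.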
4 Weeks Out — Alignment and Briefing
3 Weeks Out — Content and Infrastructure
2 Weeks Out — Testing
1 Week Out — Final Preparation
Launch Day — Monitoring and Communication
Post-Launch — Learning and Iteration
Additional Notes
Use this space for follow-ups, reminders, and key references.
