
4.0 Improve Phase

Purpose of the Improve Phase

The Improve phase converts analytical findings from the Analyze phase into practical, tested solutions that optimize process performance. The focus is on:

- Translating root causes into solution ideas
- Designing and selecting high-impact improvements
- Testing solutions using experiments and pilots
- Validating gains with data
- Preparing for full-scale implementation and control

The Improve phase remains evidence-based: every proposed change must be justified, tested, and quantified.

---

Linking Improve to Analyze Results

Translating Root Causes into Solution Targets

Effective improvement begins by explicitly linking solutions to validated root causes. Key actions:

- Review the problem statement and project Y (primary performance metric)
- Confirm the critical Xs (input variables) that significantly affect the Y
- Convert each significant X into a solution target:
  - Reduce variation of X
  - Shift the mean level of X
  - Eliminate or replace X
  - Control or standardize X

Each improvement idea must answer: "Which X does this address, and how will it change the Y?"

---

Generating Solution Ideas

Structured Ideation

Idea generation should be structured, not random. Useful methods directly aligned to Improve include:

- Brainstorming: Rapid idea listing with no immediate judgment; filter later.
- Brainwriting: Individuals silently write ideas, then share; reduces bias and dominance.
- Cause–solution mapping: For each validated cause on the cause-and-effect diagram, list possible solutions adjacent to it.
- Benchmarking-based ideas: Adapt proven practices from internal or external exemplars to address specific Xs.
- Error-proofing focus: Generate ideas that prevent errors rather than detect them later.

To maintain alignment, keep a visible mapping from:

- Problem Y → Critical X → Improvement ideas

---

Selecting Solutions

Solution Evaluation Criteria

Not all ideas are worth implementing.
Typical criteria include:

- Impact on Y: Expected effect size on key performance metrics
- Feasibility: Technical difficulty, resource needs, lead time
- Cost/Benefit: Investment versus expected financial and non-financial benefits
- Risk: Potential for failure, safety issues, or unintended consequences
- Customer impact: Effect on internal or external customers
- Alignment: Fit with policies, regulations, and strategic priorities
- Sustainability: Ease of maintaining gains over time

Prioritization Tools

Common structured tools in the Improve phase:

- Impact/Effort Matrix
  - Plot ideas on a 2×2 grid: high/low impact vs. high/low effort.
  - Prioritize:
    - High impact / low effort: implement first
    - High impact / high effort: consider as larger projects or phase the work
    - Low impact ideas: deprioritize unless required for compliance
- Decision Matrix (Pugh Matrix)
  - Define criteria (e.g., impact on Y, cost, risk, time).
  - Assign weights to criteria based on importance.
  - Score each solution against each criterion.
  - Calculate weighted scores to rank solutions.
- FMEA (Failure Modes and Effects Analysis) for solution risk
  - Identify potential failure modes of the proposed solution.
  - Assess:
    - Severity (effect if the failure occurs)
    - Occurrence (likelihood of the failure happening)
    - Detection (likelihood of detecting the failure before impact)
  - Compute the Risk Priority Number: RPN = S × O × D.
  - Use RPN and criticality to refine the solution design before testing.

The goal is to select a manageable set of high-value, testable solutions.

---

Designing Improvements

Solution Design Principles

Design improvements so they are:

- Directly connected to Xs: Each change must act on known drivers of variation or waste.
- Simple where possible: Prefer the least complex approach that achieves the target.
- Standardizable: Solutions should support clear procedures, checklists, or work instructions.
- Measurable: Design with measurement in mind; define how the Y and critical Xs will be monitored.
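The weighted decision-matrix scoring and the RPN calculation described above can be sketched in a few lines of Python. The criteria, weights, candidate-solution names, and S/O/D values below are invented purely for illustration:

```python
# Hypothetical decision-matrix and FMEA calculations (all values are made up).

def weighted_score(scores, weights):
    """Weighted total for one solution: sum of score * weight over criteria."""
    return sum(scores[c] * weights[c] for c in weights)

def rpn(severity, occurrence, detection):
    """Risk Priority Number = Severity x Occurrence x Detection (each 1-10)."""
    return severity * occurrence * detection

# Criterion weights sum to 1.0; higher score = better on that criterion.
weights = {"impact_on_Y": 0.4, "cost": 0.2, "risk": 0.2, "time": 0.2}

solutions = {
    "error_proof_labeling": {"impact_on_Y": 9, "cost": 7, "risk": 8, "time": 6},
    "automate_step":        {"impact_on_Y": 8, "cost": 4, "risk": 5, "time": 3},
}

# Rank solutions by weighted score, best first.
ranked = sorted(solutions,
                key=lambda s: weighted_score(solutions[s], weights),
                reverse=True)
print(ranked[0])                                   # prints "error_proof_labeling"

print(rpn(severity=7, occurrence=3, detection=4))  # prints 84
```

In practice the scores would come from team consensus or pilot data, and RPNs would be recomputed after each design refinement to confirm that risk has actually dropped.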
Common improvement categories:

- Elimination: Remove non-value-added steps or sources of error.
- Simplification: Streamline steps, reduce handoffs, clarify responsibilities.
- Standardization: Introduce standard work methods and clear process parameters.
- Automation: Use technology to reduce manual variation and errors.
- Error-proofing (Poka-Yoke): Make it hard or impossible to do the wrong thing.

Error-Proofing (Poka-Yoke)

Error-proofing is central in the Improve phase:

- Goal: Prevent defects at the source rather than inspecting them out later.
- Types of error-proofing:
  - Prevention devices: Make an error impossible (e.g., connectors that only fit one way).
  - Detection devices: Identify errors immediately and stop the process or raise an alert.
- Common mechanisms:
  - Physical constraints
  - Checklists and prompts
  - Interlocks and automatic stops
  - Sensor-based verification
- Evaluation: The earlier in the process an error is prevented or detected, the stronger the solution.

---

Experimental Design in Improve

Purpose of Designed Experiments

Designed experiments determine how changes in one or more Xs affect the Y. They help:

- Quantify the effect of potential improvements
- Identify optimal settings of critical Xs
- Detect interactions between Xs
- Validate and refine solution designs before full rollout

Experiments must be structured to isolate effects while controlling noise.

Key DOE Concepts in Improve

Important concepts applied in the Improve phase:

- Factor: A controllable X being tested (e.g., temperature).
- Level: A specific setting of a factor (e.g., 60°C, 70°C).
- Response: The Y being measured (e.g., defect rate).
- Main effect: The average change in Y when a factor moves from its low to its high level, holding other factors constant.
- Interaction: A combined effect of two factors that is not simply additive.

Screening vs. Optimization Experiments

Two major stages of experimentation:

- Screening experiments
  - Purpose: Identify which factors matter most among many candidates.
  - Typical designs:
    - Two-level full factorials when there are few factors
    - Two-level fractional factorials when there are many factors
  - Outcome: A short list of significant factors for deeper study.
- Optimization experiments
  - Purpose: Fine-tune the levels of already-important factors.
  - Typical approaches:
    - Full factorials with centered levels
    - Augmented designs (e.g., center points to detect curvature)
    - Response surface methods (when warranted)
  - Outcome: Recommended factor settings to achieve the target Y and stable performance.

Full and Fractional Factorial Designs

Understand the trade-off between information and resource use:

- Full factorial
  - Tests all combinations of factor levels.
  - Advantages:
    - Complete information on all main effects and interactions.
    - Simplest interpretation.
  - Disadvantages:
    - The number of runs grows exponentially with the number of factors.
- Fractional factorial
  - Tests only a fraction of all possible combinations.
  - Uses design resolution concepts: higher-resolution designs separate main effects from low-order interactions.
  - Advantages:
    - Requires fewer runs.
    - Practical when many factors are considered.
  - Disadvantages:
    - Some effects are aliased (confounded).
    - Requires careful planning and interpretation.

Planning an Experiment in Improve

Essential steps:

- Define the objective: screen factors, confirm solutions, or optimize settings.
- Select:
  - Response(s) to measure
  - Factors and levels (including current vs. proposed conditions)
  - Design type and number of runs
- Determine:
  - Replication (repeating runs to estimate pure error)
  - Randomization of run order (to reduce time-related bias)
  - Blocking (if needed, to handle known sources of variation)
- Prepare:
  - Standard operating procedures for each run
  - Data collection plan and forms
  - Roles and training for participants
- Conduct:
  - Run experiments according to the randomized plan.
  - Ensure measurement system stability.
- Analyze:
  - Use main effect and interaction plots.
  - Apply ANOVA to test significance.
  - Confirm model assumptions (normality, equal variance, independence).
- Confirm:
  - Perform confirmation runs at the recommended settings.
  - Check that predicted improvements match observed results.

---

Piloting and Implementation Planning

Pilot Testing

A pilot tests solutions in a controlled, limited scope before full implementation.

Pilot objectives:

- Validate that the solution works in real conditions
- Detect operational issues, side effects, or resistance
- Refine procedures, training, and documentation
- Gather evidence to support a broader rollout

Key pilot planning elements:

- Scope: Define clear boundaries (e.g., one shift, one site, one product family).
- Duration: Long enough to observe stable performance and variation.
- Measures:
  - Primary Y measures
  - Critical X measures
  - Process stability indicators
- Baseline comparison:
  - Use pre-pilot data from Define/Measure/Analyze.
  - Maintain comparable conditions wherever possible.

Implementation Plan

Once a pilot confirms effectiveness:

- Document an implementation plan covering:
  - Activities and timeline
  - Responsibilities and ownership
  - Required resources and budget
  - Training needs and materials
  - Communication plan
  - Risk mitigation and contingency actions
- Coordinate with process owners and supporting functions.
- Prepare for integration with Control phase activities:
  - Control plans
  - Standard work documents
  - Ongoing monitoring methods

---

Quantifying Improvement

Statistical Validation of Gains

To verify true improvement, compare performance before and after implementing changes. Common comparisons:

- Before/after on means (e.g., cycle time, cost, lead time):
  - Use appropriate tests (e.g., t-tests) when assumptions are met.
  - Confirm statistically significant and practically meaningful shifts.
- Before/after on proportions (e.g., defect rate, error rate):
  - Compare proportions across time periods.
  - Evaluate both statistical significance and the absolute reduction.
- Before/after on variation:
  - Use variance or standard deviation measures.
  - Confirm reduced spread, not only a shifted average.

Key considerations:

- Ensure that measurement system stability remains intact.
- Consider sample size and power when planning data collection.
- Distinguish between statistical significance and business significance.

Capability and Performance Measures

Recalculate capability indices under improved conditions when relevant:

- Cp and Cpk for processes with known specifications: assess whether the process meets capability targets after improvement.
- Pp and Ppk for overall performance across broader time ranges: compare before vs. after to confirm overall performance gains.

Interpretation in Improve:

- Confirm that the solution settings not only shift the mean but also keep the process comfortably within specification limits.
- Use capability improvements as evidence to support standardization and handoff to Control.

---

Refining and Finalizing Solutions

Balancing Trade-Offs

Improvements may introduce trade-offs that must be managed:

- Cost vs. cycle time
- Cycle time vs. quality
- Flexibility vs. standardization

In Improve:

- Assess trade-offs explicitly using data.
- Adjust factor settings or design features to reach an acceptable balance.
- Ensure that gains on the primary Y do not cause unacceptable losses on secondary metrics.

Documenting the Improved Process

Before closing Improve:

- Update process maps to reflect the new future-state process.
- Document:
  - New standards, set-points, or parameter ranges for critical Xs
  - Error-proofing devices and their operation
  - New checklists, forms, or electronic workflows
- Prepare concise reference materials that will support:
  - Training during rollout
  - Monitoring and control activities

This documentation provides the foundation for a robust Control phase.

---

Summary

The Improve phase transforms analytical insight into validated, sustainable solutions.
Essential capabilities include:

- Translating validated root causes into targeted solution ideas
- Systematically generating, evaluating, and prioritizing improvements
- Designing changes that act directly on critical Xs, with emphasis on error-proofing
- Planning and executing designed experiments to confirm and optimize solutions
- Running pilots to test solutions under real conditions and refine implementation details
- Quantifying gains with appropriate statistical methods and capability measures
- Finalizing solution designs and process documentation to support long-term control

By following an evidence-driven approach throughout Improve, solutions are not only creative but also proven, quantified, and ready for sustained adoption in the next phase.
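The experiment-planning steps described in this section can be sketched concretely. The following is a minimal Python sketch of a two-level full factorial plan with a randomized run order and a main-effect estimate; the factor names, levels, and response values are all hypothetical:

```python
import itertools
import random

# Hypothetical factors with two levels each (low, high): a 2^3 full factorial.
factors = {
    "pressure": (100, 140),  # e.g., kN
    "temp":     (60, 70),    # e.g., degrees C
    "dwell":    (2, 4),      # e.g., seconds
}

# All 2^3 = 8 level combinations, in standard order.
runs = [dict(zip(factors, combo))
        for combo in itertools.product(*factors.values())]

# Randomize the run order to reduce time-related bias.
random.seed(42)  # fixed seed only so this sketch is reproducible
order = random.sample(range(len(runs)), k=len(runs))

def main_effect(runs, responses, factor, low, high):
    """Average response at the high level minus average at the low level."""
    hi = [y for r, y in zip(runs, responses) if r[factor] == high]
    lo = [y for r, y in zip(runs, responses) if r[factor] == low]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Made-up defect-rate responses (%) for the 8 runs, in standard order.
responses = [5.1, 4.8, 3.9, 3.5, 5.0, 4.6, 3.7, 3.2]

# Temperature main effect is about -1.3: raising temp lowers the defect rate.
print(main_effect(runs, responses, "temp", 60, 70))
```

In a real study the responses would come from executing the runs in the randomized order, and effects would be tested for significance with ANOVA before choosing settings.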

Practical Case: Improve Phase

A regional lab network's DMAIC project found long turnaround times for routine blood tests in its central lab. The Improve phase focused on fixing issues already validated in Measure and Analyze.

Context and Problem

Primary issue: average turnaround time from sample receipt to result release was too long, causing delayed treatment decisions. Root causes already confirmed:

- Batching of samples before analysis.
- Frequent rework due to mislabeled tubes.
- Unbalanced workload across three analyzers.

How Improve Was Applied

The team ran a focused Improve workshop with lab techs, supervisors, and IT:

1. Brainstorm and select solutions (using an impact/effort grid). Shortlisted practical changes:
   - Shift from large batches to small, time-based "micro-batches."
   - Standardized labeling at collection sites with a simple checklist.
   - Load-leveling rules for assigning samples to analyzers.
2. Pilot test. For two weeks on the day shift only:
   - Micro-batches every 15 minutes instead of waiting for a full rack.
   - Mandatory label check at sample reception using a 10-second visual checklist.
   - The lab information system auto-routed samples to the least-loaded analyzer.
3. Refine and mistake-proof:
   - Adjusted the micro-batch interval to 20 minutes after staff feedback.
   - Added a hard stop in the system: results could not be released if label fields were incomplete.
   - Created a simple visual board showing real-time analyzer load so techs could override routing if a machine was down.
4. Full rollout and Control handover:
   - Extended the new process to all shifts.
   - Updated SOPs and quick-reference job aids.
   - Defined daily checks: spot-audits of labels and monitoring of queue length per analyzer.

Result

Within one month of full rollout:

- Turnaround time was significantly reduced and consistently met the target.
- Label-related rework became rare.
- Staff reported clearer work patterns and fewer urgent escalations.
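A before/after validation of the kind described under Quantifying Improvement could be run on turnaround-time samples. The sketch below computes a Welch two-sample t statistic in plain Python; the data values are invented for illustration and do not come from the case:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(before, after):
    """Welch's two-sample t statistic for a difference in means
    (does not assume equal variances)."""
    n1, n2 = len(before), len(after)
    v1, v2 = stdev(before) ** 2, stdev(after) ** 2
    return (mean(before) - mean(after)) / sqrt(v1 / n1 + v2 / n2)

# Hypothetical turnaround times (minutes), sampled before and after rollout.
before = [62, 75, 58, 81, 69, 77, 64, 73, 70, 66]
after  = [48, 52, 45, 55, 50, 47, 53, 49, 51, 46]

t = welch_t(before, after)
# t is roughly 8, far above typical two-sided critical values (~2.2 here),
# so the ~20-minute reduction in mean turnaround time is statistically real.
print(round(t, 2))
```

A complete analysis would also compare spread (not only means), check the normality assumption, and translate the statistical result into business terms, as the section above recommends.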

Practice Questions: Improve Phase

A team is optimizing a metal stamping process and wants to identify the combination of pressure, temperature, and dwell time that minimizes defect rate with the fewest experimental runs. Which Improve phase tool is most appropriate?

A. Multi-vari chart
B. Full factorial design at all factor levels
C. Fractional factorial design
D. Time series analysis

Answer: C
Reason: A fractional factorial design efficiently explores multiple factors and their interactions with fewer runs, which is appropriate in Improve when optimizing settings under resource constraints. The other options either do not support multi-factor optimization (A, D) or may require unnecessary runs (B).

---

A Black Belt has a regression model predicting cycle time from three controllable factors and wants to identify the factor settings that minimize predicted cycle time while keeping defect rate within a specified limit. Which method is most suitable in the Improve phase?

A. Residual analysis
B. Constrained optimization using the regression model
C. Gage R&R study
D. Pareto chart of defects

Answer: B
Reason: Constrained optimization uses the regression equation to mathematically find factor settings that minimize or maximize the response subject to constraints, which is a core Improve activity. The other options address model validation (A), measurement system analysis (C), or problem prioritization (D), not solution optimization.

---

A call center project shows that average handle time is significantly impacted by three factors. The team wants to compare multiple alternative solutions combining script changes and training methods to select the best overall configuration quickly. Which Improve phase tool is most appropriate?

A. Design of Experiments with a randomized block design
B. SIPOC diagram
C. Control chart for individual values
D. Scatter diagram

Answer: A
Reason: A DOE with blocking allows efficient comparison of multiple solution combinations while controlling for nuisance variables, supporting data-driven selection of the best configuration in Improve. The other options assist with process mapping (B), monitoring (C), or simple bivariate relationships (D), not structured solution testing.

---

In an Improve phase experiment measuring the effect of two factors (A and B) on yield, the interaction plot shows non-parallel lines, and ANOVA indicates a statistically significant interaction term (p < 0.05). How should the Black Belt interpret this result?

A. The main effects of A and B can be interpreted independently
B. Factor A has no effect and should be removed from the model
C. The effect of one factor depends on the level of the other factor
D. The model is invalid and must be discarded

Answer: C
Reason: A statistically significant interaction and non-parallel lines indicate that the impact of one factor changes with the level of the other, which is critical when selecting optimal settings in Improve. The other options ignore or misinterpret the interaction structure (A, B) or incorrectly invalidate the model (D).

---

A Black Belt is evaluating a proposed solution that reduces average lead time from 12 days to 9 days. The standard deviation is 3 days both before and after, the lower specification limit (LSL) is 0, and the upper specification limit (USL) is 18 days. Assuming normality, what is the approximate change in short-term sigma level corresponding to this improvement?

A. No change in sigma level
B. Increase of about 0.5 sigma
C. Increase of about 1.0 sigma
D. Increase of about 2.0 sigma

Answer: C
Reason: Sigma level ≈ (USL − Mean) / σ when the upper specification limit is the nearer one. Before: (18 − 12) / 3 = 2. After: (18 − 9) / 3 = 3. The improvement is from 2 sigma to 3 sigma, an increase of about 1.0 sigma. The other options understate, overstate, or deny the calculated improvement.
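The arithmetic in the lead-time question can be checked directly. This sketch only restates the numbers given in the question (mean, standard deviation, and specification limits):

```python
def sigma_level(mean, sd, usl, lsl):
    """Short-term sigma level: distance from the mean to the nearer
    specification limit, in standard-deviation units (Cpk = this / 3)."""
    return min((usl - mean) / sd, (mean - lsl) / sd)

before = sigma_level(mean=12, sd=3, usl=18, lsl=0)  # (18 - 12) / 3 = 2.0
after  = sigma_level(mean=9,  sd=3, usl=18, lsl=0)  # (18 - 9)  / 3 = 3.0

print(after - before)  # prints 1.0: about a 1.0-sigma improvement
```

Taking the minimum over both specification limits generalizes the "(USL − Mean) / σ" shortcut used in the answer: here the upper limit is the binding one before the change, and both sides are equal after it.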
