
1.4.3 Lean & Six Sigma

Lean & Six Sigma Foundation: Purpose and Mindset

Lean & Six Sigma combines two complementary perspectives:
- Lean: Eliminate waste and improve flow so value moves smoothly to the customer.
- Six Sigma: Reduce variation and defects so processes are predictable and capable.

The integrated purpose is to design, measure, analyze, improve, and control processes so they are:
- Valuable: Aligned with customer requirements.
- Efficient: Minimal waste in time, effort, and resources.
- Capable: Meeting specifications with low variation.
- Stable: Performance sustained over time.

A rigorous, data-driven approach is used to identify causes of poor performance, design effective countermeasures, and institutionalize improvements.

---

Lean Fundamentals in Lean & Six Sigma

Value, Waste, and Flow

A Lean & Six Sigma practitioner first clarifies what the customer values, then examines how work flows.
- Value-added work: Transforms input to output the customer wants, done right the first time.
- Non–value-added but necessary: Required by regulation or current technology.
- Pure waste: Adds no value and is not required.

The classic eight wastes (often remembered as TIMWOODS) are central:
- Transport: Unnecessary movement of materials or information.
- Inventory: Excess materials, WIP, or data queues.
- Motion: Unnecessary movement of people or equipment.
- Waiting: Idle time for people, machines, or information.
- Overproduction: Producing earlier or more than needed.
- Overprocessing: More work or higher precision than required.
- Defects: Rework, scrap, and inspection due to errors.
- Skills underutilized: Failing to use people’s knowledge and talents.

Flow is improved by systematically identifying and removing these wastes so process steps occur in the right order, at the right time, with minimal interruption.

Value Stream Mapping

Value Stream Mapping (VSM) provides a high-level visual of how materials and information flow. Key elements include:
- Process steps: High-level blocks showing the main activities.
- Process data: Cycle time, changeover, uptime, defect rate, batch size.
- Material flow: Arrows showing movement of physical items.
- Information flow: Arrows showing how instructions, schedules, or signals move.
- Timeline: Value-added vs non–value-added time across the stream.

Use VSM to:
- See the entire value stream, not isolated steps.
- Quantify total lead time and the value-added ratio.
- Identify bottlenecks, excessive inventory, and rework loops.
- Design a future state with better flow, smaller batches, and pull.

Pull Systems and Just-in-Time

Lean & Six Sigma emphasizes:
- Pull: Work is released based on actual demand, not forecasts.
- Just-in-Time: Right item, right quantity, right time, right place.

Common principles include:
- Takt time: Available time / customer demand; the rate at which the process must produce (see the sketch after this section).
- Flow: Reducing batch sizes and queues to match takt.
- Visual controls: Kanban or other signals to replenish only what was consumed.

Pull systems stabilize flow, reduce inventory, and make problems visible so Six Sigma analysis can address root causes.

5S and Visual Management

Stable, visible work environments are necessary for reliable processes. 5S is used to create and maintain such environments:
- Sort: Remove unnecessary items.
- Set in order: Arrange needed items for easy access.
- Shine: Clean and inspect to reveal abnormalities.
- Standardize: Define methods for maintaining the first three S’s.
- Sustain: Embed behaviors through discipline and accountability.
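To make the takt time and value-added ratio calculations above concrete, here is a minimal Python sketch. The shift length, demand, and step times are illustrative assumptions, not data from this section.

```python
# Illustrative takt time and value-added ratio calculation (assumed numbers).

available_minutes = 450          # assumed: one 480-min shift minus 30 min of breaks
daily_demand = 150               # assumed: customer demand in units per day

takt_time = available_minutes / daily_demand   # minutes available per unit of demand
print(f"Takt time: {takt_time:.1f} min/unit")  # 3.0 min/unit

# Value stream timeline (assumed minutes, split value-added vs non-value-added)
value_added_min = 12             # steps that transform the product
non_value_added_min = 468        # waiting, transport, and queues between steps

lead_time = value_added_min + non_value_added_min
va_ratio = value_added_min / lead_time
print(f"Lead time: {lead_time} min, value-added ratio: {va_ratio:.1%}")  # 2.5%
```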
Visual management makes status and problems obvious:
- Clear locations and labels.
- Standard work posted at workstations.
- Simple indicators for normal vs abnormal conditions.

This supports faster detection of deviation and easier adherence to process standards.

---

Six Sigma Fundamentals in Lean & Six Sigma

Defects, Variation, and the Sigma Concept

Six Sigma focuses on reducing defects and variation. Key concepts:
- Defect: Any instance where an output fails to meet a requirement.
- Defective unit: A unit with one or more defects.
- Defects per unit (DPU): Total defects / total units.
- Defects per million opportunities (DPMO): Defects / (units × opportunities) × 10⁶.

The sigma level expresses process performance relative to specifications. Higher sigma levels mean fewer defects. The Six Sigma performance benchmark (about 3.4 defects per million opportunities) assumes a long-term 1.5 sigma shift between short-term and long-term performance.

Understand and apply:
- Yield and rolled throughput yield (RTY).
- DPU, DPMO, and sigma level conversions (a brief numerical sketch follows the Define phase discussion below).
- Opportunity definitions consistent with critical requirements.

DMAIC as the Core Improvement Structure

Lean & Six Sigma process improvement typically follows DMAIC:
- Define: Clarify the problem, scope, stakeholders, and goals (often using a project charter).
- Measure: Understand current performance and validate the measurement system.
- Analyze: Identify root causes of performance gaps using data and statistical tools.
- Improve: Develop, test, and implement solutions that address verified root causes.
- Control: Standardize and monitor the improved process to sustain gains.

Within DMAIC, Lean tools focus on flow and waste, while Six Sigma tools address variation and capability.

---

Define Phase: Problem, Customers, and Requirements

Problem Definition and Project Focus

Clear problem definition anchors all Lean & Six Sigma work. Essential elements:
- Problem statement: Fact-based description of what is wrong, where, and since when.
- Goal statement: Target performance, with a specific measure and timeframe.
- Business case: Why it matters in terms of cost, quality, delivery, safety, or customer satisfaction.
- Scope: Boundaries of what is included and excluded.
- Constraints and assumptions: Known limits and conditions.

A well-defined problem is measurable, time-bound, and neutral (no implied solutions).

Voice of the Customer and Critical Requirements

Lean & Six Sigma aligns improvements with customer needs. Key activities:
- Collect VOC: Complaints, surveys, interviews, observations, usage data.
- Translate VOC: Convert customer language into measurable requirements.
- Define CTQ (Critical to Quality): Measurable characteristics essential to satisfy the customer.
- Define CTP/CTD/CTC: Critical to process, delivery, or cost as appropriate.
- Develop a CTQ tree: Break down broad needs into specific, measurable requirements.

Outputs of this work become the primary Y’s (outputs) to be measured and improved.

SIPOC and High-Level Process Understanding

SIPOC (Suppliers, Inputs, Process, Outputs, Customers) captures the process at a high level. Purpose and use:
- Clarify process boundaries and major steps.
- Identify key inputs (X’s) and outputs (Y’s).
- Link outputs to customers and their requirements.
- Provide a foundation for detailed process mapping in the Measure phase.

SIPOC ensures the team understands the process context before collecting data.
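As referenced in the sigma concept discussion above, the following is a minimal Python sketch of DPU, DPMO, yield, and sigma-level conversions. The defect counts and step yields are assumptions for illustration; the conversion uses the conventional 1.5 sigma shift noted in the text.

```python
import math
from scipy.stats import norm

# Illustrative defect data for one process step (assumed numbers, not from the text)
units = 500
opportunities_per_unit = 4
defects = 38

dpu = defects / units                                      # defects per unit
dpmo = defects / (units * opportunities_per_unit) * 1e6    # defects per million opportunities
step_yield = math.exp(-dpu)                                # Poisson approximation of first-pass yield

# Rolled throughput yield: product of first-pass yields across steps (other step yields assumed)
rty = 0.98 * 0.95 * step_yield

# Common convention: sigma level = z-value of the defect-free proportion plus a 1.5 shift
sigma_level = norm.ppf(1 - dpmo / 1e6) + 1.5

print(f"DPU = {dpu:.3f}")
print(f"DPMO = {dpmo:.0f}")
print(f"Step yield ≈ {step_yield:.1%}, RTY ≈ {rty:.1%}")
print(f"Sigma level ≈ {sigma_level:.2f}")
```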
---

Measure Phase: Data and Baseline Performance

Operational Definitions and Data Types

Reliable measurement starts with clear definitions.
- Operational definition: A precise, agreed description of how to measure a characteristic, covering:
  - What is included and excluded.
  - How, when, and by whom it is measured.
  - Decision rules for classification.

Distinguish data types:
- Continuous (variable): Measured on a scale, such as time, length, temperature.
- Discrete (attribute):
  - Count of units (defectives).
  - Count of events (defects).
  - Categories (pass/fail, types of defects).

Data type determines the appropriate statistical methods and charts.

Measurement System Analysis: Gage R&R and Attribute Studies

Before analyzing process data, evaluate measurement quality.

For continuous data:
- Gage R&R (Repeatability & Reproducibility):
  - Repeatability: Variation when the same operator measures the same item.
  - Reproducibility: Variation between operators.
- Core concepts:
  - Total observed variation decomposed into part-to-part and measurement variation.
  - Percent contribution of the measurement system to overall variation.
- Acceptance criteria such as:
  - %GRR of total variation.
  - Number of distinct categories (distinguishable part levels).

For attribute data:
- Attribute agreement analysis:
  - Assess consistency within raters and between raters.
  - Evaluate agreement with a standard if available.
- Common indicators:
  - Percent agreement.
  - Cohen’s kappa or similar measures.

If the measurement system is inadequate, improve or redesign it before making process decisions based on the data.

Process Mapping and Data Collection Planning

Detailed process mapping is used to understand where and how to measure. Maps may include:
- Deployment (swimlane) maps: Show steps by function or role.
- Detailed flowcharts: Decision points, loops, rework paths.
- Spaghetti diagrams: Physical movement of people or materials.

A data collection plan specifies:
- Measures (Y’s and key X’s).
- Data type and units.
- Sampling plan (how many, when, where).
- Data sources and collection method.
- Responsibilities.

The objective is to obtain representative, unbiased data that characterizes current process performance.

Baseline Capability and Performance

Capability and baseline performance measure how well the process meets requirements.

For continuous data with known specifications:
- Short-term indices: Cp, Cpk (assume data from a relatively stable, single condition).
- Long-term indices: Pp, Ppk (reflect overall performance across time and conditions).
- Formulas (conceptual):
  - Cp: Compares the spread of the process to the spread of the specification limits.
  - Cpk: Also considers centering relative to each specification limit.
  - Pp, Ppk: Analogous to Cp, Cpk but use the overall standard deviation.

Interpretation:
- Higher values of Cp/Cpk or Pp/Ppk indicate better capability.
- Values below 1.0 usually indicate that the process spread exceeds the specification range.
- Evaluate normality and stability before capability analysis; otherwise use non-normal or alternative methods.

For attribute data:
- Defect rate, DPU, DPMO, yield, and RTY are calculated.
- Baseline sigma level is estimated from defect metrics.

Baseline performance provides the reference against which improvements are measured.
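As a hedged illustration of the conceptual Cp/Cpk formulas above, here is a minimal Python sketch. The specification limits and simulated measurements are assumptions for demonstration, and it uses a single overall standard deviation rather than the within-subgroup estimate a full capability study would normally use for Cp/Cpk.

```python
import numpy as np

# Illustrative capability calculation (assumed spec limits and simulated data)
lsl, usl = 44.0, 56.0
rng = np.random.default_rng(seed=1)
data = rng.normal(loc=50.5, scale=2.0, size=120)   # stand-in for measured CTQ values

mean = data.mean()
sigma = data.std(ddof=1)    # overall estimate; a within-subgroup estimate is typical for Cp/Cpk

cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

print(f"mean = {mean:.2f}, sigma = {sigma:.2f}")
print(f"Cp  = {cp:.2f}")
print(f"Cpk = {cpk:.2f}")
```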
---

Analyze Phase: Root Causes and Statistical Relationships

Process Behavior and Stability

Before attributing causes, assess whether the process is stable over time. Use control charts:
- For continuous data: Xbar-R, Xbar-S, or individuals (I-MR) charts.
- For attribute data: p, np, c, u charts.

Interpretation focuses on:
- Common cause variation (natural fluctuation).
- Special cause variation (signals indicating change or disturbance).

Typical signal rules include:
- Points outside control limits.
- Non-random patterns (runs, trends, cycles).
- Clusters or systematic shifts in the center.

Unstable processes require investigation of special causes before deeper capability or cause–effect analysis.

Graphical Analysis and Stratification

Visual tools reveal patterns and potential X’s (inputs) that affect Y (output). Key tools:
- Histograms: Distribution shape, spread, and centering.
- Boxplots: Median, spread, and outliers, especially by subgroup or factor.
- Time series plots: Trends, seasonality, and shifts.
- Pareto charts: Prioritize defect types, causes, or locations.
- Scatter plots: Relationship between two continuous variables.

Stratify data by factors such as product family, supplier, equipment, shift, or region to locate where performance differs.

Cause–Effect and Root Cause Techniques

Structured thinking about causes supports statistical testing. Core methods:
- Cause-and-effect (Ishikawa) diagrams: Categorize possible causes (for example, Methods, Machines, Materials, Measurements, People, Environment).
- 5 Whys: Iteratively ask “why” to drill down from symptom to underlying cause.
- Process failure points: Identify steps that generate rework, delay, or variation.

These tools generate hypotheses about critical X’s to validate with data.

Hypothesis Testing Foundations

Hypothesis tests compare groups or conditions to determine whether observed differences are likely due to random variation or to real effects. Key concepts:
- Null hypothesis (H₀): Assumes no effect or no difference.
- Alternative hypothesis (H₁): Assumes an effect or difference.
- Significance level (α): Risk of rejecting H₀ when it is true.
- p-value: Probability of observing data as extreme or more extreme if H₀ is true.

Decision rule:
- If p-value ≤ α, reject H₀ and infer statistical significance.
- If p-value > α, do not reject H₀ (insufficient evidence).

Also consider:
- Practical significance: Magnitude of the difference and its impact.
- Power and sample size: Ability to detect meaningful differences.

Selecting and Applying Statistical Tests

The choice of test depends on data type and comparison structure.

Continuous data:
- One-sample t-test: Compare a mean to a target.
- Two-sample t-test: Compare means of two independent groups.
- Paired t-test: Compare means of matched pairs (before–after on the same units).
- ANOVA (analysis of variance): Compare means of three or more groups.
- Tests for variance: F-test, Levene’s test.

Attribute data:
- One-proportion test: Compare a proportion to a target.
- Two-proportion test: Compare proportions between two groups.
- Chi-square test: Association between categorical variables, or goodness-of-fit.

Assumptions:
- Independence of observations.
- Approximate normality and equal variances for standard t-tests and ANOVA (use transformations or nonparametric options if violated).

Correlation, Regression, and Predictive Models

To quantify relationships between Y and X’s:
- Correlation:
  - Pearson correlation for linear relationships in continuous data.
  - Spearman rank correlation for monotonic relationships or non-normal data.
  - Correlation does not prove causation.
- Simple linear regression:
  - Model Y as a function of a single X.
  - Interpret slope, intercept, R², and residuals.
- Multiple regression:
  - Model Y as a function of several X’s.
  - Identify significant predictors and understand combined effects.
  - Check multicollinearity, residual patterns, and influential points.

Regression supports:
- Quantifying the effect size of X’s on Y.
- Predicting performance under different conditions.
- Prioritizing factors to target in improvements.
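The sketch below illustrates the kind of two-sample comparison and simple linear regression described above, using SciPy. The simulated "before/after" and X–Y data are assumptions, not results from this section.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Two-sample t-test: compare cycle times before vs after a change (assumed data)
before = rng.normal(loc=32.0, scale=4.0, size=40)
after = rng.normal(loc=29.5, scale=4.0, size=40)
t_stat, p_val = stats.ttest_ind(before, after)
print(f"Two-sample t-test: t = {t_stat:.2f}, p = {p_val:.4f}")
if p_val <= 0.05:
    print("Reject H0: the means differ (statistically significant at alpha = 0.05)")

# Simple linear regression: model a continuous Y as a function of a continuous X (assumed data)
x = rng.uniform(180, 220, size=50)
y = 0.8 * x + rng.normal(0, 5, size=50)    # assumed linear relationship plus noise
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.1f}, "
      f"R^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.4f}")
```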
Design of Experiments (DOE) for Causal Understanding

DOE systematically varies factors to identify causal relationships and interactions. Key ideas:
- Factors: Inputs purposely varied (e.g., temperature, speed).
- Levels: Settings of factors (e.g., high/low).
- Responses: Outputs measured (Y’s).
- Main effects: Independent effect of each factor.
- Interactions: Combined effects of factors not explained by their main effects alone.

Typical designs in Lean & Six Sigma:
- Full factorial (2ᵏ): All combinations of factor levels.
- Fractional factorial: A subset of combinations to reduce runs while still screening.
- Response surface methods: Refine settings near the optimum once key factors are known.

DOE allows confirmation of which X’s truly cause changes in Y, beyond correlation.

---

Improve Phase: Solutions and Optimization

Generating and Selecting Solutions

Lean & Six Sigma improvements are designed around verified root causes. Effective solution development includes:
- Idea generation: Brainstorming, benchmarking within the process, and adapting Lean principles.
- Screening: Use impact vs effort, feasibility, risk, and alignment with CTQs.
- Piloting: Test selected solutions on a limited scale to validate effects.

Solutions typically target:
- Eliminating or reducing identified wastes.
- Reducing variation of key inputs.
- Simplifying process steps and decision points.
- Error-proofing high-risk steps.

Lean Tools for Flow and Waste Reduction

Lean-focused improvements may use:
- Cycle time reduction: Removing non–value-added activities and delays.
- Workload balancing: Aligning tasks to takt time, leveling work across resources.
- Cellular layout: Grouping steps to reduce handoffs and movement.
- Quick changeover (SMED principles): Reducing setup and changeover time to enable smaller batches.
- Standard work: Defining the best-known method, sequence, and timing for tasks.

These changes are evaluated with data to confirm their impact on throughput, lead time, and quality.

Error-Proofing and Robust Solutions

Error-proofing (poka-yoke concepts) prevents defects or makes them easy to detect. Typical mechanisms:
- Control devices: Prevent incorrect action (e.g., fixtures, connectors that fit only one way).
- Warning devices: Visual or audible alerts when conditions deviate.
- Built-in checks: Verification steps embedded into the work rather than added at the end.

Robust solutions are:
- Insensitive to normal variation in noise factors.
- Simple to execute and maintain.
- Designed to fail safe where possible.

Statistical confirmation:
- Use pre- and post-comparisons, control charts, and capability analysis to verify sustained improvement.

DOE and Optimization in Improve

Use DOE in Improve to:
- Confirm factors and interactions identified during Analyze.
- Optimize factor levels for targeted responses.
- Balance multiple responses (e.g., quality and cycle time).

Response surface methods (for example, central composite or Box–Behnken designs) help refine:
- The optimal region for key X’s.
- Trade-offs between responses using desirability or similar approaches.

The aim is not just “better” performance, but statistically optimized and validated settings.
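To show how main effects and an interaction are estimated from a small full factorial, here is a minimal sketch of a 2² design with two replicates. The factor names and response values are invented for the example.

```python
import numpy as np

# Coded 2^2 full factorial: factors A and B at -1/+1 levels, two replicates per combination
A = np.array([-1, -1, +1, +1, -1, -1, +1, +1])
B = np.array([-1, +1, -1, +1, -1, +1, -1, +1])
y = np.array([71, 74, 80, 92, 69, 76, 82, 90], dtype=float)   # assumed response values

# Effect = (mean response at the +1 level) - (mean response at the -1 level)
effect_A = y[A == 1].mean() - y[A == -1].mean()
effect_B = y[B == 1].mean() - y[B == -1].mean()
effect_AB = y[A * B == 1].mean() - y[A * B == -1].mean()      # interaction contrast

print(f"Main effect A  : {effect_A:.1f}")
print(f"Main effect B  : {effect_B:.1f}")
print(f"Interaction AB : {effect_AB:.1f}")
```

A large interaction relative to the main effects would indicate that the best setting of one factor depends on the level of the other, which is exactly what fractional designs are screened for before optimization.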
---

Control Phase: Sustaining Gains

Control Plans and Standardization

A control plan documents how the improved process will be maintained. Key elements:
- Characteristics to control: CTQs and key X’s.
- Specifications and targets: Acceptable ranges.
- Measurement methods: How and how often measurements are taken.
- Control methods: Control charts, visual controls, checklists.
- Reaction plans: Specific actions when performance deviates or signals appear.

Standardization ensures the improved method becomes the normal method:
- Updated procedures, work instructions, and job aids.
- Integration into training and qualification.
- Alignment with performance metrics and incentives.

Control Charts and Ongoing Monitoring

Control charts continue to be used after implementation. Purpose:
- Verify that improved performance is stable.
- Detect new special causes early.
- Distinguish random fluctuation from real shifts.

Chart selection follows the same principles used in Analyze:
- Continuous vs attribute data.
- Subgroup structure and sampling frequency.

Interpretation focuses on:
- Maintaining the new level of performance (center line).
- Detecting drifts or step changes.
- Investigating assignable causes when rules are violated.

Response to Out-of-Control and Out-of-Spec Conditions

Two related but distinct conditions must be addressed:
- Out-of-control: A statistical signal of a special cause.
- Out-of-spec: Individual units failing to meet specifications.

For out-of-control signals:
- Stop and contain when appropriate.
- Identify and eliminate the special cause.
- Document learnings and update procedures.

For out-of-spec results:
- Segregate and disposition nonconforming items.
- Investigate process conditions at the time.
- Improve the process or controls to prevent recurrence.

Escalation paths and responsibilities should be clear in the control plan.
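A minimal sketch of ongoing monitoring with an individuals (I-MR) chart is shown below. The daily measurements are assumed, and the constants 2.66 and 3.267 are the standard factors for moving ranges of size 2.

```python
import numpy as np

# Assumed post-improvement daily measurements of the controlled CTQ
x = np.array([50.2, 49.8, 50.5, 50.1, 49.6, 50.3, 50.0, 49.9, 50.4, 50.2,
              49.7, 50.1, 50.6, 49.8, 50.0])

mr = np.abs(np.diff(x))              # moving ranges between consecutive points
x_bar, mr_bar = x.mean(), mr.mean()

# Standard I-MR control limits (moving range of size 2)
ucl_x = x_bar + 2.66 * mr_bar
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar

print(f"Individuals chart: CL = {x_bar:.2f}, UCL = {ucl_x:.2f}, LCL = {lcl_x:.2f}")
print(f"Moving range chart: CL = {mr_bar:.2f}, UCL = {ucl_mr:.2f}")

# Flag any point outside the individuals limits as a special-cause signal
signals = np.where((x > ucl_x) | (x < lcl_x))[0]
print("Out-of-control points:", signals if signals.size else "none")
```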
---

Integration of Lean and Six Sigma

Balancing Flow and Capability

Lean & Six Sigma integrates:
- Lean focus: Flow, speed, and waste elimination.
- Six Sigma focus: Capability, defect reduction, and variation control.

Key integration practices:
- Use Lean tools early to:
  - Simplify processes.
  - Reduce obvious waste and complexity.
  - Make variation and defects more visible.
- Use Six Sigma tools to:
  - Quantify and model performance.
  - Pinpoint critical X’s and their interactions.
  - Optimize process capability.

Focused integration ensures that speed does not hide defects, and that quality improvements are not achieved with unnecessary cost and complexity.

Critical X–Y Relationship

The central logic of Lean & Six Sigma is:
- Y = f(X₁, X₂, …, Xₙ)

Where:
- Y: Critical outcomes (CTQs, CTDs, CTCs).
- X’s: Controllable and uncontrollable factors.

The objective is to:
- Identify which X’s significantly influence Y.
- Control or optimize those X’s.
- Reduce or manage the influence of noise factors.

Lean & Six Sigma success is measured by enduring improvements in Y through effective management of key X’s, validated by data and maintained through robust controls.

---

Summary

Lean & Six Sigma integrates Lean’s waste elimination and flow improvement with Six Sigma’s statistical control of variation and defects. Work proceeds through DMAIC:
- Define: Clarify the problem, scope, customers, and CTQs.
- Measure: Build reliable measurement systems and establish baseline performance.
- Analyze: Use graphical tools, hypothesis testing, regression, and DOE to identify and verify root causes.
- Improve: Design, pilot, and implement Lean and statistical solutions that address verified causes and optimize performance.
- Control: Standardize, monitor with control charts, and manage deviations with clear control plans.

Mastery of Lean & Six Sigma requires skillful application of these concepts and tools to design processes that are both efficient and highly capable, and to sustain those gains over time.

Practical Case: Lean & Six Sigma

Context
A mid-sized hospital’s lab processes routine blood tests for inpatients and outpatients. Physicians complain that lab results arrive too late, delaying treatment decisions.

Problem
Average turnaround time from blood draw to reported result is inconsistent and often exceeds the hospital’s internal target. Staff blame “high volume” and “not enough people,” but leadership wants a data-based solution.

Lean & Six Sigma Application
The team used DMAIC, with a Lean focus on flow and waste.

Define
Mapped the problem to one critical metric: turnaround time from sample collection to result available in the electronic record for routine tests.

Measure
Pulled four weeks of timestamp data: collection, arrival at the lab, analysis start, analysis end, result verification. Verified data accuracy and established baseline performance and variation.

Analyze
Created a value stream map. Identified:
- Long idle time between collection and lab receipt.
- Frequent batching of samples before analysis.
- Rework due to mislabeled tubes.
Used a Pareto chart and root-cause discussions to confirm that transport delays and labeling errors were the main contributors, not analyzer capacity.

Improve
Implemented:
- Standard work for phlebotomists: immediate tube labeling at the bedside with barcode verification.
- Scheduled, small-batch transport runs every 15 minutes instead of irregular bulk pickups.
- Visual management at the lab intake bench to prioritize time-sensitive tests.
- A simple error check in the IT system to flag incomplete orders before collection.
Piloted the changes on two wards, then rolled them out hospital-wide after confirming stability.

Control
Added a daily turnaround-time dashboard by ward and test type. Set trigger thresholds for investigation when variation increased. Conducted monthly audits of labeling compliance and transport adherence.

Result
Turnaround time became both faster and more consistent. Late results dropped sharply, and clinicians reported fewer delays in starting or adjusting treatment. The hospital avoided adding staff or new analyzers, sustaining performance through the new standard work and visual controls.

Practice questions: Lean & Six Sigma

A Black Belt is validating a measurement system for a continuous CTQ. Ten parts covering the full process range are measured twice by three operators. Which study design is most appropriate?
A. Test–retest study with a single operator repeating measurements on all parts
B. Type 1 Gage R&R using a single master part measured multiple times
C. Crossed Gage R&R with all operators measuring all parts in random order
D. Nested Gage R&R with operators each measuring a unique subset of parts

Answer: C
Reason: A crossed Gage R&R (each operator measures all parts, multiple trials, randomized) is the standard design to assess repeatability and reproducibility for continuous data when the same parts can be measured by all operators. It supports estimation of part-to-part, operator, and interaction effects. The other options either use only one operator (A, B) or a nested design (D), which is reserved for cases where parts cannot practically be shared among operators and does not fit the described conditions.

---

A process mean is 50 units with a standard deviation of 2 units and is normally distributed. The specification limits are 44 and 56 units. Assuming the process is centered, what is the process capability index Cp?
A. 1.00
B. 1.50
C. 2.00
D. 3.00

Answer: A
Reason: Cp = (USL − LSL) / (6σ) = (56 − 44) / (6 × 2) = 12 / 12 = 1.00. The process spread (6σ = 12) exactly equals the specification width, so the process is only marginally capable. A value of 2.00 (C) would require σ = 1, and B and D do not match any plausible calculation.

---

A Black Belt runs a hypothesis test comparing the proportion of defective units before and after a process improvement. Sample sizes are both ≥ 100. Which test is most appropriate?
A. Two-sample t-test for means
B. 2-proportion z-test
C. Chi-square goodness-of-fit test
D. One-way ANOVA

Answer: B
Reason: The CTQ is a proportion defective (binomial), with two independent groups (before/after) and large sample sizes, making the 2-proportion z-test appropriate for comparing proportions. The other options are for means (A, D) or for comparing an observed distribution to a theoretical one (C), not for two proportions.

---

A team is mapping an administrative process and identifying non–value-added time. Which Lean tool is most appropriate to quantify waiting, rework, and transportation in the process?
A. SIPOC diagram
B. Value stream map
C. Kano analysis
D. House of Quality

Answer: B
Reason: Value stream mapping is the Lean tool used to visualize and quantify value-added and non–value-added activities, including wait time, rework loops, transportation, and inventories across the process. SIPOC is a high-level scoping tool (A), while Kano (C) and House of Quality (D) are customer and quality function deployment tools, not detailed waste quantification tools.

---

A Black Belt is modeling the relationship between a continuous CTQ (Y) and three continuous X variables. Residual plots show a curved pattern versus one X and non-constant variance. What is the most appropriate next step?
A. Remove the X with curvature from the model
B. Apply a suitable transformation (e.g., Box–Cox) to Y and/or add polynomial terms
C. Switch to a chi-square test for independence
D. Combine the three Xs into a single average predictor

Answer: B
Reason: Curvature and non-constant variance indicate that the linear regression assumptions may be violated; adding polynomial (e.g., quadratic) terms and/or transforming Y (Box–Cox) can address nonlinearity and heteroscedasticity, improving model fit. Removing the predictor (A) risks omitting a relevant X, C is for categorical data, and D arbitrarily aggregates predictors and can distort relationships.
