2.4.1 Capability Analysis
Introduction to Capability Analysis

Capability analysis evaluates how well a process can meet customer requirements, expressed as specification limits. It compares the natural variation of a stable process to the allowable variation defined by the specs.

Key ideas:
- Goal: Quantify how capable a process is of producing output within specification limits.
- Focus: Relationship between process behavior (mean and variation) and customer requirements (USL/LSL).
- Output: Capability indices and performance indices that guide decisions on process improvement or control.

Capability analysis is meaningful only when the process is statistically stable and the measurements are appropriate.

---

Foundations: Specs, Tolerance, and Distribution

Specification Limits and Tolerances

Capability analysis always references explicit customer or engineering requirements.
- LSL (Lower Specification Limit): Minimum acceptable value.
- USL (Upper Specification Limit): Maximum acceptable value.
- Target (T): Desired value, often the midpoint of the spec range, but it may differ.

The tolerance is:
- Two-sided: USL − LSL
- One-sided: Distance from the target or spec in one direction only

Capability indices are undefined or misleading if specs are absent, arbitrary, or not linked to a real requirement.

Process Distribution and Normality

Many capability formulas assume a normal distribution of process output.
- If data are approximately normal:
  - Spread is well described by the standard deviation.
  - ±3σ around the mean captures most of the data.
- If data are non-normal:
  - Normal-based capability indices can be inaccurate.
  - Alternative approaches (e.g., transformations, percentile-based methods) may be required.

Normality is assessed using:
- Visual checks (histogram, probability plot) against the fitted normal curve.
- Formal normality tests when sample size and context justify them.

Process Stability and Control

Capability requires a stable process.
- Stable process: Only common cause variation is present; no significant shifts or trends.
- Unstable process: Special causes are present; capability calculations can be misleading.

Check stability before capability:
- Review control charts (e.g., X̄-R, X̄-S, I-MR) for:
  - Points outside control limits
  - Trends, runs, or patterns
- If unstable:
  - Identify and remove special causes.
  - Re-establish stability, then re-evaluate capability.

---

Short-Term vs Long-Term Measures

Within-Subgroup Variation vs Overall Variation

Capability analysis distinguishes:
- Within-subgroup (short-term) variation:
  - Variation within small, rational subgroups collected over short intervals.
  - Reflects short-term, inherent process noise.
  - Estimated by R̄/d2, S̄/c4, or pooled subgroup standard deviations.
- Overall (long-term) variation:
  - Variation across all data over a longer period.
  - Includes shifts, drifts, and other sources of variation.
  - Estimated by the standard deviation of all individual data points.

The choice affects which indices are appropriate and how to interpret them.

Capability vs Performance Indices

- Capability indices use within-subgroup (short-term) variation.
  - Typical symbols: Cp, Cpk, Cpm.
- Performance indices use overall (long-term) variation.
  - Typical symbols: Pp, Ppk, Ppm.

They answer different questions:
- Capability indices: “What can this process do under short-term, stable conditions?”
- Performance indices: “What has this process actually been doing over time?”

---

Key Capability Indices

Cp: Potential Capability (Centered Assumption)

Cp measures potential capability, assuming the process is perfectly centered between the specs.

Formula (for two-sided specs):
- Cp = (USL − LSL) / (6σ_within)

Interpretation:
- Cp > 1: The tolerance is wider than the 6σ spread; the process can potentially fit within specs.
- Cp = 1: The 6σ spread equals the tolerance width.
- Cp < 1: The natural spread exceeds the tolerance; defects are expected even if the process is centered.
Limitations:
- Ignores the location of the process mean.
- Overstates capability if the process is off-center.

Cpk: Actual Capability (Accounts for Centering)

Cpk measures capability considering both spread and centering.

Formula:
- Cpu = (USL − μ) / (3σ_within)
- Cpl = (μ − LSL) / (3σ_within)
- Cpk = min(Cpu, Cpl)

Interpretation:
- Cpk reflects the “worst side” relative to the specs.
- If Cpk ≈ Cp: The process is reasonably centered.
- If Cpk << Cp: The process mean is off-center; centering issues dominate.

Guidance (conceptual, not prescriptive):
- A higher Cpk implies fewer expected defects.
- Compare against the required capability levels defined by the organization or customer.

Cpm: Capability with Target Deviation Penalized

Cpm accounts for deviation from a specified target value, not just from the spec limits.

Formula (Taguchi-based):
- Cpm = (USL − LSL) / [6 × √(σ² + (μ − T)²)]

Where:
- T is the target value.
- μ is the process mean.
- σ is the within-subgroup standard deviation.

Interpretation:
- When μ = T, Cpm ≈ Cp.
- When μ is far from T, Cpm is significantly lower than Cp and Cpk.
- Useful when being near the target is critical, not just being within specs.

---

Performance Indices (Long-Term)

Pp and Ppk

Performance indices use the overall standard deviation (σ_overall) to reflect long-term conditions.

Formulas:
- Pp = (USL − LSL) / (6σ_overall)
- Ppu = (USL − μ) / (3σ_overall)
- Ppl = (μ − LSL) / (3σ_overall)
- Ppk = min(Ppu, Ppl)

Interpretation:
- Pp and Ppk are typically ≤ Cp and Cpk because long-term variation is usually higher.
- Pp focuses on potential, ignoring centering.
- Ppk captures long-term performance, including both spread and centering.

Comparison of indices:
- Cp vs Pp: Short-term vs long-term spread.
- Cpk vs Ppk: Short-term vs long-term capability, including centering.
- Large gaps between Cpk and Ppk may indicate:
  - Shifts, drifts, or seasonal patterns.
  - Differences between controlled “study” conditions and everyday operation.
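The index formulas above can be sketched in a few lines of Python. This is a minimal illustration, not a validated implementation: the function name is hypothetical, and it uses the pooled within-subgroup standard deviation for the short-term estimate (R̄/d2 or S̄/c4 are common alternatives). A real study would first verify stability and normality.

```python
import math
import statistics

def capability_indices(subgroups, usl, lsl):
    """Sketch: Cp/Cpk from within-subgroup sigma, Pp/Ppk from overall sigma."""
    all_points = [x for sg in subgroups for x in sg]
    mu = statistics.mean(all_points)

    # Short-term estimate: pooled within-subgroup standard deviation.
    ss = sum((len(sg) - 1) * statistics.variance(sg) for sg in subgroups)
    df = sum(len(sg) - 1 for sg in subgroups)
    sigma_within = math.sqrt(ss / df)

    # Long-term estimate: standard deviation of all individual points.
    sigma_overall = statistics.stdev(all_points)

    def pair(sigma):
        spread = (usl - lsl) / (6 * sigma)                    # Cp / Pp
        centered = min(usl - mu, mu - lsl) / (3 * sigma)      # Cpk / Ppk
        return spread, centered

    cp, cpk = pair(sigma_within)
    pp, ppk = pair(sigma_overall)
    return {"Cp": cp, "Cpk": cpk, "Pp": pp, "Ppk": ppk}
```

With subgroups drawn from a perfectly centered process, Cp and Cpk coincide; an off-center mean drags Cpk (and Ppk) down while Cp is unchanged.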
Ppm: Performance Relative to Target

Analogous to Cpm but using overall variation:
- Ppm = (USL − LSL) / [6 × √(σ_overall² + (μ − T)²)]

Use when:
- Long-term adherence to the target is important.
- You want to include both long-term variation and long-term offset from the target.

---

One-Sided Capability

Upper or Lower Specs Only

Many processes have only one relevant specification:
- Only an upper spec: e.g., maximum impurity, maximum defect size.
- Only a lower spec: e.g., minimum strength, minimum yield.

For one-sided capability:
- Use Cpu (upper) or Cpl (lower) as the primary indicator.
- Cpk reduces to the single side of interest.

Examples:
- Upper spec only: Cpu = (USL − μ) / (3σ)
- Lower spec only: Cpl = (μ − LSL) / (3σ)

Interpretation:
- A larger Cpu/Cpl indicates better protection against violating the one-sided requirement.
- The balance between process mean and variation is still critical.

---

Practical Steps for Capability Analysis

Step 1: Clarify Requirements and Data

Ensure the basis of the capability analysis is sound.
- Confirm:
  - LSL/USL (or a one-sided spec) are clearly defined and justified.
  - The target value (if relevant) is known.
- Verify the measurement system is suitable:
  - Adequate resolution.
  - Acceptable accuracy and repeatability.
- Choose a rational subgrouping strategy that reflects natural short-term conditions.

Step 2: Assess Stability and Distribution

Before computing indices:
- Plot control charts and confirm there are no special causes.
- If unstable: identify and remove special-cause data, or fix the process and recollect data.
- Check the distribution:
  - Use histograms and probability plots.
  - Decide whether normal-based indices are reasonable or an alternative approach is needed.

Step 3: Estimate Variation

Compute:
- Within-subgroup standard deviation (σ_within): from R̄/d2, S̄/c4, or pooled methods, depending on chart type and subgroup size.
- Overall standard deviation (σ_overall): from all individual observations.

Use:
- σ_within for Cp, Cpk, Cpm.
- σ_overall for Pp, Ppk, Ppm.

Step 4: Calculate Capability and Performance Indices

For two-sided specs:
- Compute Cp, Cpk (optionally Cpm).
- Compute Pp, Ppk (optionally Ppm).

For one-sided specs:
- Focus on Cpu or Cpl and the corresponding performance index.

Document:
- Mean (μ).
- Standard deviations (σ_within, σ_overall).
- Indices with appropriate units/labels.

Step 5: Interpret Results in Context

Interpretation must consider:
- Specification tightness relative to natural variation.
- Centering of the process mean.
- Differences between short-term and long-term behavior.

Ask:
- Is short-term capability satisfactory?
- Does long-term performance match short-term capability?
- Is the process mean appropriately centered relative to the target and specs?
- Are there practical limits to shifting the mean or tightening variation?

Use the results to guide:
- Centering adjustments (e.g., mean shifts).
- Variation-reduction projects (e.g., process improvements).
- Ongoing monitoring strategies.

---

Defect Rates and Capability

Linking Capability to Defects

Capability indices are proxies for the expected defect rate, assuming:
- A stable process.
- An appropriate distribution model (often normal).

Conceptually:
- Higher Cpk or Ppk ⇒ fewer units outside specs.
- Lower Cpk or Ppk ⇒ greater risk of nonconformance.

To connect indices to defect rates:
- Convert the distances from the mean to the specs into Z-values:
  - Z_upper = (USL − μ) / σ
  - Z_lower = (μ − LSL) / σ
- Use the normal distribution to estimate the probability of exceeding the USL or falling below the LSL, and sum the two tails to get the expected nonconforming rate (PPM or %).
- Z_bench summarizes this total: it is the single Z-value whose upper-tail probability equals the total nonconforming rate.

Important:
- Performance-based predictions (using σ_overall) often better reflect actual defect levels.
- Capability-based predictions (using σ_within) describe potential under tightly controlled conditions.

---

Common Pitfalls and Misinterpretations

Treating Indices as Absolute Truth

- Capability indices are summaries, not complete descriptions.
- A single index cannot capture:
  - Multimodal distributions.
  - Batch-to-batch shifts.
  - Non-normal tails.

Avoid:
- Comparing indices across processes without understanding context, data quality, and stability.
- Assuming that a specific threshold (e.g., a single capability value) guarantees zero defects.

Ignoring Measurement System Issues

An inadequate measurement system can:
- Inflate apparent variation ⇒ underestimate capability.
- Mask true variation ⇒ overestimate capability.

Always ensure:
- Measurement resolution and repeatability are sufficient for the tolerance being evaluated.
- Any known bias or linearity issues are addressed.

Using Capability on Unstable Processes

Capability analysis on an unstable process:
- Represents a mixture of different process states.
- Gives indices that are not predictive of future performance.
- Can hide the real improvement opportunity of addressing special causes.

Ensure:
- Stability is verified first.
- Data with known special causes are handled appropriately.

---

Summary

Capability analysis quantifies how well a stable process meets defined specification limits by comparing process variation and centering to customer requirements. It relies on:
- Clear and justified specification limits and targets.
- A stable process and appropriate distribution assumptions.
- The distinction between short-term (within-subgroup) and long-term (overall) variation.

Key indices:
- Cp, Cpk, Cpm: Capability based on short-term variation.
- Pp, Ppk, Ppm: Performance based on long-term variation.
- Cpu/Cpl: One-sided capability when only an upper or lower spec applies.

Effective use of capability analysis involves:
- Proper data collection and verification of stability and normality.
- Accurate estimation of both within-subgroup and overall variation.
- Careful interpretation of indices in context, including links to expected defect rates.
- Awareness of common pitfalls such as unstable processes and poor measurement systems.
Together, these concepts provide a complete foundation for evaluating and improving process capability in a rigorous, data-driven way.
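The Z-value-to-defect-rate conversion described above can be sketched as follows. This is a minimal illustration assuming a stable, normal process; the function name is for illustration, and Z_bench is computed as the normal quantile of the total conforming fraction (the convention Minitab uses for Z.Bench).

```python
from statistics import NormalDist

def expected_ppm(mu, sigma, usl, lsl):
    """Estimate the nonconforming rate (PPM) for a stable, normal process."""
    nd = NormalDist()  # standard normal
    z_upper = (usl - mu) / sigma
    z_lower = (mu - lsl) / sigma
    # Sum the tail probabilities beyond each spec limit.
    p_total = (1 - nd.cdf(z_upper)) + (1 - nd.cdf(z_lower))
    # Z_bench: the single Z whose upper tail equals the total defect rate.
    z_bench = nd.inv_cdf(1 - p_total)
    return p_total * 1_000_000, z_bench
```

For example, with μ = 50, σ = 2, and specs 44/56, both Z-values are 3 and the estimate is roughly 2,700 PPM. Using σ_overall here predicts actual long-term defect levels; using σ_within predicts potential under controlled conditions.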
Practical Case: Capability Analysis

A contract pharmaceutical packager was missing customer targets for blister pack seal strength. The customer required all seals to fall between 8.0 and 12.0 N to ensure both child resistance and openability. The quality manager collected 50 consecutive samples from a single line over one shift. Normality was verified, and a capability analysis was then run in Minitab using the customer’s specs as LSL and USL. The output showed:
- Cp > 1.33 but Cpk < 1.0

This indicated that, although the process spread was narrow enough to fit within the specs, the process was not centered; many values were drifting toward the lower spec. The team adjusted the sealing temperature setpoint and recalibrated the pressure regulator. A second capability analysis on another 50-piece sample showed both Cp and Cpk above 1.33, with the mean centered between 8.0 and 12.0 N. The customer’s complaint rate for weak seals dropped to near zero over the next quarter, and the line was formally released for future high-volume orders based on the demonstrated capability.
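The same check can be sketched outside Minitab. The seal-strength readings below are hypothetical (only the 8.0–12.0 N specs come from the case), and the snippet assumes normality and stability have already been verified.

```python
import statistics

# Hypothetical seal-strength readings in newtons; the 8.0-12.0 N specs
# come from the case above, but the data are illustrative only.
readings = [8.3, 9.4, 8.6, 9.2, 8.9, 8.5, 9.3, 8.8, 9.1, 8.9]
LSL, USL = 8.0, 12.0

mu = statistics.mean(readings)       # 8.9 N: drifting toward the lower spec
sigma = statistics.stdev(readings)

cp = (USL - LSL) / (6 * sigma)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# An off-center mean leaves Cp comfortably above 1.33 while Cpk falls
# below 1.0, mirroring the first study in the case.
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Re-centering the mean toward 10.0 N (as the team did by adjusting the sealing temperature) raises Cpk toward Cp without reducing variation.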
Practice question: Capability Analysis

A manufacturing process has a mean of 50 units and a standard deviation of 2 units. The lower specification limit (LSL) is 44 and the upper specification limit (USL) is 56. Assuming normality and stability, what is the process Cp?
A. 0.75
B. 1.00
C. 1.50
D. 2.00

Answer: B
Reason: Cp = (USL − LSL) / (6σ) = (56 − 44) / (6 × 2) = 12 / 12 = 1.00.
Other options: 0.75, 1.50, and 2.00 do not match the computed ratio.

---

A process has an observed Cpk of 0.65 while Cp is 1.20. Which conclusion is most appropriate?
A. Process is highly capable and well centered.
B. Process spread is acceptable, but the mean is off-center.
C. Process spread is poor, but the mean is well centered.
D. Process is non-normal, so Cp and Cpk cannot be interpreted.

Answer: B
Reason: Cp > 1 indicates potential capability (the spread fits within the specs), but a Cpk below 1 and much lower than Cp indicates the process mean is significantly shifted toward one of the specification limits.
Other options: A is invalid because “well centered” would require Cpk ≈ Cp; C misinterprets Cp; D assumes non-normality without evidence.

---

A Black Belt is assessing capability of a non-normal process with continuous data and known specification limits. Which is the most appropriate first action?
A. Apply a normal capability analysis directly.
B. Perform a Box-Cox or Johnson transformation, then assess capability.
C. Use p-chart capability indices instead.
D. Ignore distributional assumptions and calculate Cp and Cpk from the data.

Answer: B
Reason: For non-normal continuous data, a common Black Belt–level approach is to transform the data to approximate normality (e.g., Box-Cox or Johnson) and then perform capability analysis on the transformed scale.
Other options: A and D violate the normality assumption; C applies to attribute (proportion) data, not continuous data.
---

A stable process with normal data has USL = 100, LSL = 80, mean = 92, and σ = 2. What is Cpk?
A. 0.67
B. 1.00
C. 1.33
D. 2.00

Answer: C
Reason: Cpk = min[(USL − μ)/(3σ), (μ − LSL)/(3σ)] = min[(100 − 92)/(3 × 2), (92 − 80)/(3 × 2)] = min[8/6, 12/6] = min[1.33, 2.00] = 1.33.
Other options: 0.67, 1.00, and 2.00 do not match the computed minimum of the two one-sided capability indices.

---

A process capability study for a critical dimension yields Cp = 0.95 and Cpk = 0.45; the data are normal and the process is in statistical control. What is the most appropriate Black Belt recommendation?
A. Increase sampling frequency only; process is already capable.
B. Re-center the process mean toward the target and then reassess capability.
C. Do nothing; Cp is close enough to 1.33.
D. Tighten the specification limits to drive improvement.

Answer: B
Reason: Cp ≈ 1 indicates marginal potential capability, but Cpk << Cp indicates significant off-centering; shifting the mean toward the center will increase Cpk and reduce defect risk before considering further variation reduction.
Other options: A and C ignore the off-center condition and defect risk; D would worsen reported capability and is contrary to sound capability management.
