
2.4.3 Attribute & Discrete Capability

Introduction

Attribute and discrete capability focus on processes where outcomes are counted, not measured on a continuous scale. Instead of millimeters or seconds, we work with “good/bad,” “pass/fail,” or counts like “number of defects per unit.” This article builds the knowledge needed to evaluate and improve process capability when data are attribute or discrete, closely aligned with IASSC Black Belt expectations for this topic.

---

1. Attribute vs Discrete Data

1.1 Types of Attribute Data

Attribute data arise from counting or classifying outcomes rather than measuring on a continuous scale.

- Binary (dichotomous) data
  - Two possible outcomes (e.g., pass/fail, yes/no, defective/non-defective).
- Multinomial categorical data
  - More than two categories (e.g., defect type A/B/C).
- Count data
  - Number of occurrences in a defined area, unit, or time (e.g., number of defects per unit).

In process capability analysis, the most common forms are:

- Proportion of nonconforming units (defective units).
- Count of nonconformities (defects) per unit or opportunity.

1.2 Attribute vs Continuous Capability

Continuous capability uses metrics like Cp, Cpk, Pp, and Ppk and assumes:

- A continuous measurement scale.
- Approximate normality or a reasonable transformation.

Attribute and discrete capability instead:

- Directly model counts or proportions.
- Use binomial or Poisson assumptions.
- Express performance using metrics such as:
  - Proportion defective.
  - Defects per unit.
  - DPMO (defects per million opportunities).
  - Z benchmarks based on defect probabilities.

---

2. Attribute Capability Concepts

2.1 Defect, Defective, and Opportunity

Clarity in definitions is critical.

- Defect
  - Any nonconformance to a requirement.
  - One unit may contain multiple defects.
- Defective unit
  - A unit with one or more defects.
  - A unit is either conforming or nonconforming.
- Opportunity for defect
  - A distinct chance for a defect to occur, given that:
    - It is meaningful to the customer.
    - It is observable and measurable.
    - It is specific and non-overlapping.

These definitions influence how capability is computed.

2.2 Key Attribute Metrics

Common performance measures for attribute and discrete data:

- Proportion nonconforming (p)
  - p = (number of defective units) / (total units inspected).
- Nonconformities per unit (u)
  - u = (total number of defects) / (total number of units).
- Defects per opportunity (DPO)
  - DPO = (total number of defects) / (total number of opportunities).
- Defects per million opportunities (DPMO)
  - DPMO = DPO × 1,000,000.

These measures support conversion to a standard normal scale for comparison and benchmarking.

---

3. Probability Models for Attribute Capability

3.1 Binomial Model (Defective Units)

The binomial model applies when:

- Each unit is classified as conforming or nonconforming.
- Each unit has the same probability p of being nonconforming.
- Units are independent.

For n inspected units:

- X = number of defective units.
- X follows a binomial distribution with parameters n and p.

Key quantities:

- Estimated p: p̂ = X / n
- Standard deviation of p̂ (approximate, for large n): σ(p̂) ≈ √[p̂(1 − p̂) / n]

The binomial model underpins:

- Control charts for proportions (p and np charts).
- Confidence intervals for defect proportions.
- Z benchmarks based on proportion defective.

3.2 Poisson Model (Defects per Unit)

The Poisson model applies when:

- Counting defects in an area, time, or unit.
- Defects are relatively rare and occur independently.
- The rate λ (average number of defects per unit) is constant.

For n units:

- Total defects D follow a Poisson distribution with parameter nλ.
- Estimated λ: λ̂ = D / n (the same as u, nonconformities per unit).
- Standard deviation of u (approximate, for large n): σ(u) ≈ √(λ̂ / n)

The Poisson model underpins:

- Control charts for counts (c and u charts).
- Confidence intervals for defect rates.
- Capability assessments based on defects per unit or per opportunity.
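The metrics in section 2.2 and the estimates in sections 3.1 and 3.2 reduce to a few lines of arithmetic. The Python sketch below is a minimal illustration; all input values are hypothetical and invented for the example:

```python
import math

# Hypothetical inspection results (illustrative values only)
units_inspected = 1200        # n: total units inspected
defective_units = 36          # X: units with one or more defects
total_defects = 54            # D: all defects found across units
opportunities_per_unit = 5    # defect opportunities defined per unit

# Section 2.2: key attribute metrics
p_hat = defective_units / units_inspected      # proportion nonconforming
u_hat = total_defects / units_inspected        # defects per unit (= lambda-hat)
dpo = total_defects / (units_inspected * opportunities_per_unit)
dpmo = dpo * 1_000_000

# Section 3.1: approximate standard deviation of p-hat (large n)
se_p = math.sqrt(p_hat * (1 - p_hat) / units_inspected)

# Section 3.2: approximate standard deviation of u (large n)
se_u = math.sqrt(u_hat / units_inspected)

print(f"p-hat = {p_hat:.4f}  (SE ≈ {se_p:.4f})")
print(f"u     = {u_hat:.4f}  (SE ≈ {se_u:.4f})")
print(f"DPO   = {dpo:.5f},  DPMO = {dpmo:,.0f}")
```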
---

4. Capability Metrics for Attribute Data

4.1 Proportion Defective and Z Benchmarks

To express capability in a standard way, attribute performance is often converted to an equivalent Z value using the normal distribution. Given an observed proportion defective p (or a target allowable proportion), one may use:

- Z for nonconformities (short-term or long-term): Z = Φ⁻¹(1 − p)
  - Φ⁻¹ is the inverse standard normal cumulative distribution.

Interpretation:

- Lower p → higher Z (better capability).
- Z expresses how many standard deviations the process performance is from the “defect boundary” on a standard normal scale.

Considerations:

- Attribute data are not continuous; mapping to Z assumes an underlying normal model.
- The conversion is a benchmarking tool, not a direct physical property of the process.

4.2 DPMO and Z

When opportunities for defects are considered:

- DPO = defects / opportunities
- DPMO = DPO × 1,000,000

Then:

- p = DPO (probability of a defect per opportunity).
- Use p to compute Z as above: Z = Φ⁻¹(1 − p)

DPMO and Z allow comparing:

- Different processes with different numbers of opportunities.
- Defect performance across units, product types, or stages.

4.3 Yield Measures

Yield measures describe the chance that a unit passes through a step or process without defects.

- First pass yield (FPY)
  - Proportion of units that pass a step without rework or defects.
- Rolled throughput yield (RTY)
  - Product of yields over multiple steps: RTY = Y₁ × Y₂ × … × Yₖ
- Overall yield from DPMO
  - Yield ≈ 1 − DPO (when DPO is small).
  - More precisely, if defects per unit follow a Poisson rate λ: Yield = P(0 defects) = e^(−λ)

Yield measures are closely related to attribute capability, especially when assessing multiple steps or stages. These conversions are sketched in code after section 5 below.

---

5. Control Charts in Attribute Capability Context

While capability and control are conceptually distinct, control charts for attribute data provide critical inputs to capability assessment.

5.1 Using Control Charts to Validate Capability Assumptions

Before computing capability:

- Confirm the process is stable (only common cause variation).
- Use attribute control charts to check stability.

Common attribute charts:

- p chart: tracks proportion defective when sample size may vary.
- np chart: tracks the number of defective units when sample size is constant.
- c chart: tracks the number of defects per constant area/unit.
- u chart: tracks defects per unit when the inspected area or sample size may vary.

Stable charts suggest that process performance metrics (p, u, DPMO, Z) are consistent and meaningful for capability assessment.

5.2 Linking Charts to Capability Measures

For a stable process:

- Use the average p̄ from a p chart as the best estimate of proportion defective.
- Use the average ū from a u chart as the best estimate of defects per unit.

These estimates then support:

- Calculation of DPO and DPMO.
- Conversion to Z benchmarks.
- Comparison against customer or specification targets for allowable defect levels.
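As flagged in section 4.3, here is a minimal Python sketch of the Z-benchmark and yield conversions from section 4. It assumes SciPy is available for the inverse normal CDF (scipy.stats.norm.ppf); all input values are hypothetical:

```python
import math
from scipy.stats import norm

# Hypothetical performance data (illustrative only)
units = 1200
defects = 54
opportunities_per_unit = 5
step_yields = [0.98, 0.995, 0.97]    # per-step first pass yields for RTY

# Section 4.2: DPO and DPMO
dpo = defects / (units * opportunities_per_unit)
dpmo = dpo * 1_000_000

# Section 4.1: Z benchmark, Z = Phi^-1(1 - p) with p = DPO
z_bench = norm.ppf(1 - dpo)

# Section 4.3: yield measures
u_hat = defects / units              # defects per unit (Poisson rate lambda)
poisson_yield = math.exp(-u_hat)     # P(0 defects) = e^(-lambda)
rty = math.prod(step_yields)         # rolled throughput yield

print(f"DPO = {dpo:.5f}, DPMO = {dpmo:,.0f}, Z = {z_bench:.2f}")
print(f"Unit yield = {poisson_yield:.4f}, RTY = {rty:.4f}")
```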
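To connect the charts in section 5 to the capability metrics, the next sketch computes standard 3-sigma p-chart limits from hypothetical daily subgroups and reuses the resulting p̄ as the capability input (the subgroup counts are invented for illustration):

```python
import math

# Hypothetical subgroups: (units inspected, defective units) per day
subgroups = [(200, 6), (180, 4), (220, 9), (210, 5), (190, 7)]

total_n = sum(n for n, _ in subgroups)
total_x = sum(x for _, x in subgroups)
p_bar = total_x / total_n            # centerline and capability estimate

for day, (n, x) in enumerate(subgroups, start=1):
    # Per-subgroup 3-sigma limits; n varies, so limits vary by day
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)   # LCL floored at zero
    flag = "ok" if lcl <= x / n <= ucl else "OUT"
    print(f"Day {day}: p = {x / n:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f} [{flag}]")

print(f"p-bar = {p_bar:.4f}  (use as proportion defective if the chart is stable)")
```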
---

6. Confidence Intervals and Risk in Capability Statements

6.1 Confidence Intervals for Proportion Defective

Capability estimates based on samples have uncertainty. For a sample of size n with X defective units:

- p̂ = X / n
- Approximate large-sample confidence interval: p̂ ± zα/2 × √[p̂(1 − p̂) / n]

When p̂ is very small or n is modest, more accurate intervals (e.g., Clopper–Pearson) are recommended, but the core idea remains:

- Capability statements should reflect sampling variability.
- Decision-making should consider whether observed performance is statistically distinguishable from targets.

6.2 Confidence Intervals for Defects per Unit

For Poisson data with total defects D over n units:

- û = D / n
- Approximate standard error: SE(û) ≈ √(û / n)
- Approximate confidence interval: û ± zα/2 × √(û / n)

These intervals help:

- Evaluate whether the defect rate is consistent with requirements.
- Assess improvement after process changes.

6.3 Type I and Type II Risks

In capability analysis for attribute data:

- Type I risk (α): concluding the process meets capability when it does not, or vice versa, depending on the test setup.
- Type II risk (β): failing to detect that the process capability is inadequate (or improved), given a true change.

These risks are controlled by:

- Sample size selection.
- The chosen significance level.
- The magnitude of the difference from the target that one seeks to detect.
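The interval formulas in sections 6.1 and 6.2 can be checked numerically. This sketch uses SciPy's norm and beta distributions (the beta quantiles give the Clopper–Pearson exact interval); the sample counts are hypothetical:

```python
import math
from scipy.stats import norm, beta

alpha = 0.05
z = norm.ppf(1 - alpha / 2)   # ≈ 1.96 for 95% confidence

# Section 6.1: proportion defective (hypothetical sample)
n, x = 1200, 36
p_hat = x / n
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Wald 95% CI for p: ({p_hat - half_width:.4f}, {p_hat + half_width:.4f})")

# Clopper-Pearson (exact) interval, preferable for small p-hat or modest n
lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
print(f"Clopper-Pearson 95% CI for p: ({lower:.4f}, {upper:.4f})")

# Section 6.2: defects per unit (hypothetical counts)
units, defects = 1200, 54
u_hat = defects / units
se_u = math.sqrt(u_hat / units)
print(f"Approx. 95% CI for u: ({u_hat - z * se_u:.4f}, {u_hat + z * se_u:.4f})")
```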
---

7. Practical Steps for Attribute Capability Analysis

7.1 Define the Unit, Defect, and Opportunity

- Clearly define:
  - What counts as a unit.
  - What constitutes a defect.
  - What counts as an opportunity for a defect.
- Ensure that:
  - Definitions are consistent across data collection.
  - Opportunities are not double-counted or ambiguous.

7.2 Collect and Validate Data

- Gather data over time under consistent operating conditions.
- Use appropriate sampling:
  - Rational subgroups when using control charts.
  - A sample size sufficient to estimate low defect rates.
- Check:
  - Data entry accuracy.
  - Consistency in inspection methods.
  - Measurement system reliability for classification (attribute MSA).

7.3 Check Stability with Attribute Control Charts

- Choose the appropriate chart:
  - p / np for defective units.
  - c / u for defects per unit.
- Interpret the charts:
  - Identify any points beyond control limits.
  - Look for non-random patterns (runs, trends, cycles).
- If the process is unstable:
  - Investigate and remove special causes.
  - Reassess after stabilization.

7.4 Compute Capability Metrics

Once the process is stable:

- For defective units:
  - p̄ = average proportion defective.
  - Convert to Z and compare to targets.
- For defects per unit or opportunity:
  - ū = average defects per unit.
  - DPO = defects / opportunities.
  - DPMO = DPO × 1,000,000.
  - Convert DPO to Z and evaluate.

Include confidence intervals when sample sizes are limited or when decisions are high-stakes. A consolidated worked sketch of this workflow follows the summary below.

7.5 Interpret and Communicate Results

- Express capability in terms understandable to stakeholders:
  - Percentage defective.
  - DPMO.
  - Z-level.
  - Yield or RTY when multiple steps are involved.
- Align results with customer requirements:
  - Are defect levels acceptable?
  - How much improvement is needed?
- Use capability insights to focus improvement:
  - Identify high-defect steps.
  - Prioritize the defect types contributing most to DPMO.

---

8. Summary

Attribute and discrete capability analysis evaluates process performance when outcomes are counted or classified rather than measured on a continuous scale. Core elements include:

- Clear definitions of units, defects, defectives, and opportunities.
- Use of binomial and Poisson models to describe proportions and counts.
- Capability metrics such as proportion defective, defects per unit, DPO, DPMO, yield, and Z benchmarks.
- Attribute control charts (p, np, c, u) to verify process stability before assessing capability.
- Confidence intervals and awareness of sampling risk when interpreting results.

Mastering these concepts enables rigorous evaluation of processes where quality is expressed in counts and classifications, supporting data-driven improvement focused on reducing defects and enhancing reliability.
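As a consolidated illustration of the workflow in section 7 (referenced in 7.4 above), the following sketch runs the full sequence on hypothetical stable-process data: estimate p̄, derive DPO, DPMO, and Z, and attach a confidence interval. It assumes SciPy for the normal quantiles:

```python
import math
from scipy.stats import norm

# Hypothetical stable-process data (illustrative only)
units, defectives, defects = 2500, 60, 85
opportunities_per_unit = 4
alpha = 0.05

# Step 7.4: capability metrics
p_bar = defectives / units
dpo = defects / (units * opportunities_per_unit)
dpmo = dpo * 1_000_000
z_bench = norm.ppf(1 - dpo)

# Sampling uncertainty on the proportion defective (section 6.1)
z = norm.ppf(1 - alpha / 2)
se = math.sqrt(p_bar * (1 - p_bar) / units)

print(f"p-bar = {p_bar:.4f} ± {z * se:.4f} (95% CI)")
print(f"DPO = {dpo:.5f}, DPMO = {dpmo:,.0f}, Z = {z_bench:.2f}")
```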

Practical Case: Attribute & Discrete Capability

A regional lab network processes biopsy samples from five hospitals. Each sample must have three labels correctly applied on arrival: patient ID, specimen site, and collection date. Each label is either correct or defective (missing, illegible, or wrong).

The lab director sees rising rework in accessioning and delayed results for surgeons who need same-day pathology. Complaints mention “lost” or “late” reports, but the data are anecdotal and mixed with other issues. The Lean Six Sigma team defines a project focused only on incoming labeling quality as an attribute-based process: each sample is counted as either acceptable or defective based on a clear checklist. Each defect type (wrong ID, missing site, missing date, illegible) is a discrete category.

For ten consecutive days, accessioning staff record for every sample:

- total samples received,
- pass/fail against the labeling checklist,
- the discrete defect type(s) if failed.

Using this attribute and discrete defect data, the team:

- calculates baseline labeling capability (proportion defective per day, per hospital, and per defect type),
- plots defects by type and hospital to see patterns,
- identifies Hospital C and “missing specimen site” as the dominant defect category.

They run a short root-cause session with Hospital C’s ward staff and discover ambiguous wording in the electronic order entry and no hard stop for the specimen site field. IT adds a mandatory “specimen site” field and prints it directly on the standard label.

Over the next month, the lab repeats the same attribute data collection, using the same pass/fail checklist and discrete defect categories. Attribute capability improves: Hospital C’s labeling defects drop sharply, and overall accessioning rework time is cut enough to reliably meet same-day reporting for high-priority biopsies.
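A baseline tally like the team's can be summarized in a few lines of Python. The counts below are invented for illustration; only the calculation pattern (proportion defective per hospital, plus a Pareto of discrete defect types) mirrors the case:

```python
from collections import Counter

# Invented tallies: (hospital, defect type or None if the sample passed)
samples = (
    [("A", None)] * 180 + [("A", "illegible")] * 4
    + [("B", None)] * 150 + [("B", "wrong ID")] * 3
    + [("C", None)] * 120 + [("C", "missing site")] * 18
    + [("C", "missing date")] * 5
)

# Proportion defective per hospital (attribute capability baseline)
for hospital in sorted({h for h, _ in samples}):
    total = sum(1 for h, _ in samples if h == hospital)
    fails = sum(1 for h, d in samples if h == hospital and d is not None)
    print(f"Hospital {hospital}: {fails}/{total} defective ({fails / total:.1%})")

# Pareto of discrete defect types across all hospitals
print(Counter(d for _, d in samples if d is not None).most_common())
```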

Practice question: Attribute & Discrete Capability

A software support center tracks ticket resolution as “resolved within SLA” (Y = 1) or “not resolved within SLA” (Y = 0). Over a month, 2,000 tickets were processed, with 40 late resolutions. Assuming a binomial model, and treating each ticket as one opportunity, which is the most appropriate point estimate of process yield and the corresponding DPMO?

A. Yield = 98.0%, DPMO = 20,000
B. Yield = 98.0%, DPMO = 10,000
C. Yield = 99.0%, DPMO = 20,000
D. Yield = 99.0%, DPMO = 10,000

Answer: A

Reason: Defect proportion = 40 / 2,000 = 0.02, so Yield = 1 − 0.02 = 98.0%. With one opportunity per ticket, DPMO = 0.02 × 1,000,000 = 20,000. The other options pair yield and DPMO values that are mathematically inconsistent with 40 defects out of 2,000 units.

---

An assembly line monitors pass/fail results at final inspection (attribute data). Management wants to compare current performance to a historical baseline in which 4% of units failed. Current sample: 400 units, 28 fails. Which is the most appropriate hypothesis test?

A. One-sample t-test on the mean defect rate
B. One-sample proportion (z) test on the failure rate
C. Chi-square goodness-of-fit test with 2 categories
D. 2-proportion (z) test comparing baseline and current

Answer: B

Reason: The parameter of interest is a single population proportion (failure rate) compared to a known historical proportion (4%), with binary attribute data, so a one-sample proportion z-test is appropriate. Option A is for continuous data; C is less efficient than the direct proportion test; D requires two independent samples rather than one sample versus a fixed benchmark.

---

A Black Belt studies defect occurrence (defective vs non-defective) on three shifts (Day, Evening, Night). The goal is to determine whether defect proportions differ by shift. Data: counts of defective and non-defective units per shift. Which analysis is most appropriate?

A. 1-way ANOVA on defect counts by shift
B. 3-proportion (z) test using pairwise comparisons only
C. Chi-square test of independence on a 3×2 contingency table
D. Poisson rate test comparing all three shifts simultaneously

Answer: C

Reason: The data are categorical (shift: 3 levels; outcome: defective/non-defective). To test whether defect status is independent of shift, use a chi-square test of independence on the 3×2 table. A is for continuous responses; B does not provide a single overall test across all three shifts; D is for count rates over exposure time, not a categorical shift-versus-attribute-outcome layout.

---

A contact center tracks the number of calls abandoned per hour. Hours differ in the total number of incoming calls. To compare capability over time, which metric is most appropriate?

A. Proportion abandoned per hour and a p chart
B. Count abandoned per hour and an np chart
C. Abandonment rate per 1,000 incoming calls and a u chart
D. Abandonment rate per incoming call and a c chart

Answer: A

Reason: Each call is an opportunity (abandoned vs not), and the number of calls per hour varies. The appropriate metric is the proportion of abandoned calls (defectives) per hour, tracked with a p chart. np charts require a constant sample size; u and c charts are for defects per unit, not defectives per unit, and are less appropriate when each unit is classified as good/bad.

---

A process allows a maximum of 2 critical defects per unit; more than 2 defects makes the unit defective. Over 1,000 units, there are 1,500 total critical defects, and 200 units exceed 2 defects. Management wants to assess attribute capability based on defective units (those exceeding the limit). How should the Black Belt define the metric?

A. Use 1,500 as the number of defectives and compute p
B. Use 200 as the number of defectives and compute p
C. Use 1,500 as the number of defects and compute c
D. Use 1,500 as the number of defects and 200 as units, then compute u

Answer: B

Reason: For an attribute capability metric based on whether units exceed the defect limit, each unit is either defective or not. There are 200 defective units out of 1,000, so the appropriate attribute measure is the proportion defective (p = 200/1,000 = 0.20). Option A incorrectly treats defects as defectives; C and D are counts of defects per unit and do not directly represent the proportion of units exceeding the limit.
