
1.3.2 Developing Project Metrics

Introduction

Developing project metrics is about translating a project's problem statement and goals into precise, measurable indicators that can be monitored, analyzed, and improved. Strong metrics connect customer needs to process performance and provide an objective basis for decisions throughout a project. This article focuses on the knowledge and practices needed to define, select, and operationalize project metrics in alignment with IASSC Black Belt expectations.

---

Linking Metrics to Project Objectives

Connecting Problem, Goal, and Metrics

Effective metrics start from a clear linkage:

- Problem statement – what is wrong, by how much, and where.
- Project goal – what improvement is targeted, by how much, and by when.
- Metrics – how the problem and goal will be quantified and tracked.

Key practices:

- Translate qualitative issues into measurable characteristics.
- Ensure each critical element in the problem and goal has at least one associated metric.
- Confirm that metrics can be collected reliably within the project's constraints.

Critical to Quality and Project Metrics

Project metrics are often derived from:

- CTQ (Critical to Quality) – features critical to the customer's perception of quality.
- CTC (Critical to Cost) – drivers of cost performance.
- CTD (Critical to Delivery) – drivers of timeliness and responsiveness.

For each critical characteristic:

- Define how it will be measured.
- Specify the direction of improvement (e.g., lower is better, higher is better).
- Set performance targets consistent with the project goal.

---

Types of Project Metrics

Output, Process, and Input Metrics

Metrics can be positioned along the process chain:

- Output metrics (Y)
  - Represent final results or outcomes.
  - Directly tied to project goals (e.g., defect rate, cycle time, customer satisfaction score).
- Process metrics
  - Describe how the process operates internally.
  - Help understand causes and monitor stability (e.g., rework rate, setup time, queue length).
- Input metrics (X)
  - Describe inputs or conditions that influence the process.
  - Often linked to controllable factors (e.g., material properties, temperature, operator training).

Good metric systems:

- Use output metrics to reflect project success.
- Use process and input metrics to diagnose and sustain performance.

Leading and Lagging Metrics

- Lagging metrics
  - Reflect outcomes after they occur.
  - Useful for confirming whether goals are met (e.g., monthly defect percentage).
- Leading metrics
  - Predict or precede outcomes.
  - Useful for proactive control (e.g., percentage of error-proofed steps completed).

A robust project metric set balances:

- Outcome verification (lagging).
- Early warning and prediction (leading).

---

Primary vs. Secondary Metrics

Primary Metrics

Primary metrics directly measure the main project objective. They must be:

- Clearly defined and directly aligned with the problem and goal.
- Sensitive to the changes the project intends to make.
- Measurable with adequate accuracy and frequency.

Examples:

- Defects per million opportunities in a critical process.
- Average lead time from order to delivery.
- First-pass yield at final inspection.

Secondary (Support and Guardrail) Metrics

Secondary metrics support understanding or ensure that improving the primary metric does not create negative side effects.

- Support metrics
  - Help explain changes in the primary metric (e.g., number of process steps automated).
- Guardrail metrics
  - Protect against unintended consequences (e.g., customer satisfaction score while reducing handling time).

When developing metrics:

- Identify at least one guardrail metric to prevent sub-optimizing the system.
- Confirm that changes benefiting the primary metric do not violate critical constraints (cost, safety, compliance, etc.).
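As an illustration, the primary/support/guardrail roles can be captured in a small metric registry so that the "at least one guardrail" guideline is checked automatically. This is a minimal sketch; the metric names and the `Metric` structure are hypothetical, not part of the IASSC body of knowledge.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str       # what is measured
    role: str       # "primary", "support", or "guardrail"
    direction: str  # "lower-is-better" or "higher-is-better"

# Hypothetical metric set for a handling-time reduction project.
metrics = [
    Metric("Average handling time (min)", "primary", "lower-is-better"),
    Metric("Process steps automated (count)", "support", "higher-is-better"),
    Metric("Customer satisfaction score", "guardrail", "higher-is-better"),
]

def has_guardrail(metric_set: list[Metric]) -> bool:
    """Check the guideline that at least one guardrail metric exists."""
    return any(m.role == "guardrail" for m in metric_set)

print(has_guardrail(metrics))  # True
```

A check like this is most useful at project chartering, before any data collection begins.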
---

Metric Selection Criteria

SMART and Beyond

Metrics must be:

- Specific – precisely what is being measured.
- Measurable – quantifiable with available tools and methods.
- Achievable – feasible within practical constraints.
- Relevant – directly connected to the project objective.
- Time-bound – associated with defined time frames.

Additional selection criteria:

- Sensitive – able to detect meaningful changes.
- Stable definition – unlikely to change meaning over time.
- Low ambiguity – interpreted the same way by different people.

Practical Considerations

When choosing metrics, consider:

- Data availability and accessibility.
- Cost and effort of data collection.
- Impact on operations (avoid excessive burden).
- Confidentiality and compliance requirements.

If a theoretically ideal metric is impractical, select a valid proxy with clear justification and document any limitations.

---

Defining Defects, Opportunities, and Units

Operational Definition of a Defect

A defect is any failure to meet a specified requirement. To measure consistently:

- Convert standards or requirements into clear yes/no criteria.
- Specify boundaries (what is included and excluded).
- Provide examples and non-examples to data collectors.
- Clarify how multiple defects within one unit are counted.

An operational definition should allow different people to classify the same situation identically.

Units and Opportunities for Defects

Three foundational concepts:

- Unit – the item or entity being evaluated (e.g., order, invoice, part).
- Defect – a nonconformity for a specific requirement.
- Opportunity – a chance for a defect to occur.

To develop metrics like DPMO (defects per million opportunities):

- Define the unit type clearly (e.g., one customer order).
- List the distinct defect types relevant to the unit.
- Determine the number of opportunities per unit (often equal to the number of defect types or critical steps).
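The standard DPMO formula follows directly from these three concepts: defects divided by (units × opportunities per unit), scaled to one million. A minimal sketch, with made-up counts for illustration:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical example: 150 defects found across 500 orders,
# each order checked against 4 defect types (opportunities per unit).
print(dpmo(150, 500, 4))  # 75000.0
```

Note how the result depends directly on the opportunity count: doubling opportunities halves DPMO, which is why the guidelines below warn against inflating opportunities.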
Definition guidelines:

- Avoid inflating opportunities without justification.
- Keep opportunities meaningful and consistent across units.
- Document rules for counting and categorizing defects.

---

Metric Scales and Data Types

Discrete vs. Continuous Metrics

Choosing the appropriate type of metric is critical for analysis:

- Continuous data
  - Measurements on a scale with meaningful intervals (e.g., time, weight, temperature).
  - Support powerful statistical tools and finer resolution.
- Discrete data
  - Counts or categories (e.g., number of defects, pass/fail, type of error).
  - Require different analytical and charting methods.

When possible, favor continuous metrics for primary measures, while acknowledging that some characteristics inherently yield discrete data.

Attribute Metrics and Proportions

Common attribute metrics include:

- Defect count per unit – number of defects found per inspected unit.
- Proportion defective – fraction or percentage of units with at least one defect.
- Rate of occurrence – defects per time, volume, or transactions.

Clarity in these definitions is essential to avoid mixing different types of measures or misinterpreting trends.

---

Metric Operational Definitions

Components of an Operational Definition

An operational definition translates a conceptual metric into a precise measurement procedure. For each metric, define:

- What is measured (characteristic and unit).
- How it is measured (instrument, method, calculation).
- Who measures it (roles or teams responsible).
- Where it is measured (process stage, location).
- When and how often it is measured (timing, sampling frequency).
- Decision rules (thresholds, rounding, classification rules).

An effective operational definition ensures repeatable, reproducible, and consistent data across time and observers.

Documentation and Standardization

For every key metric:

- Document the definition in a metric sheet or data dictionary.
- Include formulas, examples, and boundary cases.
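One hedged sketch of how a data-dictionary entry might record these components and be checked for completeness. The field names and the sample metric are illustrative assumptions, not a prescribed template:

```python
# Illustrative data-dictionary entry; the fields mirror the
# what / how / who / where / when / decision-rules components above.
metric_definition = {
    "name": "First-pass yield",
    "what": "Percent of units passing final inspection without rework",
    "how": "count(passed first time) / count(inspected) * 100",
    "who": "Quality technician on duty",
    "where": "Final inspection station",
    "when": "Per shift, all inspected units",
    "decision_rules": "Round to one decimal; any rework counts the unit as failed",
}

REQUIRED_FIELDS = {"what", "how", "who", "where", "when", "decision_rules"}

missing = REQUIRED_FIELDS - metric_definition.keys()
print(missing)  # set() -- every component is documented
```

A completeness check like this makes gaps in a metric sheet visible before data collection starts, rather than after inconsistent data has accumulated.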
- Align documentation with standard operating procedures where applicable.
- Train data collectors and confirm understanding with practice cases.

---

Baselines and Targets

Establishing Baseline Performance

Baselines describe the current performance level before improvement. For primary metrics:

- Collect data reflecting typical performance, not unusual conditions.
- Use enough data to capture variation across time, shifts, or conditions.
- Summarize using appropriate statistics (e.g., mean, median, standard deviation, proportions).

Baselines:

- Provide a reference for measuring project impact.
- Reveal current variation patterns.
- Support realistic target setting.

Setting Targets and Specifications

Targets define the desired performance level:

- Improvement magnitude – how much change is expected from the baseline.
- Time frame – by when the target must be reached.
- Constraints – quality, cost, delivery, compliance limits.

Targets should:

- Be challenging yet feasible.
- Align with customer or internal requirements.
- Be expressed in the same units and definition as the metric.

If specification limits exist (upper or lower), clarify how they relate to project targets and metrics.

---

Sampling and Measurement Strategy for Metrics

Determining What and How Much to Sample

Project metrics often require sampling rather than full inspection. When designing the sampling plan:

- Clarify the population (products, transactions, time periods).
- Choose sampling units and intervals (e.g., per shift, per day, per batch).
- Ensure sampling covers all relevant sources of variation.
- Aim for a sufficient sample size to detect meaningful changes in the metric.

Sampling strategies must be consistent with the operational definitions and practical constraints of data collection.

Frequency and Timeliness

Metric usefulness depends on timely data:

- Set collection frequency based on how quickly the process can change.
- Balance responsiveness with effort and cost.
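A baseline summary of the kind described above can be produced with Python's standard library. A minimal sketch, using invented cycle-time observations:

```python
import statistics

# Hypothetical baseline sample of transaction cycle times (minutes),
# collected across several shifts to capture typical variation.
cycle_times = [12, 15, 14, 18, 13, 16, 14, 20, 15, 13]

baseline = {
    "mean": statistics.mean(cycle_times),
    "median": statistics.median(cycle_times),
    "stdev": statistics.stdev(cycle_times),  # sample standard deviation
}

print(baseline["mean"], baseline["median"])  # 15.0 14.5
```

Reporting both the mean and the median is a cheap way to spot skew in the baseline; a median well below the mean often signals a long right tail of slow transactions.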
- Ensure that analysis cycles (e.g., weekly review) match data availability.

Timely metrics enable early detection of issues and faster learning during improvement efforts.

---

Ensuring Metric Quality

Measurement System Considerations

Metric credibility depends on the measurement system. Key aspects:

- Accuracy – closeness to the true value.
- Precision – repeatability and reproducibility.
- Resolution – smallest detectable change.
- Stability – consistency over time.

Before relying on a metric:

- Verify that instruments and procedures are appropriate for the required accuracy.
- Confirm that variation in measurement is small relative to process variation.
- Standardize data entry and coding practices.

Consistency Over Time

To maintain metric integrity:

- Avoid changing definitions during the project without careful control.
- If a change is required, clearly mark the change point and interpret comparisons cautiously.
- Provide ongoing guidance to data collectors and reviewers to minimize drift in practice.

---

Metric Visualization and Communication

Selecting Appropriate Displays

Although the focus here is on development rather than analysis, initial planning should consider how metrics will be displayed and communicated:

- Choose formats that make changes in the metric visible over time (e.g., trends).
- Present primary, support, and guardrail metrics together when evaluating project impact.
- Use consistent scales and labels aligned with the operational definitions.

Visualization requirements can influence how detailed and frequent measurements need to be.

Interpreting and Acting on Metrics

Metrics are valuable only if they inform action:

- Establish decision rules tied to metric values (e.g., thresholds for investigation).
- Define who reviews each metric, how often, and what responses are expected.
- Make it clear which metrics indicate success, which signal risk, and which help diagnose causes.
This closes the loop between metric development and ongoing use in project management.

---

Summary

Developing project metrics involves translating a project's problem and goals into a coherent, precise, and practical measurement system. Effective metrics:

- Are directly linked to customer and business needs through clearly defined CTQ, CTC, and CTD characteristics.
- Distinguish between output, process, and input measures, and balance leading and lagging indicators.
- Use primary metrics to capture core objectives and secondary metrics to provide support and guardrails.
- Rely on precise operational definitions for units, defects, and opportunities, with clearly specified data types.
- Include well-established baselines and realistic, time-bound targets.
- Are supported by thoughtful sampling, consistent measurement systems, and clear documentation.
- Are designed from the outset to be interpreted, visualized, and used to drive informed decisions.

Mastering these elements ensures that project metrics provide accurate, actionable insight throughout improvement efforts.

Practical Case: Developing Project Metrics

A regional hospital's outpatient lab faces mounting complaints about long wait times and delayed test results. Senior leadership asks a Lean Six Sigma team to "improve turnaround time quickly" but offers no clear measures.

Context and Problem

The lab manager reports that patients "often wait too long," clinicians "don't trust result timing," and staff "feel overloaded." Each stakeholder uses a different definition of "on time," making performance unclear and improvement efforts scattered.

Applying Developing Project Metrics

The Black Belt leads a two-week Define–Measure phase focused on metrics:

1. Clarify Voice of the Customer
   - Patients: want to leave quickly after registration.
   - Clinicians: want results ready before follow-up consultations.
   - Finance: wants fewer appointment cancellations due to delays.

2. Translate to Critical Metrics
   The team agrees on:
   - Primary outcome metric: Lab Turnaround Time (TAT) = time from patient registration in the lab to result released in the electronic record.
   - Secondary metrics:
     - Patient Wait Time in lobby = registration complete to sample collection start.
     - Clinician On-Time Availability = percent of ordered tests with results available before the scheduled consultation.
   - Process metrics:
     - Specimen Collection Cycle Time = sample collection start to sample labeled and sent to analyzer.
     - Analyzer Queue Time = sample received at analyzer to test start.

3. Define Operational Definitions
   - TAT measured in minutes, 24/7, excluding system outages longer than 30 minutes.
   - Clock starts at the registration time stamp; ends at the time stamp of the result marked "finalized" in the system.
   - All metrics reported by test type and shift.

4. Set Targets and Data Plan
   - Baseline: collect four weeks of automatic time stamps from the LIS.
   - Reporting: daily dashboards for the team; weekly summary for leadership.
   - Targets: reduce median TAT by an agreed percentage; increase Clinician On-Time Availability to a defined level.
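The TAT operational definition described in the case can be sketched as a small computation over system time stamps. The time stamps below are invented, and the function is a simplification: it omits the outage exclusion and assumes both time stamps come from the same clock, which a real LIS extract would need to guarantee.

```python
from datetime import datetime

def turnaround_minutes(registered: datetime, finalized: datetime) -> float:
    """TAT in minutes: registration time stamp to result 'finalized' time stamp."""
    return (finalized - registered).total_seconds() / 60

# Hypothetical sample: registered at 08:05, result finalized at 09:20 the same day.
tat = turnaround_minutes(datetime(2024, 3, 1, 8, 5), datetime(2024, 3, 1, 9, 20))
print(tat)  # 75.0
```

Defining the metric as code like this forces the start and stop events to be stated unambiguously, which is exactly what the operational definition is meant to achieve.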
Result

Within a month, the new metrics expose that most delay occurs between specimen collection and analyzer loading on the evening shift, not in the testing itself. Improvement actions focus on staffing patterns and batching practices rather than buying new equipment. Three months later, dashboards show:

- Shorter and more predictable TAT across shifts.
- Fewer patient complaints about "waiting for results."
- Clinicians consistently receiving results before follow-ups.

Because metrics were clearly defined, aligned with customer needs, and automatically captured, the team can sustain and further refine improvements.

Practice Questions: Developing Project Metrics

A Black Belt is defining primary metrics for a defect-reduction project. Which of the following best represents a well-defined Y metric?

A. Number of units produced per operator per hour
B. Percent of units with at least one critical defect per day
C. Total labor hours scheduled per week
D. Number of machines available in the work cell

Answer: B
Reason: A Y metric should directly reflect the project objective, be customer-focused, and be measurable. "Percent of units with at least one critical defect per day" clearly quantifies the defect level over a defined time period and ties directly to quality performance. The other options describe capacity or resources, not the primary quality outcome of interest.

---

During the Measure phase, a Black Belt selects both DPMO and First Pass Yield (FPY) as project metrics. What is the primary reason for using both metrics together?

A. To ensure that the project has both leading and lagging indicators
B. To capture both defect-based and unit-based views of performance
C. To reduce the need for a detailed process map
D. To avoid calculating sigma level for the process

Answer: B
Reason: DPMO gives a defect-based view (defects per million opportunities), while FPY provides a unit-based view (the proportion of units passing without rework). Using both provides a more complete picture of process performance. The other options misstate the function of these metrics or imply avoiding other core tools.

---

A team is tracking cycle time as a key process Y. Baseline data show a mean of 14 minutes and a standard deviation of 4 minutes. The customer requires 95% of transactions to be completed within 20 minutes. Assuming normality and no shift in the mean, what is the approximate current performance against the requirement?

A. The process already meets the requirement
B. About 84% of transactions meet the requirement
C. About 93% of transactions meet the requirement
D. The process cannot be evaluated without a control chart

Answer: C
Reason: Z = (20 − 14) / 4 = 1.5. For a normal distribution, the proportion falling below the mean plus 1.5 standard deviations is about 93.3%, so roughly 93% of transactions meet the 20-minute limit, which falls short of the 95% requirement. Option B corresponds to Z = 1, option A overstates performance, and a control chart is not needed to estimate this proportion under the stated assumptions.

---

A Black Belt needs to select an operational definition for the metric "Order Accuracy." Which definition best supports reliable data collection and analysis?

A. Number of orders shipped per day
B. Percent of orders requiring any correction after shipment
C. Number of customer complaints received each month
D. Total revenue from correctly shipped orders

Answer: B
Reason: "Percent of orders requiring any correction after shipment" is specific, binary at the unit level (accurate/inaccurate), and leads to consistent, repeatable measurement aligned with accuracy. The other options reflect volume, complaints, or financial outcomes rather than a direct, clearly operationalized measure of order accuracy.

---

In developing project metrics for a lead-time reduction project, a Black Belt must define a leading process metric aligned with the CTQ Y (total lead time). Which of the following is the most appropriate leading metric?

A. Number of customer orders received per day
B. Average queue time before processing in the longest wait step
C. Total number of employees assigned to the process
D.
Monthly on-time delivery percentage

Answer: B
Reason: Queue time in the longest wait step is a process driver that precedes and strongly influences total lead time, so it serves as a leading metric closely aligned with the CTQ. The other options represent workload, resources, or an outcome metric that lags the process and does not act as an upstream driver of lead time.
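The normal-distribution arithmetic in the cycle-time question can be verified with Python's standard library, using the identity that the standard normal CDF equals 0.5 · (1 + erf(z/√2)):

```python
from math import erf, sqrt

def normal_cdf(x: float, mean: float, stdev: float) -> float:
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mean) / (stdev * sqrt(2))))

z = (20 - 14) / 4               # standardize the 20-minute requirement
p = normal_cdf(20, mean=14, stdev=4)
print(z, round(p, 3))  # 1.5 0.933
```

This confirms that roughly 93.3% of transactions fall within the 20-minute limit, below the 95% requirement.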
