
2.3.1 Precision & Accuracy

Introduction

This article explains precision and accuracy in a practical, statistically rigorous way. The focus is on how to understand, evaluate, and improve measurement quality so that data-based decisions are trustworthy.

Precision and accuracy are fundamental to:

- Valid measurements
- Reliable data analysis
- Correct process conclusions
- Effective improvement actions

The goal is to understand each concept clearly, see how they differ, and know how to evaluate them in real data.

---

Core Concepts

Accuracy: Closeness to the True Value

Accuracy describes how close measurements are to the true value (or accepted reference value).

- High accuracy: Average measurement is very close to the true value.
- Low accuracy: Measurements are systematically off target (too high or too low).

Accuracy is about systematic error, often called bias.

Key points:

- A measuring system can be accurate even if individual readings vary somewhat, as long as their average is close to the true value.
- In practice, the exact true value is usually unknown, so a reference standard or best available estimate is used.

Precision: Closeness of Repeated Measurements

Precision describes how close repeated measurements are to each other, regardless of whether they are near the true value.

- High precision: Values are tightly clustered together.
- Low precision: Values are spread out or inconsistent.

Precision is about random error, often reflected in variation or spread.

Key points:

- A measuring system can be precise but wrong (consistently off target).
- Precision is largely captured in the standard deviation or similar measures of variability.

Four Classic Combinations

Conceptually, using a target metaphor:

- High accuracy, high precision: Tight cluster around the true value. Best case.
- High accuracy, low precision: Average on target, but data scattered.
- Low accuracy, high precision: Tight cluster off target (systematic bias).
- Low accuracy, low precision: Scattered and off target; poor measurement quality.

Understanding these combinations helps diagnose what kind of measurement problem exists.

---

Bias and Systematic Error

Definition of Bias

Bias is the difference between the expected (average) measurement and the true value.

- Positive bias: Measurements are systematically too high.
- Negative bias: Measurements are systematically too low.
- Zero bias: On average, measurements align with the true value.

Bias directly reduces accuracy but does not necessarily affect precision.

Sources of Bias

Common sources include:

- Instrument calibration: Scale or device not zeroed or calibrated correctly.
- Method error: Inconsistent procedures, incorrect technique, or wrong measurement method.
- Environmental factors: Temperature, humidity, vibration, or lighting affecting readings.
- Human factors: Consistent reading habits (e.g., always rounding up).

Identifying and reducing bias is essential for achieving accurate measurements.

Detecting and Evaluating Bias

Bias is typically evaluated by comparing measurements to a known standard or reference. Common approaches:

- Measure a reference standard multiple times and compare the average to the stated value.
- Use control samples or check pieces with known properties.
- Plot measured values versus reference values and look for a systematic offset.

Statistical indicators:

- Mean error: Average difference between measured and reference values.
- Confidence intervals around the mean error, to judge whether bias is statistically significant.

---

Random Error and Precision

Random Variation in Measurement

Random error causes measurements to vary unpredictably around some central value. Characteristics:

- Occurs in both directions (higher and lower than the center).
- Cannot be completely removed, but can be reduced.
- Reflected in the spread of data, not its central location.
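The bias indicators described above (mean error plus a confidence interval around it) can be sketched in Python. This is a minimal illustration, not a prescribed procedure; the readings, reference value, and t critical value are assumed for the example:

```python
import statistics
from math import sqrt

def evaluate_bias(readings, reference, t_crit=2.776):
    """Estimate bias (mean error) against a known reference value,
    with a confidence interval around it. t_crit is the two-sided
    t critical value for the chosen confidence level and n - 1
    degrees of freedom (2.776 gives ~95% confidence for n = 5)."""
    errors = [x - reference for x in readings]
    mean_error = statistics.mean(errors)           # estimated bias
    s = statistics.stdev(errors)                   # spread of the errors
    half_width = t_crit * s / sqrt(len(errors))    # CI half-width
    ci = (mean_error - half_width, mean_error + half_width)
    # Bias is statistically significant if the interval excludes zero.
    significant = not (ci[0] <= 0.0 <= ci[1])
    return mean_error, ci, significant

# Five repeated readings of a standard whose certified value is 50.0
bias, ci, significant = evaluate_bias(
    [50.4, 50.6, 50.5, 50.7, 50.3], reference=50.0)
# bias is about +0.5 and the interval excludes zero:
# a statistically significant positive bias
```

If the interval contained zero, the apparent offset could plausibly be random error rather than bias, and recalibration would not yet be justified by the data.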
Statistical Measures of Precision

Common precision-related metrics:

- Standard deviation (σ): Quantifies average spread around the mean.
- Variance (σ²): Square of the standard deviation.
- Range: Difference between the maximum and minimum result in a set.

Higher precision means lower standard deviation, lower variance, and smaller ranges.

Repeatability vs Reproducibility

Precision has two main components in practice:

- Repeatability: Variation when:
  - Same operator
  - Same instrument
  - Same method
  - Same conditions
  - Short time frame
- Reproducibility: Variation when:
  - Different operators
  - And/or different instruments
  - And/or different locations/conditions

Both repeatability and reproducibility contribute to overall precision. Poor precision can arise from either or both.

---

Relationship Between Precision, Accuracy, and Total Error

Total Measurement Error

Total measurement error combines systematic and random components:

- Systematic error (bias) affects accuracy.
- Random error (imprecision) affects precision.

Good measurement systems minimize both.

Impact on Data and Decisions

Poor accuracy or precision can lead to:

- Misleading process capability indices.
- Incorrect hypothesis test conclusions.
- Faulty estimates of process mean and variation.
- Wrong decisions about improvements, specification limits, or control strategies.

Key implications:

- High bias can make a process seem good or bad when it is not.
- High random error can hide real changes or falsely suggest changes that are not real.

Precision and Accuracy in Process Capability Analysis

In capability analysis (e.g., Cp, Cpk), measurement quality is critical:

- Overstated variation (poor precision) leads to underestimated capability.
- Biased measurements (poor accuracy) misrepresent how centered the process is within specifications.

Capability measures are reliable only when:

- Measurement precision is adequate relative to the specification width.
- Measurement bias is minimal or corrected.
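The capability effect above can be sketched numerically, under the usual assumption that independent variances add (observed σ² = process σ² + gauge σ²). The spec limits and sigma values below are hypothetical:

```python
import math

def observed_sigma(sigma_process, sigma_gauge):
    """Independent variances add: the recorded spread combines the
    true process spread with the gauge's random measurement error."""
    return math.sqrt(sigma_process**2 + sigma_gauge**2)

def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sigma)

# Hypothetical process: spec limits 94-106, true process sigma = 1.0
usl, lsl = 106.0, 94.0
true_cp = cp(usl, lsl, 1.0)

# A gauge with sigma = 0.75 inflates the observed spread,
# so the estimated Cp understates the true capability.
sigma_obs = observed_sigma(1.0, 0.75)
observed_cp = cp(usl, lsl, sigma_obs)
```

With these numbers the observed Cp is noticeably lower than the true Cp, which is exactly the "underestimated capability" effect described above.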
---

Assessing Precision & Accuracy in Practice

Visual Tools

Graphical methods help diagnose measurement issues:

- Run charts of repeated measurements on a stable reference part.
- Histograms of repeated measurements to see spread and centering.
- Scatter plots of measured vs reference values to detect bias and nonlinearity.

Visual checks guide deeper statistical evaluation.

Using Repeated Measurements

Repeated measurements of the same item support evaluation of precision and, when a reference is known, accuracy. Key practices:

- Take multiple measurements under identical conditions to estimate repeatability.
- Take repeated measurements under varied conditions (different operators, shifts, equipment) to estimate reproducibility.
- Compare the mean to the reference to evaluate bias.
- Use the standard deviation or range to quantify precision.

Considerations Related to Resolution

Measurement resolution influences apparent precision:

- Resolution: The smallest increment the instrument can display or distinguish.
- If resolution is too coarse relative to the required tolerance:
  - Many readings may repeat the same value.
  - Real variability may be hidden.
  - Process variation estimates may be distorted.

An instrument should have sufficient resolution relative to the process tolerance to support meaningful precision assessment.

---

Improving Precision & Accuracy

Improving Accuracy (Reducing Bias)

Common actions:

- Calibrate instruments against standards and adjust as needed.
- Standardize measurement procedures and train personnel.
- Control environmental factors (temperature, humidity, vibration).
- Use appropriate measurement methods for the characteristic being measured.
- Periodically verify performance with known standards or control samples.

Goal: Bring the average measurement into alignment with the true or reference value.

Improving Precision (Reducing Random Error)

Common actions:

- Use higher-quality or more suitable instruments.
- Ensure consistent measurement setup and fixturing.
- Refine procedures to reduce operator influence.
- Improve training to reduce handling differences.
- Reduce environmental variability that amplifies random error.

Goal: Reduce variability among repeated measurements so they cluster tightly.

Balancing Practical Constraints

Not all measurement systems can be made perfectly precise and accurate. Practical considerations:

- Required precision is driven by:
  - Specification width
  - Process variation
  - Decision risk
- Trade-offs sometimes occur between cost, speed, and measurement quality.

The essential requirement is that measurement error must be small enough that data-based conclusions remain valid.

---

Application to Data Analysis and Improvement

Interpreting Patterns in Data

When analyzing process data, keep in mind:

- Apparent shifts or trends may reflect changes in measurement bias, not real process shifts.
- Increased spread in the data may reflect worsening precision, not real process deterioration.
- Different operators or instruments may produce systematically different data profiles.

Always question whether observed patterns might arise from measurement issues.

Guarding Against Misinterpretation

To avoid incorrect conclusions, confirm that the measurement system's precision and accuracy are adequate before:

- Comparing processes
- Drawing conclusions from control charts
- Conducting hypothesis tests or regression analyses
- Estimating capability indices

If measurement error is large compared to process variation or specifications, data-driven tools may give misleading results.

Ongoing Monitoring of Measurement Quality

Measurement systems require continued attention:

- Periodically recheck bias using standards or known samples.
- Track repeat measurements over time to detect degradation in precision.
- Review any changes in instruments, methods, or environment that might affect measurement behavior.
Sustaining accurate and precise measurement protects the integrity of all subsequent analyses.

---

Summary

Precision and accuracy describe two distinct but related aspects of measurement quality:

- Accuracy: Closeness of measured values to the true value, governed by bias and systematic errors.
- Precision: Closeness of repeated measurements to each other, governed by random error and variation.

High-quality measurement systems:

- Minimize bias to ensure accurate central values.
- Minimize random error to ensure consistent, precise readings.

Effective analysis and improvement efforts depend on understanding and managing both precision and accuracy so that process data truly reflect process reality.

Practical Case: Precision & Accuracy

A medical device plant produces disposable insulin syringes. Nurses report that some syringes deliver slightly more or less insulin than intended, causing dosing concerns. The quality engineer reviews fill-volume test data from an automated measurement station. Individual syringes often show very similar readings within each test run (clustered tightly), but the average value is consistently higher than the target dose. The team concludes the process is precise but not accurate: the filling equipment repeats the same error every time.

The engineer adjusts the machine's calibration, then re-runs tests on multiple shifts with different operators. After calibration, results from each test run are not only tightly clustered but also centered on the target dose. The line maintains a control chart to monitor both the spread of results (precision) and their alignment with the target (accuracy), and complaint rates from nurses drop to near zero.
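The case above can be sketched numerically. All readings and tolerance thresholds here are hypothetical, chosen only to reproduce the "precise but not accurate" pattern and its resolution:

```python
import statistics

def assess(readings, target, bias_tol, sd_tol):
    """Classify repeated measurements as precise and/or accurate,
    using hypothetical tolerances for bias and standard deviation."""
    bias = statistics.mean(readings) - target
    sd = statistics.stdev(readings)
    return {"precise": sd <= sd_tol, "accurate": abs(bias) <= bias_tol}

# Hypothetical fill volumes (insulin units), target dose 10.0
before = [10.32, 10.29, 10.31, 10.30, 10.28]   # tight cluster, but high
after  = [10.01,  9.99, 10.00, 10.02,  9.98]   # tight cluster, on target

result_before = assess(before, target=10.0, bias_tol=0.05, sd_tol=0.05)
result_after  = assess(after,  target=10.0, bias_tol=0.05, sd_tol=0.05)
# result_before: precise but not accurate
# result_after: both precise and accurate
```

Calibration changed the centering of the readings without changing their spread, which is why only the "accurate" classification flips.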

Practice Questions: Precision & Accuracy

A measurement system yields values that are tightly clustered but consistently 1.5 units higher than the known reference standard. How should this measurement system be characterized?

A. High precision, low accuracy
B. Low precision, high accuracy
C. Low precision, low accuracy
D. High precision, high accuracy

Answer: A

Reason: The results are tightly clustered (low variation → high precision) but systematically offset from the true value (bias → low accuracy). The other options misclassify the combination of clustering (precision) and offset (accuracy).

---

A process characteristic has a true value of 50. A gauge gives readings: 49.8, 50.1, 49.9, 50.2, 49.7. A second gauge gives: 45, 55, 52, 48, 50. Which statement best describes the first gauge relative to the second?

A. Lower precision and higher accuracy
B. Higher precision and lower accuracy
C. Higher precision and higher accuracy
D. Lower precision and lower accuracy

Answer: C

Reason: The first gauge has low spread around 50 (high precision) and is very close to the true value (high accuracy), whereas the second gauge is both more scattered and more biased from 50. The other options misrepresent either the relative variation or the bias between the two gauges.

---

In a Gage R&R study, the repeatability component is found to be large, while the reproducibility component is small. What does this most directly indicate about precision and accuracy?

A. Poor precision due to operator differences, good accuracy
B. Poor precision due to instrument variation, accuracy not directly determined
C. Good precision, poor accuracy due to operator bias
D. Good precision and good accuracy

Answer: B

Reason: Large repeatability indicates substantial instrument-related variation (poor precision of the gauge itself); reproducibility is acceptable, but Gage R&R alone does not establish accuracy because bias versus a reference is not evaluated. The other options incorrectly attribute the issue to operators or infer accuracy from Gage R&R alone.

---

A Black Belt compares measurement data from a gauge to a certified reference standard and finds minimal spread in repeated measurements but a consistent bias of -0.8 units. Which action best improves accuracy without significantly affecting precision?

A. Replace the gauge with one having lower repeatability variation
B. Add a correction factor of +0.8 units to all measurements
C. Increase sample size to reduce measurement variation
D. Reassign measurements to a single experienced operator

Answer: B

Reason: A stable, consistent bias can be corrected by applying a fixed correction factor, which shifts accuracy while leaving precision (spread) essentially unchanged. The other options focus on precision (variation) or operators and do not directly address the systematic bias.

---

A process capability study shows Cp = 1.50 using measurement data, but a subsequent MSA reveals that 40% of total observed variation is due to the measurement system, mainly from poor precision. How does this affect the interpretation of the process capability?

A. The true process capability is likely better than indicated because precision is poor
B. The true process capability is likely worse than indicated because measurement noise inflates spread
C. The process capability remains valid because Cp is unaffected by measurement error
D. Accuracy problems only affect Cpk, not Cp

Answer: A

Reason: Poor precision (high measurement variation) inflates the estimated total variation, so the observed spread overstates the true process spread and the reported Cp of 1.50 understates capability; with the measurement noise removed, the true process capability is better than indicated. The other options either reverse this effect or wrongly assume capability indices are immune to measurement error.
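The correction-factor reasoning in the fourth question can be verified with a short sketch (the readings here are hypothetical, constructed to show a stable -0.8 bias):

```python
import math
import statistics

# Hypothetical repeated readings of a certified 50.0 reference standard
readings = [49.21, 49.18, 49.22, 49.19, 49.20]
reference = 50.0
bias = statistics.mean(readings) - reference      # consistent -0.8 bias

# A fixed +0.8 correction shifts every reading by the same amount:
corrected = [x + 0.8 for x in readings]

# The spread (precision) is unchanged by a constant shift...
same_spread = math.isclose(statistics.stdev(readings),
                           statistics.stdev(corrected))
# ...while the mean moves onto the reference (accuracy improves).
residual_bias = statistics.mean(corrected) - reference
```

Because a constant shift cannot change the standard deviation, the correction improves accuracy while leaving precision exactly as it was.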
