Sample Size Matters: Uncertainty in Measurement

In my previous post, I gave a brief introduction to populations and samples, and stated that sample size impacts our ability to know what a population really looks like. In this post, I want to show this relationship in more detail. In future posts, I will look at how sample size considerations impact our engineering process and what impacts this has on the business.

Mean and sample size

The error in our estimate of the mean, E, is proportional to the standard deviation of the sample, S, and inversely proportional to the square root of the sample size, n: E \propto S/\sqrt{n}.

We can visualize this easily enough by plotting the 95% confidence interval. When we sample and calculate the sample mean (\overline{X}), the true population mean, \mu, (what we really want to know) is likely to be anywhere in the shaded region of the graph below.

Uncertainty in mean

This graph shows the 95% confidence region for the true population mean, \mu; there’s a 95% chance that the true population mean is within this band. The “0” line on the y axis is our estimate of the mean, \overline{X}. We can’t know exactly what the true population mean is, but it’s clear that the more samples we test, the closer our estimate is likely to be to the true mean.
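
As a rough sketch of how this band can be computed in R (standardizing to a sample mean of 0 and a sample standard deviation of 1, which is an assumption for illustration), the half-width of the 95% confidence interval shrinks with the square root of the sample size:

# 95% confidence half-width on the mean for a standardized sample
# (X-bar = 0, S = 1) as a function of the sample size n
n <- 2:100
half.width <- qt(0.975, df = n - 1) / sqrt(n)

# the true mean is likely to lie within +/- half.width of X-bar
head(data.frame(n, lower = -half.width, upper = half.width))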

Standard deviation and sample size

Likewise, when we calculate the sample standard deviation, S, the true standard deviation, \sigma has a 95% chance of being within the confidence band below. For small sample sizes (roughly less than 10), the measured standard deviation can be off from the true standard deviation by several times. Even for ten samples, the potential error is nearly \pm 1 standard deviation.

Uncertainty in standard deviation
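
A corresponding sketch for the standard deviation (again standardized, assuming S = 1) uses the chi-squared distribution; note how wide the interval is for very small n:

# 95% confidence limits on the standard deviation for S = 1,
# as a function of the sample size n
n <- 2:100
sigma.lower <- sqrt((n - 1) / qchisq(0.975, df = n - 1))
sigma.upper <- sqrt((n - 1) / qchisq(0.025, df = n - 1))
head(data.frame(n, sigma.lower, sigma.upper))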

Proportion and sample size

For proportions, the situation is similar: there is a 95% chance that the true population proportion, p, is within the shaded band based on the measured sample proportion \hat{p}. Since this confidence interval depends on \hat{p} and cannot be standardized the way \mu and \sigma can be, confidence intervals for two different proportions are plotted.

Uncertainty in proportion

For small n, proportions data tells us very little.
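
As an illustrative sketch (the \hat{p} of 0.1 and the sample sizes here are arbitrary choices of mine), the exact binomial confidence interval in R shows how slowly the interval tightens:

# 95% confidence intervals on a proportion when p-hat = 0.1,
# for several sample sizes; rows are the lower and upper limits
sapply(c(10, 30, 100, 300), function(n)
  binom.test(round(0.1 * n), n)$conf.int)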

Process capability and production costs

The cost of poor quality in product or process design can be characterized by the process capability index, Cpk:

Cpk = \min\left(\frac{USL-\mu}{3\sigma}, \frac{\mu-LSL}{3\sigma}\right)

where USL is the upper specification limit (also called the upper tolerance) and LSL is the lower specification limit (or lower tolerance).

We can estimate the defect rate (defects per opportunity, or DPO) from the Cpk:

DPO = 1 - \Pr\left(X < 3 \times Cpk - 1.5\right)

That probability function is calculated in R with pnorm(3 * Cpk - 1.5) and in Excel with NORMSDIST(3 * Cpk - 1.5). The 1.5 is a typical value used to account for uncorrected or undetected process drift.
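
For example, a quick sketch of this conversion in R for a few common Cpk values (the 3.4 ppm figure at Cpk = 2 is the familiar “Six Sigma” defect rate):

# defects per opportunity (and ppm) for a range of Cpk values,
# assuming the usual 1.5-sigma long-term shift
Cpk <- c(1.00, 1.33, 1.67, 2.00)
DPO <- 1 - pnorm(3 * Cpk - 1.5)
data.frame(Cpk, DPO, ppm = DPO * 1e6)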

Since we don’t know \mu and \sigma, we have to substitute \overline{X} and S. The uncertainty in these estimates of the population \mu and \sigma means that we are uncertain about the true process Cpk (or defect rate) we will see once we’re in production. When our sample testing tells us that the Cpk should be 1.67 (the blue line), the true process Cpk will actually turn out to be somewhere in the shaded band:

Uncertainty in Cpk

Below the blue line, our product or process is failing to meet customer expectations, and will result in lost customers or higher warranty costs. Above the blue line, we’ve added more cost to the production of the product than we need to, reducing our gross profit margin. Since that gray band doesn’t completely disappear, even at 100 samples, we can never eliminate these risks; we have to find a way to manage them effectively.

The impact of this may be more evident when we convert from Cpk to defect rates (ppm):

Uncertainty in ppm

Summary and a look forward

With a fair sampling process, samples will look similar to—and statistically indistinguishable from—the population that they were drawn from. How much they look like the population depends critically on how many samples are tested. The uncertainties, or errors in our estimates, resulting from sample size decisions have impacts all through our design analysis and production planning.

In the next post, I will explore in more detail how these uncertainties impact our experiment designs.

Sample Size Matters

I find that Six Sigma and Design for Six Sigma courses are often eye-opening experiences for participants. They discover that there are tools available to answer problems that have vexed them, and they learn that good engineering and science decisions can lead directly to good business outcomes through logical steps.

One of the most remarkable such moments is when students realize the importance of sample size. In the best cases, there is a forehead-slapping moment where the student realizes that much of the testing they’ve done in the past has probably been a complete waste of time; that while they thought they were seeing interesting differences and making good decisions, they were in fact only fooling themselves by comparing too-small data sets.

I want to show in the next few blog posts why sample size matters, both from a technical perspective and from a business perspective.

Design example

Throughout the next few posts, I’ll use the example of a manufactured product that the customer requires to weigh at least 100 kg, that sells for about $140 and that costs $120 to manufacture and convert to a sale (the cost of goods sold, or COGS, is $120).

                        Amount ($)
Sales                      140
COGS                       120
  Material                  60
  Labor and Overhead        60
Gross Profit                20

We want to develop a new version of the product, using a modified design and a new process that, by design, will reduce the cost of material by 10%. The old cost of material was 50% of COGS, or $60. To achieve the material cost reduction of 10%, we have to remove $6 in material costs, improving gross profit to $26.

We believe that the current design masses 120 kg, so we estimate that our new part mass should be 120 - 0.1 \times 120 = 108 kg.

                  Current Design   New Design Target
Part Weight /kg        120                108

Seems like we might be done at this point, and I’ve seen plenty of engineering projects that stop here. Unfortunately, this isn’t the whole story. Manufacturing will be unable to produce parts of exactly 108 kg, so they’ll need a tolerance range to check parts against. We have that customer requirement for at least 100 kg, so any variation has to stay above that. We also want to save money relative to the current design, so we don’t want many parts to weigh much more than this, especially since the customer isn’t really willing to pay us for the “extra” material beyond 100 kg.

Population versus sample statistics

Most of process or product improvement is concerned with reducing the standard deviation, \sigma, shifting the mean (a.k.a. average), \mu, or reducing a proportion, p, of a process or product characteristic. These summary statistics refer to the population characteristics—the mean, standard deviation or proportion of all parts of a certain design that will ever be produced, or all times that a production step will ever be completed in the intended manner.

Since we can’t measure the whole population up front—we will be producing parts for a long time—we have to draw a sample from the population, and use the statistics of that sample to gain insight into the total population. We can visualize this, somewhat crudely, with the following:

Sampling from known population

We can imagine that the blue circles are conforming parts, and the orange octagons are non-conforming parts. If the sampling process is fair, then the sample proportion \hat{p} will be close to—and statistically indistinguishable from—the true population proportion p. In the population we have 44 parts total, 8 defective parts and 36 conforming parts. In the sample that we drew, we have 10 parts total, 9 conforming and 1 defective. While p = 8/44 \approx 0.18 \ne \hat{p} = 1/10 = 0.10, statistically we have

library(magrittr)  # provides the %>% pipe used below

matrix(c(1, 8, 10-1, 44-8), ncol=2) %>% 
  chisq.test(simulate.p.value = TRUE)
## 
##  Pearson's Chi-squared test with simulated p-value (based on 2000
##  replicates)
## 
## data:  matrix(c(1, 8, 10 - 1, 44 - 8), ncol = 2)
## X-squared = 0.3927, df = NA, p-value = 0.6692

With such a high p-value (0.67), we fail to reject the null hypothesis that \hat{p} = p; in more colloquial terms, we conclude that the apparent difference between 8/44 and 1/10 is only due to random errors in sampling. (For larger counts of successes and failures, prop.test() would also work and would be more informative.)

From our perspective, of course, we don’t know what the population looks like. We don’t have any way of knowing with certainty—or accessing data about—future performance, so there is no way for us to know what the total population looks like. In lieu of population data, we develop a sampling process that allows us to fairly draw a sample from that population.

Sampling from unknown population

While we want to know the true population mean, \mu, the true population standard deviation, \sigma, or the true population proportion p, we can only calculate the sample mean, \overline{X}, the sample standard deviation, S, or the sample proportion \hat{p}.

From the known sample, we then reason backward to what the true population looks like. This is where statistics comes into play; statistics allows us to place rigorous boundaries on what the population may look like, without fooling ourselves. Sample size is critical to controlling the uncertainty in these boundaries.

Summary and a look forward

Testing in product development—and usually in production—involves sampling a product or process. Samples never look exactly like the population that we are concerned about, but if the sampling process is fair then the samples will be statistically indistinguishable from the population. With due awareness of the statistical uncertainties, we can use samples to make decisions about the population.

In the next post, I will look at how sample size impacts the uncertainty in our estimation of population statistics like the mean and standard deviation. In a later post, I will look at how this uncertainty impacts the business.

A short aside on statistical tests for proportions

The usual way to compare two proportions would be a proportions test (prop.test() in R), but because we have so few samples to compare, the results may be unreliable and prop.test() generates an appropriate warning. fisher.test() provides an exact estimate of the p-value, but the assumptions are violated with data like this, where we are sampling a fixed number of parts (i.e. row sums are fixed, but column sums are not controlled). This leaves us with using a chi-squared test (chisq.test() in R) which is less informative but does the job. Either the Barnard test or Bayesian estimation based on Monte Carlo simulation would be more informative and possibly more robust.

The Most Useful Data Plot You’ve Never Used

Those of us working in industry with Excel are familiar with scatter plots, line graphs, bar charts, pie charts and maybe a couple of other graph types. Some of us have occasionally used the Analysis Pack to create histograms that don’t update when our data changes (though there is a way to make dynamic histograms in Excel; perhaps I’ll cover this in another blog post).

One of the most important steps in data analysis is to just look at the data. What does the data look like? When we have time-dependent data, we can lay it out as a time-series or, better still, as a control chart (a.k.a. “natural process behavior chart”). Sometimes we just want to see how the data looks as a group. Maybe we want to look at the product weight or the cycle time across production shifts.

Unless you have Minitab, R or another good data analysis tool at your disposal, you have probably never used—maybe never heard of—boxplots. That’s unfortunate, because boxplots should be one of the “go-to” tools in your data analysis tool belt. It’s a real oversight that Excel doesn’t provide a good way to create them.

For the purpose of demonstration, let’s start with creating some randomly generated data:
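
The data-generating code wasn’t included here; a minimal stand-in might look like the following (the exact values will differ from the output shown below):

set.seed(1)
df <- data.frame(variable = "group1", value = rnorm(400))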

head(df)
##   variable   value
## 1   group1 -1.5609
## 2   group1 -0.3708
## 3   group1  1.4242
## 4   group1  1.3375
## 5   group1  0.3007
## 6   group1  1.9717
tail(df)
##     variable   value
## 395   group1  1.4591
## 396   group1 -1.5895
## 397   group1 -0.4692
## 398   group1  0.1450
## 399   group1 -0.3332
## 400   group1 -2.3644

If we don’t have much data, we can just plot the points:

library(ggplot2)

ggplot(data = df[1:10,]) +
  geom_point(aes(x = variable, y = value)) +
  coord_flip() +
  theme_bw()

Points plot with few data

But if we have lots of data, it becomes hard to see the distribution due to overplotting:

ggplot(data = df) +
  geom_point(aes(x = variable, y = value)) +
  coord_flip() +
  theme_bw()

Points plot with many data

We can try to fix this by changing some parameters, like adding semi-transparency (alpha blending) and using an open plot symbol, but for the most part this just makes the data points harder to see; the distribution is largely lost:

ggplot(data = df) +
  geom_point(aes(x = variable, y = value), alpha = 0.3, shape = 1) +
  coord_flip() +
  theme_bw()

Points plot with many data using alpha blending

The natural solution is to use histograms, another “go-to” data analysis tool that Excel doesn’t provide in a convenient way:

ggplot(data = df) +
  geom_histogram(aes(x = value), binwidth = 1) +
  theme_bw()

histogram

But histograms don’t scale well when you want to compare multiple groups; the histograms get too short (or too narrow) to really provide useful information. Here I’ve broken the data into eight groups:
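
Again, the regrouping code wasn’t shown; one way to split the same 400 values into eight groups of 50 would be:

df$variable <- paste0("group", rep(1:8, each = 50))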

head(df)
##   variable   value
## 1   group1 -1.5609
## 2   group1 -0.3708
## 3   group1  1.4242
## 4   group1  1.3375
## 5   group1  0.3007
## 6   group1  1.9717
tail(df)
##     variable   value
## 395   group8 -0.6384
## 396   group8 -3.0245
## 397   group8  1.5866
## 398   group8  1.9747
## 399   group8  0.2377
## 400   group8 -0.3468
ggplot(data = df) +
  geom_histogram(aes(x = value), binwidth = 1) +
  facet_grid(variable ~ .) +
  theme_bw()

Stacked histograms

Either the histograms need to be taller, making the stack too tall to fit on a page, or we need a better solution.

The solution is the box plot:

ggplot() +
  geom_boxplot(data = df, aes(y = value, x = variable)) +
  coord_flip() +
  theme_bw()

Stacked boxplots

The boxplot provides a nice, compact representation of the distribution of a set of data, and makes it easy to compare across a large number of groups.

There’s a lot of information packed into that graph, so let’s unpack it:

Boxplot with markup

Median
A measure of the central tendency of the data that is a little more robust than the mean (or arithmetic average). Half (50%) of the data falls below this mark. The other half falls above it.
First quartile (25th percentile) hinge
Twenty-five percent (25%) of the data falls below this mark.
Third quartile (75th percentile) hinge
Seventy-five percent (75%) of the data falls below this mark.
Inter-Quartile Range (IQR)
The middle half (50%) of the data falls within this band, drawn between the 25th percentile and 75th percentile hinges.
Lower whisker
The lower whisker connects the first quartile hinge to the lowest data point within 1.5 * IQR of the hinge.
Upper whisker
The upper whisker connects the third quartile hinge to the highest data point within 1.5 * IQR of the hinge.
Outliers
Any data points more than 1.5 * IQR below the first quartile hinge, or more than 1.5 * IQR above the third quartile hinge, are marked individually as outliers.

We can add additional values to these plots. For instance, it’s sometimes useful to add the mean (average) when the distributions are heavily skewed:

ggplot(data = df, aes(y = value, x = variable)) +
  geom_boxplot() +
  stat_summary(fun.y = mean, geom="point", shape = 10, size = 3, colour = "blue") +
  coord_flip() +
  theme_bw()

Stacked box plots with means

Graphs created in the R programming language using the ggplot2 and gridExtra packages.

References

  1. R Core Team (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.
  2. H. Wickham. ggplot2: elegant graphics for data analysis. Springer New York, 2009.
  3. Baptiste Auguie (2012). gridExtra: functions in Grid graphics. R package version 0.9.1. URL http://CRAN.R-project.org/package=gridExtra.

Flowing Requirements from the VoC or VoP

In a previous post, I talked about the voice of the customer (VoC), voice of the process (VoP) and the necessity of combining the two when specifying a product. Here, I’d like to offer a general method for applying this in the real world, which can be implemented as a template in Excel.

Recap

I showed that there was a cost function associated with any specification that derived from both the VoC (expressed as tolerances or specification limits) and from the process capability. An example cost function for a two-sided tolerance is reproduced below.

Percent of target production costs given an average production weight and four different process capabilities.

I argued that, given this cost function, specifying a product requires specifying both the product specification limits (or tolerances) and the minimally acceptable process capability, Cpk. Ideally, both of these should flow down from a customer needs analysis to the finished product, and from the finished product to the components, and so on to materials.

Requirements flow down and up

To flow all requirements down like this, we would need to know the transfer functions, Y = f(X), for each requirement Y and each subcomponent characteristic X. There are methods for doing this, like Design for X or QFD, but they can be difficult to implement. In the real world, we don’t always know these transfer functions, and determining them can require non-trivial research projects that are best left to academia.

As an illustration, we will use the design of a battery (somewhat simplified), where we have to meet a minimum requirement that is the sum of component parts. The illustration below shows the component parts of a battery, or cell. It includes a container (or “cell wall”), positive and negative electrodes (or positive and negative “plates”), electrolyte and terminals that provide electrical connection to the outside world. Usually, we prefer lighter batteries to heavier ones, but for this example, we’ll suppose that a customer requires a minimum weight. This requirement naturally places limits on the weight of all components.

In the absence of transfer functions, we often make our best guess, build a few prototypes, and then adjust the design. This may take several iterations. A better approach is to estimate the weight specification limits and minimum Cpk by calculation before any cells are actually built.

General drawing of the structure of an aircraft battery’s vented-type NiCd cell. Ransu, Wikipedia, http://en.wikipedia.org/wiki/File:Aircraft_battery_cell.gif. Accessed 2014-04-04.

Suppose the customer specifies a cell minimum weight of 100 kg. From similar designs, we know the components that contribute to the cell mass and have an idea of the percentage of total weight that each component contributes.

m_{cell}=m_{container}+m_{terminals}+m_{electrolyte}+m_{poselect}+m_{negelect}

Each individual component is therefore a fraction f_{m} of the total cell mass, e.g.

m_{container}=f_{m,container}m_{cell}

More generally, for a measurable characteristic c, component i has an expected mean or target value of T_{i,c}=f_{i,c}\mu_{parent,c} or T_{i,c}=f_{i,c}T_{parent,c}.

In our example, we may know from similar products or from design considerations that we want to target the following percents for each fraction f_{m}:

  • 5% for container
  • 19% for terminals
  • 24% for electrolyte
  • 26% for positive electrodes
  • 26% for negative electrodes

Specification Limits

Upper Specification Limit (USL)
The maximum allowed value of the characteristic. Also referred to as the upper tolerance.
Lower Specification Limit (LSL)
The minimum allowed value of the characteristic. Also referred to as the lower tolerance.

Since the customer will always want to pay as little as possible, a specified lower weight of 100 kg is equivalent to saying that they are only willing to pay for 100 kg of material; any extra material is added cost that reduces our profit margin. If we tried to charge them for 150 kg of material, they would go buy from our competitors. The lower specification limit, or lower tolerance, of the cell weight is then 100 kg.

If the customer does not specify a maximum weight, or upper specification limit, then we determine the upper limit by the maximum extra material cost that we are willing to bear. In this example, we decide that we are willing to absorb up to 5% additional cost per part. Since material and construction contributes 50% to the total cell cost, 5% of total cost corresponds to 10% of the material, so the USL is 110 kg. To allow for some variation, we can set a target weight in the middle: 105 kg. From data on previous designs and the design goals, we can apportion the target weight to each component of the design, as shown in the table below.

We can apply the same fractions to the cell USL and LSL to obtain a USL and LSL for each component. As long as parts are built within these limits, the cell will be within specification. The resulting specification for the cell and its major subcomponents is shown in the table below. Further refinement of the allocation of USL and LSL to the components is possible and may be needed if the limits do not make sense from a production or cost perspective.

Part                  Percent   Target /kg   LSL /kg   USL /kg
Cell                    100%       105          100       110
Container                 5%         5.2          5         5.5
Terminals                19%        19.9         19        20.9
Electrolyte              24%        25.2         24        26.4
Positive electrodes      26%        27.3         26        28.6
Negative electrodes      26%        27.3         26        28.6

Variance of components and Cpk

When a characteristic is due to the sum of the part’s components, as with cell mass, the part-to-part variation in the characteristic is likewise due to the variation in the components. However, where the characteristic adds as the sum of the components,

m_{cell}=m_{container}+m_{terminals}+m_{electrolyte}+m_{poselect}+m_{negelect}

the variance, \sigma^{2}, adds as the sum of the component variances (the standard deviations add in quadrature)

\sigma_{cell}^{2}=\sigma_{container}^{2}+\sigma_{terminal}^{2}+\sigma_{electrolyte}^{2}+\sigma_{poselect}^{2}+\sigma_{negelect}^{2}

The variance of any individual component is therefore a function of the total parent part variance

\sigma_{container}^{2}=\sigma_{cell}^{2}-\sigma_{terminal}^{2}-\sigma_{electrolyte}^{2}-\sigma_{poselect}^{2}-\sigma_{negelect}^{2}

or

\displaystyle \sigma_{container,mass}^{2}=f_{\sigma,container}\sigma_{cell,mass}^{2}

Since this is true for all components, the two fractions f_{m} and f_{\sigma} will be approximately equal. Therefore, if we don’t know the fractions f_{\sigma}, we can use the fractions f_{m}, which are usually easier to work out, to allocate the variance to each component:

\displaystyle \sigma_{container,mass}^{2}=f_{m,container}\times\sigma_{cell,mass}^{2}

More generally, for measurable characteristic c of a subcomponent i of a parent component,

\displaystyle \sigma_{i,c}=\sqrt{f_{i,c}}\:\sigma_{parent,c}

Since the given \sigma is the maximum allowed for the parent to meet the desired Cpk, this means that \sigma_{i}^{2} is an estimate for the maximum allowed component variance. Manufacturing can produce parts better than this specification, but any greater variance will drive the parent part out of specification.
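
A quick sanity check of this allocation in R: because the mass fractions sum to one, component standard deviations assigned as \sigma_{i}=\sqrt{f_{i}}\,\sigma_{parent} have variances that add back up to the parent variance.

# allocate a parent standard deviation to components by mass fraction
fraction     <- c(0.05, 0.19, 0.24, 0.26, 0.26)   # cell mass fractions from above
sigma.parent <- 1                                  # any parent sigma (kg)
sigma.i      <- sqrt(fraction) * sigma.parent

# TRUE: the component variances sum to the parent variance
all.equal(sum(sigma.i^2), sigma.parent^2)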

Calculating Specification Limits

In general, there are two conflicting goals in setting specifications:

  1. Make them as wide as possible to allow for manufacturing variation while still meeting the VoC.
  2. Make them as narrow as possible to stay near the minimum of the cost function.

For this, Crystal Ball or iGrafx are very useful tools during development, as we can simulate a set of parts or processes, analyze the allowed variation in the product and easily flow that variation down to each component. In the absence of these tools, Minitab or Excel can be used to derive slightly less robust solutions.

Calculating from Customer Requirements

  1. Identify any customer requirements and set specification limits (USL and LSL) accordingly. If the customer requirements are one-sided, determine the maximum additional cost we are willing to accept, and set the other specification limit accordingly. Some approximation of costs may be needed.
  2. If no target is given, set the target specification for each requirement as the average of USL and LSL.
  3. Set the minimum acceptable Cpk for each specification. Cpk = 1.67 is a good starting value. Use customer requirements for Cpk, where appropriate, and consider also whether the application requires a higher Cpk (the weakest link in the chain sets the requirement).
  4. Calculate the maximum allowed standard deviation to meet the Cpk requirement as \sigma_{parent}=\left(USL-LSL\right)/\left(6\times Cpk\right).
  5. For each subcomponent (e.g. the cell has subcomponents of container, electrodes, electrolyte, and so on), apportion the target specification to each of the subcomponents based on engineering considerations and judgement. If the fractions f are known, T_{i}=f_{i}\times T_{parent}.
  6. Calculate the fraction f_{i} (or percent) of the parent total for each subcomponent if not already established in step (5).
  7. Calculate the USL and LSL for each subcomponent by multiplying the parent USL and LSL by the component’s fraction of parent (from step 6). USL_{i}=f_{i}\times USL_{parent} and LSL_{i}=f_{i}\times LSL_{parent}.
  8. Estimate the allowed standard deviation \sigma_{i} for each subcomponent as
    \displaystyle \sigma_{i}=\mathtt{SQRT}\left(f_{i}\right)\times\sigma_{parent}.
  9. Calculate the minimum allowed Cpk for each subcomponent from the results of (5), (7) and (8), using the target, T, for the mean, \mu.
    \displaystyle Cpk_{i}=minimum\begin{cases}\frac{USL_{i}-T_{i}}{3\sigma_{i}}\\\frac{T_{i}-LSL_{i}}{3\sigma_{i}}\end{cases}
  10. Repeat steps (5) through (9) until all components have been specified.
  11. For each component, report the specified USL, LSL, target T and minimum Cpk. (A short R sketch of these steps follows this list.)
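
Here is a minimal sketch of steps 4 through 9 in R for the battery-cell example above; the fractions and limits are the ones from the earlier table, and this is an illustration rather than a template.

# steps 4, 7, 8 and 9 for the cell example
USL <- 110; LSL <- 100; Target <- 105; Cpk.min <- 1.67

# step 4: maximum allowed parent standard deviation
sigma.parent <- (USL - LSL) / (6 * Cpk.min)

# steps 5-7: apportion target and limits by mass fraction
fraction <- c(Container = 0.05, Terminals = 0.19, Electrolyte = 0.24,
              Pos.electrodes = 0.26, Neg.electrodes = 0.26)
T.i   <- fraction * Target
LSL.i <- fraction * LSL
USL.i <- fraction * USL

# step 8: allowed component standard deviations
sigma.i <- sqrt(fraction) * sigma.parent

# step 9: minimum allowed component Cpk
Cpk.i <- pmin((USL.i - T.i) / (3 * sigma.i), (T.i - LSL.i) / (3 * sigma.i))

round(data.frame(T.i, LSL.i, USL.i, sigma.i, Cpk.i), 2)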

Calculating from Process Data

When there is no clear customer-driven requirement or clear requirement from parent parts (e.g. dimensional specifications that can be driven by the fit of parts), but specification limits are still reasonably needed, we can start from existing process data.

This is undesirable because any change to the process can force a change to the product specification, without any clear understanding of the impact on customer needs or requirements; the VoC is lost.

The calculation of USL and LSL from process data is also somewhat more complicated, as we have to use the population mean and standard deviation to determine where to set the USL and LSL, without really knowing what that mean and standard deviation are.

In the real world, we have to live with such constraints. To deal with these limitations, we will use as much data as is available and calculate the confidence intervals on both the mean and the standard deviation. The calculation for USL and LSL becomes

USL = \left(\textrm{upper 95\% confidence limit on the mean}\right)+k\times\left(\textrm{upper 95\% confidence limit on the standard deviation}\right)

LSL = \left(\textrm{lower 95\% confidence limit on the mean}\right)-k\times\left(\textrm{upper 95\% confidence limit on the standard deviation}\right)

where k is the number of process Sigmas desired, based on the tolerance cost function. Most of the time, we will use k=5, to achieve a Cpk of 1.67.

We always use the upper 95% confidence interval on the standard deviation. We don’t care about the lower confidence interval, since a small \sigma will not help us in setting specification limits.

  1. Calculate the mean (\mu_{parent}) from recent production data. In Excel, use the AVERAGE() function on the data range.
  2. Calculate the standard deviation (\sigma_{parent}) from recent production data. In Excel, you can use the STDEV() function on the data range.
    1. If the order of production data is known, or SPC is in use, a better method is to use the range-based estimate from the control charts. This will be discussed in subsequent training on control charts.
  3. Count the number of data points, n, that were used for the calculations (1) and (2). You can use the COUNT() function on the data range.
  4. Calculate the 95% confidence level on the mean. In Excel, this is accomplished with
    CL=\mathtt{TINV}\left(\left(1-0.95\right);n-1\right)\times\sigma_{parent}/\mathtt{SQRT}\left(n\right)

    In Excel 2010 and later, TINV() should be replaced with T.INV.2T().

  5. Calculate the 95% confidence interval on the mean as CI_{upper}=\mu+CL and CI_{lower}=\mu-CL.
  6. Calculate the upper and lower 95% confidence limits on the standard deviation. In Excel, this is accomplished with
    \sigma_{upper}=\sigma_{parent}\times\mathtt{SQRT}\left(\left(n-1\right)/\mathtt{CHIINV}\left(1-\left(1-0.95\right)/2;n-1\right)\right)

    and

    \sigma_{lower}=\sigma_{parent}\times\mathtt{SQRT}\left(\left(n-1\right)/\mathtt{CHIINV}\left(\left(1-0.95\right)/2;n-1\right)\right)

    In Excel 2010 and later, CHIINV() can be replaced with CHISQ.INV.RT() for improved accuracy.

  7. Calculate the LSL as LSL_{parent}=CI_{lower}-k\sigma_{upper}. You might use a value of k other than 5 if the customer requirements or the application require a higher process Sigma.
  8. Calculate the USL as USL_{parent}=CI_{upper}+k\sigma_{upper}.
  9. For each subcomponent (e.g. the cell has subcomponents of positive electrode, negative electrode, electrolyte, and so on), apportion the parent part mean to each of the subcomponents based on engineering considerations and judgement. If the fractions f are known, T_{i}=f_{i}\times\mu_{parent}.
  10. If the fraction (or percent) f_{i} of the parent total for each subcomponent is not known, calculate it using the results of step (9).
  11. Calculate the USL and LSL for each subcomponent by multiplying the parent USL and LSL by the component’s fraction of parent (from step 10). USL_{i}=f_{i}\times USL_{parent} and LSL_{i}=f_{i}\times LSL_{parent}.
  12. Estimate the allowed standard deviation \sigma_{i} for each subcomponent as \sigma_{i}=\mathtt{SQRT}\left(f_{i}\right)\times\sigma_{lower}.
  13. Calculate the minimum allowed Cpk for each subcomponent from the results of (9), (11) and (12), using the target T_{i} for the mean, \mu_{i}.
    \displaystyle Cpk_{i}=minimum\begin{cases}\frac{USL_{i}-T_{i}}{3\sigma_{i}}\\\frac{T_{i}-LSL_{i}}{3\sigma_{i}}\end{cases}
  14. Repeat steps (9) through (13) until all components have been specified.
  15. For each component, report the specified USL, LSL, target T and minimum Cpk. (An R sketch of the confidence-limit calculations follows this list.)
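
The confidence-limit calculations in steps 1 through 8 can also be done in R; here is a sketch using made-up production data (the data and the seed are placeholders of mine):

# placeholder production data
set.seed(1)
weights <- rnorm(25, mean = 101.7, sd = 0.83)

n        <- length(weights)
mean.hat <- mean(weights)
sd.hat   <- sd(weights)

# steps 4-5: 95% confidence interval on the mean
CL       <- qt(0.975, df = n - 1) * sd.hat / sqrt(n)
CI.lower <- mean.hat - CL
CI.upper <- mean.hat + CL

# step 6: 95% confidence limits on the standard deviation
sd.upper <- sd.hat * sqrt((n - 1) / qchisq(0.025, df = n - 1))
sd.lower <- sd.hat * sqrt((n - 1) / qchisq(0.975, df = n - 1))

# steps 7-8: specification limits for k = 5 (Cpk = 1.67)
k <- 5
c(LSL = CI.lower - k * sd.upper, USL = CI.upper + k * sd.upper)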

Can We do Better than R-squared?

If you're anything like me, you've used Excel to plot data, then used the built-in “add fitted line” feature to overlay a fitted line to show the trend, and displayed the “goodness of fit,” the r-squared (R2) value, on the chart by checking the provided box in the chart dialog.

The R2 calculated in Excel is often used as a measure of how well a model explains a response variable, so that “R2 = 0.8” is interpreted as “80% of the variation in the 'y' variable is explained by my model.” I think that the ease with which the R2 value can be calculated and added to a plot is one of the reasons for its popularity.

There's a hidden trap, though. R2 will increase as you add terms to a model, even if those terms offer no real explanatory power. By using the R2 that Excel so helpfully provides, we can fool ourselves into believing that a model is better than it is.

Below I'll demonstrate this and show an alternative that can be implemented easily in R.

Some data to work with

First, let's create a simple, random data set, with factors a, b, c and response variable y.
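
The code that generated my.df isn’t shown here; a stand-in along these lines would produce data of the same shape (the exact values below won’t match the printed output):

set.seed(123)
my.df <- data.frame(a = 1:10, b = rnorm(10))
my.df$c <- my.df$a + rnorm(10)
my.df$y <- 2 + 0.5 * my.df$c + rnorm(10)
my.df <- my.df[, c("y", "a", "b", "c")]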

head(my.df)
##       y a       b      c
## 1 2.189 1 -1.2935 -0.126
## 2 3.912 2 -0.4662  1.623
## 3 4.886 3  0.1338  2.865
## 4 5.121 4  1.2945  4.692
## 5 4.917 5  0.1178  5.102
## 6 4.745 6  0.4045  5.936

Here is what this data looks like:

Plot of the response and factors in a linear model.

Calculating R-squared

What Excel does when it displays the R2 is create a linear least-squares model, which in R looks something like:

my.lm <- lm(y ~ a + b + c, data = my.df)

Excel also does this when we call RSQ() in a worksheet. In fact, we can do this explicitly in Excel using the Regression analysis option in the Analysis Pack add-on, but I don't know many people who use this, and Excel isn't known for its reliability in producing good output from the Analysis Pack.

In R, we can obtain R2 via the summary() function on a linear model.

summary(my.lm)
## 
## Call:
## lm(formula = y ~ a + b + c, data = my.df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.2790 -0.6006  0.0473  0.5177  1.5299 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)  
## (Intercept)    2.080      0.763    2.72    0.034 *
## a             -0.337      0.776   -0.43    0.679  
## b             -0.489      0.707   -0.69    0.515  
## c              1.038      0.817    1.27    0.250  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.1 on 6 degrees of freedom
## Multiple R-squared:  0.833,  Adjusted R-squared:  0.75 
## F-statistic:   10 on 3 and 6 DF,  p-value: 0.00948

Since summary() produces a list object as output, we can grab just the R2 value.

summary(my.lm)$r.squared
## [1] 0.8333

Normally, we would (somewhat loosely) interpret this as telling us that about 83% of the variation in the response y is explained by the model.

Notice that there is also an "adjusted r-squared” value given by summary(). This tells us that only 75% of the variation is explained by the model. Which is right?

The problem with R-squared

Models that have many terms will always give higher R2 values, just because more terms will slightly improve the model fit to the given data. The unadjusted R2 is wrong. The calculation for adjusted R2 is intended to partially compensate for that “overfit,” so it's better.

It's nice that R shows us both values, and a pity that Excel won't show the adjusted value. The only way to get an adjusted R2 in Excel is to run the Regression analysis; otherwise, we have to calculate adjusted R2 manually.

Both R2 and adjusted R2 are measures of how well the model explains the given data. However, in industry we usually want to know something a little different. We don't build regression models to explain only the data we have; we build them to think about future results. We want R2 to tell us how well the model predicts the future. That is, we want a predictive R2. Minitab has added the ability to calculate predictive R2 in Minitab 17, and has a nice blog post explaining this statistic.

Calculating predictive R-squared

Neither R nor Excel provides a means of calculating the predictive R2 within the default functions. While some free R add-on packages provide this ability (DAAG, at least), we can easily do it ourselves. We'll need a linear model, created with lm(), to obtain the residuals so we can calculate the “PRESS” statistic, and then we need the total sum of squares so we can calculate a predictive R2.

Since the predictive R2 depends entirely on the PRESS statistic, we could skip the added work of calculating predictive R2 and just use PRESS, as some authors advocate. The lower the PRESS, the better the model is at fitting future data from the same process, so we can use PRESS to compare different models. Personally, I'm used to thinking in terms of R2, and I like having the ability to compare to the old R2 statistic that I'm familiar with.

To calculate PRESS, first we calculate the predictive residuals, then take the sum of squares (thanks to Walker’s helpful blog post for this). This is pretty easy if we already have a linear model. It would take a little more work in Excel.

pr <- residuals(my.lm)/(1 - lm.influence(my.lm)$hat)
PRESS <- sum(pr^2)
PRESS
## [1] 19.9

The predictive R2 is then (from a helpful comment by Ibanescu on LinkedIn) one minus the PRESS divided by the total sum of squares. The total sum of squares can be calculated as the sum of the squared deviations of y from its mean, or obtained by summing over Sum Sq from an anova() on our linear model. I prefer using the anova function, as any statistical subtleties are more likely to be properly accounted for there than in my simple code.

# anova to calculate residual sum of squares
my.anova <- anova(my.lm)
tss <- sum(my.anova$"Sum Sq")
# predictive R^2
pred.r.squared <- 1 - PRESS/(tss)
pred.r.squared
## [1] 0.5401

You'll notice that this is smaller than the adjusted R2, which is itself smaller than the basic R2. This is the point of the exercise. We don't want to fool ourselves into thinking we have a better model than we actually do. One way to think of this is that 29% (83% – 54%) of the apparent explanatory power comes from too many factors and random correlations, which we would have attributed to our model if we were just using Excel's built-in function.

When the model is good and has few terms, the differences are small. For example, working through the examples in Mitsa's two posts, we see that for her model 3, R2 = 0.96 and the predictive R2 = 0.94, so calculating the predictive R2 wasn't really worth the extra effort for that model. Unfortunately, we can't know, in advance, which models are “good.” For Mitsa's model 1 we have R2 = 0.95 and predictive R2 = 0.32. Even the adjusted R2 looks pretty good for model 1, at 0.94, but we see from the predictive R2 that our model is not very useful. This is the sort of thing we need to know to make correct decisions.

Automating

In R, we can easily wrap these in functions that we can source() and call directly, reducing the typing. Just create a linear model with lm() (or an equivalent) and pass that to either function. Note that pred_r_squared() calls PRESS(), so both functions have to be sourced.

pred_r_squared <- function(linear.model) {
    lm.anova <- anova(linear.model)
    tss <- sum(lm.anova$"Sum Sq")
    # predictive R^2
    pred.r.squared <- 1 - PRESS(linear.model)/(tss)
    return(pred.r.squared)
}
PRESS <- function(linear.model) {
    pr <- residuals(linear.model)/(1 - lm.influence(linear.model)$hat)
    PRESS <- sum(pr^2)
    return(PRESS)
}

Then we just call the function to get the result:

pred.r.squared <- pred_r_squared(my.lm)
pred.r.squared
## [1] 0.5401

I've posted these as Gists on GitHub, with extra comments, so you can copy and paste from here or go branch or copy them there.

Combining the Voice of the Customer with the Voice of the Process

Setting product specifications is an iterative and challenging process, combining lab test data, historical data and educated guesses. All too often, the result is a set of product and process specifications that must be changed to meet manufacturing needs and that do not meet all customer requirements. To achieve specifications that are correct and defensible, the engineer must understand how the voice of the customer and the voice of the process interact, and must fully specify the product through both specification limits (also called tolerance limits) and production capability, or Cpk.

Voice of the Customer

What the customer expects a product to do—what they are willing to pay for—is known as the “voice of the customer” (VoC). A customer’s expectations may not all be written or explicitly stated, and unwritten or unrecognized needs or wants can be even more important than the written ones. As we design a product, we first translate the VoC to engineering requirements and then flow the requirements down to subcomponents.

For simplicity, I will adopt the convention of referring to the specifications or tolerances as the Target Specification, the Upper Specification Limit (USL) and the Lower Specification Limit (LSL). The target, USL and LSL are the engineering translation of the VoC. I will refer to the measure being specified—length, mass, voltage, etc.—as a characteristic.

Voice of the Process

What we know about the parts actually produced—maximum and minimum, average, standard deviation, outliers, etc.—is known as the “voice of the process” (VoP). The VoP tells us the limits of our manufacturing abilities.

Suppose that we know from the production plant that the typical weight of our product is between 99.2 and 104.2 kg. This is the VoP; it may or may not be acceptable to the customer or fit within the USL and LSL. When we engage in statistical process control (SPC), we are listening to the VoP, but not the VoC.

The engineer must design to the VoC while considering the VoP.

Specification Limits

First, some definitions:

Target Specification
The desired value of the characteristic.
Mean Specification
The average of the upper and lower specification values—the value in the middle of the specification or tolerance range.
Nominal Specification
Used in different ways. May mean the target value of the specification. Sometimes the value printed in marketing literature or on the product, which may be the minimum or some other value.
Upper Specification Limit
The maximum allowed value of the characteristic. Sometimes referred to as the upper tolerance.
Lower Specification Limit
The minimum allowed value of the characteristic. Also referred to as the lower tolerance.

Suppose that the customer has said that they want our product to weigh at least 100 kg. Since the customer will always want to pay as little as possible, a customer-specified lower specification limit of 100 kg is equivalent to saying that they are only willing to pay for 100 kg worth of costs; any extra material is added cost that reduces our profit margin.

If the customer does not specify a maximum weight, or upper specification limit, then we can choose the upper limit by the maximum extra material cost we want to bear. For this example, we decide that we are willing to absorb up to 5% additional cost. For our product, material and construction contributes 50% to the total assembled part cost, so the USL on weight is 110 kg.

Defect Rates and Cpk: Where VoC and VoP Intersect

Defect rates are expressed variously as Sigma, Cpk (“process capability”), Ppk (“process performance”), parts per million (ppm), yield (usually as a percent) or defects per million opportunities (DPMO). “Sigma” refers to the number of standard deviations that fit between the tolerance limits and the process mean. A 1-Sigma process has 1 standard deviation (\sigma) between the mean (\mu) and the nearest specification limit. Cpk is a measure of the number of times that three standard deviations (3\sigma) fit between the mean and the nearest specification limit. Cpk is often used by customers, especially automotive OEMs. We can easily convert between Cpk, Sigma, yield, ppm and DPMO.

Production tells us that historically, they usually see a range of about 5 kg in assembled part weight, with weights between 99.2 kg and 104.2 kg. With enough data, the range of observed data will cover roughly the range from [mean - 3 standard deviations] to [mean + 3 standard deviations], so this gives us \mu \approx 101.7 kg and \sigma \approx 0.83 kg. From Sigma = \left(\mu-LSL\right)/\sigma = \left(101.7-100\right)/0.83 \approx 2.0, we have a 2-Sigma process. With this data, we could estimate the percent of parts that will be below the LSL.

Defect Rate Calculation

We can use this data to estimate the percent of product that will be out of the customer’s specification. We’ll assume that our process produces parts where the weight is normally distributed (having a Gaussian or bell-shaped curve).

Using Minitab, you can do this by opening the “Calc” menu, the “Probability Distributions” submenu, and choosing “Normal….” Then choose “Cumulative probability,” enter “101.7” for the mean and “0.83” for the standard deviation. Select “Input constant” and enter “100.” Click “OK.” The result in the Session Window looks like:

Cumulative Distribution Function

Normal with mean = 101.7 and standard deviation = 0.83

  x  P( X <= x )
100     0.02027

This is read as: the probability of seeing a weight X less than or equal to the given value x = 100 is 0.02, so we can expect about 2% of parts to be out of specification. This can also be done in Excel using NORMDIST(100, 101.7, 0.83, TRUE) or, in Excel 2010 and later, NORM.DIST(100, 101.7, 0.83, TRUE). In R we would use pnorm(100, 101.7, 0.83).

Minitab can also display this graphically. Open the “Graph” menu, select “Probability Distribution Plot…” Select “View Probability” (the right-most option) and click “OK,” On the “Distribution” tab, select “Normal” from the distribution menu and enter “101.7” for the mean and “0.83” for the standard deviation. Then switch to the “Shaded Area” tab, select “X Value,” “Left Tail” and enter “100” for the “X value.” Click on “OK” to obtain a plot like that below.

Probability density distribution showing cumulative probability below a target value of 100 shaded in red.

Of course, not all processes produce parts that are normally distributed, and you can use a distribution that fits your data. However, a normal distribution will give you a good approximation in most circumstances.

Cpk

The calculation for Cpk is:

Cpk = \min\left(Cpu, Cpl\right)

where Cpu and Cpl are defined as:

Cpu = \frac{USL-\mu}{3\sigma} \qquad Cpl = \frac{\mu-LSL}{3\sigma}

If \mu is centered between the USL and the LSL, then by simple algebra we have

Cpk = Cp = \frac{USL-LSL}{6\sigma}

which is often a convenient form to use to estimate the “best case” process capability, even when we do not know what \mu is.

The combination of VoC-derived specification limits (USL, LSL and T) with process data (\mu and \sigma) ties together the VoC and the VoP, and allows us to predict future performance against the requirements.

Multiplying Cp or Cpk by \sqrt{1+\left(\left(\mu-T\right)/\sigma\right)^{2}} produces a measure of the process capability that includes centering on the target value, T, referred to as Cpm or Cpkm, respectively. These versions are more informative but much less commonly used than Cp and Cpk.

In the example, \mu-LSL=101.7-100=1.7\approx2\times0.83=2\sigma. We can then calculate the Cpk:

Cpk = \frac{\mu-LSL}{3\sigma} = \frac{1.7}{3\times0.83} \approx 0.68

Acceptable values of Cpk are usually 1.33, 1.67 or 2.0. Automotive OEMs typically require Cpk = 1.33 for non-critical or new production processes, and Cpk = 1.67 or 2.0 for regular production. Safety-critical systems may require higher values still. The “Six Sigma” improvement methodology and Design for Six Sigma refer to reducing process variation until six standard deviations of variation fit between the mean and the nearest tolerance (i.e. Cpk = 2), achieving a defect rate of less than 3.4 per million opportunities. Some typical Cpk values, with the corresponding process Sigma and process yield, are provided in the table below.

Cpk    Sigma   Yield (max)    Yield (likely)
0.33     1        85%             30%
1.00     3        99.9%           90%
1.33     4        99.997%         99%
1.67     5        99.99997%       99.98%
2.00     6        99.9999999%     99.9997%

In the table, “Yield (max)” assumes that the process is perfectly stable, such that parts produced today and parts produced weeks from now all exhibit the same mean and variance. Since no manufacturing process is perfectly stable, “Yield (likely)” assumes additional sources of variation that shift the process by 1.5\sigma (e.g. seasonal effects, or differences in setups across days or shifts). This 1.5\sigma shift is a standard assumption in such calculations when we do not have real data about the long-term process stability of our processes.
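
A sketch of how the yield columns can be reproduced in R (one-sided, so the exact values differ slightly from the rounded figures in the table):

# short-term ("max") yield assumes a centered, perfectly stable process;
# likely yield assumes an additional 1.5-sigma shift
Cpk <- c(0.33, 1.00, 1.33, 1.67, 2.00)
data.frame(Cpk,
           yield.max    = pnorm(3 * Cpk),
           yield.likely = pnorm(3 * Cpk - 1.5))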

Specification Limits and Cost

When parts are out of specification—that is, they would be identified as defective or nonconforming—they can have four possible impacts:

  • they must be reworked;
  • they must be scrapped;
  • they come back as warranty claims; or
  • they result in lost customers (reduced revenue without reducing “fixed” costs).

For example, underweight or damaged injection-molded plastics might be melted and reprocessed, but this adds cost: capital for the added equipment to process the parts, plus extra electricity and labor to move, sort and remelt. Later in production, defective parts will have to be scrapped.

We can see, then, that a cost function can be associated with each end of a specification range. The specification limits must be derived from the VoC, but the VoP imposes the cost function. The figures below illustrate this for both one-sided and two-sided specifications.

Percent of target production costs given an average production weight and four different process capabilities.

Percent of target production costs given an average production weight and four different process capabilities.

The minimum of the cost function is a good place to set our target specification, unless we have some strong need to set a different target.

Variance of Components and Cpk

When a part characteristic is the sum of the part’s components, as with weight, the variation in the part characteristic is likewise due to the variation in the individual components. However, while the weight adds as the sum of the components,

m_{part}=m_{1}+m_{2}+\cdots+m_{n}

the variance in weight, \sigma^{2}, adds as the sum of the component variances

\sigma_{part}^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}+\cdots+\sigma_{n}^{2}

Since the given \sigma is the maximum allowed for the assembled part to meet the desired Cpk, the component variances, \sigma_{i}^{2}, are estimates of the maximum allowed component variances. Manufacturing can produce parts better than this specification, but any greater variance will drive the parent part out of specification.

Specifying Characteristics

Very often, only specification ranges (or tolerances)—the USL and LSL—are provided by engineering when designing parts. Almost as often, these ranges are based on a target value (derived from the VoC) with some “allowed” tolerance based on what manufacturing says they can achieve on the existing equipment and processes (the VoP). It should be clear from the above that a minimally-adequate part specification includes USL and LSL derived from the VoC and the minimum acceptable Cpk derived from both the VoC and the VoP. The inclusion of the minimum process capability is the only way to ensure that the parts are made within the target costs and at the target quality level.

The next challenge is to flow these requirements down to the components. Having laid the groundwork for the basis of requirements flow-down, I will look at a way to do this in a future post.

References

All graphs created in R using ggplot2.

  • R Core Team (2014). R: A language and environment for statistical computing. R Foundation for
    Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.
  • H. Wickham. ggplot2: elegant graphics for data analysis. Springer New York, 2009.

Combining Expert Judgements

Both product development and project planning often require making educated guesses. There are two models that I’ve seen for doing this. The first is to have a designated subject matter expert (SME) who provides The Answer. The second is to get a group of SMEs together to discuss and arrive at a consensus answer.

Under most circumstances, I’m not a fan of the first approach. Individuals are simply too fallible, too easily swayed by anecdotes rather than real data or too busy to consider a problem fully and in depth. The exception is when you are estimating work; then the SME is the person who will actually have to do the work, and their estimate is better than anyone else’s. The second solution often suffers from several other problems. Too many people working a problem can take a long time, answers can be driven by the most vocal or the most risk-averse members of the group and groups sometimes deliver bad answers due to diffusion of responsibility or other “group think” effects.

One solution to the problems of group decisions is to use Affinity Diagramming and the Analytical Hierarchy Process (AHP) to structure the problem, gather individual judgements of the alternatives, and then determine their relative values or importance. Affinity diagramming followed by AHP is incredibly powerful, and does a great job taking the individual bias out of the equation. I’ve used it with teams to rank the importance of product requirements, as a training exercise to accurately estimate the relative area of geometric shapes and personally to decide which car to buy. It works.

Unfortunately, AHP takes time, and for any but the simplest assessments, you really need custom software to support the process. Affinity diagramming can be done very effectively with Post-It notes, but the calculations of AHP cannot be easily set up in a program like Excel. It also requires discrete alternatives to choose from. For estimates of a single variable, such as lifetime, or other performance characteristics, I have had to develop a different approach.

The technique below works when you want to create a point estimate of a continuous variable. For instance, you might need to estimate product lifetime and establish a warranty period, or you might need to estimate a performance level that can be communicated to customers (implying a performance guarantee), or you might need to estimate the duration of a set of project activities. We can easily implement the calculations in Excel, R, or a similar tool.

Example

What we’re going to do is get the SMEs to individually estimate a range of possible results (low, most likely, high). Then we’ll generate triangular probability distributions from each estimate and finally combine those estimates. To get a robust result, we will account for each SME’s estimation of their own accuracy, and treat their different estimates as discrete distributions when combining.

Below is the data entered by the SMEs.

Subject Matter Experts data entries for estimating a continuous variable

Using the above data, we can calculate triangular probability distributions for each, then combine them by treating them as discrete distributions. This produces the sequence of distributions, below.

Combining estimation distribution from Subject Matter Experts

We can then summarize the combined distribution with some useful values:

most likely: 7087
likely low: 4360
likely high: 10711
50% between 5162 and 8370

The process

  1. Bring a few SMEs together. While the technique will work with any number, usually five to six is more than sufficient, and ten should be considered the maximum for useful input. We’ll call this number N so we can refer to it later.
  2. Define the problem. Everyone should agree precisely what you are estimating.
  3. Agree on a basis for the evaluation. This might be an agreed conceptual model of how the product behaves or a set of criteria or goals to evaluate against.
  4. Collect three estimates—low, most likely, high—from each SME for the variable of interest (call this variable X).

Collecting a range is important; we need to be honest with ourselves that we don’t know what the value will actually be: there is uncertainty in the SME judgement—if we had the data to be more precise then we’d use that—and there’s variation to consider. Ranges allow us to derive probabilistic estimates that represent both the limits of our SME’s knowledge and the natural variation.

  5. Ask each SME for their assessment of three probabilities on a scale of 0% to 100%:
    1. How likely are we to see the real data fall within this range if your reasoning or model is correct? This will usually be very high, like 95% or higher.
    2. How likely are we to see the real data fall within this range even if your reasoning has some flaw? This may also be high, but you can use 50% if the answer is “I don’t know.”
    3. How likely is it that the reasoning is correct? Again, this will usually be pretty high. That’s why they’re SMEs.

At this point, you have all the information that you need from the SMEs and can proceed to the calculations.

  6. Using the three probabilities of the argument accuracy, calculate the values of X at the end points for a triangular probability distribution.
  7. For each SME’s guesstimates, use the triangle distributions to generate a large number of probabilities across the total range of possible Xs.
  8. For each value of X used in the generation of the probability distributions, randomly choose a probability from one of the SMEs. For each value of X, we now have N probabilities that we treat as part of a discrete distribution.
  9. Use the resulting combined distribution to determine values of interest, e.g. the median, fifth percentile, ninety-fifth percentile and so on.

From a diverse set of estimates, you now have a single, robust estimation for a variable. The resulting probability distribution will not be smooth, but you will be able to pull out single values that are meaningful and robust, such as the ninety-fifth percentile for duration estimates.
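
As a minimal sketch of the calculation steps in R: the three SMEs’ (low, most likely, high) estimates below are made up, and the weighting by each SME’s self-assessed accuracy is omitted for brevity.

# hypothetical (low, most likely, high) estimates from three SMEs
estimates <- data.frame(low  = c(4000, 5000, 4500),
                        mode = c(7000, 7500, 6500),
                        high = c(11000, 9500, 12000))

# density of a triangular distribution with minimum a, mode m and maximum b
dtriangle <- function(x, a, m, b) {
  ifelse(x < a | x > b, 0,
         ifelse(x <= m, 2 * (x - a) / ((b - a) * (m - a)),
                        2 * (b - x) / ((b - a) * (b - m))))
}

# evaluate each SME's distribution over a common grid of X values
x    <- seq(min(estimates$low), max(estimates$high), length.out = 1000)
dens <- sapply(seq_len(nrow(estimates)), function(i)
  dtriangle(x, estimates$low[i], estimates$mode[i], estimates$high[i]))

# at each X, randomly pick one SME's probability (a discrete mixture)
set.seed(42)
pick     <- sample(ncol(dens), length(x), replace = TRUE)
combined <- dens[cbind(seq_along(x), pick)]
combined <- combined / sum(combined)

# summarize the combined distribution
cdf <- cumsum(combined)
c(most.likely = x[which.max(combined)],
  likely.low  = x[which.min(abs(cdf - 0.05))],
  likely.high = x[which.min(abs(cdf - 0.95))])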

A few tips

How you ask for the estimates matters. Just asking for “low,” “middle” and “high” estimates will get you very inconsistent results. Likewise, asking for “worst-case” or “best-case” will often get you some pretty wild estimates. You want to ask questions like “how long would this take if many things went right,” “how long would the product last in more severe operating conditions” or “what is the most likely power available?” You don’t want the “middle” estimate, but a “most likely” estimate; you’re not shooting for half-way between the low and high estimates, but for a high-probability point estimate.

When some SMEs know more about the variable being estimated than others, you could also weight their judgements. This weighting is used in the second to last step by adjusting the probability of randomly selecting a value from each SME’s probability distribution.