Flowing Requirements from the VoC or VoP

In a previous post, I talked about the voice of the customer (VoC), voice of the process (VoP) and the necessity of combining the two when specifying a product. Here, I’d like to offer a general method for applying this in the real world, which can be implemented as a template in Excel.

Recap

I showed that there was a cost function associated with any specification that derived from both the VoC (expressed as tolerances or specification limits) and from the process capability. An example cost function for a two-sided tolerance is reproduced below.

Percent of target production costs given an average production weight and four different process capabilities.

I argued that, given this cost function, specifying a product requires specifying both the product specification limits (or tolerances) and the minimally acceptable process capability, Cpk. Ideally, both of these should flow down from a customer needs analysis to the finished product, and from the finished product to the components, and so on to materials.

Requirements flow down and up

To flow all requirements down like this, we would need to know the transfer functions, Y = f(X), for each requirement Y and each subcomponent characteristic X. There are methods for doing this, like Design for X or QFD, but they can be difficult to implement. In the real world, we don’t always know these transfer functions, and determining them can require non-trivial research projects that are best left to academia.

As an illustration, we will use the design of a battery (somewhat simplified), where we have to meet a minimum requirement that is the sum of component parts. The illustration below shows the component parts of a battery, or cell. It includes a container (or “cell wall”), positive and negative electrodes (or positive and negative “plates”), electrolyte and terminals that provide electrical connection to the outside world. Usually, we prefer lighter batteries to heavier ones, but for this example, we’ll suppose that a customer requires a minimum weight. This requirement naturally places limits on the weight of all components.

In the absence of transfer functions, we often make our best guess, build a few prototypes, and then adjust the design. This may take several iterations. A better approach is to estimate the weight specification limits and minimum Cpk by calculation before any cells are actually built.

General drawing of the structure of an aircraft battery’s vented-type NiCd cell. Ransu, Wikipedia, http://en.wikipedia.org/wiki/File:Aircraft_battery_cell.gif. Accessed 2014-04-04.

Suppose the customer specifies a cell minimum weight of 100 kg. From similar designs, we know the components that contribute to the cell mass and have an idea of the percentage of total weight that each component contributes.

m_{cell}=m_{container}+m_{terminals}+m_{electrolyte}+m_{poselect}+m_{negelect}

Each individual component is therefore a fraction fm of the total cell mass, e.g.

m_{container}=f_{m,container}m_{cell}

More generally, for a measurable characteristic c, component i has an expected mean or target value of T_{i,c}=f_{i,c}\mu_{parent,c} or T_{i,c}=f_{i,c}T_{parent,c}.

In our example, we may know from similar products or from design considerations that we want to target the following percents for each fraction fm:

  • 5% for container
  • 19% for terminals
  • 24% for electrolyte
  • 26% for positive electrodes
  • 26% for negative electrodes
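
For example, applying the container fraction to the customer’s 100 kg minimum gives the smallest container weight consistent with this apportionment:

m_{container}=f_{m,container}\, m_{cell}=0.05\times100\:\textrm{kg}=5\:\textrm{kg}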

Specification Limits

Upper Specification Limit (USL)
The maximum allowed value of the characteristic. Also referred to as the upper tolerance.
Lower Specification Limit (LSL)
The minimum allowed value of the characteristic. Also referred to as the lower tolerance.

Since the customer will always want to pay as little as possible, a specified lower weight of 100 kg is equivalent to saying that they are only willing to pay for 100 kg of material; any extra material is added cost that reduces our profit margin. If we tried to charge them for 150 kg of material, they would go buy from our competitors. The lower specification limit, or lower tolerance, of the cell weight is then 100 kg.

If the customer does not specify a maximum weight, or upper specification limit, then we determine the upper limit from the maximum extra material cost that we are willing to bear. In this example, we decide that we are willing to absorb up to 5% additional cost per part. Assuming that material and construction contribute 50% of the total cell cost, a 5% increase in total cost corresponds to roughly 10% extra material, so the USL is 110 kg. To allow for some variation, we can set a target weight in the middle: 105 kg. From data on previous designs and the design goals, we can apportion the target weight to each component of the design, as shown in the table below.

We can apply the same fractions to the cell USL and LSL to obtain a USL and LSL for each component. As long as parts are built within these limits, the cell will be within specification. The resulting specification for the cell and its major subcomponents is shown in the table below. Further refinement of the allocation of USL and LSL to the components is possible, and may be needed if the limits do not make sense from a production or cost perspective.

Part                  Percent   Target /kg   LSL /kg   USL /kg
Cell                  100%      105          100       110
Container             5%        5.2          5         5.5
Terminals             19%       19.9         19        20.9
Electrolyte           24%       25.2         24        26.4
Positive electrodes   26%       27.3         26        28.6
Negative electrodes   26%       27.3         26        28.6

Variance of components and Cpk

When a characteristic is due to the sum of the part’s components, as with cell mass, the part-to-part variation in the characteristic is likewise due to the variation in the components. However, where the characteristic adds as the sum of the components,

m_{cell}=m_{container}+m_{terminals}+m_{electrolyte}+m_{poselect}+m_{negelect}

the variance, \sigma^{2}, adds as the sum of the component variances

\sigma_{cell}^{2}=\sigma_{container}^{2}+\sigma_{terminal}^{2}+\sigma_{electrolyte}^{2}+\sigma_{poselect}^{2}+\sigma_{negelect}^{2}

The variance of any individual component is therefore a function of the total parent part variance

\sigma_{container}^{2}=\sigma_{cell}^{2}-\sigma_{terminal}^{2}-\sigma_{electrolyte}^{2}-\sigma_{poselect}^{2}-\sigma_{negelect}^{2}

or, equivalently, expressing the container’s share as a fraction f_{\sigma,container} of the parent variance:

\displaystyle \sigma_{container,mass}^{2}=f_{\sigma,container}\sigma_{cell,mass}^{2}

Since this is true for all components, the two fractions f_{m} and f_{\sigma} will be approximately equal. Therefore, if we don’t know the fractions f_{\sigma}, we can use the fractions f_{m}, which are usually easier to work out, to allocate the variance to each component:

\displaystyle \sigma_{container,mass}^{2}=f_{m,container}\times\sigma_{cell,mass}^{2}

More generally, for measurable characteristic c of a subcomponent i of a parent component,

\displaystyle \sigma_{i,c}=\sqrt{f_{i,c}}\:\sigma_{parent,c}

Since the given \sigma is the maximum allowed for the parent to meet the desired Cpk, this means that \sigma_{i}^{2} is an estimate for the maximum allowed component variance. Manufacturing can produce parts better than this specification, but any greater variance will drive the parent part out of specification.
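
As a quick sanity check, allocating \sigma_{i}=\sqrt{f_{i}}\,\sigma_{parent} returns exactly the parent variance when the component variances are summed. Here is a small R check using the battery fractions from above and an illustrative parent standard deviation:

# Check that sigma_i = sqrt(f_i) * sigma_parent preserves the parent variance.
# Fractions are the battery example; sigma_parent is an illustrative value.
fm <- c(container = 0.05, terminals = 0.19, electrolyte = 0.24,
        pos_electrode = 0.26, neg_electrode = 0.26)
sigma_parent <- 1.0                  # kg, illustrative
sigma_i <- sqrt(fm) * sigma_parent   # allocated component standard deviations
sum(sigma_i^2)                       # equals sigma_parent^2 = 1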

Calculating Specification Limits

In general, there are two conflicting goals in setting specifications:

  1. Make them as wide as possible to allow for manufacturing variation while still meeting the VoC.
  2. Make them as narrow as possible to stay near the minimum of the cost function.

For this, Crystal Ball or iGrafx are very useful tools during development, as we can simulate a set of parts or processes, analyze the allowed variation in the product and easily flow that variation down to each component. In the absence of these tools, Minitab or Excel can be used to derive slightly less robust solutions.

Calculating from Customer Requirements

  1. Identify any customer requirements and set specification limits (USL and LSL) accordingly. If the customer requirements are one-sided, determine the maximum additional cost we are willing to accept, and set the other specification limit accordingly. Some approximation of costs may be needed.
  2. If no target is given, set the target specification for each requirement as the average of USL and LSL.
  3. Set the minimum acceptable Cpk for each specification. Cpk = 1.67 is a good starting value. Use customer requirements for Cpk, where appropriate, and consider, also, whether the application requires a higher Cpk (weakest link in the chain).
  4. Calculate the maximum allowed standard deviation to meet the Cpk requirement as \sigma_{parent}=\left(USL-LSL\right)/\left(6\times Cpk\right).
  5. For each subcomponent (e.g. the cell has subcomponents of container, electrodes, electrolyte, and so on), apportion the target specification to each of the subcomponents based on engineering considerations and judgement. If the fractions f are known, T_{i}=f_{i}\times T_{parent}.
  6. Calculate the fraction f_{i} (or percent) of the parent total for each subcomponent if not already established in step (5).
  7. Calculate the USL and LSL for each subcomponent by multiplying the parent USL and LSL by the component’s fraction of parent (from step 6). USL_{i}=f_{i}\times USL_{parent} and LSL_{i}=f_{i}\times LSL_{parent}.
  8. Estimate the allowed standard deviation \sigma_{i} for each subcomponent as
    \displaystyle \sigma_{i}=\mathtt{SQRT}\left(f_{i}\right)\times\sigma_{parent}.
  9. Calculate the minimum allowed Cpk for each subcomponent from the results of (5), (7) and (8), using the target, T, for the mean, \mu.
    \displaystyle Cpk_{i}=minimum\begin{cases}\frac{USL_{i}-T_{i}}{3\sigma_{i}}\\\frac{T_{i}-LSL_{i}}{3\sigma_{i}}\end{cases}
  10. Repeat steps (5) through (9) until all components have been specified.
  11. For each component, report the specified USL, LSL, target T and minimum allowed Cpk. (A short R sketch of steps 4 through 9 follows this list.)
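
The calculation above is easy to set up as an Excel template; as a cross-check, here is a minimal R sketch of steps 4 through 9 using the battery example values. The function name flow_down and the fraction names are illustrative only, not from any package.

flow_down <- function(usl, lsl, target = (usl + lsl) / 2,
                      cpk_min = 1.67, fractions) {
    sigma_parent <- (usl - lsl) / (6 * cpk_min)    # step 4: max allowed parent sd
    comp <- data.frame(
        part   = names(fractions),
        target = fractions * target,               # step 5: apportion the target
        lsl    = fractions * lsl,                  # step 7
        usl    = fractions * usl,                  # step 7
        sigma  = sqrt(fractions) * sigma_parent    # step 8: allowed component sd
    )
    comp$cpk <- pmin(comp$usl - comp$target,
                     comp$target - comp$lsl) / (3 * comp$sigma)   # step 9
    comp
}

fm <- c(container = 0.05, terminals = 0.19, electrolyte = 0.24,
        pos_electrodes = 0.26, neg_electrodes = 0.26)
flow_down(usl = 110, lsl = 100, fractions = fm)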

Calculating from Process Data

When there is no clear customer-driven requirement or clear requirement from parent parts (e.g. dimensional specifications that can be driven by the fit of parts), but specification limits are still reasonably needed, we can start from existing process data.

This is undesirable because any change to the process can force a change to the product specification, without any clear understanding of the impact on customer needs or requirements; the VoC is lost.

The calculation of USL and LSL from process data is also somewhat more complicated, as we have to use the population mean and standard deviation to determine where to set the USL and LSL, without really knowing what that mean and standard deviation are.

In the real world, we have to live with such constraints. To deal with these limitations, we will use as much data as is available and calculate the confidence intervals on both the mean and the standard deviation. The calculation for USL and LSL becomes

USL=\textrm{upper 95\% confidence limit on the mean}+k\times\textrm{upper 95\% confidence limit on the standard deviation}
LSL=\textrm{lower 95\% confidence limit on the mean}-k\times\textrm{upper 95\% confidence limit on the standard deviation}

where k is the number of process Sigmas desired, based on the tolerance cost function. Most of the time, we will use k=5, to achieve a Cpk of 1.67.

We always use the upper 95% confidence limit on the standard deviation here. We don’t care about the lower confidence limit for this purpose, since a small \sigma will not help us in setting specification limits.

  1. Calculate the mean (\mu_{parent}) from recent production data. In Excel, use the AVERAGE() function on the data range.
  2. Calculate the standard deviation (\sigma_{parent}) from recent production data. In Excel, you can use the STDEV() function on the data range.
    1. If the order of production data is known, or SPC is in use, a better method is to use the range-based estimate from the control charts. This will be discussed in subsequent training on control charts.
  3. Count the number of data points, n, that were used for the calculations (1) and (2). You can use the COUNT() function on the data range.
  4. Calculate the 95% confidence level on the mean. In Excel, this is accomplished with
    CL=\mathtt{TINV}\left(\left(1-0.95\right);n-1\right)\times\sigma_{parent}/\mathtt{SQRT}\left(n\right)

    In Excel 2010 and later, TINV() should be replaced with T.INV.2T().

  5. Calculate the 95% confidence interval on the mean as CI_{upper}=\mu+CL and CI_{lower}=\mu-CL.
  6. Calculate the upper and lower 95% confidence limits on the standard deviation. In Excel, this is accomplished with
    \sigma_{upper}=\sigma_{parent}\times\mathtt{SQRT}\left(\left(n-1\right)/\mathtt{CHIINV}\left(1-\left(1-0.95\right)/2;n-1\right)\right)

    and

    \sigma_{lower}=\sigma_{parent}\times\mathtt{SQRT}\left(\left(n-1\right)/\mathtt{CHIINV}\left(\left(1-0.95\right)/2;n-1\right)\right)

    In Excel 2010 and later, CHIINV() can be replaced with CHISQ.INV.RT() for improved accuracy.

  7. Calculate the LSL as LSL_{parent}=CI_{lower}-k\sigma_{upper}. You might use a k other than 5 if the customer requirements or the application require a higher process Sigma.
  8. Calculate the USL as USL_{parent}=CI_{upper}+k\sigma_{upper}.
  9. For each subcomponent (e.g. the cell has subcomponents of positive electrode, negative electrode, electrolyte, and so on), apportion the parent part mean to each of the subcomponents based on engineering considerations and judgement. If the fractions f are known, T_{i}=f_{i}\times\mu_{parent}.
  10. If the fraction (or percent) f_{i} of the parent total for each subcomponent is not known, calculate it using the results of step (9).
  11. Calculate the USL and LSL for each subcomponent by multiplying the parent USL and LSL by the component’s fraction of the parent (from step 10). USL_{i}=f_{i}\times USL_{parent} and LSL_{i}=f_{i}\times LSL_{parent}.
  12. Estimate the allowed standard deviation \sigma_{i} for each subcomponent as \sigma_{i}=\mathtt{SQRT}\left(f_{i}\right)\times\sigma_{lower}
  13. Calculate the minimum allowed Cpk for each subcomponent from the results of steps (9), (11) and (12), using the target T_{i} for the mean, \mu_{i}.
    \displaystyle Cpk_{i}=minimum\begin{cases}\frac{USL_{i}-T_{i}}{3\sigma_{i}}\\\frac{T_{i}-LSL_{i}}{3\sigma_{i}}\end{cases}
  14. Repeat steps (9) through (13) until all components have been specified.
  15. For each component, report the specified USL, LSL, target T and minimum allowed Cpk. (An R sketch of steps 1 through 8 follows this list.)
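
For those working in R rather than Excel, here is a minimal sketch of steps 1 through 8, assuming x is a numeric vector of recent production data. Note that R’s qt() and qchisq() are left-tailed, so the arguments differ from the right-tailed Excel TINV() and CHIINV() forms above; the function name spec_from_process is illustrative only.

spec_from_process <- function(x, k = 5, conf = 0.95) {
    n  <- length(x)                                 # step 3: number of data points
    mu <- mean(x)                                   # step 1: process mean
    s  <- sd(x)                                     # step 2: process standard deviation
    cl <- qt(1 - (1 - conf) / 2, df = n - 1) * s / sqrt(n)              # step 4
    ci_lower <- mu - cl                             # step 5
    ci_upper <- mu + cl
    s_upper <- s * sqrt((n - 1) / qchisq((1 - conf) / 2, df = n - 1))      # step 6
    s_lower <- s * sqrt((n - 1) / qchisq(1 - (1 - conf) / 2, df = n - 1))  # step 6
    c(LSL = ci_lower - k * s_upper,                 # step 7
      USL = ci_upper + k * s_upper,                 # step 8
      sigma.upper = s_upper, sigma.lower = s_lower)
}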

Can We do Better than R-squared?

If you're anything like me, you've used Excel to plot data, used the built-in “add fitted line” feature to overlay a trend line, and displayed the “goodness of fit,” the R-squared (R2) value, on the chart by checking the box in the chart dialog.

The R2 calculated in Excel is often used as a measure of how well a model explains a response variable, so that “R2 = 0.8” is interpreted as “80% of the variation in the 'y' variable is explained by my model.” I think that the ease with which the R2 value can be calculated and added to a plot is one of the reasons for its popularity.

There's a hidden trap, though. R2 will increase as you add terms to a model, even if those terms offer no real explanatory power. By using the R2 that Excel so helpfully provides, we can fool ourselves into believing that a model is better than it is.

Below I'll demonstrate this and show an alternative that can be implemented easily in R.

Some data to work with

First, let's create a simple, random data set, with factors a, b, c and response variable y.
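
The code that generated this particular data set isn’t shown; a minimal sketch that produces a data frame of the same shape (the values will differ) might look like the following. The seed and coefficients are assumptions for illustration only.

set.seed(1)                      # assumed seed; the values below will not match exactly
a <- 1:10                        # simple index factor
b <- rnorm(10)                   # pure-noise factor
c <- a + rnorm(10)               # factor correlated with a
y <- 2 + 0.3 * a + rnorm(10)     # response with a mild trend plus noise
my.df <- data.frame(y, a, b, c)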

head(my.df)
##       y a       b      c
## 1 2.189 1 -1.2935 -0.126
## 2 3.912 2 -0.4662  1.623
## 3 4.886 3  0.1338  2.865
## 4 5.121 4  1.2945  4.692
## 5 4.917 5  0.1178  5.102
## 6 4.745 6  0.4045  5.936

Here is what this data looks like:

Plot of the response and factors in a linear model.

Calculating R-squared

What Excel does when it displays the R2 is create a linear least-squares model, which in R looks something like:

my.lm <- lm(y ~ a + b + c, data = my.df)

Excel does essentially the same thing, for a single predictor, when we call RSQ() in a worksheet. We can also fit the model explicitly in Excel using the Regression option in the Analysis ToolPak add-in, but I don't know many people who use this, and Excel isn't known for the reliability of its Analysis ToolPak output.

In R, we can obtain R2 via the summary() function on a linear model.

summary(my.lm)
## 
## Call:
## lm(formula = y ~ a + b + c, data = my.df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.2790 -0.6006  0.0473  0.5177  1.5299 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)  
## (Intercept)    2.080      0.763    2.72    0.034 *
## a             -0.337      0.776   -0.43    0.679  
## b             -0.489      0.707   -0.69    0.515  
## c              1.038      0.817    1.27    0.250  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.1 on 6 degrees of freedom
## Multiple R-squared:  0.833,  Adjusted R-squared:  0.75 
## F-statistic:   10 on 3 and 6 DF,  p-value: 0.00948

Since summary() produces a list object as output, we can grab just the R2 value.

summary(my.lm)$r.squared
## [1] 0.8333

Normally, we would (somewhat loosely) interpret this as telling us that about 83% of the variation in the response y is explained by the model.

Notice that there is also an “adjusted R-squared” value given by summary(). This tells us that only 75% of the variation is explained by the model. Which is right?

The problem with R-squared

Models with more terms will always give higher R2 values, simply because each added term lets the model fit the given data a little more closely, even when that term has no real explanatory power. The unadjusted R2 therefore overstates how good the model is. The calculation for adjusted R2 is intended to partially compensate for that “overfit,” so it's the better of the two.

It's nice that R shows us both values, and a pity that Excel won't show the adjusted value. The only way to get an adjusted R2 in Excel is to run the Regression analysis; otherwise, we have to calculate it manually.
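
If you do have to compute it by hand, the adjusted R2 follows directly from the ordinary R2, the number of observations n and the number of model terms p; applied to the model above, it reproduces the adjusted value reported by summary().

# Adjusted R^2 from the ordinary R^2; n = observations, p = predictors
r2 <- summary(my.lm)$r.squared
n <- nrow(my.df)
p <- 3
1 - (1 - r2) * (n - 1) / (n - p - 1)   # matches the 0.75 from summary() above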

Both R2 and adjusted R2 are measures of how well the model explains the given data. However, in industry we usually want to know something a little different. We don't build regression models to explain only the data we have; we build them to think about future results. We want R2 to tell us how well the model predicts the future. That is, we want a predictive R2. Minitab has added the ability to calculate predictive R2 in Minitab 17, and has a nice blog post explaining this statistic.

Calculating predictive R-squared

Neither R nor Excel provides a means of calculating the predictive R2 with its default functions. While some free R packages provide this ability (DAAG, at least), we can easily do it ourselves. We'll need a linear model, created with lm(), so we can calculate the predictive residuals and the “PRESS” statistic, and then we need the total sum of squares so we can calculate the predictive R2.

Since the predictive R2 depends entirely on the PRESS statistic, we could skip the added work of calculating predictive R2 and just use PRESS, as some authors advocate. The lower the PRESS, the better the model is at fitting future data from the same process, so we can use PRESS to compare different models. Personally, I'm used to thinking in terms of R2, and I like having the ability to compare to the old R2 statistic that I'm familiar with.

To calculate PRESS, first we calculate the predictive residuals, then take the sum of squares (thanks to Walker’s helpful blog post for this). This is pretty easy if we already have a linear model. It would take a little more work in Excel.

pr <- residuals(my.lm)/(1 - lm.influence(my.lm)$hat)
PRESS <- sum(pr^2)
PRESS
## [1] 19.9

The predictive R2 is then (from a helpful comment by Ibanescu on LinkedIn) one minus the PRESS divided by the total sum of squares. The total sum of squares can be calculated directly as the sum of squared deviations of the response from its mean, or obtained by summing over Sum Sq from an anova() on our linear model. I prefer using the anova function, as any statistical subtleties are more likely to be properly accounted for there than in my simple code.

# anova to calculate residual sum of squares
my.anova <- anova(my.lm)
tss <- sum(my.anova$"Sum Sq")
# predictive R^2
pred.r.squared <- 1 - PRESS/(tss)
pred.r.squared
## [1] 0.5401

You'll notice that this is smaller than the adjusted R2, which is itself smaller than the basic R2. This is the point of the exercise. We don't want to fool ourselves into thinking we have a better model than we actually do. One way to think of this is that 29% of the apparent explanation (83% – 54%) comes from having too many factors and from chance correlations, which we would have attributed to our model if we were just using Excel's built-in function.

When the model is good and has few terms, the differences are small. For example, working through the examples in Mitsa's two posts, we see that for her model 3, R2 = 0.96 and the predictive R2 = 0.94, so calculating the predictive R2 wasn't really worth the extra effort for that model. Unfortunately, we can't know, in advance, which models are “good.” For Mitsa's model 1 we have R2 = 0.95 and predictive R2 = 0.32. Even the adjusted R2 looks pretty good for model 1, at 0.94, but we see from the predictive R2 that our model is not very useful. This is the sort of thing we need to know to make correct decisions.

Automating

In R, we can easily wrap these in functions that we can source() and call directly, reducing the typing. Just create a linear model with lm() (or an equivalent) and pass that to either function. Note that pred_r_squared() calls PRESS(), so both functions have to be sourced.

pred_r_squared <- function(linear.model) {
    lm.anova <- anova(linear.model)
    tss <- sum(lm.anova$"Sum Sq")
    # predictive R^2
    pred.r.squared <- 1 - PRESS(linear.model)/(tss)
    return(pred.r.squared)
}
PRESS <- function(linear.model) {
    pr <- residuals(linear.model)/(1 - lm.influence(linear.model)$hat)
    PRESS <- sum(pr^2)
    return(PRESS)
}

Then we just call the function to get the result:

pred.r.squared <- pred_r_squared(my.lm)
pred.r.squared
## [1] 0.5401

I've posted these as Gists on GitHub, with extra comments, so you can copy and paste from here or go fork or copy them there.


Issue Logs and Risk Registers

Every product development project includes uncertainty over what will happen. The uncertainty—each assumption or best guess—reduces our chances of project success. The job of the project manager and team members is to ensure success by managing risk.

When something goes wrong—deviates from the plan—it stops being a risk and becomes an issue that must be addressed to ensure success. Issues are those conditions that are having a negative impact on your ability to execute the project plan. You can easily identify them because they directly cause schedule slippage and extra work.

There are two simple tools that can—and should—be used on every project to manage risks and issues to prevent disaster. One is the risk register; the other is the issue log. In my experience, these two documents are often conflated, but they are distinct documents that should contain different information and drive different actions.

The risk register is a means of capturing risks that we want to monitor over the life of the project so that we can take action before they have a negative impact on the project. These are conditions that you have decided not to explicitly work into the plan, but don’t want to let “slip under the radar” to create big issues for you later.

The issue log is where you record any problems that were not accounted for in the plan and that threaten to delay the project, push it off budget or reduce the scope (e.g. reduce product performance).

Issue Log                                                               | Risk Register
Description of the issue                                                | Description of the risk
Underlying problem or cause of the issue                                | Risk profile: sources of uncertainty and the potential impact
Action plan                                                             | Potential actions
Priority or scheduling                                                  | Monitoring plan
Who is responsible for assuring this issue is resolved                  | Who is responsible for monitoring
Date opened and date resolved, sometimes a tracking number or other ID  | Date last updated, tracking ID

Issue Log

The issue log is fundamentally about corrective actions. The project has deviated from the plan, and now we need to get back on course to complete the project on time, on budget and with the agreed goals. The issue log is used to capture this information.

While the cause of the problem is often obvious, it is always a good idea to probe for deeper, systemic causes that could lead to further delays. Asking “why?” five times in order to permanently and irrevocably fix a problem doesn’t take very long compared to the total delays that a project can experience.

Risk Register

The hard part of a risk register is the risk profile. Different people respond differently to risk, and some are more comfortable with thinking about uncertain outcomes than others. These differences between people lead to a lot of variation and debate in identifying risks; a good strategy for making risk registers easy is to standardize. The best practices are to focus on the causes of the risk and the probable impacts and to standardize the process.

There has been a lot written about risk management. Some of the best, in my opinion, is the work by De Meyer, Loch and Pich, which was first brought to my attention by Glenn Alleman over at the Herding Cats blog. In their excellent book, Managing the Unknown: A New Approach to Managing High Uncertainty and Risk in Projects, they break down risk into two major components: relationship complexity and task complexity.

When the relationships of stakeholders or partners are complex—groups aren’t aligned—then you can expect disagreements and conflict. Successful strategies for dealing with relationship complexity include increased communication and more rigidly defined relationships.

When tasks are complex—there are many links between tasks, so that changing one task can affect many, or there is a high degree of uncertainty in what needs to be done—then the successful strategies range from critical path management to an entrepreneurial approach of working multiple solutions in parallel (see also De Meyer 2001).

By implementing these pairings of source of risk with management strategy in a risk register template, we can greatly simplify the process and drive more consistent risk management results. Adding in a simple analysis of the impact can help us with prioritization (where do we spend our resources monitoring) and monitoring frequency.

Monitoring is all about how you will know when to do something about the risk; that is, you want to decide in advance what condition will trigger you to move the risk to the issue log. Measures should be relevant to the risk and quantitative where possible, and the method of measurement should be clearly defined (you don’t want people disagreeing over the project plan just because they measure something differently). Set up measurement intervals that make sense by asking yourself how long you can go without knowing that you have a problem. Plot the results as a time series or on a control chart so that you can distinguish between normal variation in the measurement and a condition that requires action.

References

  • Loch, Christoph H, Arnoud De Meyer, and Michael T Pich. Managing the Unknown. Hoboken, New Jersey: John Wiley & Sons, 2006. Print.
  • De Meyer, Arnoud, Christoph H Loch, and Michael T Pich. “Uncertainty and Project Management: Beyond the Critical Path Mentality.” 2001: 1–23. Print.

R Function Reference

Updated below

The R Function Reference is a mind map that I created as a guide for novice and intermediate users of the R statistics language. When you first open it, I suggest that you collapse all the nodes by clicking on the “Expand/Collapse all nodes” button in the bottom left of the screen to make the map easier to navigate. You can also adjust the zoom level with the slider next to that button.

R Function Reference screenshot

The top-level nodes of the R Function Reference

The mind map is arranged in eight sections, or main branches, arranged by task. What do you want to do? Each branch covers a general set of tasks, such as learning to use R, running R, working with data, statistical analysis or plotting data. The end of each string of nodes is generally a function and example. The Reference provides code fragments, rather than details of the function or complete reproducible code blocks. Once you’ve followed the Reference and have an idea of how to accomplish something, you can look up the details in R’s help system (e.g. “?read.csv” to learn more about using the read.csv() function), or search Google or the online R-Help mailing list archives for answers using the function name.

There are a lot of useful nodes and examples, especially in the “Graphs” section, but the mind map is not complete; some trails end before you get to a useful function reference. I am sorry for that, but it’s a work in progress, and will be slowly updated over time.

Comments and suggestions are welcome.

Update 1

In comments, several users reported problems opening the mind map. With a little investigation, it appears that the size of the mind map is the problem. To try to fix the problems, I have split the mind map out into several small mind maps, all linked together.

The new main mind map is the R Function Reference, Main. The larger branches on this main map no longer expand to their own content, but contain a link to a “child” mind map. The link looks like a sheet of paper with an arrow pointing to the right; click on it, and a little cartoon speech bubble will pop up with a link that you have to click on to go to the child mind map. Likewise, the central nodes on the child mind maps contain a link back to the main mind map.

Due to load times and the required extra clicks, this may slightly reduce usability for users who didn’t have a problem with the all-in-one version, but will hopefully make the mind map accessible to a broader audience.

I have to offer praise to the developers of Mind42. Though I couldn’t directly split branches off into their own mind maps or duplicate the mind map, it was very easy to export the mind map as a native Mind42 file and then import it multiple times, editing the copies without any loss of data or links. The ability to link directly between mind maps within Mind42 was also a key enabling feature. Considering that this is a free web app, its capabilities are most impressive. They were also quick to respond when I posted a call for help on the Mind42 forum.

Please let me know how the new, “improved” version works.

The old mind map, containing everything, is still available, but I will not update it.

Process Stability

(Updated below)

While performing a web search, I remembered how difficult the concept of “process stability” can be. How do you know when a process is stable?

D. C. Montgomery, one of the recognized authorities on the subject of statistical process control, seems to give conflicting advice on this. For instance, he’s careful to point out the assumptions underlying all of the measures that one would use on a process, and unstable processes invalidate most or all of these assumptions. How do you know if a process is stable if none of your analyses are applicable?

Process stability needs an operational definition. Luckily, there are at least two:

1) No signals on the appropriate process behavior chart (a.k.a. control chart);

2) Cpk / Ppk == 1 and Cp / Pp == 1

Signals on a process behavior chart do not necessarily mean that a process is out of control (i.e. false signals are possible, and expected at certain mathematically determinable rates), but we can be sure of process stability if there are no signals.

Likewise, we can take issue with using the process capability indices Pp, Ppk, Cp and Cpk in this manner. All assume a normal distribution, which you only get with a stable process, so you shouldn’t trust them as measures of process capability. In this case, that’s fine: don’t report the actual values; just report the ratio of Cp to Pp or Cpk to Ppk. When the ratio is 1, the process is stable; the larger the ratio, the worse the process. Donald Wheeler discusses this use of Ppk and Cpk, and the measures’ relation to production costs, in his latest column for Quality Digest.

Whether or not the process is economical (i.e. Cpk and Ppk are high enough) is a question completely separate from stability.

Update:

I was discussing this with a friend who, for various reasons, needs to allow for some process drift. In other words, a Ppk less than Cpk is expected and acceptable, but only up to a certain point. The nice thing about the Cpk/Ppk ratio is that it’s simple: a ratio of 1 means the process is stable; a ratio greater than 1 means the process is not stable; a ratio of less than 1 means someone has made a mistake or is lying. If we need to allow for some process drift, we lose this simplicity.

So suppose that we have a Cpk of 1.66. There are then five standard deviations between the process mean and the nearest specification limit. Assuming a process drift of 1.5 Sigmas, our Ppk is 1.16, giving us a ratio Cpk/Ppk of 1.43. If, however, our Cpk is 1.00, then a process drift of 1.5 Sigmas gives us a Cpk/Ppk ratio of 2.00.
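
These two cases are easy to reproduce; here is a small R check of the arithmetic, applying the drift of d Sigmas directly to the distance between the process mean and the nearest specification limit:

# Ppk after a mean drift of d sigmas, and the resulting Cpk/Ppk ratio
ppk_after_drift <- function(cpk, d) (3 * cpk - d) / 3
cpk <- c(1.66, 1.00)
ppk <- ppk_after_drift(cpk, d = 1.5)
round(cbind(cpk, ppk, ratio = cpk / ppk), 2)
##       cpk  ppk ratio
## [1,] 1.66 1.16  1.43
## [2,] 1.00 0.50  2.00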

With an allowed process drift of a fixed number of Sigma, it’s no longer so simple to determine, from the Cpk/Ppk ratio, whether or not a process is “stable” within the limits set by management.

A slightly more sophisticated calculation is needed, then. What we can calculate is the ratio

(Long-Term Sigma − Short-Term Sigma) / Allowed Process Drift

If the result is less than or equal to 1, then the process is “good enough” (i.e. within our allowed drift). If the ratio is greater than 1, then the process is considered out of control and action needs to be taken to eliminate sources of variation. If the ratio is less than 0, then someone made a mistake or is lying (i.e. long-term Sigma can never be less than short-term Sigma).
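
A minimal R sketch of this check, assuming the allowed drift has been translated into the same units as the Sigmas (the numeric values are purely illustrative):

# Drift ratio: <= 1 is within the allowed drift, > 1 needs action,
# < 0 indicates a mistake (long-term sigma can never be less than short-term)
sigma_short <- 1.0                   # short-term sigma, e.g. from a control chart
sigma_long  <- 1.8                   # long-term sigma from all production data
allowed_drift <- 1.5 * sigma_short   # management's allowed drift, same units
(sigma_long - sigma_short) / allowed_drift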

Definitions

I was recently asked a question that raised some good design issues. The question went “why should changing this cause a change in that characteristic?”

The immediate and obvious answer was that it wouldn’t and couldn’t. Theoretically, a large decrease in this (X) might cause an increase of a few percent in that (Y); nothing more. Yet someone was claiming that decreasing X decreased Y, too.

They were right. No, the theoretical relationship isn’t wrong. It’s right.

The theoretical calculation is fairly straightforward. You put so much of X in, and, after some calculation, you get so much of Y out. The less X you have, the more Y you get. The hard part is figuring out just how much of X you’re putting in.

The measurement of Y introduces a bunch of variation based on other factors. You measure by changing certain conditions A, B and C. These, in turn, affect some other factors, M and N. X, A, M and N together determine what value you measure for Y.

So decreasing X affects the other factors in such a way that the net effect is a decrease in the measured value of Y.

“Oh, sure,” you respond. “But the theoretical calculation should account for that.”

Not really. The theoretical calculation should tell us what the best case is…what our target should be. The actual measurement is going to produce different results based on various factors, some of which we control and some we can’t. A calculation based on the measurement process would require uncertainty ranges and return a probability distribution; not a singular value. Messy.

Engineers and researchers need to treat both of these, the theoretical calculation and the measurement process, as definitions. If you’re designing for some characteristic, as a researcher or engineer you’re usually going to be concerned with the theoretical calculations. This is how you were taught in school, and you’ll naturally be interested in getting as close to the best case as possible. However, not everyone is going to be interested in the theoretical calculation. The folks in Quality who are checking the product for conformance will be more interested in how it’s measured, the operational definition, than in the theoretical definition. The manufacturing plant only wants to hear about the operational definition; for them, the world would be a better place without the theoretical definition.

As a design engineer, you need to be more concerned about the operational definition. You’ll be arguing that you designed a part for Y performance (or to “do Y“). The next question that management and your customers should (and probably will) ask is, how do you know you designed it to do that? The answer is always by data analysis. How do you get the data? Via the operational definition. What you know is determined by how you measure, and that’s the operational definition.

This has applicability well outside of engineering design. Physicists have been arguing this very point ever since Bohr and Heisenberg developed the Copenhagen interpretation of quantum physics. Management by objectives depends on the ability to close the loop by measuring outcomes. This means that management by objectives requires operational definitions of every objective (though few organizations actually get this far, and management by objectives becomes management by manager gut feeling). Even more enlightened management techniques, such as those advocated by Deming and Scholtes, require operational definitions to enable an organization’s performance improvement (e.g. through the use of control charts, which are only possible with operational definitions).

Use the theoretical definition to tell you the best possible case, but be sure to design according to the operational definition.

Beginning to End

Product development covers all activities from program initiation and concept development through the start of production or service delivery. There are many process models for product development, among them the classic waterfall, the spiral, the Systems V model, Lean and Agile. In the U.S. automotive industry, the product development process is defined, or at least constrained, by the Advanced Product Quality Planning (APQP) manual from the Automotive Industry Action Group, or AIAG. The standard in academia seems to be laid out in Ulrich and Eppinger’s Product Design and Development (U&E).

Most of these have some common features. Many start with defining the business goals and authorizing the project. The rest start with the next step: identifying customer needs. They also end somewhere between the hand-off to manufacturing and post-manufacturing support.

We can see that product development is a process that starts with the customer and ends with the customer. The output of product development is customer fulfillment, not merely an engineering design. The input is customer needs, not a product specification. Product development is not simply an engineering activity; it’s a blend of business and engineering activities, the goal of which is to maximize company profit through customer fulfillment. Product development is a customer-focused process, and it looks something like the cycle below:

Product Development as Customer-Focused Process

From a customer’s perspective, though, this process looks much simpler:

Product Development as Customer-Focused Process, From the Customer’s Perspective

One of the primary problems with product development is this delay between the customer expressing their needs and having those needs fulfilled. For you, the developer, all of the technical and market risk is wrapped up in this delay, and the market risk is the more troublesome of the two. Market risk is the risk that the customer will change their mind or develop a new set of priorities, or that a competitor will enter the market with a similar product before you do.

One of the key mitigation strategies for product development is the reduction of this delay. Design and manage your product development processes to bring the customer closer to their fulfillment.

To achieve this in a consistent and effective manner, you have to understand the economics of your development projects and the market. Every decision in product development is a trade-off, and these trade-offs need to be focused on the goal: increasing profit by increasing the gap between value and costs. For instance, you will be faced with a choice: spending more time in requirements gathering and analysis vs. decreasing the delay in delivering product to the customer. Just how you balance this depends on the cost of a performance shortfall (technical risk) vs. the cost of delay. With highly risk-averse customers, the cost of a performance shortfall is much greater than the cost of delay, which is probably why aerospace projects are notorious for falling behind schedule yet are often held up as the gold standard for safety and technical performance. In contrast, consumer electronics tend to have very short time-to-market, but notoriously poor reliability; the customers value immediate fulfillment over technical performance.

It is important, too, to recognize that these kinds of trade-offs are not made just once; they are made on a daily basis. The upper management of the development organization needs to understand these economics so that they can design the product portfolio and product development strategy (e.g. choosing between more modular designs that shift technology development off the critical path of customer deliverables, and more integrated designs that are more closely tailored to customer needs). The program managers and design-responsible engineers need this knowledge, too, in order to intelligently design and manage the product.

Your product development processes, then, must be designed to provide rapid feedback to project managers and engineers relative to these trade-offs between risks, and to assist them in making consistent decisions. The natural result of this line of reasoning is the development of decision tools, standard cost models and standard measurements focused on technical performance risk, project expenses, product costs and delays.