Minimum Sample Size: Is It Relevant?

“What is the minimum sample size?” may be the most common question a statistician or a Lean Six Sigma practitioner is asked.  Everyone assumes this question has a straightforward answer, but it is truly a complicated question to answer.  Surprisingly, the answer is based more on business considerations than on statistical issues.

Business Issues with a Minimum Sample Size

These are the easiest to recognize, because all of us will face business constraints at one time or another.

  • Limited time to obtain data.
  • A budget that will only afford a finite number of data points.

In most of my Lean Six Sigma experience, time or money limits my data collection before I ever need to perform a minimum sample size analysis.  In those cases, the minimum sample size is simply all the data you can obtain before hitting the time or cost constraint.

Considerations for all minimum sample size questions

The minimum sample size required for any analysis is a function of the same four factors: variation in the system, the minimum detectable difference, the confidence level of the decision, and the power of the test.

Minimum sample size

  • The higher the system variation, the more samples will be needed.
  • The smaller the change that is being evaluated, the more samples will be needed.
  • The higher the confidence desired, the more samples are required.
  • The higher the power of the test, the more samples are required.
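The four relationships above can be sketched with the standard formula for comparing two means, n ≈ 2((z₍α/2₎ + z₍β₎)·σ/δ)². The values below are hypothetical, chosen only to show how each factor moves the required n:

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(sigma, delta, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means.

    sigma: system standard deviation
    delta: minimum detectable difference
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # confidence term
    z_beta = NormalDist().inv_cdf(power)           # power term
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# More variation, a smaller difference, or higher confidence/power all raise n:
print(min_sample_size(sigma=5, delta=5))             # baseline
print(min_sample_size(sigma=10, delta=5))            # higher variation
print(min_sample_size(sigma=5, delta=2))             # smaller difference
print(min_sample_size(sigma=5, delta=5, power=0.9))  # higher power
```

Doubling the variation or halving the detectable difference roughly quadruples the required sample, which is why small detectable differences get expensive fast.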

This information is taught to all Lean Six Sigma students, but it is not really sufficient when they face a real business situation.

How do I know that the training is not sufficient?  I routinely ask the people in our Master Black Belt classes, who are all strong Black Belts, how often they use sample size calculators in their projects.  Guess what I hear….   Most of them have not used a sample size calculator since their Black Belt classes.   The reason given is that they never have the time or ability to get all of the data they would like; they work with whatever data is available.  This supports my advice to most belts: the data you have is enough.

My favorite minimum sample size example for students is a Black Belt who completed a successful project with only three historical data points.  The project was to reduce the time it took to take a building from concept to the point where the design went out to a contractor for bid.  The company had put up three buildings in the past 15 years but expected to put up eight in the next two years.  The belt identified as many design-process activities as he could that were also performed in other areas of the business and used them for evaluation and improvement.  He then created a process from the best practices he found, simulated it, and showed that the new process was expected to have a lead time of around 40% of prior efforts.  After implementation, the new process ran close to 30% of the lead time experienced before his project.  The change was so significant that he needed only two buildings to demonstrate it.  With three historical data points and two new-process data points, he was able to statistically prove a significant change.  I cannot imagine ever seeing another Lean Six Sigma project that shows a significant change with a total of five data points.
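To see how five observations can be enough when the shift is large, here is a sketch with invented lead-time numbers (three historical builds, two after the change — the actual project data is not given in the post). A pooled two-sample t statistic dwarfs the one-sided 5% critical value for 3 degrees of freedom:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical lead times (months): three historical builds, two post-change.
before = [100, 95, 105]
after = [32, 30]

n1, n2 = len(before), len(after)
# Pooled sample variance across both groups (df = n1 + n2 - 2 = 3).
pooled_var = ((n1 - 1) * variance(before) + (n2 - 1) * variance(after)) / (n1 + n2 - 2)
t_stat = (mean(before) - mean(after)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

print(round(t_stat, 1))  # t ≈ 18.2, far beyond t(0.95, df=3) ≈ 2.353
```

When the improvement is several standard deviations wide, even a handful of points clears the significance bar comfortably.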

When are sample size calculators used in Lean Six Sigma?

I find that my primary use of minimum sample size calculators comes after I have implemented a change, when I want to know how long the new process must run before I can test whether the change is significant.  At the end of a project, I have an idea of how much change I will introduce into the process along with the expected standard deviation of the process.  Using these two facts, the sample size calculator determines how far into the future I need to wait before testing for a significant improvement.  My questioning of Master Black Belt students has confirmed that this is the most common use of the calculators.
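That end-of-project use can be sketched as follows; all numbers (the expected shift, standard deviation, and throughput) are hypothetical. Given the expected improvement and standard deviation, compute the samples needed and convert that to a waiting period at the process's throughput:

```python
from math import ceil
from statistics import NormalDist

def samples_needed(sigma, delta, alpha=0.05, power=0.80):
    """Samples to detect a shift of delta in a process with std dev sigma."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil((z * sigma / delta) ** 2)

# Hypothetical: expect a 4-day cycle-time reduction, sigma = 6 days,
# and the process completes about 5 transactions per week.
n = samples_needed(sigma=6, delta=4)
weeks = ceil(n / 5)
print(n, weeks)  # how many data points, and how many weeks to wait
```

The calendar conversion is the practical output: it tells the team when to schedule the follow-up hypothesis test.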

What is relevant to sampling?

I believe that Lean Six Sigma training should change how it teaches minimum sample sizes.  The actual size of a sample is not usually in the control of a Black Belt.   Training should spend more time on representative samples!

Because the only reason we take a sample is to infer characteristics of the population, the “goodness” of the sample is more important than its size.  The goodness of a sample is a measure of how representative it is of the population.  Most training undersells the importance of data quality.  The discussions usually cover random samples, but I have learned that random sampling is generally not a concern for a Lean Six Sigma practitioner.  Most of the historical data for our projects includes all of the existing data, and this existing data set is provided without much context or understanding of the process performance when it was collected.  I have observed so many problems with historical data that I have learned to be skeptical of its quality.  Problems such as:

  • A failure to record known bad process data.
  • A failure to include process start-up data because it is known to be a bit off.
  • Inclusion of only first-pass good product or transaction data, keeping reworked or manually processed data separate.
  • Exclusion of process or transaction data that did not follow the common process, such as delayed items.
  • Not collecting all of the data that is expected, for any number of reasons.
  • Collection of cycle-time data that does not include queue time (very common).
  • Efficiency data that excludes events that would reduce the efficiency calculations.
  • And on and on….

Most of my biggest failures in Lean Six Sigma efforts have involved working with non-representative data that I thought represented the process.

Small data sets can be sufficient for a project if the data is representative of the process population.  Consider the difference in sample size between a Design of Experiments (DOE) and a traditional ANOVA.  We are comfortable with a DOE that has 16 observations, but we would want samples of around 25 per group, if not more, for an ANOVA test.  The difference is the effort a DOE invests in making each observation as representative of its test setup as possible, while for ANOVA we do not worry about it.

Representative Sampling

In our Smarter Solutions belt courses, we minimize the teaching of sample size calculators and replace it with discussions of sampling methods.  We cover four basic approaches and stress that the representative nature of the sample is the most important characteristic to manage.

No sampling, use all data:  This is common but can easily be non-representative for the reasons listed in the prior section.  In this case the belt must spend a lot of time ensuring the data is representative, and may need to adjust the data set by removing some data or adding data for missing process conditions.  I encounter this situation often when there is not a lot of data available.

Random Sampling:  This is a reasonable method when you have a data source you know very little about, such as a batch of products or transactions provided without order or source information.  In this case, sample randomly so that every item has an equal chance of being selected.  This is probably my least used method.
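For an unordered batch like the one described, the draw itself is a single call; the batch contents below are made up for illustration, and `random.sample` gives every item an equal chance of selection without replacement:

```python
import random

random.seed(1)  # fixed seed only so the illustration is reproducible

# Hypothetical batch of transactions with no order or source information.
batch = [f"txn-{i}" for i in range(500)]

# Draw 30 items; each of the 500 has an equal chance of being selected.
sample = random.sample(batch, k=30)
print(len(sample), len(set(sample)))  # 30 distinct items
```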

Stratified Sampling:  Use this when there is a lot of process data and a lot of knowledge about the process.  We may know that the process produces different types of products or transactions using multiple shifts of people and/or different equipment and locations.  These differences could easily cause variation in the process data, and because they occur in a fixed rather than random manner, they provide the information needed to create a stratified sampling plan.  If we sampled randomly, we might not obtain data representing process conditions that occur infrequently.  A stratified sample uses process knowledge to define the process conditions that exist, then samples randomly from each condition to produce a sample with the same ratio of condition data as would be found in the entire data set.

Example:  Suppose I find that 30% of a process's items experience automated processing in location A, 55% automated processing in location B, 12% manual processing in location A, and 3% manual processing in location B.  I have four groups that are expected to differ slightly.  I choose a minimum of 21 samples in the smallest group (manual in location B), which works out to 7 samples per percentage point.  That leads to randomly selecting 210 (7 × 30) from the location A automated data, 385 (7 × 55) from the location B automated data, and 84 (7 × 12) from the location A manual data.  Now I have a sample of 700 with the exact ratio of the four conditions that exists in the full population.
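The arithmetic in that example generalizes: scale every stratum by the one factor that brings the smallest stratum up to its floor. A sketch, using the proportions from the example (the stratum names are just labels):

```python
# Stratum proportions from the example: automated/manual by location.
strata = {"auto_A": 0.30, "auto_B": 0.55, "manual_A": 0.12, "manual_B": 0.03}
min_per_stratum = 21

# One common scale factor keeps the sample ratios equal to the population ratios.
scale = min_per_stratum / min(strata.values())  # 21 / 0.03 = 700
plan = {name: round(p * scale) for name, p in strata.items()}

print(plan, sum(plan.values()))
# {'auto_A': 210, 'auto_B': 385, 'manual_A': 84, 'manual_B': 21} 700
```

The total sample size falls out of the smallest stratum's floor: a rarer smallest condition forces a larger overall sample if proportionality is to be preserved.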

Systematic Sampling:  When your process has a continuous flow (no lots or batches), the most representative approach is to pull samples on a constant count or time basis: one sample every 100 items/transactions, or one per hour.  This becomes your best sampling method because the required randomness already exists within the continuous process.  Your goal is to collect a small sample that represents the entire day, so spreading the samples out regularly across the day captures all the variability that existed within the process.
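A constant-count pull is just a stride through the stream; the stream below is a made-up sequence of 1,000 items, sampled every 100th item so the sample spans the whole day:

```python
# Hypothetical continuous stream of items, in production order.
stream = [f"item-{i:04d}" for i in range(1000)]

interval = 100               # one sample every 100 items
sample = stream[::interval]  # keeps items 0, 100, 200, ...

print(len(sample), sample[0], sample[-1])
```

A time-based variant works the same way: replace the item count with a fixed clock interval.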

The ultimate tool to identify a minimum sample size

The smallest and most representative sample that I believe can be achieved, while still being sufficient for the decisions you need to make, is obtained using a Design of Experiments (DOE) methodology.  DOE creates a data set that is perfectly uncorrelated with respect to the selected factors.  We understand how to use it for optimization, but it can also be used for sampling.

List all of the factors that need to be examined with your sample; in DMAIC projects, these are the factors that transfer from Measure to Analyze.  Select a high and a low value for each factor that represent the range of the process, then create a DOE with those settings.  Before you go forward, make sure that all combinations are achievable; if not, re-assign the factor levels until they are.  The next step is to create a data collection plan that collects data only when the selected factor combinations exist.
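The steps above can be sketched as a full factorial over the high/low settings; the factor names and levels here are invented. `itertools.product` enumerates the combinations, and the collection step keeps only records whose settings match one of them:

```python
from itertools import product

# Hypothetical factors carried from Measure into Analyze, each at two levels.
factors = {
    "shift": ["day", "night"],
    "machine": ["A", "B"],
    "material": ["std", "premium"],
}

# Every factor combination the data collection plan must cover (2**3 = 8).
plan = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(plan))

def matches_plan(record):
    """Collect a record only when its settings hit a planned combination."""
    return {name: record[name] for name in factors} in plan

on_plan = {"shift": "day", "machine": "B", "material": "std", "cycle_time": 4.2}
off_plan = {"shift": "day", "machine": "B", "material": "recycled", "cycle_time": 5.0}
print(matches_plan(on_plan), matches_plan(off_plan))
```

With a fractional design, `plan` would hold only the selected subset of rows, and the same membership check would target just those jobs.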

This will require analyzing your product or transaction mix and targeting certain jobs.  You may also need to slightly alter some jobs, or schedule them in a non-standard way, to achieve the right combinations.  In some cases you may need to pull jobs forward and release them early so they can be measured.  I have even built a job that was not ordered to obtain the data I needed; we figured it would sell at some point, but obtaining the best data for the project was more important.

There is no better method to obtain a small sample; i.e., minimum sample size.

Representative Sampling and Minimum Sample Size Summary

The goal is to collect a representative sample that is as small as you can get away with.  Most sampling is not limited by statistical rules but by the business: limited time, limited budget, or limited events will limit your sample size.

With the business limiting the amount of data, you will want to get the most from the data you can collect.  This is achieved through thoughtful sampling: use systematic sampling for a continuous process, and use stratified sampling for a non-continuous process that has identified, differently performing groups in the population.

If you are really constrained in cost or time, consider using a Design of Experiments tool to create a sampling plan.  Collect data that matches the identified DOE factor combinations and ignore the rest.

A good Lean Six Sigma program should teach you all of this as part of its Define-Measure-Analyze-Improve-Control (DMAIC) process improvement roadmap.


[Figure: minimum sample size considerations in the Lean Six Sigma roadmap]


Lean Six Sigma training is most beneficial when it follows enhanced business management and process improvement roadmap books that can be referenced long after the training.


[Image gallery: books that discuss minimum sample size]


Contact Us to set up a time to discuss with Forrest Breyfogle how your organization might gain much from an Integrated Enterprise Excellence (IEE) Business Process Management System and its enhanced lean Six Sigma training.
