Predictable Performance Report Example with Analytics

This predictable performance report example illustrates how a traditional table of numbers can be transitioned to a predictive statement.  When the predictive statement is undesirable, analytics can give insight into where focus should be placed to improve the output response.

Improving performance reports with probability plots has much value, both as a general practice and as a primary component when creating an Operational Excellence (OE) system.

In an Operational Excellence system, this reporting technique can help orchestrate the determination and execution of strategic performance measure improvement needs that benefit the big picture.

Providing performance measures and reporting Key Performance Indicators (KPIs) in a format that leads to the best organizational behaviors has much bottom-line value and can, at the same time, significantly reduce business risks.

The following case study example illustrates how a 30,000-foot-level report-out, with its probability plot, both highlighted data issues that had not previously been uncovered and gave valuable additional insight that could be used to improve the process.

The 30,000-foot-level reporting format improves performance reports through an alternative to traditional KPI report-outs that includes an easy-to-understand probability plot picture of how a continuous-output process is performing.

Case Study: Understanding and Improving a KPI Performance Metric

This article’s data set analysis is the result of recent work with a client, who gave us permission to use the data.  This case study shows how a 30,000-foot-level predictive performance report, with a probability plot, highlights that a past change occurred and that the current stable process is not meeting expectations.  To improve the performance report (i.e., improve the current KPI response), a process change is needed.  The 30,000-foot-level report-out analysis also provided insight on where these process enhancements should focus.

Application of Lean Six Sigma tools and/or a lean kaizen event can be used to facilitate the process change.  The 30,000-foot-level performance report-out is then used not only to verify that a change occurred but also to quantify the magnitude of improvement from the process improvement effort.

Benefits of 30,000-foot-level Reporting

Traditional control charts (e.g., x-bar and R charts and p-charts) can lead to firefighting by treating common-cause variability as though it were special cause.  This issue is resolved with 30,000-foot-level reporting.  This high-level reporting format can be useful for tracking an organization’s KPIs and most other time-series events.  A 30,000-foot-level performance report-out provides a prediction statement when a process is determined to be stable from a high-level point of view.

Organizations benefit when they employ this form of reporting throughout the business, using an Integrated Enterprise Excellence (IEE) operational excellence system’s value chain. This IEE value chain addresses the metric-reporting issues described in a one-minute video, while also providing a structured linkage with the processes that created the metrics. When data are automatically updated, this form of reporting provides a vehicle for transparency in metric reporting, where no special report-outs are needed for executive management reviews; i.e., it avoids the risk of unpleasant metric occurrences being filtered out of executive reports.  This transparency could help organizations avoid very detrimental issues, even deaths.  With the form of reporting described in this article and a desire for transparency reporting at the executive level, even catastrophic events such as the BP oil spill and the Blue Bell Ice Cream listeria contamination might have been avoided, had corrective action been taken when early safety/contamination issues were first identified through 30,000-foot-level transparency reporting (to all), before ballooning into a major crisis.

When data are continuous, 30,000-foot-level reporting includes a probability plot from a normal or other distribution; however, organizational leadership can be reluctant to include this plot in a report-out.  A commonly stated reason is that the plot looks too complicated and is not necessary to describe how the process is performing.  However, the plot is not complicated, and its benefit can be shown in a few minutes.  Much can be gained when people at all levels of an organization become familiar with the use of this valuable tool in their metric reporting.

The example below illustrates the reason for including a probability plot in a continuous-response performance metric report-out and what risks occur when this plot is not included in a scorecard report-out.

Original KPI Data for Creating Performance Measures

The following time-series data illustrate the duration of machine maintenance tasks in a production process; however, the data could also represent:

  • Outage time for a computer server during scheduled or unscheduled events
  • Resolution time for one randomly chosen weekly customer problem
  • Resolution duration for one randomly chosen daily customer call

Downtime Table, Initial Data

Traditional Performance Metric Report-outs

To gain additional time-series information, one could report the data in a time-series chart, as shown in the following Minitab statistical software output figure:

Downtime Time Series Plot, Initial Data


This chart indicates a lot of output-response variability but does not provide statistical insight into whether the process has any special-cause or unusual-event conditions, or how the process is performing relative to expectations.

An XmR Chart to Assess Process Stability

An XmR control chart can be used to determine if any special cause conditions have occurred.

Downtime XmR Chart, Initial Data


This chart indicates a large downward swing in the moving range at one point, which in the real world might be either ignored or investigated. The chart also indicates process stability but gives no indication of how the process is performing relative to business needs or desires.  Traditionally, process capability indices provide this information through values such as Cp, Cpk, Pp, and Ppk; however, these report-outs have issues.  They can be difficult to interpret, are a function of how the process is sampled, and require a specification.  For this data set, there is no specification.  A goal or desire is not the same as a specification and should not be used for determining these indices.
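As a rough sketch of the XmR calculations discussed above, the individuals-chart limits come from the average moving range; the data values below are hypothetical illustration data, not the article’s:

```python
# Sketch of XmR (individuals and moving range) control-limit calculations.
# The downtime values here are hypothetical illustration data (minutes).
downtime = [95, 130, 160, 110, 145, 88, 170, 125, 102, 150]

n = len(downtime)
mean = sum(downtime) / n

# Moving ranges: absolute differences between consecutive points.
moving_ranges = [abs(downtime[i] - downtime[i - 1]) for i in range(1, n)]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Individuals-chart limits use the constant 2.66 (3/d2, with d2 = 1.128
# for a moving range of size 2).
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# Moving-range chart upper limit uses D4 = 3.267.
mr_ucl = 3.267 * mr_bar

print(f"Individuals chart: LCL={lcl:.1f}, mean={mean:.1f}, UCL={ucl:.1f}")
print(f"Moving-range chart: UCL={mr_ucl:.1f}")
```

If zero falls inside the computed individuals-chart limits for a quantity that cannot physically be negative, that is the transformation indicator discussed later in this article.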

Using a Probability Plot to Quantify Process Capability when there is No Specification

A better approach than using capability indices is creating a probability plot of the data with a report-out of the estimated population median (or mean) and 80% frequency of occurrence.

Downtime Process Capability Probability Plot, Initial Data


From this probability plot one could state that the process is performing with an approximate median response of 114.2 minutes and 80% frequency of occurrence between 47.4 and 181.1 minutes.
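The numbers in such a statement can be reproduced from the fitted distribution behind the probability plot. A minimal sketch using Python’s standard-library NormalDist, with hypothetical data (the article’s initial data set gave a 114.2-minute median and a 47.4-to-181.1 interval):

```python
from statistics import NormalDist, mean, stdev

# Hypothetical downtime sample (minutes) for illustration only.
downtime = [95, 130, 160, 110, 145, 88, 170, 125, 102, 150]

# Fit a normal distribution -- the model behind the straight line
# on a normal probability plot.
dist = NormalDist(mu=mean(downtime), sigma=stdev(downtime))

# "80% frequency of occurrence" is the interval between the 10th and
# 90th percentiles of the fitted distribution.
median_est = dist.inv_cdf(0.5)
lower_80 = dist.inv_cdf(0.10)
upper_80 = dist.inv_cdf(0.90)

print(f"Estimated median: {median_est:.1f} minutes")
print(f"80% frequency of occurrence: {lower_80:.1f} to {upper_80:.1f} minutes")
```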

With 30,000-foot-level reports, the individuals chart and probability plot appear in one graphical report-out; however, executives in an organization may have decided that probability plots are too technical or not valuable.  Because of this, they make it a policy that probability plots are not to be included in organizational reports.  With this type of policy, one could create a report-out that looks like:

Process Downtime, Modified 30,000-foot-level Report-out, Initial Data


This form of reporting, where a statement at the bottom of the chart describes how the process is performing, can be appealing when executives state that they do not want to see a probability plot in the report-out.  However, executives and others lose a great deal of insight into how the process is actually performing, and into whether any action should be taken, when this data presentation is excluded.

Often, a statement at the bottom of the chart is simply read but not really appreciated.  Yes, the statement in the above report-out describes a process median response of 114 minutes, with four out of five occurrences (i.e., 80%) expected to be between 47.4 and 181.1 minutes.  But does everyone really appreciate the meaning of this statement?  In addition, did the person who determined the value placed at the bottom of the chart determine it correctly?

It has been my experience that a statement without a picture does not have the same impact as one with a picture.  Also, much insight is missing when there is no picture that illustrates the amount of process variability occurring.  A probability plot may initially look intimidating, but the graph is quite simple to understand once any plot-appearance intimidation is overcome.

Among other things, a modified 30,000-foot-level report-out that does not include an appropriate probability plot has the following issues:

  • There is no insight into whether the person who made the decision about the process’s stability, and who reported the median value with its 80% frequency of occurrence or its non-conformance rate, did so correctly.  This bottom-of-the-chart statement is a function of one person’s opinion about process stability, whether multiple distributions are present, and/or whether a transformation should be made, among other things.  With this alternative form of reporting, bottom-of-the-control-chart statements are typically the result of inputs from someone who is not necessarily familiar with the process.
  • A basic, pictorial understanding of the process’s variability is missing.  Process variability should not be assessed from the output of a control chart.  A control chart is to assess process stability, determine if a process has changed, and identify special-cause events; it does not provide a process capability statement.  A 30,000-foot-level chart addresses these issues from a high-level point of view, while a traditional control chart is to “control” a process.
  • When subgrouping of a continuous-response variable occurs, a 30,000-foot-level chart includes an individuals chart of the mean response and of the standard deviation (or log of the standard deviation) for the purpose of assessing process stability. The input for creating the probability plot is the raw data collected since the most recent staging that may have been included in the control chart.  Including both a mean and a standard-deviation plot with a simple performance statement below this two-control-chart pair can be very confusing to a reader of the chart.

With inclusion of a probability plot, both people close to the process and others can ask questions that can provide much value as to:

  • Whether any action or non-action should be taken relative to the process’s performance.
  • Whether the process data should be presented differently so that the 30,000-foot-level report presents better or more accurate information.

Performance Metric Report-out with Probability Plot and Identification of Data Issues

The inclusion of a probability plot in a 30,000-foot-level report-out of the above data would yield:

Process Downtime, 30,000-foot-level Report-out, Initial Data


From this plot, one notices that zero is within the control limits.  This is an indicator that a transformation is needed; however, the probability plot does not show the curvature one would expect when a transformation is needed.  Instead, the probability plot indicates that there could be a knee in the plot; i.e., the three lowest points may be from a different distribution.

Someone who is familiar with the process stated to me that the time to complete the maintenance task could not be as low as the values from these three data points.  They pointed out that the task should take about 90 minutes to complete. Upon closer examination of the data, it was noted that some maintenance occurred over more than one shift, where a time was initially recorded for each shift’s participation in the maintenance; i.e., on 9/2/15, 11/19/15, 4/30/16, 5/15/16, 6/17/16, and 8/26/16.

Downtime Table, Problems with Initial Data

30,000-foot-level Performance Measurement Report-out with Adjusted Data Set

The actual maintenance time for the equipment should be the sum of the times for each shift, as noted by the highlighted values in the following table. When these highlighted maintenance times are combined, the resulting data set is:

Downtime Table, Corrected Data
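The shift-time correction above can be sketched in a few lines: group the recorded times by maintenance date and sum them, so each multi-shift event contributes one duration. The dates and minutes below are hypothetical, not the client’s values:

```python
from collections import defaultdict

# Hypothetical (date, minutes) records. Where maintenance spanned shifts,
# each shift logged its own time; the true event duration is the date's sum.
records = [
    ("9/2/15", 60), ("9/2/15", 75),      # one event recorded by two shifts
    ("9/9/15", 130),
    ("11/19/15", 45), ("11/19/15", 80),  # one event recorded by two shifts
    ("12/3/15", 150),
]

# Combine per-shift times into one duration per maintenance event.
combined = defaultdict(int)
for date, minutes in records:
    combined[date] += minutes

for date, total in combined.items():
    print(date, total)
```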


A 30,000-foot-level report-out of this data set is:

Downtime 30,000-foot-level Report-out No Staging, Corrected Data


From the probability plot, it appears that there could still be a bi-modal distribution; i.e., there are three or four especially low values.  The person who is familiar with the process said that these values are lower than what should occur in the process.  He also commented that some of the plotted values are much larger than what should be occurring.

Data Analyses of Performance Metric Reporting Data

It appears that the process variability has decreased since 3 April 2016. The following control chart indicates that control-charting rule 7 occurred; i.e., 15 points in a row were within one standard deviation of the center line.

Downtime Control Chart Indicating Process Change, Corrected Data
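Rule 7 can be checked mechanically: scan for a long-enough run of consecutive points within one standard deviation of the center line. A sketch (Minitab’s test 7 uses a 15-point run; the data in the example call are hypothetical):

```python
# Sketch of a control-chart "points within one sigma" run test.
def rule7_indices(values, center, sigma, run_length=15):
    """Return each index at which a run of `run_length` consecutive
    points within +/- 1 sigma of the center line completes."""
    flags, run = [], 0
    for i, v in enumerate(values):
        run = run + 1 if abs(v - center) < sigma else 0
        if run >= run_length:
            flags.append(i)
    return flags

# 16 points hugging the center line trigger the rule at indices 14 and 15.
print(rule7_indices([100.5] * 16, center=100, sigma=10))  # [14, 15]
```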


From the following dot plot, it appears that there is more variability in the response before 3 April 2016 than on 3 April 2016 and later; however, the mean duration does not seem to have changed.  This conclusion is supported through the following statistical analyses.



Two-Sample T-Test and CI: Equipment Minutes Down, Grouping

Two-sample T for Equipment Minutes Down

Grouping          N   Mean  StDev  SE Mean
Apr2016andLater  15  133.8   16.4      4.2
BeforeApr2016    18  135.1   53.8       13

Difference = μ (Apr2016andLater) – μ (BeforeApr2016)
Estimate for difference:  -1.3
95% CI for difference:  (-29.2, 26.6)
T-Test of difference = 0 (vs ≠): T-Value = -0.10  P-Value = 0.923  DF = 20
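The T-Value and DF above can be cross-checked from the summary statistics alone. The DF = 20 in the output implies an unpooled (Welch) test; a sketch of that calculation, using the means and standard deviations from the output:

```python
from math import sqrt

# Welch (unpooled) two-sample t statistic and Welch-Satterthwaite
# degrees of freedom, computed from summary statistics.
def welch_t(n1, m1, s1, n2, m2, s2):
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Summary statistics from the Minitab output above.
t, df = welch_t(15, 133.8, 16.4, 18, 135.1, 53.8)
print(f"T-Value = {t:.2f}, DF = {int(df)}")  # DF truncated, as Minitab does
```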


Downtime Two-Variance Analysis Before and After April 2016, Corrected Data


To address the question of why there was a difference in variability, an Analysis of Means (ANOM) of the data was conducted to compare each shift’s average response to the overall mean, both before and after the change.  The before-3-April-2016 analysis indicates that shift 3 took significantly longer, on average, than the other shifts or shift combinations.


Downtime ANOM Comparison by Shift before April 2016, Corrected Data
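ANOM compares each group mean to the grand mean against decision limits of the form grand mean ± h * s * sqrt((k-1)/(k*n)). The sketch below uses hypothetical balanced shift data and an illustrative critical value h; real h values come from ANOM tables and depend on alpha, the number of groups, and the error degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical, balanced per-shift maintenance durations (minutes).
shifts = {
    "shift1": [120, 135, 128, 140],
    "shift2": [118, 126, 131, 124],
    "shift3": [165, 172, 158, 170],  # deliberately longer, like the article's shift 3
}

k = len(shifts)                        # number of groups
n = len(next(iter(shifts.values())))   # per-group sample size (balanced)
grand_mean = mean(v for vals in shifts.values() for v in vals)

# Pooled standard deviation (balanced case: mean of the group variances).
s_pooled = sqrt(mean(stdev(vals) ** 2 for vals in shifts.values()))

h = 2.9  # illustrative ANOM critical value -- look up the real h(alpha, k, df)
margin = h * s_pooled * sqrt((k - 1) / (k * n))

for name, vals in shifts.items():
    m = mean(vals)
    verdict = "outside" if abs(m - grand_mean) > margin else "within"
    print(f"{name}: mean={m:.1f}, {verdict} decision limits")
```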


A similar analysis of the data from 3 April 2016 and later did not detect a difference for any shift or blended-shift occurrence.

Downtime ANOM Comparison by Shift April 2016 and Later, Corrected Data


30,000-foot-level Predictive Performance Metric Report-out Using Adjusted Data

A 30,000-foot-level report-out with the inclusion of a stage at 3 April 2016 is:

Improving Performance Reports with 30,000-foot-level Reporting


In the above normal probability plot, the data follow a straight line; hence, one distribution (i.e., the normal distribution) is believed to be an adequate model for stating not only how the process is currently performing but also how it is expected to perform in the future.  Also, zero is no longer within the individuals control chart limits, which is another indicator that no transformation is needed.

Conclusions from 30,000-foot-level Analyses, which included a Probability Plot, for Improving Performance Reports

Presentation of a combination report-out of an individuals chart with a probability plot yielded:

  • More proactive data investigation than a simple statement at the bottom of a chart (e.g., median duration of 134 minutes and 80% frequency of occurrence between 82 and 187 minutes), which typically is considered a for-your-information statement rather than a potentially important, actionable need that should be addressed.
  • Determination that a process change occurred and the reason for the change.
  • A statement of how the process is now performing and what could be expected in the future unless something is changed; i.e., an estimated median maintenance duration of 134 minutes with an 80% frequency of occurrence between 113 and 155 minutes.
  • An expert in the process stated that the current and predicted times are longer than they should be.  This expert is planning to investigate with operations why this process is taking so long to execute.  Note: if this process is an operational constraint to the business flow, improving it improves productivity for the entire business!
  • An expert in the process expected that, on 3 April 2016, the mean would have decreased along with the variability, since the third shift no longer typically conducts this equipment maintenance.  This expert is planning to investigate with operations why a reduction in maintenance time did not occur at that point.
  • When an organization presents data using the above format throughout the business, improvement efforts can be determined that benefit the enterprise as a whole!

Implementing Operational Excellence for Improving Performance Reports

The last six words of a previous Wikipedia definition of Operational Excellence were ″sustainable improvement of key performance metrics.″  For an organization to determine that an improvement was actually made in a key performance metric, the business needs to report performance measures from a process point of view, which is what 30,000-foot-level metric report-outs provide.  The inclusion of a probability plot with this report-out, when appropriate, can provide valuable insight into how the metric might be improved.

Organizations benefit when they include 30,000-foot-level performance metrics throughout the organization, with a structured linkage to the processes that created these metrics.  This need can be addressed through an Integrated Enterprise Excellence value chain that can be clickable and automatically updated through linkage to existing ERP systems, spreadsheets, etc.

Easy Creation of 30,000-foot-level Reports: Improving Performance Reports with Probability Plots

A no-charge Minitab add-in is available for download, with which users can easily create 30,000-foot-level charts.

A one-minute video describes how the IEE system addresses issues with traditional scorecard and improvement systems:


predictable performance report example video description





Contact Us to set up a time to discuss with Forrest Breyfogle how your organization might gain much from an Integrated Enterprise Excellence (IEE) Business Process Management System implementation and its 30,000-foot-level predictive scorecard methodology. 

