Effective management of an organization’s key performance indicators (KPIs) is critical; however, traditional metric reporting and its accompanying goal-setting practices have problems. KPI metrics reporting is an elephant-in-the-room problem: organizations can waste significant time and resources, spend money unnecessarily, and generate much frustration.
KPI management should lead to the best actions or non-actions in an organization, but more often than not, this does not occur. This document describes commonplace issues with KPI management practices and how to resolve these issues with an alternative 30,000-foot-level KPI management reporting methodology.
The following video and its description show the benefits and use of a free app that creates 30,000-foot-level charts for organizational datasets.
KPI Management: Traditional Approach
The time it takes for someone to commute from home to work will be used to illustrate the benefits of a powerful 30,000-foot-level KPI management reporting approach.
Commute time would not be the same every day because travel time is a function of the amount of traffic encountered and delays from traffic signals, among other things. Mathematically, this relationship is Y=f(X), where Y is commute time and the Xs are the inputs to the process, e.g., amount of traffic and traffic-signal delays.
A commuter tracked her commute time for several weeks and observed that it typically took 25–35 minutes. One can consider this variation the “noise” within her home-to-work commute process time, i.e., a KPI management reporting component.
Our commuter noticed two unusually long commute times when collecting daily commute times (see Figure 1). These two commute times were over an hour, much longer than her typical 25–35 minute commute. A significant traffic accident caused one delay; the other occurred because of a snowstorm.
From a data analysis point of view, we can “talk about” the specifics of exceptionally long commute times, e.g., a major traffic accident and inclement weather.
When we want to understand and focus on our typical commute times, we should remove these abnormal points from our KPI management metric’s tracking, as shown in Figure 2.
After removing these atypical events, one should not “talk about” the specifics of what occurred on a particular day within the typical 25–35 minute commute-time response. Discussing the particulars of any individual “noise” data point in a process response can lead to erroneous conclusions and inappropriate actions relative to improving the magnitude of a future process output response.
This commuter wanted to reduce her commute time; hence, she set a goal for it, similar to the measured goal-setting approach used in the company where she is employed. Her goal was for commute time to be no longer than 33 minutes, as shown by the line in Figure 3.
With this KPI management metric reporting approach, if a commute time exceeded 33 minutes, she would try to determine what happened during that commute and decide what to do differently so this problem commute time would not happen again.
However, as previously stated, one should not focus on attempting to understand the specifics of individual “noise” data points in a process response.
This metric reporting approach can lead to much organizational firefighting and wasted effort and can even be destructive.
Our commuter’s goal-setting KPI management approach for her home-to-work commute is similar to organizations using red-yellow-green scorecards (see Figure 4) to monitor the output response from processes. With this scorecard approach, a red color triggers action because a specific data point did not meet a goal.
Red-yellow-green scorecards and similar goal-tracking methods for data points can lead to firefighting “noise” variation as though it were an abnormal occurrence, resulting in much organizational waste and even destructive behaviors.
In the 1980s, Dr. W. Edwards Deming highlighted problems with this commonplace meet-the-numbers goals management approach in his workshops using a red bead experiment exercise. The issues Deming described in this workshop exercise are no different from those of red-yellow-green scorecards, as described in the published article “The Improvement of Scorecard Management: Comparing Deming’s Red Bead Experiment to Red-Yellow-Green Scorecards.”
A 30,000-foot-level report format addresses this significant issue, where unusual events are separated from the typical noise of the process response. In addition, with 30,000-foot-level KPI management reporting, variation in the output’s response is included in the metric dashboard report-out.
The benefits of using an alternative 30,000-foot-level metric tracking approach will now be described.
KPI Management: 30,000-foot-level Metric Reports
Figure 5 shows a 30,000-foot-level report for our commuter data.
The 30,000-foot-level dashboard report shown in Figure 5 for our commuter’s initial commute times has three components:
- The individuals plot (left graph) assesses a measured response for “stability” from a high-level point of view, i.e., a 30,000-foot-level point of view (not unlike the view of the terrain below from an airplane window in flight). The mathematics for creating this chart is the same as for a Statistical Process Control (SPC) individuals control chart; however, the primary use of the individuals chart is entirely different in a 30,000-foot-level reporting application. With a 30,000-foot-level report dashboard, one is not attempting to “control” a process and identify special-cause signals for resolution. With 30,000-foot-level reporting, the individuals chart only assesses whether a process is stable from a high-level vantage point. If no points fall beyond the data-calculated UCL (upper control limit) and LCL (lower control limit) lines, the process is considered stable and predictable from this high-level perspective.
- The probability plot (right graph) plots the data from the individuals chart. If a recent region of a process’ output measurements is stable/predictable, the data used to create this plot is considered a random sample of future responses. This prediction statement assumes that future process inputs will be similar to those in the past.
- The statement at the bottom of the chart addresses process predictability. If the process is considered stable, a prediction statement is determined from the probability plot and provided at the bottom of this two-chart pair.
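To make the individuals-chart component concrete, the following is a minimal Python sketch of how XmR (individuals and moving range) control limits are conventionally calculated: centerline ± 2.66 × the average moving range. The function names and commute-time values are illustrative assumptions, not the article’s actual data or the app’s implementation.

```python
def individuals_chart_limits(values):
    """Individuals (XmR) chart centerline and control limits.

    UCL/LCL = mean +/- 2.66 * average moving range; 2.66 is the
    standard individuals-chart constant (3 / 1.128).
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def is_stable(values):
    """From this high-level view, a process is considered stable
    if no point falls beyond the calculated control limits."""
    _, lcl, ucl = individuals_chart_limits(values)
    return all(lcl <= v <= ucl for v in values)

# Illustrative commute times in minutes (not the article's data)
commute = [28, 31, 25, 33, 29, 27, 30, 34, 26, 32]
mean, lcl, ucl = individuals_chart_limits(commute)
print(f"mean={mean:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}  stable={is_stable(commute)}")
```

A commute time from a major accident or snowstorm would fall above the UCL, signaling instability from this high-level perspective.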
This figure shows a commute time that is not stable because of the two abnormal commute times caused by a major traffic accident and a snowstorm. Since we want to reduce commute time under normal conditions and have identified the cause of these atypical responses, one can remove these two data points from the chart so that it reflects typical commute times, as shown in Figure 6.
From this individuals chart (a 30,000-foot-level KPI management reporting component), one observes that the process is now considered stable; hence, the process is predictable. The statement at the bottom of the 30,000-foot-level report indicates that her current commute approach has an estimated mean response of about 30 minutes, with 80% (i.e., four out of five times) of her commute times between 25 and 34 minutes.
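A prediction statement of this "mean plus an 80% interval" form can be sketched as follows. Note that the article derives its statement from a fitted probability plot; this illustration uses simple empirical deciles (10th and 90th percentiles) as a stand-in, with invented commute-time data.

```python
import statistics

def prediction_statement(stable_values):
    """For a stable process, estimate the mean and an 80% interval
    (10th to 90th percentile) of future responses, assuming future
    process inputs resemble past ones."""
    mean = statistics.fmean(stable_values)
    deciles = statistics.quantiles(stable_values, n=10)  # 9 cut points
    return mean, deciles[0], deciles[-1]

# Illustrative stable commute times in minutes
commute = [28, 31, 25, 33, 29, 27, 30, 34, 26, 32]
mean, p10, p90 = prediction_statement(commute)
print(f"Estimated mean {mean:.0f} min; 80% of commutes between "
      f"{p10:.0f} and {p90:.0f} min")
```

Stating the interval alongside the mean is the key difference from a single red-yellow-green color: the reader sees the variation a stable process will continue to produce.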
Our commuter thought of another potentially significant X in the Y=f(X) relationship besides the amount of traffic and traffic-signal delays. This X was her commute departure time: she thought that if she departed ½ hour earlier, she would encounter less traffic, which could significantly reduce her commute time. As an experiment, she decided to record her commute times when leaving ½ hour earlier.
The 30,000-foot-level commute-time chart in Figure 7 includes both before and after her change in home departure times.
Figure 7 shows a distinct shift in the magnitude of the individuals chart response where our commuter’s new process began. Because of this shift, the individuals chart was staged. The probability plot uses only data from the recent stability region of the individuals chart (i.e., after its staging) to create the 30,000-foot-level probability plot and determine a reported predictive statement at the bottom of the report.
Figure 7’s 30,000-foot-level report shows that our commuter’s new process (departing ½ hour earlier) has an estimated mean commute time of about 20 minutes, with four out of five commute times (80%) between 19 and 22 minutes.
This ½-hour-earlier departure time resulted in a mean commute time that is much lower (and with reduced variation) than her previous process (i.e., not departing earlier), which had an estimated mean commute time of 30 minutes, with four out of five commute times (80%) between 25 and 34 minutes.
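Staging, as described above, amounts to computing individuals-chart limits separately on each side of a known process change and basing the prediction statement only on the most recent stage. A minimal sketch, with illustrative before/after data rather than the article’s:

```python
def xmr_limits(values):
    """Individuals-chart mean and control limits (mean +/- 2.66 * MR-bar)."""
    mean = sum(values) / len(values)
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Illustrative data: before and after the earlier-departure change
before = [28, 31, 25, 33, 29, 27, 30, 34, 26, 32]
after = [20, 21, 19, 22, 20, 21, 19, 20, 22, 21]
change_index = len(before)  # the day the new departure time began
all_times = before + after

# Stage the chart: compute limits separately on each side of the change
stage1 = xmr_limits(all_times[:change_index])
stage2 = xmr_limits(all_times[change_index:])
print(f"stage 1: mean={stage1[0]:.1f}, limits=({stage1[1]:.1f}, {stage1[2]:.1f})")
print(f"stage 2: mean={stage2[0]:.1f}, limits=({stage2[1]:.1f}, {stage2[2]:.1f})")
# Only the stage-2 data would feed the probability plot and the
# prediction statement at the bottom of the report.
```

Without staging, the shift itself would inflate the moving ranges and the limits, hiding the improvement; staging lets each region be judged for stability on its own.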
Traditional KPI management metrics reporting encourages reacting to the ups and downs of commonplace “noise” responses, leading to much wasted organizational effort and even destructive behaviors. Organizations need to avoid this elephant-in-the-room form of KPI management report-outs.
30,000-foot-level reporting provides a high-level process output response perspective that encourages process improvement, when needed, to improve a process-output metric’s response. Figure 8 compares the two dashboard reporting methodologies described in this article.
Do you think it would be best if your organization used dashboard metric reports that treat each data point as something to react to? Or is it better to show the output of processes from a high-level perspective, with its response-output variation, so that the most appropriate action or non-action occurs?
One might initially think that 30,000-foot-level reporting is too complex, but it is actually easier to understand than traditional KPI management reports.
With 30,000-foot-level KPI management, the statement at the bottom of the chart can provide, in easy-to-understand wording, a prediction statement that includes process variation; if this predicted future is undesirable, process improvement is needed. What could be easier to understand?
It should be highlighted that with 30,000-foot-level reports, if a specification exists for a process-output response, the report provides an estimated non-conformance rate at the bottom of the report-out when the process is considered stable.
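An estimated non-conformance rate of this kind can be sketched as below. This illustration assumes the stable data fit a normal distribution; the article’s probability-plot fit may use a different distribution, and the data and spec limit here are invented.

```python
import statistics
from math import erf, sqrt

def estimated_nonconformance(stable_values, upper_spec):
    """Estimate the fraction of future responses beyond an upper
    specification limit, assuming the stable data are approximately
    normally distributed."""
    mu = statistics.fmean(stable_values)
    sigma = statistics.stdev(stable_values)
    z = (upper_spec - mu) / sigma
    # P(X > upper_spec) under Normal(mu, sigma), via the error function
    return 0.5 * (1 - erf(z / sqrt(2)))

# Illustrative stable commute times and a hypothetical 25-minute spec
commute = [20, 21, 19, 22, 20, 21, 19, 20, 22, 21]
rate = estimated_nonconformance(commute, upper_spec=25)
print(f"Estimated non-conformance rate: {rate:.4%}")
```

Reporting a rate like this, rather than coloring individual points red or green, keeps attention on the process’s overall capability.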
Suppose you think 30,000-foot-level reporting could benefit your organization but are concerned that others will not “buy into” the concept. In that case, the challenge becomes how to advocate for the consideration of 30,000-foot-level reporting in your organization.
A good place to start in understanding 30,000-foot-level reporting and explaining its benefits to others is to select a KPI metric that is important to the business. Compile data for this metric over time and create a 30,000-foot-level report for the data using the free 30,000-foot-level app. Next, create a presentation that compares the organization’s traditional metric reporting approach with a 30,000-foot-level reporting alternative. A 30,000-foot-level KPI report will provide much more process insight than previously available. This additional insight can be very beneficial in determining what to do differently to improve a KPI’s future response – long term.
One can use the results of this metric reporting comparison to “advocacy sell” the consideration of 30,000-foot-level reporting in your organization.
The article “Driving Better Solutions: Metrics reporting that can lead to the best behaviors” by Forrest Breyfogle, published in ASQ Quality Progress (September 2022), describes the tracking of process-output metrics and the establishing of metric goals that lead to the best behaviors. The article can be downloaded through the following link: Download
Contact us for assistance with enhancing your KPI management efforts.