We are all aware of these famous rules; for beginners, let's understand the basis of each rule. Each rule is applied to one half of the control chart. The overall probability of a stable process triggering any of these tests (a false alarm) is roughly 0.01.
A single point beyond the 3σ control limits (outside zone A).
Probability of finding a point in this region when the process is behaving normally = 0.00135. Any point in this region is therefore treated as a case of assignable cause.
Two out of three consecutive points in zone A or beyond (i.e., beyond 2σ).
Probability of one point falling beyond 2σ (on one side) = 0.0227, so the probability of two consecutive such points = 0.0227 × 0.0227 = 0.00052.
Probability of 2 out of 3 points beyond 2σ = 0.0227 × 0.0227 × 0.9773 × 3 = 0.0015.
Four out of five consecutive points in zone B or beyond (i.e., beyond 1σ).
Probability of one point falling beyond 1σ (on one side) = 0.1587.
Probability of 4 such points out of 5, with the remaining point elsewhere on the chart = 0.1587 × 0.1587 × 0.1587 × 0.1587 × 0.8413 × 5 = 0.0027.
Eight consecutive points on one side of the central line.
Probability of one point falling on a given side of the central line = 0.5.
Probability of 8 points in succession on one side of the central line = 0.5^8 = 0.0039.
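As a quick check, the four probabilities above can be reproduced with a few lines of Python using only the standard library (the standard normal CDF is built from `math.erf`):

```python
from math import erf, sqrt, comb

def phi(z):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p3 = 1 - phi(3)   # one point beyond 3 sigma, one side  (~0.00135)
p2 = 1 - phi(2)   # one point beyond 2 sigma, one side  (~0.0227)
p1 = 1 - phi(1)   # one point beyond 1 sigma, one side  (~0.1587)

rule1 = p3                              # single point beyond 3 sigma
rule2 = comb(3, 2) * p2**2 * (1 - p2)   # 2 of 3 beyond 2 sigma
rule3 = comb(5, 4) * p1**4 * (1 - p1)   # 4 of 5 beyond 1 sigma
rule4 = 0.5**8                          # 8 in a row on one side

print(round(rule1, 5), round(rule2, 5), round(rule3, 5), round(rule4, 5))
```

Summing the four values gives roughly 0.0094, which is where the overall false-alarm figure of ~0.01 comes from.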
A process was running in a chemical plant. The final stage of the process was crystallization, which gave the pure product. Two crystallizers were used for the purpose, each operated by a different individual. The SOP said that each crystallizer had to be maintained between 30–40 °C and for 110 to 140 minutes. The data for a month is captured below.
To understand the process, an I-MR control chart was plotted (for simplicity, the R chart is not shown).
As we learned in the earlier blog, alternating points above and below the central line represent some sort of stratification (see the short connecting arms and the concentration of data points in zones B and C).
We plotted the histogram of the above data set and kept increasing the number of classes. What we saw was the emergence of a bimodal distribution as the number of classes increased.
So one thing was sure: there were two processes running in the plant. The question to be answered now was, "What is causing this stratification?"
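The effect can be sketched with simulated data (the yield figures below are purely hypothetical, not the plant's actual numbers): pooling output from two operators with different process means produces the bimodal picture described above, while grouping by operator separates the modes.

```python
import random
import statistics

random.seed(42)

# Hypothetical daily yields from the two crystallizers/operators.
op1 = [random.gauss(88.0, 1.0) for _ in range(30)]   # crystallizer-1
op2 = [random.gauss(92.0, 1.0) for _ in range(30)]   # crystallizer-2

combined = op1 + op2
print("overall mean:", round(statistics.mean(combined), 2))
print("operator-1  :", round(statistics.mean(op1), 2))
print("operator-2  :", round(statistics.mean(op2), 2))
# A histogram of `combined` looks bimodal as the number of classes grows;
# histograms of op1 and op2 separately are each unimodal.
```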
We started with the crystallizers. As soon as we plotted a simple run chart of the process with groups using Minitab®, we could see the difference: crystallizer-2 was always giving a better yield. This should not have happened, because both crystallizers were identical and were connected to the same utilities. We then suspected that the different operators might be the reason for this behavior, as this was the only factor that differed between the two crystallizers.
We then plotted the same run chart, but this time grouped by operator. We got the same result as with the crystallizers: operator-2, working on crystallizer-2, was producing a larger quantity of the product. This run chart is not shown here.
We drilled down further into the operating procedures adopted by the two operators. We studied the temperature and the maintenance time using a scatter plot. The results are shown below.
Finally, it was found that operator-2 was maintaining crystallizer-2 at the lower end of the prescribed temperature range and for a longer duration. Hence, the specifications for temperature and maintenance time were revised.
Visual Inspection of the Control Charts for Unnatural Patterns
Besides the above famous rules, there are patterns on control charts that need to be understood by every quality professional. Let's understand these patterns using the following examples. It is easier to interpret them if we can imagine the type of distribution of the data displayed on the control chart.
Case-1: Non-overlapping distribution
As a production in-charge, I am using two different grades of raw material with different quality attributes (non-overlapping, but at the edges of the specification limits), and I am assuming that the quality attributes of the final product will be normally distributed, i.e., that most of the final product will hit the center of the process control limits.
If the quality of the raw material determines the quality of the final product, then my assumption about the output is wrong, because the distribution of the final product quality will take a bimodal shape with only a few data points at the junction of the two distributions. The same information is reflected on the control chart as a high concentration of data points near the control limits and few or no points near the center. Here is the control chart of the final product.
With completely non-overlapping distributions, there will be unusually long connecting arms on the control chart and an absence of points near the central line.
If we plot the histogram of this data set and keep increasing the number of classes, the two distributions separate.
So, whenever we see a control chart with the data points concentrated towards the control limits and no points at the center, we should immediately suspect a mixture of two non-overlapping distributions. Remember: long connecting arms and few data points at the center of the control chart.
Case-2: Partially overlapping distribution
Assume this scenario: a product is being produced in my facility in two shifts by two different operators. Each day I have two batches, one in each shift. There is a well-written batch manufacturing record indicating that the temperature of the reactor should be between 50 and 60 °C. A quality attribute of the product is represented by the following control chart.
We can see that the data points on the control chart are arranged in an alternating fashion around the central line. The first batch (from the 1st shift) is below the central line and the next batch (from the 2nd shift) is above it. This control chart shows that even though we are following the same manufacturing process, there is a slight difference in practice. It was found that the 1st-shift in-charge was operating towards 50 °C and the 2nd-shift in-charge towards 60 °C. This type of alternating arrangement is an indication of stratification (due to operators, machines, etc.) and is characterized by short connecting arms.
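One crude way to quantify this alternating symptom is to count the longest run of points that flip sides of the central line from one point to the next; a minimal sketch (the data and the center value below are invented for illustration):

```python
def alternating_run(points, center):
    """Length of the longest run in which successive points alternate
    sides of the center line (points on the line count as 'below')."""
    best = run = 1
    for prev, cur in zip(points, points[1:]):
        if (prev > center) != (cur > center):
            run += 1
            best = max(best, run)
        else:
            run = 1
    return best

# Hypothetical batches: shift-1 low, shift-2 high, strictly alternating.
data = [49.1, 58.8, 49.5, 59.2, 48.9, 59.0, 49.3, 58.6]
print(alternating_run(data, center=54.0))  # → 8
```

A long alternating run (here all 8 points) is the short-connecting-arms signature of stratification; a process in statistical control rarely produces such long alternations.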
These are cases of partially overlapping distributions resulting in a bimodal distribution, which means there will be a few points in the central region of the control chart, but the majority of the data points will be distributed in zones C or B. In such cases, it is appropriate to plot the histogram with groups (like operator, shift, etc.).
Case-3: Significant Overlapping distribution
If there is significant overlap between the two input distributions, it becomes difficult to differentiate them in the final product, and the combined distribution gives the picture of a single normal distribution. Suppose the operators in case-2 above were performing the activity at 55 °C and 60 °C respectively; this would result in an overlapping distribution as shown below.
Case-4: Mixture of unequal proportion
As a shift in-charge, I am falling short of the production target. What I did to meet the target was to mix the current batch with some material produced earlier for another customer with a slightly different specification, hoping that it wouldn't be caught by QA! The final control chart of the process looked like this:
We can see from the control chart that when two distributions are mixed in unequal proportions, the combined distribution is asymmetrical. In this case, one half of the control chart (here, the lower half) holds most of the data points and the other half holds fewer.
Case-5: Cyclic trends
If one observes a repeating trend on the control chart, then there is a cyclic effect, like month-by-month sales in a year: sales in some specific months are higher than in others.
Case-6: Gradual shift in the trend
A gradual change in the process is indicated by a change in the location of the data points on the control chart. This chart is most commonly encountered during continuous improvement programs, when we compare the process performance before and after the improvement.
If this shift is observed to be gradual on the control chart, then there must be a reason for it, like wear and tear of a machine, a problem with the calibration of the gauges, etc.
If one observes that the data points on the control chart are gradually moving up or down, then it is a case of trend. This is usually caused by a gradual shift in the operating procedure due to wear and tear of machines, gauges going out of calibration, etc.
Summary of unnatural patterns on the control charts (pattern — symptom in the control chart):

- Large shift (strays, freaks) — a sudden, large change; points near or beyond the control limits.
- Smaller sustained shift — a sustained smaller change; a series of points on the same side of the central line.
- Trend — a continuous change in one direction; a steadily increasing or decreasing run of points.
- Stratification — small differences between values in a long run; a long run of points near the central line on both sides; absence of points near the control limits.
- Mixture — a saw-tooth effect; a run of consecutive points on both sides of the central line, all far from it; absence of points near the central line.
- Systematic variation — regular alternation of high and low values; a long run of consecutive points alternating up and down.
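The "series of points on the same side of the central line" symptom lends itself to a simple programmatic check; a minimal sketch (the data below are invented, with a small sustained shift introduced after the fifth sample):

```python
def longest_one_sided_run(points, center):
    """Longest run of consecutive points strictly on one side of the
    center line; points exactly on the line break the run."""
    best = run = 0
    side = None
    for p in points:
        if p == center:
            side, run = None, 0
            continue
        s = p > center
        if s == side:
            run += 1
        else:
            side, run = s, 1
        best = max(best, run)
    return best

# Hypothetical samples: a small sustained shift upward after sample 5.
data = [10.1, 9.8, 10.0, 9.9, 10.2, 10.6, 10.4, 10.7,
        10.5, 10.8, 10.6, 10.9, 10.7, 10.4]
print(longest_one_sided_run(data, center=10.0))  # → 10
```

A run of 8 or more on one side is the Western Electric signal for a smaller sustained shift discussed earlier.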
In general, I have seen people plotting the control chart of the final critical quality attribute of a product (a CQA). But the information displayed by these control charts is historical in nature, i.e., the entire process has already taken place. Hence, even if the control chart shows an out-of-control point, I can't do anything about it except reprocessing and rework. We often forget that these CQAs are affected by critical process parameters (CPPs), and I can't go back in time to correct those CPPs. The only thing we can do is start an investigation.
HENCE PLOTTING CONTROL CHARTS IS LIKE DOING A POSTMORTEM OF A DEAD (FAILED) BATCH.
Instead, if we plot the control charts of the CPPs and these charts show any out-of-control points, IMMEDIATELY WE CAN FORECAST THAT THE BATCH IS GOING TO FAIL, or WE CAN TAKE A CORRECTIVE ACTION THEN AND THERE. This is because the CPPs and the CQA are highly correlated, and if a CPP shows an out-of-control point on its chart, the batch is very likely to fail.
Hence, the control charts of the CPPs help us forecast the output quality (CQA) of the batch, because a CPP fails before the batch fails. This also saves the time that goes into investigations, which is very important for the pharmaceutical industry; everyone in it knows how much time and resource go into an investigation!
I feel we need to plot the control charts of the CPPs along with the control chart of the CQA, with more focus on the CPP charts. This helps us take timely corrective actions (where available), or scrap the batch early and save downstream time and resources (where no corrective action is available).
Another advantage of plotting the CPPs is to look for evidence that a CPP is trending and will cross a control limit in the near future, as shown below; this warrants a timely correction of the process or machine.
This is the most important topic to be covered in the 7QC tools, but to understand it, just remember the following points for the moment, as we can't go into the details right now.
Two things that we must understand beyond doubt are
There are the customer's specifications, LSL & USL (lower and upper specification limits).
Similarly, there is the process capability, LCL & UCL (lower and upper control limits).
The process capability and the customer's specifications are two independent things; however, it is desired that UCL − LCL < USL − LSL. The only way we can achieve this relationship is by decreasing the variation in the process, as we can't do anything about the customer's specifications (they are sacrosanct).
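The desired relationship UCL − LCL < USL − LSL is what the capability index Cp = (USL − LSL)/(6σ) formalizes: Cp > 1 means the natural ±3σ spread of the process fits inside the customer's specification. A minimal sketch with illustrative numbers (the data and spec limits below are made up):

```python
import statistics

def cp(data, lsl, usl):
    """Process capability index Cp = (USL - LSL) / (6 * sigma).
    Cp > 1 means the +/-3-sigma spread fits inside the specification."""
    sigma = statistics.stdev(data)
    return (usl - lsl) / (6 * sigma)

# Illustrative data: process centered near 55, spec limits 50-60.
data = [54.2, 55.1, 55.9, 54.8, 55.3, 56.0, 54.5, 55.6, 54.9, 55.4]
print(round(cp(data, lsl=50, usl=60), 2))
```

Note that Cp only compares spreads; if the process mean drifts off-center, the related index Cpk (which penalizes off-centering) is the more honest measure.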
If a process is stable, it will follow the bell-shaped curve called the normal curve. This means that if we plot all the historical data obtained from a stable process, it will give a symmetrical curve as shown below. σ represents the standard deviation (a measure of variation).
The main characteristics of the above curve are shown below. For example, the area under ±2σ contains ~95% of the total data.
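The quoted areas follow directly from the normal CDF; for example, using only `math.erf` from the standard library:

```python
from math import erf, sqrt

def coverage(k):
    """Fraction of a normal distribution within +/- k sigma of the mean."""
    return erf(k / sqrt(2.0))

for k in (1, 2, 3):
    print(f"+/-{k} sigma: {coverage(k) * 100:.2f}%")
# → +/-1 sigma: 68.27%, +/-2 sigma: 95.45%, +/-3 sigma: 99.73%
```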
Any process is affected by two types of input variables or factors. Input variables which can be controlled are called assignable or special causes (e.g., person, material, unit operation, machine), and factors which are uncontrollable are called noise factors or common causes (e.g., fluctuations in environmental factors such as temperature and humidity during the year).
From point number 2 we can conclude that, as long as the data is within ±3σ, the process is considered stable and whatever variation exists is due to common causes. Any data point beyond ±3σ represents an outlier, indicating that the process has deviated or that there is an assignable (special) cause of variation which needs immediate attention.
The estimates of the mean (μ) and σ used for calculating the control limits depend on the type and the distribution of the data used to prepare the control chart.
Having gone through the above points, let's go back to point number 2. In that graph, the entire data set is plotted after all the data has been collected. But these data were collected over time! If we now add a time axis to the graph and plot the data with respect to time, we get a run chart as shown below.
The run chart thus obtained is known as a control chart. It represents the data with respect to time, and ±3σ represents the upper and lower control limits of the process. We can also plot the customer's specification limits (USL & LSL) on this graph if desired. Now we can apply points number 3 and 4 to interpret the control chart, or we can use the Western Electric rules if we want to interpret it in more detail.
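For the I-MR charts used earlier, the limits of the individuals chart are conventionally computed as x̄ ± 2.66 × MR̄, where MR̄ is the average moving range and 2.66 = 3/1.128 converts it into a 3σ spread. A minimal sketch with made-up data:

```python
import statistics

def imr_limits(data):
    """Control limits for an individuals (I) chart: sigma is estimated
    from the average moving range as MR-bar / 1.128, so the half-width
    of the 3-sigma band is 2.66 * MR-bar."""
    xbar = statistics.mean(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    half_width = 2.66 * statistics.mean(moving_ranges)
    return xbar - half_width, xbar, xbar + half_width

# Illustrative samples only.
data = [55.2, 54.8, 55.5, 55.0, 54.7, 55.3, 55.1, 54.9]
lcl, cl, ucl = imr_limits(data)
print(round(lcl, 2), round(cl, 2), round(ucl, 2))
```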
The Control Charts and the Continuous Improvement
A given process can only be improved if tools are available for the timely detection of an abnormality due to an assignable cause. This timely, online signal of an abnormality (or an outlier) in the process can be achieved by plotting the process data on an appropriate statistical control chart. But these control charts can only tell us that there is a problem in the process; they cannot tell us anything about its cause. Investigation and identification of the assignable causes associated with the abnormal signal allow timely corrective and preventive actions, which ultimately reduce the variability in the process and gradually take it to the next level of improvement. This is an iterative process of continuous improvement, carried on until abnormalities are no longer observed and whatever variation remains is due to common causes only.
It is not necessarily true that all deviations on control charts are bad (e.g., the trend of an impurity drifting towards the LCL, or reduced waiting time of patients, is good for the process). Regardless of whether a deviation is 'good' or 'bad' for the process, the outlier points must be investigated. Reasons for good deviations must then be incorporated into the process, and reasons for bad deviations need to be eliminated from it. This is an iterative process that continues until the process comes under statistical control. Gradually, it will be observed that the natural control limits become much tighter than the customer's specification, which is the ultimate aim of any process improvement program like 6sigma.
The significance of these control charts is evident from the fact that they were introduced in the 1920s by Walter A. Shewhart; since then they have been used extensively across the manufacturing industry and have become an intrinsic part of the 6σ process.
To conclude, statistical control charts not only help in estimating the process control limits but also raise an alert when the process goes out of control. These alerts trigger investigation through root cause analysis, leading to process improvements, which in turn decrease the variability in the process, leading to a statistically controlled process.
Most of the time, continuous improvement programs in an organization gradually cease to exist after the consultants leave. This really disappoints me, because they fail despite the fact that everyone in the organization knows their benefits. The importance of these initiatives is well known across industry, as attested by the number of vacancies for lean and 6sigma professionals on any job portal (check LinkedIn and other job portals).
The main reasons that I have experienced are the following:
To drive a lean or a 6sigma program, you need to be an external consultant or hold some authoritative position within the organization (this will ensure that you get the job done). The main purpose is to have backing from higher management.
An external consultant will be in direct touch with management; hence, people will cooperate.
A higher position ensures that your message percolates down the line very well.
If you are in middle management, it is going to be difficult for you to implement these changes even if you have the backing of higher management (unless they are fully involved).
The above scenario can be understood by drawing an analogy with the stretching of a spring. As long as the consultants are there, the spring (the employees) remains stretched; as soon as they leave, the spring returns to its original position. Hence, these initiatives should focus on changing the mind-set of the employees and securing their buy-in prior to the start of any initiative. The focus of these initiatives should be cultural change rather than short-term financial gain.
“The quality of an organization can never exceed the quality of the minds that make it up.” Harold McAlindon
It took Toyota 30 years to implement what is now called TPS!
Usually, these initiatives are not part of the business strategy but are initiated during a crisis, and once the crisis is over and the consultants leave, it's over! The spring regains its original state!
Another reason is the lack of trained manpower in the areas of lean and 6sigma. I remember when we were searching for 6sigma black belts: the HR team gave us a list of ~65 candidates claiming 6sigma/lean expertise. Believe me, out of those 65 we could find only two people (the requirement was ~10–15) with the required skill set.
Out of curiosity, we kept asking people where they had got their certification. Most of them answered that they had undergone 3–5 days of classroom training followed by an examination to get their black belt! That's true in most cases, but I wonder how a five-day course can qualify a person as a black belt unless you really sweat it out on the shop floor with your team.
There is also a lack of trained people within the organization who can really interview such candidates. Imagine that I want a black belt for my company to drive the initiative: either I have to believe that a candidate knows the concepts, or I have to hire someone who can really interview these people. The latter option is much better! These days QbD has become a buzzword in the pharmaceutical industry; just include it in your CV and you will get an immediate raise.
But the main reason, in my experience, was the compartmentalized view of the organization, where the right hand doesn't know what the left hand is doing.
Let's assume that the whole company is excited about the initiative; even then it fails! The major reason is the presence of many compartments/departments within the system that are habituated to working in silos. They remain committed to their KRAs and their own work-flow, and don't know much about the processes of the department from which they receive their inputs, or how their processes affect the processes of the next department (their internal customer). These silos are becoming the vertical coffins of the organization. Before we go any further, let's understand "what is business?" or "how is business carried out to generate revenue?"
The central planning team, based on the monthly forecast, gives targets to all the vertical coffins for that month. All the vertical coffins then perform their duties in silos to complete their targets.
Now, if we really look at the business, it is not the departments that make the product and generate the revenue; it is the culmination of a process flow encompassing the entire organization. To make this clear, let's look at the following example.
It is the flow of the process across the departments that adds value to the raw material for the customer. The most important point is that these processes are performed by the shop-floor people and not by the management. What I mean is that the material flows horizontally at the bottom of the pyramid, but the processes are managed vertically and in silos. As a result, there is an information gap between the decision point and the execution point, and the shop-floor people are no better than robots busy meeting their targets. In this scenario we simply can't implement continuous improvement unless these vertical coffins are dismantled and the gap between the information flow and the material flow diminishes. This can only be made possible through delegation and by empowering the shop-floor people.
Wait a minute! What are you talking about? If we delegate our duties, then what are we going to do? What will be our role? These are the thoughts that may pop up in the minds of higher management.
My dear friend, just leave the daily operations to the middle management; do something new, read something new, think something new, or make some new strategy for the company. Give a new direction to the company with your vast experience. If you get involved in the day-to-day operations, then there is no difference between a shift in-charge and you! If you act like this, ideally your CTC should be added to the overhead of the product. Shouldn't it?
Get the right person into middle management and just take daily updates from him, intervening when needed. I read somewhere (I can't recall where) that as you grow higher in management, you should distance yourself from day-to-day operations and focus more on mentoring and drawing the future roadmap for the company.
Once this conducive environment is established, i.e., delegation and empowerment of the shop-floor people, it is easier to implement any continuous improvement initiative in the organization, because the real action (the process, the value addition) happens on the shop floor. If you look at most of the lean and 6sigma tools, you will find that they are implemented successfully at the shop floor, by the shop-floor people!
We have seen that DoE is an integral part of any QbD study; however, we seldom apply it properly during the development stage. I have seen scientists afraid of using it because they think that DoE means more experiments. But in reality, I have seen them end up doing more experiments than DoE would suggest. To give you an idea, scientists may perform 40–50 experiments when investigating 4–5 variables, whereas DoE can do that job in 15–20 experiments. The problem is that when DoE proposes 15–20 experiments in a single go, it appears too many to the development team. The other issue is the lack of knowledge of how to exploit DoE using fractional factorial, Plackett–Burman, or D-optimal designs.
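As an illustration of how a fractional factorial cuts the run count, a 2^(5-1) half-fraction studies five two-level factors in 16 runs instead of 32 by aliasing the fifth factor with the four-factor interaction (defining relation I = ABCDE). A minimal sketch:

```python
from itertools import product

# Half-fraction 2^(5-1) design for five two-level factors A..E:
# run a full factorial on A-D and set E = A*B*C*D (I = ABCDE).
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d
    runs.append((a, b, c, d, e))

print(len(runs))  # → 16 runs instead of 2**5 = 32
```

Because the defining relation is of length five, this design is resolution V: main effects are aliased only with four-factor interactions, which is usually an acceptable trade for halving the experimental effort.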
This article describes the flow diagram for conducting a successful DoE.