7QC Tools: Basis of Western Electric Rules of Control Charts


We are all aware of these famous rules; for beginners, let's understand their basis. Each rule is applied to one half of the control chart, and the probability of getting a (false) reaction to a test from an in-control process is roughly 0.01 or lower. A numerical check of the probabilities follows the list below.

picture106

picture108

  1. A single point outside the 3σ control limits, i.e., beyond zone A.
    • Probability of finding a point in this region = 0.00135 if produced by the normal process. Anything in this region is a case of an assignable cause.
  2. Two out of three consecutive points in zone A or beyond (beyond 2σ).
    • Probability of getting 2 consecutive points in zone A = 0.0227 × 0.0227 = 0.00052
    • Probability of getting 2 out of 3 points in zone A = 0.0227 × 0.0227 × 0.9773 × 3 = 0.0015
  3. Four out of five consecutive points in zone B or beyond (beyond 1σ).
    • Probability of getting one point in zone B = 0.1587
    • Probability of 4 points in zone B and 1 point in any other part of the chart = 0.1587 × 0.1587 × 0.1587 × 0.1587 × 0.8413 × 5 = 0.0027
  4. Eight consecutive points on one side of the central line.
    • Probability of getting one point on one side of the central line = 0.5
    • Probability of 8 points in succession on one side of the central line = 0.5⁸ = 0.0039
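
These probabilities are easy to verify numerically. Below is a minimal sketch in Python (scipy is assumed for the normal CDF); it simply re-derives the four numbers above.

```python
# Re-deriving the Western Electric false-alarm probabilities for an
# in-control, normally distributed process (one side of the chart).
from math import comb
from scipy.stats import norm

p_beyond_3s = 1 - norm.cdf(3)   # beyond zone A (one side)
p_zone_a    = 1 - norm.cdf(2)   # beyond 2-sigma (one side)
p_zone_b    = 1 - norm.cdf(1)   # beyond 1-sigma (one side)

rule1 = p_beyond_3s                                   # ~0.00135
rule2 = comb(3, 2) * p_zone_a**2 * (1 - p_zone_a)     # ~0.0015
rule3 = comb(5, 4) * p_zone_b**4 * (1 - p_zone_b)     # ~0.0027
rule4 = 0.5**8                                        # ~0.0039

print(f"Rule 1: {rule1:.5f}  Rule 2: {rule2:.5f}  "
      f"Rule 3: {rule3:.5f}  Rule 4: {rule4:.5f}")
```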

7QC Tools: My Bitter Experience with Statistical Process Control (SPC)!


I just want to share my experience in SPC.

In general, I have seen that people plot the control chart of the final critical quality attribute of a product (or simply a CQA). But the information displayed by these control charts is historical in nature, i.e., the entire process has already taken place. Hence, even if the control chart shows an out-of-control point, I can't do anything about it except reprocessing and rework. We often forget that these CQAs are affected by some critical process parameters (CPPs), and I can't go back in time to correct those CPPs. The only thing we can do is start an investigation.

picture21

HENCE PLOTTING CONTROL CHARTS IS LIKE DOING A POSTMORTEM OF A DEAD (FAILED) BATCH.

Instead, if we plot the control charts of the CPPs and these charts show any out-of-control points, IMMEDIATELY WE CAN FORECAST THAT THIS BATCH IS GOING TO FAIL, or WE CAN TAKE A CORRECTIVE ACTION THEN AND THERE. This is because the CPPs and the CQA are highly correlated, and if a CPP shows an out-of-control point on its control chart, we can be fairly sure that the batch is going to fail.

picture92

Hence, the control charts of the CPPs would help us forecast the output quality (CQA) of the batch, because a CPP fails first, before the batch fails. This will also save the time that goes into the investigation. This is very important for the pharmaceutical industry, as everyone in it knows how much time and how many resources go into an investigation!

picture95

I feel that we need to plot the control charts of the CPPs along with the control chart of the CQA, with more focus on the control charts of the CPPs. This will help us take timely corrective actions (if available), or we can scrap the batch early and save downstream time and resources (in case no corrective action is available).

Another advantage of plotting the CPPs is that we can look for evidence that a CPP is trending and will cross the control limits in the near future, as shown below. This warrants a timely corrective action on the process or the machine.

picture93


CQA: Critical Quality Attribute

CPP: Critical Process Parameter

OOS: Out of Specification


7QC Tools — The Control Charts

picture61

The Control Charts

This is the most important topic to be covered in the 7QC tools. But in order to understand it, just remember the following points for the moment, as right now we can't go into the details.

  1. Two things that we must understand beyond doubt are
    1. There are the customer's specifications, LSL & USL (the lower and upper specification limits)
    2. Similarly, there is the process capability, LCL & UCL (the lower and upper control limits)
    3. The process capability and the customer's specifications are two independent things; however, it is desired that UCL−LCL < USL−LSL. The only way we can achieve this relationship is by decreasing the variation in the process, as we can't do anything about the customer's specifications (they are sacrosanct).
    4. Picture13
  2. If a process is stable, it will follow the bell-shaped curve called the normal curve. It means that if we plot all the historical data obtained from a stable process, it will give a symmetrical curve as shown below. The σ represents the standard deviation (a measure of variation).
    • picture88
  3. The main characteristics of the above curve are shown below. For example, the area under ±2σ contains about 95% of the total data.
    • picture19
  4. Any process is affected by two types of input variables or factors. Input variables which can be controlled are called assignable or special causes (e.g., person, material, unit operation, and machine), and factors which are uncontrollable are called noise factors or common causes (e.g., fluctuation in environmental factors such as temperature and humidity during the year).
  5. From point number 2, we can conclude that as long as the data is within ±3σ, the process is considered stable, and whatever variation is there is because of the common causes of variation. Any data point beyond ±3σ would represent an outlier, indicating that the given process has deviated or that there is an assignable or special cause of variation which needs immediate attention.
    • picture89
  6. The measures of the mean (μ) and σ used for calculating the control limits depend on the type and the distribution of the data used for preparing the control chart.

Having gone through the above points, let's go back to point number 2. In that graph, the entire data is plotted after all the data has been collected. But these data were collected over time! Now, if we add a time axis to this graph and plot all the data with respect to time, we get a run chart as shown below.

picture90

The run chart thus obtained is known as the control chart. It represents the data with respect to time, and ±3σ gives the upper and lower control limits of the process. We can also plot the customer's specification limits (USL & LSL) on this graph if desired. Now we can apply the points above (especially the ±3σ rule and the distinction between common and special causes) to interpret the control chart, or we can use the Western Electric Rules if we want to interpret it in more detail. A minimal sketch of the control-limit computation follows.
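
This is all the arithmetic a basic (individuals) control chart needs; the readings below are made up purely for illustration. Note that, in practice, σ for an individuals chart is usually estimated from the average moving range; the plain sample standard deviation is used here only to keep the sketch short.

```python
# Turn time-ordered readings into a control chart: centre line,
# +/-3-sigma limits, and a flag for any point beyond the limits.
import numpy as np

cpp = np.array([50.1, 49.8, 50.3, 50.0, 49.7, 50.2, 49.9,
                50.4, 50.1, 53.5, 50.0, 49.8])   # hypothetical CPP readings

centre = cpp.mean()
sigma = cpp.std(ddof=1)                          # sample standard deviation
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

for t, x in enumerate(cpp, start=1):
    flag = "  <-- beyond 3-sigma: assignable cause?" if not lcl <= x <= ucl else ""
    print(f"t={t:2d}  x={x:5.1f}{flag}")
print(f"CL = {centre:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```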

The Control Charts and the Continuous Improvement

A given process can only be improved if there are tools available for the timely detection of an abnormality due to an assignable cause. This timely, online signal of an abnormality (or an outlier) in the process can be achieved by plotting the process data points on an appropriate statistical control chart. But these control charts can only tell that there is a problem in the process; they cannot tell anything about its cause. Investigation and identification of the assignable causes associated with the abnormal signal allow timely corrective and preventive actions, which ultimately reduce the variability in the process and gradually take the process to the next level of improvement. This is an iterative process resulting in continuous improvement until abnormalities are no longer observed and whatever variation remains is because of the common causes only.

It is not necessarily true that all the deviations on control charts are bad (e.g., the trend of an impurity drifting towards the LCL, or the reduced waiting time of patients, is good for the process). Regardless of whether a deviation is good or bad for the process, the outlier points must be investigated. Reasons for good deviations must then be incorporated into the process, and reasons for bad deviations need to be eliminated from it. This is an iterative process until the process comes under statistical control. Gradually, it would be observed that the natural control limits become much tighter than the customer's specification, which is the ultimate aim of any process improvement program like 6sigma.

The significance of these control charts is evident from the fact that, since their invention in the 1920s by Walter A. Shewhart, they have been used extensively across the manufacturing industry and have become an intrinsic part of the 6σ process.

picture12

To conclude, statistical control charts not only help in estimating the process control limits but also raise an alert when the process goes out of control. These alerts trigger investigation through root cause analysis, leading to process improvements, which in turn decrease the variability in the process, leading to a statistically controlled process.


How to provide a realistic range for a CQA during product development to avoid unwanted OOS-2: Case Study

picture61

Suppose we are in the process of developing a 500 mg ibuprofen tablet (the actual specification is 497 to 502 mg). A tablet contains many other ingredients along with 500 mg of the active molecule (ibuprofen). Usually these ingredients are mixed and then compressed into tablets. During product development, three batches of 500 tablets each were prepared, with 15 minutes, 25 minutes and 35 minutes of blending time respectively. A sample of 10 tablets was collected from each batch and analyzed for ibuprofen. The results are given below.

picture8

The regression analysis of the entire data (all three batches) provides a quantitative relationship between blending time and the expected value of ibuprofen content for that blending time.

Expected ibuprofen content = 493 + 0.242 × blending time

Now, we want to estimate the average ibuprofen content of the entire batch of 500 tablets, based on the sample of 10 tablets, for a given mixing time (say 15, 25 or 35 minutes). Let's calculate the 95% and 99% confidence intervals (CI) for each mixing time; a sketch of the computation follows the table below.

picture10
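
For readers who want to reproduce such a table, here is a minimal sketch using statsmodels. Since the raw measurements are shown only as an image above, the assay values below are simulated stand-ins generated around the fitted equation.

```python
# Fit the blending-time/assay regression and compute the CI for the
# mean response at each blending time. Data here is hypothetical.
import numpy as np
import statsmodels.api as sm

blend_time = np.repeat([15.0, 25.0, 35.0], 10)   # 3 batches x 10 tablets
rng = np.random.default_rng(1)
assay = 493 + 0.242 * blend_time + rng.normal(0, 1.2, blend_time.size)

model = sm.OLS(assay, sm.add_constant(blend_time)).fit()

for t in (15, 25, 35):
    pred = model.get_prediction([[1.0, t]])
    (lo95, hi95), = pred.conf_int(alpha=0.05)    # 95% CI for the mean
    (lo99, hi99), = pred.conf_int(alpha=0.01)    # 99% CI for the mean
    print(f"t={t}: 95% CI ({lo95:.2f}, {hi95:.2f}), "
          f"99% CI ({lo99:.2f}, {hi99:.2f})")
```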

In reality, we can never know the average ibuprofen content of the entire batch unless we do the analysis of the entire batch, which is not possible.

We can see that the 99% CI is wider than the 95% CI (hope you are clear about what a 95% CI means?). The 99% CI for a mixing time of 35 minutes seems to be closest to the desired strength of 497 to 502 mg. Hence, in the development report, I would propose a wider assay range of 499.6 to 502.57 for a mixing time of 35 minutes, at the 99% confidence level.

This means that if we take 100 samples, then the CIs given by 99 of those samples would contain the population mean.

Now look at this 99% CI, i.e., 499.6 to 502.57 mg, which is narrower than the specifications (497 to 502 mg). Hence, I want to estimate a similar interval (like the CI) for a mixing time of, say, 32 minutes (note: we have not conducted any experiments with this mixing time!) to check if we can meet the specification there itself. We can do it because we have derived the regression equation. What we are doing is predicting an interval for a future batch with a mixing time of 32 minutes. As we are predicting a future observation, this interval is called the prediction interval of the response for a given value of the process parameter. Prediction intervals are usually wider than the corresponding confidence intervals.

Using the equation discussed earlier, we can get the expected average value of the strength for a mixing time of 32 minutes.

Expected ibuprofen content for a blending time of 32 minutes = 500.74

picture11

Till now, what we have learnt is that the CI estimates an interval that will contain the average ibuprofen content of an entire batch (already executed) for a given value of blending time, whereas the prediction interval estimates the interval that would contain the average response of a future batch for a given value of blending time.

In the present context,

For a blending time of 35 minutes, a 95% CI indicates that the average strength of the entire batch of 500 tablets (the population) would be between 499.99 and 502.18.

Whereas a 95% PI predicts that the average strength of the next batch would be between 499.6 and 502.57 for a blending time of 35 minutes; a sketch follows.
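
Continuing the hypothetical model fitted in the earlier sketch, the PI comes from the very same statsmodels call; summary_frame() reports the CI for the mean (mean_ci_*) and the wider interval for a future observation (obs_ci_*) side by side.

```python
# Continuing the hypothetical model above: CI for the mean vs the
# prediction interval for a future observation at t = 35 minutes.
frame = model.get_prediction([[1.0, 35.0]]).summary_frame(alpha=0.05)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]])
```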

Now the question is: can we propose either of these intervals (95% CI or 95% PI) as the process control limits?

What I think is, we can't! Because the above intervals don't tell me anything about the distribution of the population within the interval. What I mean to say is that we can't assume that the entire batch of 500 tablets would be covered by these intervals; it is only the mean of the entire batch that would fall in the interval.

For me, it is necessary to know the future trend of the batches when we transfer the technology for commercialization. We should know not only the interval containing the mean (of any CQA) of the population but also the proportion or percentage of the total population falling in that interval. This will help me determine the expected failure rate in future batches if all CPPs are under control (even a 6sigma process has a failure rate of 3.4 ppm!). Once I know that, it will help me decide when to start investigating an OOS (once the number of failures crosses the expected failure rate). For this statement, I am assuming that there is no special cause for the OOS.

This job is done by the tolerance interval (TI). In general, a TI is reported as follows:

A 95% TI for the tablet strength (Y) containing 99% of the population of the future batches for a blending time of 35 minutes (X).

It means that the TI calculated at a 95% confidence level would encompass 99% of the future batches manufactured with a blending time of 35 minutes. In other words, 1% of the batches would fail. Now, I will start investigating an OOS only if there are two or more failures in the next 100 batches (assuming that there are no special causes for the OOS and all process parameters are followed religiously).

The TI for the batches at different blending times is given below, followed by a sketch of the computation.

Tolerance Interval type: two sided

Confidence level: 95%

Percentage of population to be covered: 99

picture12
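
The intervals in the table above are regression-based (a TI for Y at a given X), which needs specialized tooling (e.g., Minitab, or R's `tolerance` package). For the simpler one-sample case, though, a two-sided normal TI can be sketched with Howe's k-factor approximation; everything below, including the data, is illustrative.

```python
# Two-sided normal tolerance interval via Howe's approximation:
# x-bar +/- k*s, where k depends on n, the coverage and the confidence.
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(x, coverage=0.99, confidence=0.95):
    n = len(x)
    z = norm.ppf((1 + coverage) / 2)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2.ppf(1 - confidence, n - 1))
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

assay = [500.9, 501.3, 500.2, 501.0, 500.6,
         500.8, 501.1, 500.4, 500.7, 501.2]   # hypothetical 10-tablet sample
print(tolerance_interval(assay))              # expected to cover 99% of tablets
```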


The above discussion can be easily understood with the following analogy.

You have to reach the office before 9:30 AM. Now tell me how confident you are about reaching the office exactly between

9:10 to 9:15 (hmm…, such a narrow range, I am ~90% confident)

9:05 to 9:20 (a-haa.., now I am 95% confident)

9:00 to 9:25 (this is very easy, I am almost 100% confident)

The point to be noted here is that as the width of the time interval increases, your confidence also increases.

It is difficult to estimate the exact arrival time, but we can be certain that the mean arrival time to the office would be between

Average arrival time (over, say, 5 days) ± margin of error

How to provide a realistic range for a CQA during product development to avoid unwanted OOS-1.

picture61

It is very important to understand the concept of CI/PI/TI before we can understand the reasons for OOS.

Let’s start from following situation

You have to reach the office before 9:30 AM. Now tell me how confident you are about reaching the office exactly between

(A) 9:10 to 9:15 (hmm…, such a narrow range, I am ~90% confident)

(B) 9:05 to 9:20 (a-haa.., now I am 95% confident)

(C) 9:00 to 9:25 (this is very easy, I am almost 99% confident)

The point to be noted here is that your confidence increases as the time interval widens (remember this for the rest of the discussion).

More importantly, it is difficult to estimate the exact arrival time, but we can say with some confidence that my arrival time would fall within some time interval.

Say my average arrival time over the last five days (assuming all other factors remain constant) was 9:17 AM; then I can say with a certain confidence (say 95%) that my arrival time would be given by

Average arrival time (over, say, 5 days) ± margin of error

The confidence we are showing is called the confidence level, and the interval estimated by the above equation at a given confidence level is called the CONFIDENCE INTERVAL (CI). This confidence interval may or may not contain my mean arrival time.

Now let's go to a manufacturing scenario.

We are all aware of the diagram given below: the critical quality attribute (CQA or Y) of any process is affected by many inputs, like critical material attributes (CMAs), critical process parameters (CPPs) and other uncontrollable factors.

Picture21

Since CQAs are affected by CPPs and CMAs, it is said that a CQA or any output Y is a function of X (X = CPPs/CMAs).

Picture23

The relationship between Y and X is given by the following regression equation:

Picture33

The following points are worth mentioning:

  1. The value of Y depends on the value of X; it means that if there is a deviation in X, then there will be a corresponding deviation in Y. E.g., if the level of an impurity (Y) is influenced by the temperature, then any deviation in the impurity level can be attributed to a change in the temperature (X).
  2. If you hold X constant at some value and perform the process many times (say 100), then all 100 products (Y) will not be of the same quality, because of the inherent variation/noise in the system, which in turn is because of other uncontrollable factors. That's why we have an error term in our regression equation. If the error term were zero, then the relationship would be described perfectly by a straight line y = mx + c. In this condition the regression line gives the expected value of Y, represented by E(Y) = b0 + b1X1.

Picture34

As we have seen, there will be variation in Y even if you hold X constant. Hence, the term "expected value of Y" represents the average value of Y for a given value of X.

picture2

It's fine that, for a given value of X, there will be a range of Y values because of the inherent variation/noise in the process, and the average of those Y values is called the expected value of Y for that X. But tell me, how is this going to help me in investigating OOS/OOT?

Let's come to the point. Assume that we have manufactured one million tablets of 500 mg strength with a mixing time of 15 minutes (= x). Now, can I know the exact mean strength of all the tablets in the entire batch?

In statistical terms,

Picture24

It's not possible to estimate the exact mean strength of all the tablets in the entire batch, as that would require destructive analysis of all one million tablets.

Then what is the way out? How can we estimate the mean strength of the entire batch?

The best thing we can do is take out a sample, analyze it, and based on the sample mean strength make an intelligent guess about the mean strength of the entire batch... but it will come with some error, as we are using a sample for the estimation. This error is called the sampling error. The interval, based on the sample data, that may contain the population mean is given by

Sample mean ± margin of error = confidence interval (CI)

The term "sample mean ± margin of error" is called the confidence interval, which may or may not contain the population mean.

picture4

It is unlikely that two samples from a given population will yield identical confidence intervals (CI); every sample would provide a different interval. But if we repeat the sampling many times and calculate all the CIs, then a certain percentage of the resulting confidence intervals would contain the unknown population parameter. The percentage of these CIs that contain the parameter is called the confidence level of the interval. The interval estimated by the sample is called the confidence interval (CI). This CI is for a given value of X and will change with a change in X.

Picture25

 Note: Don't be afraid of the formulas; we will be covering them later.

If 100 samples are withdrawn, then we can interpret the confidence level as follows:

A 90% confidence level would indicate that the confidence interval (CI) generated by 90 samples (out of 100) would contain the unknown population parameter.

A 95% confidence level indicates that the CI estimated by 95 samples (out of 100) would contain the unknown population parameter.

Picture30

To summarize, we can estimate the population mean using a confidence interval at a certain confidence level. A small simulation below illustrates what this means.
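
Here is a minimal simulation of the idea (all numbers illustrative): draw many samples from a known population, build a t-based CI from each, and count how often the CI captures the true mean.

```python
# Coverage check: about 95% of the 95% CIs should contain the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, sigma, n, trials = 500.0, 1.5, 10, 10_000

hits = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    margin = stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n)
    if sample.mean() - margin <= true_mean <= sample.mean() + margin:
        hits += 1

print(f"{hits / trials:.1%} of the CIs contained the true mean")  # ~95%
```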

It's fine that the CI helps me determine the range within which there is a 95% or 99% probability of finding the mean strength of the entire batch. But I have an additional issue: I am also interested in knowing the number of tablets (out of one million) that would be bracketed by this interval, or any other interval, and how many fall outside it. This will help me determine the failure rate once we compare this interval with the customer's specifications.

More precisely, we want to know the interval which would contain 99% of the tablets with the desired strength, and how confident we are that this interval will contain 99% of the population.

picture5

If we can get this interval, we can compare it with the customer's specification, which in turn would tell us something about the process capability. How can this be resolved?

Let’s understand the problem once again

If we have understood the issue correctly, then we want to estimate an interval (with the required characteristics) based on the sample data that will cover, say, 99% or 95% of the population, and then we want to overlay this interval on the customer's specification to check the capability of the process. This is represented by scenario-1 and scenario-2 (ideal) in the figure given below.

picture6

Having understood the issue, the solution lies in calculating another interval, known as the tolerance interval, for the population with a desired characteristic (Y) for a given value of the process parameter X.

Tolerance Interval: this interval captures the values of a specified proportion of all future observations of the response variable for a particular combination of the values of the predictor variables with some high confidence level.

We have seen that the CI width is entirely due to the sampling error. As the sample size increases and approaches the entire population size, the width of the confidence interval approaches zero, because the "margin of error" term goes to zero.

In contrast, the width of a tolerance interval is due to both sampling error and variance in the population. As the sample size approaches the entire population, the sampling error diminishes and the estimated percentiles approach the true population percentiles.

E.g., a 95% tolerance interval that captures 98% of the population of a future batch of tablets at a mixing time of 15 minutes is 485.221 to 505.579 (this is Y).

Now, if the customer's specification for the tablet strength is 497 to 502, then we are in trouble (scenario-1 in the figure above), because we need to work on the process (increase the mixing time) to reduce the variability.

Let's assume that we increased the mixing time to 35 minutes and, as a result, the 95% tolerance interval which captures 99% of the population is 498.598 to 501.902. Now we are comfortable with the customer's specification (scenario-2 in the figure above). Hence, we need to blend the mixture for 35 minutes before compressing it into tablets.

We need to be careful while reading a tolerance interval, as it contains two types of percentage terms. The first one, 95%, is the confidence level, and the second term, i.e., 98%, is the proportion of the total population with the required quality attributes that we want to bracket by the tolerance interval for a constant mixing time of 15 minutes.

To summarize: in order to generate tolerance intervals, we must specify both the proportion of the population to be covered and a confidence level. The confidence level is the likelihood that the interval actually covers the proportion.

This is what we wanted during the product development.

picture13

Let's calculate the 95% CI using an Excel sheet.

In the next post, we will try to clarify the confusion that we have created in this post with a real-life example. So keep visiting us.

Related posts:

Why We Have Out of Specifications (OOS) and Out of Trend (OOT) Batches?

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-1

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-2


Note on Regression Equation:

The regression line represents the expected value of y, E(yp), for a given value of x = xp. Hence, the point estimate of y for a given value of x = xp is given by

Picture37

xp = the given value of x

yp = the value of the output y corresponding to xp

E(yp) = the mean or expected value of y for the given value x = xp; it denotes the unknown mean of all y's where x = xp.

Theoretically, the point estimate ŷp (Picture38) should equal E(yp), but in general this seldom happens. If we want to measure how close the true mean value E(yp) is to the point estimator ŷp, then we need to measure the standard deviation of ŷp for the given value xp.

Picture44

Confidence interval for the expected value E(yp) is given by

Picture42

Why do we need this equation right now? (I don't want you to get terrified!) But if you focus on the (xp − x̄)² term in the numerator of the standard-deviation formula, one important observation is that if xp = x̄, then the standard deviation is at its minimum, and as xp moves away from the mean, the standard deviation keeps increasing. It implies that the CI is narrowest at xp = x̄ (Picture43) and widens as you move away from the mean.

Hence, the width of the CI depends on the value of the CPP (x); a small numeric illustration follows.

Picture45
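
To make the point concrete, here is a tiny numeric sketch (all numbers assumed) of the usual standard-error formula for the estimated mean response, s(ŷp) = s·√(1/n + (xp − x̄)²/Σ(xi − x̄)²), showing how it grows as xp moves away from x̄.

```python
# SE of the estimated mean response is smallest at x = x-bar and grows
# as xp moves away from it. Fit data and residual s are hypothetical.
import numpy as np

x = np.array([15.0, 25.0, 35.0])          # blending times used in the fit
s = 1.2                                   # assumed residual std deviation
xbar, sxx = x.mean(), ((x - x.mean())**2).sum()

for xp in (15, 25, 32, 35, 45):
    se = s * np.sqrt(1 / len(x) + (xp - xbar)**2 / sxx)
    print(f"xp={xp:2d}  SE(y-hat)={se:.3f}")   # minimum at xp = 25
```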

 

Why Do We Have Out of Specifications (OOS) and Out of Trend (OOT) Batches

picture61

While developing a product, we are bound by the USP/EP/JP monographs for a product's critical quality attributes (CQAs), or by the ICH guidelines, and yet we see regular OOT/OOS in commercial batches. It's fine that every generic company has developed expertise in investigating and providing corrective & preventive actions (CAPA) for all OOT and OOS, but the question that remains in our hearts and minds is:

Why can’t we stop them from occurring? 

The answer lies in the following inherent issues at each level of the product life cycle.

We assume the customer's specification and the process control limits are the same thing during product development.

Let's assume that the USP monograph gives an acceptable assay range for a drug product of 97% to 102%. The product development team immediately starts working on the process to meet this specification. The focus is entirely on developing a process that gives a drug product within this range. But we forget that even a 6sigma process has a failure rate of 3.4 ppm. Therefore, in the absence of statistical knowledge, we treat the customer's specification as the target for product development.

The right approach would be to calculate the process control limits required so that a given proportion of the batches (say 95% or 99%) falls between the customer's specifications.

Here, I would like to draw an analogy where the customer's specification is like the width of a garage and the process control limits are like the width of the car. The width of the car should be much less than the width of the garage to avoid any scratches. Hence, the target process control limits for product development should be narrower.

For details, see the earlier blog on "car parking and 6sigma".

Inadequate statistical knowledge leads to a wrong target range for a given quality parameter during product development.

Take the above example once again: the customer's specification limit for the assay is 97% to 102% (= the garage width). Now the question is, what should be the width of the process (= the car's width) that we need to target during product development to reduce the number of failures during commercialization? One thing is clear at this point: we can't take the customer's specification as the target for product development.

Calculating the target range for the development team

In order to simplify it, I will take the formula for Cp

picture16

Where Cp = process capability, σ = standard deviation of the process, and USL & LSL are the upper and lower specification limits of the customer. The value 1.33 is the minimum desired Cp for a capable process (3 × 1.33 ≈ 4, i.e., roughly a 4σ process).

Solving for σ:

picture17

Calculating the σ for the above process

picture18

The centre of the specification is 99.5; hence, the target assay range for the product development team is given by

Specification mean ± 3σ

  = 99.5±3×σ = 99.5±1.89 = 97.61 to 101.39

Hence, the product development team has to target an assay range of 97.61 to 101.39 instead of targeting the customer's specifications. A small sketch of this arithmetic is given below.
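
A minimal sketch of the same back-calculation in plain Python, using the numbers from the example above:

```python
# Back-calculate the sigma a process must achieve for Cp = 1.33,
# then derive the development target range around the spec centre.
usl, lsl, cp_target = 102.0, 97.0, 1.33

sigma = (usl - lsl) / (6 * cp_target)    # from Cp = (USL - LSL) / (6*sigma)
centre = (usl + lsl) / 2
lo, hi = centre - 3 * sigma, centre + 3 * sigma

print(f"sigma = {sigma:.3f}")                  # ~0.627
print(f"target range = {lo:.2f} to {hi:.2f}")  # ~97.62 to 101.38
                                               # (the text rounds 3*sigma to 1.89)
```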

There is another side of the coin: whatever range we take as a target for development, there is an assumption that 100% of the population will be inside that interval. This is not true, because even a 6 sigma process has a failure rate of 3.4 ppm. So the point I want to make here is that we should also provide an expected failure rate corresponding to the interval that we have chosen to work with.

picture19

For further discussion on this topic, keep visiting for the forthcoming article on confidence, prediction and tolerance intervals.

Not Giving Due Respect to the Quality by Design Principle and PAT tools

Companies not having in-house QbD capability can have an excuse, but even companies with QbD capability witness failures during scale-up, even though they claim to have used QbD principles. They often think that QbD and DoE are the same thing. For the readers, I want to highlight that DoE is just a small portion of QbD. There is a sequence of events that constitutes QbD, and DoE is just one of those events.

I have seen that people start DoE directly on the process; scientists used to come to me saying, "these are the critical process parameters (CPPs)", and ask for a DoE plan. These CPPs are selected mostly based on chemistry knowledge: moles, temperature, concentration, reaction time, etc. Now, the thing is that these variables will seldom vary in the plant, because the warehouse won't issue more or less than the specified quantity of raw material and solvents, and the temperature won't deviate that much. What we miss are the process-related variables: heating and cooling gradients, hold-up time of the reaction mass at a particular temperature, work-up time in the plant (usually much longer than the lab work-up time), type of agitator, exothermicity, waiting time for analysis, and other unit operations. We don't understand the importance of these at the lab level, but these monsters raise their heads during commercialization.

Therefore, proper guidelines are required for conducting successful QbD studies in the lab (see the forthcoming article on DoE). In general, if we want successful QbD, we need to make a dummy batch manufacturing record of the process in the lab and then perform a risk analysis of the whole process to identify the CPPs and CMAs. A brief QbD process is described below.

Picture1

Picture6

Improper Control Strategy in the Developmental Report

Once the product is developed in the lab, there are some critical process parameters (CPPs) that can affect the CQAs. These CPPs are seldom deliberated on in detail by the cross-functional team to mitigate the risk by providing adequate manual and engineering controls. This is because we are in a hurry to file the ANDA/DMF, among other reasons; we take action only once the failures become a chronic issue. Because of this, CPPs vary in the plant, resulting in OOS.

Monitoring of CQAs instead of CPPs during commercialization.

I like to call us "knowledgeable sinners". This is because we know that a CQA is affected by the CPPs, yet we continue to monitor the CQA instead of the CPPs, even though, if the CPPs are under control, the CQA has to be under control. For example, we know that if the reaction temperature shoots up it will lead to impurities; even then, we continue to monitor the impurity levels using control charts, but not the temperature itself. We can ask ourselves: what do we achieve by monitoring the impurities after the batch is complete? The answer is, we achieve nothing but a failed batch, an investigation, and the loss of raw material/energy/manpower/production time. To summarize, we can only do a postmortem of a failed batch, and nothing else.

Instead of the impurity, if we had monitored the temperature, which was critical, we could have taken a corrective action then and there. Knowing that the batch was going to fail, we could have terminated it, thereby saving the loss of manpower/energy/production time etc. (imagine: a single OOS investigation requires at least 5-6 people working for a week, which is equal to about 30 man-days).

Picture18

The role of QA is mistaken for policing and auditing rather than continuous improvement.

The QA department in every organization is frequently busy with audit preparation! Its main role has become restricted to documentation and keeping the facility ready for audits (mostly in the pharmaceutical field). What I feel is that within QA there has to be a statistical process control (SPC) group whose main function is to monitor the processes and suggest areas of improvement. This function should have sound knowledge of engineering and SPC so that it can foresee OOT and OOS by monitoring CPPs on control charts. So the role of QA is not only policing but also assisting other departments in improving quality. I understand that at present SPC knowledge is very limited within QA and the other departments, and we need to improve that.

Lack of empowerment of the operators to report deviations

You will all agree that the best process owners of any product are the shop-floor people, the operators, but we seldom give importance to their contribution. The pressure on them is to deliver a given number of batches per month to meet the sales target. Due to this production target, they often don't report deviations in CPPs, because they know that if they do, it will lead to an investigation by QA, and the batch will be cleared only once the investigation is over. In my opinion, QA should empower operators to report deviations; the punishment should not be for the batch failure but for not asking for help. It is fine to miss the target by one or two batches, as the knowledge gained from those batches with deviations will improve the process.

Lack of basic statistical knowledge across the technical team (R&D, Production, QA, QC)

I am not saying that everyone should become a statistical expert, but at least we can train our people on the basic 7QC tools! That is not rocket science. This will help everyone monitor and understand the process; shop-floor people can themselves use these tools (or QA can empower them after training and certification) to plot histograms, control charts, etc. pertaining to the process and can compile the report for QA.

What are Seven QC Tools & How to Remember them?

Other reasons for OOT/OOS are as follows; they are self-explanatory:
  1. Frequent vendor changes (quality comes at a price). Someone has to bear the cost of poor quality.
    1. Not involving your vendors in your continuous improvement journey. The variation in their raw material can create havoc in your process.
  2. Focusing on delivery at the cost of preventive maintenance of the hardware

 Related Topics

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-1

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-2