We have seen that DoE is an integral part of any QbD study; however, we seldom apply it properly during the development stage. I have seen scientists afraid of using it because they think that DoE means more experiments. In reality, they end up doing more experiments than DoE would suggest. To give you an idea, a scientist might perform 40-50 experiments to investigate 4-5 variables, whereas DoE can do the same job in 15-20 experiments. The catch is that when DoE proposes those 15-20 experiments in a single go, it appears to be too many to the development team. The other issue is a lack of knowledge on how to exploit DoE using fractional factorial designs, Plackett-Burman, D-optimal designs, etc.
This article describes the flow diagram for conducting a successful DoE.
As a chemist, I was always in a hurry to perform a DoE to optimize a chemical reaction. More often than not I failed, and gradually I learned that it can't be done in a hurry; we need to do some homework first.
The basic concept behind QbD is Juran's idea of "building/designing quality into the product"[i] rather than "testing the product for compliance with quality". Designing quality into the product is achieved through better control of the process, which in turn requires a proper understanding of the relationship between the CQAs (y) and the CPPs/CMAs (x), as shown in Figures 4 and 7. This concept of building quality into the product is based on quality risk management[ii], where one needs to assess the risk each PP/MA poses to the CQAs. The basic outline of QbD in process development is shown in Figure 9. It involves the following steps.
For a successful DoE, we need to divide the whole exercise into two phases.
Phase-1: Preparing for DoE
It deals with the preliminary homework, such as:
What quality parameters do we want to study, and why?
What are the process parameters and material attributes that can affect the selected quality parameters?
Phase-2: Performing DoE and analysis followed by proposing the control strategy
Once the quality parameters and the most probable process parameters and material attributes are identified, it is time to perform the DoE to establish a cause-and-effect relationship between the two.
Based on the DoE study, CPPs and CMAs are identified and a design space is generated within which CPPs & CMAs could be varied to keep CQAs under control.
Finally, before commercialization, a control strategy is proposed to keep the CPPs/CMAs within their specified ranges, either through engineering or manual controls.
DoE: Design of Experiments
CQA: Critical Quality Attribute — product qualities that are critical for the customer
CPP: Critical Process Parameter — process parameters that affect the CQAs
CMA: Critical Material Attribute — input material attributes that affect the CQAs
[i]. J. M. Juran, Juran on Quality by Design: The New Steps for Planning Quality Into Goods and Services, Simon and Schuster, 1992.
[ii]. José Rodríguez-Pérez, Quality Risk Management in FDA-Regulated Industry, ASQ Quality Press, Milwaukee, 2012.
What mostly happens during any investigation is that we collect a lot of data to prove or disprove our assumption. The problem with this approach is that we can end up with false correlations between variables,
e.g., the growth in internet connections and deaths due to cancer over the last four decades!
Is there a relation between the two (internet connections and deaths due to cancer)? Absolutely not. To avoid such confusion, we need a systematic way to establish these relationships. This is where DoE comes in: it is a statistical way of conducting experiments that establishes cause-and-effect relationships. The general sequence of events in a DoE is as follows.
Why is DoE important at the R&D stage?
Just remember these two quotes:
“Development speed is not determined by how fast we complete the R&D but by how fast we can commercialize the process”
“What we do before tech transfer is more important than what is in the tech pack!”
In order to avoid unnecessary learning curves, and to keep control of the COGS, we need to deploy QbD as shown below.
Suppose we are developing a 500 mg ibuprofen tablet (actual specification: 497 to 502 mg). A tablet contains many other ingredients along with the 500 mg of the active molecule (ibuprofen). Usually these ingredients are mixed and then compressed into tablets. During product development, three batches of 500 tablets each were prepared with blending times of 15, 25 and 35 minutes respectively. A sample of 10 tablets was collected from each batch and analyzed for ibuprofen content. The results are given below.
The regression analysis of the entire data (all three batches) provides a quantitative relationship between blending time and the expected value of ibuprofen content for that blending time.
Expected ibuprofen content = 493 + 0.242 × blending time
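As a quick sanity check, the fitted line can be evaluated directly. The sketch below simply plugs the coefficients from the equation above (intercept 493, slope 0.242) into Python; it is not the regression fit itself:

```python
# Point estimate of mean tablet strength (mg) from the fitted
# regression line: content = 493 + 0.242 * blending_time
def expected_content(blending_time_min):
    """Expected ibuprofen content for a given blending time (minutes)."""
    return 493 + 0.242 * blending_time_min

# Evaluate at the three blending times studied in the example
for t in (15, 25, 35):
    print(t, "min ->", round(expected_content(t), 2), "mg")
```

For 15, 25 and 35 minutes this gives 496.63, 499.05 and 501.47 mg respectively.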
Now we want to estimate the average ibuprofen content of the entire batch of 500 tablets, based on the sample of 10 tablets, for a given blending time (say 15, 25 or 35 minutes). Let's calculate the 95% and 99% confidence intervals (CI) for each blending time.
In reality, we can never know the average ibuprofen content of the entire batch unless we do the analysis of the entire batch, which is not possible.
We can see that the 99% CI is wider than the 95% CI (I hope you are clear about what a 95% CI means). The 99% CI for a blending time of 35 minutes seems to be closest to the desired strength of 497 to 502 mg. Hence, in the development report, I would propose the widest possible assay range of 499.6 to 502.57 mg for a blending time of 35 minutes with 99% CI.
This means that if we take 100 samples, then the CIs computed from 99 of them would contain the population mean.
Now, this 99% CI of 499.6 to 502.57 mg is narrower than the specification (497 to 502 mg). Hence, I want to estimate a similar interval for a blending time of, say, 32 minutes (note: we have not conducted any experiment at this blending time!) to check whether we can meet the specification there as well. We can do this because we have derived the regression equation. What we are doing is predicting an interval for a future batch with a blending time of 32 minutes. As we are predicting a future observation, this interval is called the prediction interval of the response for a given value of the process parameter. Prediction intervals are usually wider than the corresponding confidence intervals.
Using the regression equation derived earlier, we can obtain the expected average strength for a blending time of 32 minutes.
Expected ibuprofen content for a blending time of 32 minutes = 500.74
So far, we have learnt that the CI estimates an interval that will contain the average ibuprofen content of an entire batch already executed, for a given blending time, whereas the prediction interval estimates an interval that will contain the average response of a future batch for a given blending time.
In present context,
For a blending time of 35 minutes, a 95% CI indicates that the average strength of the entire batch of 500 tablets (population) would be between 499.99 and 502.18.
A 95% PI, on the other hand, predicts that the average strength of the next batch would be between 499.6 and 502.57 for a blending time of 35 minutes.
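The CI/PI mechanics can be sketched in Python. Since the article's measurement table is not reproduced here, the data below are simulated around the fitted line, so the numbers are illustrative; the formulas, however, are the standard ones for a mean-response CI and a single-observation PI from simple linear regression:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: 10 tablets per blending time (15, 25, 35 min),
# simulated around the fitted line with some assay noise.
x = np.repeat([15.0, 25.0, 35.0], 10)
y = 493 + 0.242 * x + rng.normal(0, 1.0, x.size)

n = x.size
b1, b0 = np.polyfit(x, y, 1)                      # slope, intercept
y_hat = b0 + b1 * x
s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))   # residual std dev
x_bar = x.mean()
sxx = np.sum((x - x_bar) ** 2)

def intervals(x0, conf=0.95):
    """CI for the mean response and PI for one future observation at x0."""
    t_crit = stats.t.ppf(0.5 + conf / 2, n - 2)
    center = b0 + b1 * x0
    se_mean = s * np.sqrt(1 / n + (x0 - x_bar) ** 2 / sxx)
    se_pred = s * np.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / sxx)
    ci = (center - t_crit * se_mean, center + t_crit * se_mean)
    pi = (center - t_crit * se_pred, center + t_crit * se_pred)
    return ci, pi

ci, pi = intervals(35)
print("95% CI at 35 min:", ci)
print("95% PI at 35 min:", pi)
```

Whatever the data, the PI always comes out wider than the CI at the same x, because its standard error carries the extra "1 +" term for the noise in a single future observation.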
Now the question is: can we propose either of these intervals (95% CI or 95% PI) as the process control limits?
In my view, we can't! These intervals don't tell us anything about the distribution of the population within them. In other words, we cannot assume that all 500 tablets (the entire batch) would be covered by these intervals; only the mean of the entire batch would fall within them.
For me it is necessary to know the future trend of the batches once we transfer the technology for commercialization. We should know not only the interval containing the mean (of any CQA) of the population, but also the proportion of the total population falling within that interval. This will help me determine the expected failure rate of future batches when all CPPs are under control (even a Six Sigma process has a failure rate of 3.4 ppm!). Once I know that, I can decide when to start investigating an OOS (once the number of failures crosses the expected failure rate). For this statement, I am assuming that there is no special cause behind the OOS.
This job is done by the tolerance interval (TI). In general, a TI is reported as follows:
A 95% TI for the tablet strength (Y) containing 99% of the population of the future batches for a blending time of 35 minutes (X).
It means that the TI calculated at a 95% confidence level would encompass 99% of the future batches manufactured with a blending time of 35 minutes. In other words, about 1% of the batches would be expected to fail. I would therefore start investigating an OOS only if there are two or more failures in the next 100 batches (assuming that there are no special causes for the OOS and all process parameters are followed religiously).
The TI for the batches at different blending times is given below:
Tolerance Interval type: two sided
Confidence level: 95%
Percentage of population to be covered: 99
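A two-sided normal tolerance interval of this kind can be sketched as below using Howe's k-factor approximation. The sample data are illustrative (not the article's actual assay results), and scipy is assumed to be available:

```python
import numpy as np
from scipy import stats

def tolerance_interval(data, coverage=0.99, conf=0.95):
    """Two-sided normal tolerance interval (Howe's k-factor approximation)."""
    data = np.asarray(data, dtype=float)
    n = data.size
    dof = n - 1
    z = stats.norm.ppf(0.5 + coverage / 2)     # z for central coverage
    chi2 = stats.chi2.ppf(1 - conf, dof)       # lower chi-square quantile
    k = z * np.sqrt(dof * (1 + 1 / n) / chi2)  # Howe (1969) k-factor
    m, sd = data.mean(), data.std(ddof=1)
    return m - k * sd, m + k * sd

# Illustrative assays for 10 tablets at a 35-minute blending time
rng = np.random.default_rng(1)
sample = rng.normal(500.3, 0.6, 10)
lo, hi = tolerance_interval(sample, coverage=0.99, conf=0.95)
print(f"95%/99% tolerance interval: {lo:.2f} to {hi:.2f} mg")
```

Note how much wider the TI is than a CI from the same 10 tablets: it has to bracket 99% of individual tablets, not just the batch mean.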
It is very important to understand the concepts of CI/PI/TI before we can understand the reasons for OOS.
Let's start with the following situation:
You have to reach the office before 9:30 AM. Now tell me, how confident are you about reaching the office between
(A) 9:10 to 9:15 (hmm…, such a narrow range, I am ~90% confident)
(B) 9:05 to 9:20 (a-haa.., now I am 95% confident)
(C) 9:00 to 9:25 (this is very easy, I am almost 99% confident)
The point to note here is that your confidence increases as the time interval widens (remember this for the rest of the discussion).
More importantly, it is difficult to estimate the exact arrival time, but we can say with some confidence that the arrival time would fall within some interval.
Say my average arrival time over the last five days (assuming all other factors remain constant) was 9:17 AM; then I can say with a certain confidence (say 95%) that my arrival time would be given by
Average arrival time on (say 5 days) ± margin of error
The confidence we are expressing is called the confidence level, and the interval estimated by the above equation at a given confidence level is called the CONFIDENCE INTERVAL (CI). This confidence interval may or may not contain my true mean arrival time.
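The arrival-time CI can be computed in a few lines. The five arrival times below are illustrative, and the t critical value 2.776 is the two-sided 95% value for 4 degrees of freedom:

```python
import math
from statistics import mean, stdev

# Arrival times over 5 days, in minutes after 9:00 AM (illustrative)
arrivals = [17, 15, 19, 16, 18]

n = len(arrivals)
m = mean(arrivals)                                # sample mean arrival time
t_crit = 2.776                                    # t(0.975, df=4)
margin = t_crit * stdev(arrivals) / math.sqrt(n)  # margin of error

print(f"95% CI: 9:{m - margin:.1f} to 9:{m + margin:.1f}")
```

With these numbers, the mean is 9:17 and the margin of error is about 2 minutes, so the 95% CI runs from roughly 9:15 to 9:19.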
Now let's move to a manufacturing scenario.
We are all aware of the diagram given below: the critical quality attribute (CQA, or y) of any process is affected by many inputs, such as critical material attributes (CMAs), critical process parameters (CPPs) and other uncontrollable factors.
Since CQAs are affected by CPPs and CMAs, we say that a CQA, or any output Y, is a function of X (X = CPPs/CMAs).
The relationship between Y and X is given by the regression equation Y = b0 + b1X1 + error.
The following points are worth mentioning:
The value of Y depends on the value of X; it means that if there is a deviation in X, there will be a corresponding deviation in Y. For example, if the level of an impurity (y) is influenced by temperature (x), then any deviation in the impurity level can be attributed to a change in temperature.
If you hold X constant at some value and perform the process many times (say 100), the 100 products (Y) would not all be of the same quality, because of the inherent variation/noise in the system, which in turn is due to other uncontrollable factors. That is why we have the error term in our regression equation. If the error term were zero, the relationship would be described perfectly by the straight line y = mx + c. In that case the regression line gives the expected value of Y, represented by E(Y) = b0 + b1X1.
As we have seen, there will be variation in Y even if you hold X constant. Hence, the term 'expected value of Y' represents the average value of Y for a given value of X.
It's fine that, for a given value of X, there will be a range of Y values because of the inherent variation/noise in the process, and that the average of those Y values is called the expected value of Y for that X. But tell me, how is this going to help in investigating an OOS/OOT?
Let's come to the point. Assume that we have manufactured one million tablets of 500 mg strength with a blending time of 15 minutes (= x). Now, I want to know the exact mean strength of all the tablets in the entire batch.
In statistical terms,
It's not possible to know the exact mean strength of all the tablets in the entire batch, as it would require destructive analysis of all one million tablets.
Then what is the way out? How can we estimate the mean strength of the entire batch?
The best we can do is to take a sample, analyze it and, based on the sample mean strength, make an intelligent guess about the mean strength of the entire batch, albeit with some error, since we are using a sample for the estimation. This error is called the sampling error. The sample data gives an interval that may contain the population mean:
Sample mean ± margin of error = confidence interval (CI)
The term “sample mean ± margin of error” is called the confidence interval, which may or may not contain the population mean.
It is unlikely that two samples from a given population will yield identical confidence intervals; every sample will provide a different interval. However, if we repeat the sampling many times and calculate all the CIs, a certain percentage of the resulting intervals will contain the unknown population parameter. This percentage is called the confidence level of the interval, and the interval estimated from the sample is called the confidence interval (CI). Note that this CI is for a given value of X and will change when X changes.
Note: don't be afraid of the formulas; we will cover them later.
If 100 samples are drawn, then the confidence levels can be interpreted as follows:
A 90% confidence level would indicate that the confidence interval (CI) generated by 90 samples (out of 100) would contain the unknown population parameter.
A 95% confidence level indicates that the CI estimated by 95 samples (out of 100) would contain the unknown population parameter.
To summarize, we can estimate the population mean using a confidence interval at a certain confidence level.
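The meaning of the confidence level can be checked by simulation: draw many samples from a known population, build a 95% CI from each, and count how many intervals actually contain the true mean. The population parameters below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, sigma, n = 500.0, 2.0, 10   # illustrative population and sample size

hits = 0
trials = 1000
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    m, s = sample.mean(), sample.std(ddof=1)
    margin = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
    # Does this 95% CI contain the true population mean?
    if m - margin <= true_mean <= m + margin:
        hits += 1

print(f"{hits} of {trials} intervals contain the true mean")
```

Roughly 950 of the 1000 intervals capture the true mean, which is exactly what "95% confidence level" promises.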
It's fine that the CI helps me determine the range within which there is a 95% or 99% probability of finding the mean strength of the entire batch. But I have an additional question: I am also interested in knowing how many tablets (out of one million) would be bracketed by this or any other interval, and how many fall outside it. This will help me determine the failure rate once we compare this interval with the customer's specification.
More precisely, we want to know the interval that would contain 99% of the tablets with the desired strength, and how confident we are that this interval really contains 99% of the population.
If we can get this interval, we can compare it with the customer's specification, which in turn tells us something about the process capability. How can this be resolved?
Let's understand the problem once again.
If we have understood the issue correctly, we want to estimate an interval, based on the sample data, that will cover say 99% or 95% of the population, and then overlap this interval with the customer's specification to check the capability of the process. This is represented by scenario-1 and scenario-2 (ideal) in the figure given below.
Having understood the issue, the solution lies in calculating another interval, known as the tolerance interval, for the population with a desired characteristic (Y) at a given value of the process parameter X.
Tolerance interval: this interval captures the values of a specified proportion of all future observations of the response variable, for a particular combination of the values of the predictor variables, with some high confidence level.
We have seen that CI width is entirely due to the sampling error. As the sample size increases and approaches the entire population size, the width of the confidence interval approaches zero. This is because the term “margin of error” would become zero.
In contrast, the width of a tolerance interval is due to both sampling error and variance in the population. As the sample size approaches the entire population, the sampling error diminishes and the estimated percentiles approach the true population percentiles.
e.g., a 95% tolerance interval that captures 98% of the population of a future batch of tablets at a blending time of 15 minutes is 485.221 to 505.579 mg (this is Y).
Now, if the customer's specification for the tablet strength is 497 to 502 mg, we are in trouble (scenario-1 in the figure above), because we need to work on the process (increase the blending time) to reduce the variability.
Let's assume that we increased the blending time to 35 minutes and, as a result, the 95% tolerance interval that captures 99% of the population becomes 498.598 to 501.902 mg. Now we are comfortably within the customer's specification (scenario-2 in the figure above). Hence, we need to blend the mixture for 35 minutes before compressing it into tablets.
We need to be careful when reading a tolerance interval, as it contains two percentage terms. The first, 95%, is the confidence level; the second, 98%, is the proportion of the total population with the required quality attribute that we want the tolerance interval to bracket, at a constant blending time of 15 minutes.
To summarize: in order to generate tolerance intervals, we must specify both the proportion of the population to be covered and a confidence level. The confidence level is the likelihood that the interval actually covers the proportion.
This is what we wanted during the product development.
The regression line represents the expected value of y, E(yp), for a given value of x = xn. Hence, the point estimate ŷ of y for a given value of x = xn is given by ŷ = b0 + b1xn, where
xn = given value of x
yn = value of the output y corresponding to xn
E(yp) = mean or expected value of y for x = xn; it denotes the unknown mean of all y values where x = xn.
Theoretically, ŷ is the point estimate of E(yp), so the two should be equal, but in practice this seldom happens. If we want to measure how close the true mean value E(yp) is to the point estimator ŷ, we need the standard deviation of ŷ for the given value xn.
The confidence interval for the expected value E(yp) is given by ŷ ± t(α/2) × s(ŷ), where the standard deviation of ŷ is s(ŷ) = s × √(1/n + (xn − x̄)² / Σ(xi − x̄)²).
Why do we need this equation right now? (I don't want you to get terrified!) But if you focus on the term (xn − x̄)² in the standard deviation formula, one important observation is that if xn = x̄,
then the standard deviation is at its minimum, and as you move away from the mean the standard deviation keeps increasing. This implies that the CI is narrowest at xn = x̄ and widens as you move away from the mean.
Hence, the width of the CI depends on the value of CPP (x)
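This dependence can be verified numerically. The sketch below evaluates the standard error of the estimated mean response at several blending times for an illustrative design (10 tablets each at 15, 25 and 35 minutes) with an assumed residual standard deviation:

```python
import numpy as np

# Illustrative design points (blending times, minutes)
x = np.repeat([15.0, 25.0, 35.0], 10)
s = 1.0                                  # assumed residual standard deviation
n, x_bar = x.size, x.mean()
sxx = np.sum((x - x_bar) ** 2)

def se_mean_response(x0):
    """Standard error of the estimated mean response at x0:
    s * sqrt(1/n + (x0 - x_bar)^2 / Sxx)."""
    return s * np.sqrt(1 / n + (x0 - x_bar) ** 2 / sxx)

for x0 in (15, 25, 35, 45):
    print(x0, "min ->", round(se_mean_response(x0), 4))
```

The standard error (and hence the CI width) is smallest at the mean blending time of 25 minutes and grows as x0 moves away from it, including at 45 minutes, which lies outside the studied range.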
While developing a product, we are bound by the USP/EP/JP monographs for the product's critical quality attributes (CQAs), or by the ICH guidelines, and yet we see regular OOT/OOS results in commercial batches. It's true that every generic company has developed expertise in investigating and providing corrective and preventive actions (CAPA) for all OOT and OOS results, but the question that remains in our hearts and minds is:
Why can’t we stop them from occurring?
The answer lies in the following inherent issues at each stage of the product life cycle.
We assume the customer's specification and the process control limits are the same thing during product development.
Let's assume that the USP monograph gives an acceptable assay range for a drug product of 97% to 102%. The product development team immediately starts working on the process to meet this specification; the focus is entirely on developing a process that yields a drug product within this range. But we forget that even a Six Sigma process has a failure rate of 3.4 ppm. Therefore, in the absence of statistical knowledge, we take the customer's specification as the target for product development.
The right approach would be to calculate the process control limits required so that a given proportion of the batches (say 95% or 99%) falls within the customer's specification.
Here I would like to draw an analogy: the customer's specification is like the width of a garage, and the process control limits are like the width of the car. The width of the car should be much less than the width of the garage to avoid any scratches. Hence the target process control limits for product development should be narrower.
Inadequate statistical knowledge leads to a wrong target range for a given quality parameter during product development.
Take the above example again: the customer's specification limit for the assay is 97% to 102% (= garage width). Now the question is, what should be the width of the process (= car's width) that we target during product development to reduce the number of failures during commercialization? One thing is clear at this point: we can't take the customer's specification as the target for product development.
Calculating the target range for the development team
To simplify, I will use the formula for process capability: Cp = (USL − LSL) / (6σ)
where Cp = process capability, σ = standard deviation of the process, and USL and LSL are the customer's upper and lower specification limits. A Cp of 1.33 is the minimum desired value for a capable process (≈ a 4-sigma process, since 3 × 1.33 = 3.99).
Calculating σ for the above process: σ = (USL − LSL) / (6 × 1.33) = (102 − 97) / 7.98 ≈ 0.63
The centre of the specification is 99.5; hence the target assay range for the product development team is given by
Specification mean ± 3σ
= 99.5 ± 3 × 0.63 = 99.5 ± 1.89 = 97.61 to 101.39
Hence, the product development team has to target an assay range of 97.61 to 101.39 instead of the customer's specification.
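This back-calculation can be reproduced in a couple of lines (note that the article rounds σ to 0.63 before multiplying, which gives the slightly wider 97.61 to 101.39):

```python
# Back-calculate the process sigma needed for Cp = 1.33
# against the customer's specification of 97% to 102%.
USL, LSL, Cp = 102.0, 97.0, 1.33

sigma = (USL - LSL) / (6 * Cp)        # from Cp = (USL - LSL) / (6 * sigma)
center = (USL + LSL) / 2              # 99.5, middle of the specification
lo, hi = center - 3 * sigma, center + 3 * sigma

print(f"sigma = {sigma:.4f}, development target: {lo:.2f} to {hi:.2f}")
```

With the unrounded σ of about 0.6266, the ±3σ development target comes out as roughly 97.62 to 101.38.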
There is another side to the coin: whatever range we take as a target for development, there is an implicit assumption that 100% of the population falls within that interval. This is not true, because even a Six Sigma process has a failure rate of 3.4 ppm. So the point I want to make is that we should also state the expected failure rate corresponding to the interval we have chosen to work with.
For further discussion on this topic, keep visiting for the forthcoming article on confidence, prediction and tolerance intervals.
Not Giving Due Respect to the Quality by Design Principle and PAT tools
Companies without in-house QbD capability have an excuse, but even companies with QbD capability witness failures during scale-up, despite claiming to have used QbD principles. They often think that QbD and DoE are the same thing. For the readers, I want to highlight that DoE is just a small part of QbD: there is a sequence of events that constitutes QbD, and DoE is just one of those events.
I have seen people start a DoE directly on the process; scientists used to come to me with a list of critical process parameters (CPPs), selected mostly on the basis of chemistry knowledge (moles, temperature, concentration, reaction time, etc.), and ask for a DoE plan. The thing is, these variables seldom vary in the plant: the warehouse won't issue more or less raw material or solvent than specified, and the temperature won't deviate that much. What we miss are the process-related variables: heating and cooling gradients, hold-up time of the reaction mass at a particular temperature, work-up time in the plant (usually much longer than in the lab), type of agitator, exothermicity, waiting time for analysis, and other unit operations. We don't appreciate their importance at the lab level, but these monsters raise their heads during commercialization.
Therefore, proper guidelines are required for conducting successful QbD studies in the lab (see the forthcoming article on DoE). In general, if we want a successful QbD, we need to prepare a dummy batch manufacturing record of the process in the lab and then perform a risk analysis of the whole process to identify the CPPs and CMAs. A brief QbD process is described below.
Improper Control Strategy in the Developmental Report
Once the product is developed in the lab, there are some critical process parameters (CPPs) that can affect the CQAs. These CPPs are seldom deliberated on in detail by the cross-functional team to mitigate the risk by providing adequate manual and engineering controls, because we are in a hurry to file the ANDA/DMF, among other reasons. We take action only once failures become a chronic issue. Because of this, CPPs vary in the plant, resulting in OOS.
Monitoring of CQAs instead of CPPs during commercialization.
I like to call us "knowledgeable sinners". We know that a CQA is affected by the CPPs, and that if the CPPs are under control the CQA has to be under control, yet we continue to monitor the CQA instead of the CPPs. For example, we know that if the reaction temperature shoots up it will lead to impurities, yet we keep monitoring the impurity levels on control charts rather than the temperature itself. Ask yourself: what do we achieve by monitoring the impurities after the batch is complete? Nothing but a failed batch, an investigation, and a loss of raw material, energy, manpower and production time. In short, we can only do a post-mortem of a failed batch.
If, instead of the impurity, we had monitored the temperature, which was the critical parameter, we could have taken corrective action then and there. Knowing that the batch was going to fail, we could have terminated it, saving manpower, energy, production time, etc. (a single OOS investigation requires at least 5-6 people working for a week, which is about 30 man-days).
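Monitoring the CPP directly is easy to automate. The sketch below builds a simple Shewhart individuals chart for logged reaction temperatures (the values and limits are illustrative) and flags the excursion as soon as it occurs:

```python
from statistics import mean, stdev

# Illustrative in-process reaction temperatures (deg C), logged in sequence
temps = [80.1, 79.8, 80.3, 80.0, 79.9, 80.2, 84.5, 80.1]

baseline = temps[:6]                 # assume the first 6 points are in control
m, s = mean(baseline), stdev(baseline)
ucl, lcl = m + 3 * s, m - 3 * s      # 3-sigma control limits

# Flag every point outside the control limits
alarms = [i for i, t in enumerate(temps) if not (lcl <= t <= ucl)]
print("control limits:", round(lcl, 2), "to", round(ucl, 2))
print("out-of-control points at indices:", alarms)
```

Here the spike to 84.5 °C at index 6 is flagged immediately, while the batch is still running, which is exactly when a corrective action (or a decision to terminate) is still possible.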
Role of QA is mistaken for Policing and auditing rather than in continuous improvement.
The QA department in every organization is frequently busy with audit preparation! Its main role has become restricted to documentation and keeping the facility ready for audits (mostly in the pharmaceutical field). In my view, within QA there should be a statistical process control (SPC) group whose main function is to monitor the processes and suggest areas for improvement. This group should have sound knowledge of engineering and SPC so that it can foresee OOT and OOS by monitoring CPPs on control charts. So the role of QA is not only policing but also assisting other departments in improving quality. I understand that SPC knowledge is currently very limited within QA and other departments, and this is something we need to improve.
Lack of empowerment of operators to report deviations
You will all agree that the best process owners for any product are the shop-floor people, the operators, yet we seldom give importance to their contribution. The pressure on them is to deliver a given number of batches per month to meet the sales target. Because of this production target, they often don't report deviations in CPPs, knowing that a report will trigger a QA investigation and that the batch will be cleared only once the investigation is over. In my opinion, QA should empower operators to report deviations; the punishment should not be for a batch failure, but for not asking for help. It is fine to miss the target by one or two batches, because the knowledge gained from those deviating batches will improve the process.
Lack of basic statistical knowledge across the technical team (R&D, Production, QA, QC)
I am not saying that everyone should become a statistical expert, but at least we can train our people on the basic 7 QC tools; that is not rocket science. This will help everyone monitor and understand the process: shop-floor people can themselves use these tools (or QA can empower them after training and certification) to plot histograms, control charts, etc. pertaining to the process and compile reports for QA.