Why Does the Pharmaceutical Industry Require Quality by Design (QbD)


(This article is part of the PhD thesis of Mr. Abdul Qayum Mohammed, my PhD student.)

Authors:

Abdul Qayum Mohammed, Phani Kiran Sunkari, Amrendra Kumar Roy*

*Corresponding Author, email: Amrendra@6sigma-concepts.com

KEYWORDS: QbD, 4A’s, DoE, FMEA, Design space, control strategy

ABSTRACT

QbD is of paramount importance for patient safety, but there is another side of the coin: QbD is also required for the timely and uninterrupted supply of medicines to the market. This timely, uninterrupted supply is needed to fulfil the 4A's requirement of any Regulatory body, which is their main KRA. Manufacturers, however, are given the impression that patients are their main customer, which is not true, and because of this, QbD implementation by generic API manufacturers has not picked up. This article argues that the real customer is not the patient but the Regulatory bodies, who deal with the manufacturer on the patients' behalf. Hence Regulators need to tell manufacturers that QbD is required not only for patient safety but also for meeting the 4A's requirement, which is equally important. The article correlates the effect of an inconsistent manufacturing process on the KRA of the Regulatory bodies and makes a business case out of it. This can help develop a strong customer-supplier relationship between the two parties and trigger the smooth acceptance of QbD by generic players. The article also presents the detailed sequence of steps involved in QbD using a process flow diagram.

Introduction:

Nowadays, Quality by Design (QbD) is an essential topic in pharmaceutical development, be it for a drug substance or a drug product. Various guidelines have been published by different Regulatory agencies.[i] There is a plethora of literature available on the QbD approach to process development[ii],[iii] of drug substances and drug products and to analytical method development[iv]. Most of the available literature focuses mainly on patient safety (QTPP), but if QbD is to sail through, the generic manufacturer must know why and for whom it is required (apart from patients) and what is in it for them. They should not take regulators as an obstacle to their business but as a part of the business itself. There has to be a business perspective behind QbD, as everything in this world is driven by economics. It has to be a win-win situation for Regulators and manufacturers, which means their expectations must be synchronized. This synchronization will be most effective if API manufacturers (i.e., suppliers) consider Regulators as their customer and try to understand their requirements. In this context, it is very important to understand the Regulators' expectations and their responsibility towards their fellow countrymen.

Regulators' Expectations:

The sole responsibility of any Regulator towards its country is to ensure not only acceptable (quality, safety and efficacy) and affordable medicines but also their availability (no shortage) in the country at all times. Even that is not enough: those medicines must be easily accessible to patients at their local pharmacies. These may be called the 4A's and are the KRA of any Regulatory body. If Regulators miss any one of the 4A's, they will be held accountable by their Government for endangering the lives of patients.

In earlier days, when health services had not penetrated large sections of society, the main focus of Regulators was on the quality and price of medicines. Margins were quite high in those days, and the effect of reprocessing and reworks on manufacturers' margins was small. So Regulators were happy, as they were getting good quality at the best price for their citizens. Gradually, health services reached larger sections of society in developed countries, which needed more and more medicine at an affordable price. The KRA of Regulators changed from "high quality and low price" to "quality medicine at an affordable price, available all the time at the patient's doorstep". Another event that led to further cost erosion was the arrival of medical insurance and tender-based purchasing systems in hospitals. Increased demand made manufacturers increase their batch sizes, but because of insurance and tender-based purchasing they no longer had the advantage of high margins and could not afford batch failures and reprocessing anymore. These wastages led to erratic production and irregular supply of medicines in the market, thereby creating shortages. This affected the KRA (4A's) of the regulatory bodies, so they were forced to intervene in the suppliers' systems. They realized that ensuring the 4A's requires a robust process at the manufacturer's site; if that is in place, medicines are automatically available in the country (no shortages) and accessible to all patients at an affordable price. Such process robustness is possible with proven statistical tools like Six Sigma and QbD during the manufacturing of an API. This path to a robust process was shown by the Regulators in the form of the Q8/Q9/Q10/Q11 guidelines[i], where QbD was made mandatory for formulators and strongly recommended for API manufacturers (it may soon be made mandatory for them too). While making QbD mandatory, Regulators emphasized how QbD relates to patient safety and how it makes the process robust for manufacturers, which in turn eliminates the fear of audits. The Regulators are right, but somewhere they failed to communicate the business perspective behind QbD implementation: manufacturers had little clue about the Regulators' KRA, and as a result a customer-supplier relationship never developed.


Figure 1: Regulator’s unsaid expectations

Manufacturer’s point of view

While Regulators were insisting on QbD, manufacturers had their own constraints in the plant due to process inconsistency (Figure 2). As the Regulators' emphasis was on patient safety rather than the 4A's, manufacturers took patients, instead of Regulators, as their customer and made sure there was no compromise on the quality of the medicines in order to delight that customer, i.e., the patients. It did not matter to the manufacturer whether quality was achieved by reprocessing or rework, as long as the material was of acceptable quality to the customer. Due to this misconception about who the real customer is, the 4A's got neglected by the manufacturer.

[Figure 2: Manufacturer's constraints due to an inconsistent process]

Another problem is the definition of quality as perceived by the two parties. Quality of an API from the customer's perspective has always been defined with respect to patient safety (i.e., the QTPP, which is indeed very important), but for the manufacturer quality meant only the purity of the product, as he enjoyed a handsome margin.


Profit = MP − COGS                (Eq. 1)

MP = market price

COGS = genuine manufacturing cost + waste cost (COPQ)

COPQ = cost of variation, batch failures, reprocessing/rework and product recalls = increase in drug substance/drug product cost = loss of customer faith (an intangible cost)

Coming to the prevailing market scenario, manufacturers no longer have the luxury of defining the selling price; the market is very competitive, and the prices of goods and services are dictated by the market, hence the term market price (MP) instead of selling price (SP). This led to a change in the perception of quality: quality is now defined as producing goods and services that meet the customer's specification at the right price. Manufacturers are forced to sell at the market rate, so profit is now the difference between the market price and the cost of goods sold (COGS), as in Eq. 1. If the manufacturing process is not robust, COPQ will be high, resulting in a high COGS, and one of the parties (patient or manufacturer) has to bear the cost. According to Taguchi, this is a loss to society as a whole, as neither party benefits. If such failures are frequent, they lead to production loss; the product is then not available in the market on time, and the manufacturer cannot fulfil the 4A's criteria of the customer. This leads not only to loss of market share but also to loss of the customer's confidence, and the customer in turn will look for other suppliers who can fulfil their requirements. This is an intangible loss to the manufacturer.
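Since the market fixes MP, Eq. 1 implies that every unit of COPQ comes straight out of profit. A minimal sketch of this arithmetic, with hypothetical numbers:

```python
# Eq. 1 with hypothetical numbers: the market fixes MP, so any COPQ
# (failures, rework, recalls) directly erodes the manufacturer's profit.

market_price = 100.0   # MP per unit, dictated by the market
genuine_cost = 70.0    # genuine manufacturing cost per unit

for copq in (0.0, 5.0, 15.0):  # waste cost (COPQ) per unit
    cogs = genuine_cost + copq
    print(f"COPQ = {copq:4.1f} -> COGS = {cogs:5.1f}, profit = {market_price - cogs:5.1f}")
```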

COPQ has a direct relationship with the way the process has been developed. There are two ways in which a process can be optimized (Figure 3). It is clear from Figure 3 that focusing on process optimization leads to a lower COPQ and a process that is more robust in terms of quality, quantity and timelines, thereby reducing COGS by eliminating COPQ. This raises another question: how is process optimization different from product optimization, and how is it going to solve all the problems related to inconsistency? This can be understood through the relationship between QTPPs/CQAs and CPPs/CMAs. As manufacturers we must realize that any CQA (y) is a function of the CPPs and CMAs (x); i.e., the value of a CQA is dictated by the CPPs/CMAs and not vice versa (Figures 4 and 7). It means that by controlling the CPPs/CMAs we can control the CQAs, but to do this we need to study and understand the process very well. That study quantifies the effect of the CPPs/CMAs on the CQAs, and once it is done, it is possible to keep the CQAs at a desired level just by controlling the CPPs/CMAs. This way of developing a process is called process optimization, and QbD insists on it. Another important concept associated with process optimization is the way in-process monitoring of the reaction is done. Traditionally, a desired CQA is monitored for any abnormality during the reaction, whereas in the process optimization methodology the CPP/CMA responsible for that CQA is monitored instead (Figure 4). Hence a paradigm shift is required in the way a process is developed and a control strategy is formulated if the focus is on process optimization.


Figure 3: Two ways of optimization

From the above discussion, it is clear that the real customer for a generic manufacturer is not the patient but the Regulator. Patients cannot decide; they have no means of testing the quality of medicines, and for them all brands are the same. Hence the Regulators come into the picture: they deal with manufacturers on behalf of patients because they have the means and the capability to do so. Going by Figure 5, patients are the real customer of the Regulators, who in turn are the customer of the manufacturer. In the business sense, patients are just the end users of the manufacturer's product once the product is approved by the Regulators for use.


Figure 4: Relationship between CQAs and CPPs/CMAs

Now that it is clear that the Regulators are the real customer of the manufacturer, and that with the current inefficient process the manufacturer is not helping this customer meet their goal (the 4A's), manufacturers can see the relationship between their inefficient manufacturing process and the customer's KRA (Figure 6). In addition, they can clearly visualize the advantage of process optimization over product optimization and how QbD can act as an enabler in developing a robust process, thereby fulfilling the 4A's requirement. This will encourage manufacturers to adopt QbD, because it now makes a strong business case for retaining the existing market and also serves as a strategy for entering new markets: a win-win situation for both parties. Therefore, QbD should be pursued by manufacturers not out of regulatory fear but as a tool for fulfilling the customer's KRA, which in turn benefits the manufacturer by minimizing COPQ. It also helps build the customer's trust, an intangible asset for any manufacturer, and enables manufacturers to accept Regulators as their customer rather than as an obstacle. This results in better commitment from manufacturers to implementing QbD, because the definition of a customer given by Mahatma Gandhi is very relevant even today.

“A customer is the most important visitor on our premises. He is not dependent on us. We are dependent on him. He is not an interruption in our work. He is the purpose of it. He is not an outsider in our business. He is part of it. We are not doing him a favor by serving him. He is doing us a favor by giving us an opportunity to do so.”

― Mahatma Gandhi


Figure 5: Dynamic Customer-Suppliers relationship throughout the supply chain


Figure 6: Manufacturer perception after understanding customer-supplier relationship

Manufacturer in the customer's shoes:

Another reason given by manufacturers for inconsistency is the quality of the KSM supplied by their vendors: any quality issue with the KSM will affect the quality of the API, as shown by Figure 7 and Equation 2. Until now the manufacturer was acting as a supplier to the Regulators, but here the manufacturer is in the shoes of a customer and can understand the problems a customer faces because of inconsistent KSM quality from a supplier (Figure 5, Table 1). Now the manufacturer can empathize with the Regulatory bodies and is in a position to understand the effect of their process on the customer's KRA (Figure 6). Table 1 is equally applicable to the relationship between the manufacturer and the Regulatory bodies.

Table 1: Effect of process inconsistency from supplier/manufacturer on API quality

Case-1: robust process at the supplier and at the manufacturer → consistent API quality (the ideal condition)
Case-2: robust process at the supplier, inconsistent process at the manufacturer → inconsistent API quality
Case-3: inconsistent process at the supplier, robust process at the manufacturer → inconsistent API quality
Case-4: inconsistent process on both sides → the variances add up and API quality goes out of control

Consider Case-1 (Table 1), which represents the ideal condition, where the process is robust on both sides. Case-2 and Case-3 represent an inconsistent process at one of the two parties, and this inconsistency is reflected as inconsistent API quality at the manufacturer's site. The result is an unsatisfied customer (the Regulator) and loss of market to someone else. Lastly, an inconsistent process on both sides (Case-4) results in a disastrous situation where it is difficult for the manufacturer to control API quality at all, because the variances from the two sides simply add up (Equation 2). In this case the customer cannot even think of taking material from the manufacturer, as it would pose a threat to patients' lives, and no regulatory body would allow that.

σ²(API) = σ²(supplier's KSM process) + σ²(manufacturer's process)                (Eq. 2)

One could argue that if consistency is an issue at the supplier (Case-3), the manufacturer could negotiate with them to cherry-pick the good batches. But no supplier will cherry-pick without extra cost, which in turn would increase the cost of the API. Another consequence of this handpicking is interruption of the timely supply of KSM, delaying production at the manufacturer's site; this increases the idle time of resources and hence the overheads, which ultimately reflect in an increased API cost. Apart from the increased cost, it also results in sporadic supply to the customer. Another option for circumventing inconsistency at the supplier's end is to reprocess the KSM at the manufacturer's site. Obviously this is not a viable solution either, as it escalates the COGS. Hence there is no choice but to take your supplier into confidence and make him understand the implications of his product quality for your business, and how his business in turn is affected by it. The best solution is to discuss with the supplier and ask him to improve his process (if the supplier has the capability) or to help him improve his KSM process (if the manufacturer has the capability).

Note: Apart from a robust process, Regulators also audit the manufacturer's site for safety and for the effluent treatment plant (ETP) facility. This is done for the same reason: ensuring the continuous supply of medicines to their country.

How does process inconsistency affect quality, and how will QbD help in getting rid of it?

Realizing that we need consistent quality and uninterrupted production is not enough; as manufacturers we must understand the various sources of inconsistency and how they can affect the quality of the API.

Any chemical reaction happening in a reactor is a black box for us (Figure 7), and three kinds of inputs go into the reactor. The first, the MAs, are the chemical entities that go into the reactor (KSM, reagents and other chemicals). The second, the PPs, are the reaction/process parameters that can be controlled by the manufacturer. The third are the environmental/external factors, such as room temperature, the age of the equipment and the operators, which cannot be controlled. As variance (σ²) is additive, inconsistency from all three types of factors amplifies the inconsistency of the product quality. The variation caused by the third type, the external factors, is called inherent variation, and we have to live with it; at most, the effect of these nuisance factors can be nullified by blocking and randomization during DoE studies. Because of this inherent variation, yield and other quality parameters are reported as a range instead of a single number. The variation due to the other two types of factors (MAs and PPs), however, can be controlled by studying their effect on the product attributes (QAs) using a combination of risk analysis tools and statistical optimization tools. This combination of risk-based assessment of the MAs and PPs with statistical tools such as DoE/MVA for optimizing their effect on the QAs is what is called QbD. Hence QbD is the tool manufacturers are looking for to eliminate inconsistency in their product and thereby fulfil the customer's expectations.

The variance shown in Figure 7 represents the variation at a single stage only. In a multi-step synthesis (the most common scenario), the total variance at the API stage is the culmination of the variances from all the stages, resulting in a totally out-of-control process, as shown by Equation 3.

σ²(API) = σ²(stage 1) + σ²(stage 2) + ⋯ + σ²(stage n)                (Eq. 3)


Figure 7: Cumulative effect of variance from various sources on the variance of API quality
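A quick simulation illustrates the additivity in Equation 3 (the stage standard deviations below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_batches = 100_000

# Hypothetical, independent stage-wise contributions to an API quality attribute.
stage_sd = [0.5, 0.8, 0.3]                       # sigma of stages 1, 2, 3
total = sum(rng.normal(0.0, sd, n_batches) for sd in stage_sd)

print("sum of stage variances :", sum(sd**2 for sd in stage_sd))  # 0.98
print("observed total variance:", round(total.var(), 3))          # ~0.98
```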

At what stage of product development should QbD be applied?

The traditional approach to process development for an API focuses on filing the DMF at the earliest. As a result of this improper process development there are failures at commercial scale, and the process comes back to R&D for fine-tuning. If instead the process is developed with the QbD approach at the R&D stage itself, it certainly takes more time initially, but the time is worth investing: there will be few or no failures at commercial scale, and the process can be scaled up in much less time. This reduces reprocessing and rework at commercial scale, thereby minimizing COPQ, a win-win situation for all, as depicted in Figure 8.


Figure 8: Risk and reward associated with QbD and traditional approach


[i]. (a) ICH Q8 Pharmaceutical Development (R2); U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER): Rockville, MD, Aug 2009. (b) ICH Q9 Quality Risk Management; U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER): Rockville, MD, June 2006. (c) ICH Q10 Pharmaceutical Quality System; U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER): Rockville, MD, April 2009. (d) Understanding Challenges to Quality by Design; final deliverable for the FDA "Understanding Challenges to QbD" project, December 18, 2009.

[ii]. (a) Jacky Musters, Leendert van den Bos, Edwin Kellenbach, Org. Process Res. Dev., 2013, 17, 87. (b) Zadeo Cimarosti, Fernando Bravo, Damiano Castoldi, Francesco Tinazzi, Stefano Provera, Alcide Perboni, Damiano Papini, Pieter Westerduin, Org. Process Res. Dev., 2010, 14, 805. (c) Fernando Bravo, Zadeo Cimarosti, Francesco Tinazzi, Gillian E. Smith, Damiano Castoldi, Stefano Provera, Pieter Westerduin, Org. Process Res. Dev., 2010, 14, 1162.

[iii]. (a) Sandeep Mohanty, Amrendra Kumar Roy, Vinay K. P. Kumar, Sandeep G. Reddy, Arun Chandra Karmakar, Tetrahedron Letters, 2014, 55, 4585. (b) Sandeep Mohanty, Amrendra Kumar Roy, S. Phani Kiran, G. Eduardo Rafael, K. P. Vinay Kumar, A. Chandra Karmakar, Org. Process Res. Dev., 2014, 18, 875.

[iv]. Girish R. Deshpande, Amrendra K. Roy, N. Someswara Rao, B. Mallikarjuna Rao, J. Rudraprasad Reddy, Chromatographia, 2011, 73, 639.

 

Concept of Quality — We Must Understand this before Learning 6sigma!


Before we try to understand the 6sigma concept, we need to define the term “quality”.

What is Quality?

The term “quality” has many interpretations, but per the ISO definition, quality is: “The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.”

If we read between the lines, the definition varies with the reference frame used to define “quality”. The reference frames used here are the manufacturer (who supplies the product) and the customer (who uses the product). The definition of quality with respect to these two reference frames can be stated as follows:

[Figure: quality as defined from the manufacturer's and the customer's reference frames]

This “goal post” approach to quality is graphically presented below, where a product is simply deemed pass or fail. It does not matter even if the quality is on the borderline (the football just missed the goalpost, and luckily a goal was scored).

[Figure: the goal-post view of quality, pass/fail against the specification limits]

This definition was applicable as long as manufacturers had a monopoly or faced limited competition in the market. Manufacturers were not worried about failures, as they could easily pass the cost on to the customer; having no choice, the customer had to bear it. This follows from the traditional definition of profit shown below.

Profit = SP − COGS, where the selling price (SP) was set by the manufacturer (cost-plus pricing)

Coming to the current business scenario, manufacturers do not have the luxury of defining the selling price; the market is very competitive, and the prices of goods and services are dictated by the market, hence the term market price instead of selling price. This led to a change in the perception of quality: quality is now defined as producing goods and services that meet the customer's specification at the right price. Manufacturers are forced to sell at the market rate, so profit is now the difference between the market price and the cost of goods sold (COGS).

In the current scenario, if a manufacturer wants to make a profit, the only option is to reduce COGS. To do so, one has to understand the components that make up COGS, shown below: COGS consists of the genuine cost of goods sold and the cost of quality. The genuine COGS will be nearly the same for all manufacturers; the real differentiator is the cost of quality. The manufacturer with the lowest cost of quality enjoys the highest profit and can influence the market price to keep the competition at bay. But to keep the cost of quality at its lowest possible level, the manufacturer has to hit the football right at the center of the goalpost every time!

[Figure: components of COGS]

The cost of quality comprises the cost incurred to monitor and ensure quality (the cost of conformance) and the cost of non-conformance, or cost of poor quality (COPQ). The cost of conformance is a necessary evil, whereas COPQ is pure waste, an opportunity lost.

[Figure: breakdown of the cost of quality into cost of conformance and COPQ]

Coming to the present scenario of increasing demand for goods and services, manufacturers are required to fulfil their delivery commitments on time; otherwise their customers lose market share to competitors. Manufacturers have realized that their business depends on the business prospects of their customers, hence timely supply of products and services is very important. This can be understood much better using the pharmaceutical industry as an example.

The sole responsibility of any Regulator (say the FDA) towards its country is to ensure not only acceptable (quality, safety and efficacy) and affordable medicines but also their availability (no shortage) in the country at all times. Even that is not enough: those medicines must be easily accessible to patients at their local pharmacies. These may be called the 4A's and are the KRA of any Regulatory body. If Regulators miss any one of the 4A's, they will be held accountable by their Government for endangering the lives of patients. The point to be emphasized here is the importance of TIMELY SUPPLY of medicines, besides other parameters like quality and price.

Hence the definition of quality got modified once again, to "producing goods and services in the desired quantity, delivered on time, meeting all the customer's specifications of quality and price." The operational-excellence term OTIF, an acronym for "on time, in full", captures this: delivering goods and services that meet the customer's specification, on time and in full quantity.

Coming once again to the definition of profit in the present-day scenario:

Profit = MP − COGS

We have seen that the selling price is driven by the market, so the manufacturer cannot control it beyond a point. What can he do to increase his margin or profit? The only option is to reduce COGS. COGS has two components: genuine COGS and COPQ. Manufacturers have little scope to reduce the genuine COGS, as it is the necessary cost of producing goods and services. We will see later, under LEAN manufacturing, how even the genuine COGS can be reduced to some extent (wait till then!): for example, if the throughput or yield of the process is improved, there is less scrap, which decreases the raw material cost per unit of goods produced.

But the real culprit behind a high COGS is an unwarranted, high COPQ.


The main reasons for a high COPQ are:

  1. Low throughput or yield
  2. More out-of-specification (OOS) product, which has to be either
    1. Reprocessed,
    2. Reworked, or
    3. Scrapped
  3. Inconsistent quality, leading to higher after-sales service and warranty costs
  4. Biggest of all, the loss of the customer's confidence, which is intangible.

Looking at the outcomes of COPQ discussed above, we can conclude one thing: the process is not robust enough to meet the customer's specifications, and because of this the manufacturer faces the problem of COPQ. All these wastages are called "mudas" in Lean terminology and will be dealt with in detail later.

What causes COPQ?

Before answering this important question, we need to understand the concept of variance. Take a simple example: you leave home for the office at exactly the same time every day; do you reach the office at exactly the same time every day? The answer is a big no, or better: "it will take anywhere between 40 and 45 minutes to reach the office if I start exactly at 7:30 AM." This variation in arrival time can be attributed to many causes, such as variation in the starting time itself (I cannot start at exactly 7:30 every day), variation in traffic conditions, etc. There will always be variation in any process, and we need to control it. In a manufacturing environment, too, there are sources of variation, like wear and tear of machines, change of operators, etc. Because of this, there will always be variation in the output (the goods and services produced by the process). Hence we never get a product with a fixed quality attribute; instead, that quality attribute has a range (the process control limits), which needs to be compared with the customer's specification limits (the goal post).
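A minimal sketch of this idea with simulated commute times: even a stable process has a spread, and its natural range (mean ± 3σ) is what must be compared with the goal post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated commute times in minutes: a stable process that still varies.
commute = rng.normal(loc=42.5, scale=1.0, size=60)

mu = commute.mean()
sigma = commute.std(ddof=1)
print(f"mean = {mu:.1f} min, sigma = {sigma:.2f} min")
print(f"natural process range (mean ± 3σ): {mu - 3*sigma:.1f} to {mu + 3*sigma:.1f} min")
```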

If my process control limits sit close to the goal post (the boundaries of the customer's specification limits), my failure rate will be quite high, resulting in more failures, scrap, rework and warranty cost. This is nothing but COPQ.

Alternatively, if my aim (the process limits) is well within the goal posts (Case 2), my success rate is much higher and I will have less scrap and rework, thereby decreasing my COPQ.

[Figure, Case 1: process limits near the specification limits, high failure rate]

[Figure, Case 2: process limits well within the specification limits, low failure rate]

Taguchi Loss Function

A paradigm shift in the definition of quality was given by Taguchi, who introduced the concept of producing products with quality targeted at the center of the customer's specifications (a mutually agreed target). He stated that as we move away from the center of the specification, we incur cost either at the producer's end or at the consumer's end, in the form of rework and reprocessing; holistically, it is a loss to society. The concept also states that even producing goods and services beyond the customer's specification is a loss to society, as the customer will not be willing to pay for it: there is a sharp increase in COGS as we try to improve quality beyond the specification.

[Figure: the Taguchi loss function, loss rising as quality drifts from the target]

For example:

The purity of the medicine I am producing is >99.5% (say, the specification). If I try to improve it to 99.8%, my throughput decreases, as one extra purification is needed, resulting in yield loss and increased COGS.

Or take buying a ready-made suit: it is very difficult to find a suit that perfectly matches your body's contours, so you end up going for alterations, which incurs cost. Whereas if you get a suit stitched by a tailor to fit your body contours (the specification), no extra rework cost is incurred.
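Taguchi's loss is conventionally modeled as a quadratic, L(y) = k(y − T)², zero at the target T and growing as the attribute drifts away, even while still inside the specification. A small sketch with a hypothetical cost constant k:

```python
def taguchi_loss(y, target=99.5, k=400.0):
    """Quadratic (Taguchi) loss; k is a hypothetical cost constant."""
    return k * (y - target) ** 2

# Loss is zero only at the target and rises on either side of it.
for purity in (99.3, 99.5, 99.8):
    print(f"purity {purity:5.1f}% -> loss = {taguchi_loss(purity):5.1f}")
```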

Six Sigma and COPQ

It is apparent from the above discussion that variability in the process is the single biggest culprit behind failures and the resulting high cost of goods produced. Variability is the single most important concept in Six Sigma and must be comprehended very well. We will encounter this monster everywhere when dealing with Six Sigma tools like the histogram, the normal distribution, the sampling distribution of the mean, ANOVA, DoE, regression analysis and, most importantly, statistical process control (SPC).

Hence the industry required a tool to study variability and to find ways to reduce it; the Six Sigma methodology was developed to fulfil this requirement. We will look at why it is called six sigma, and not five or seven sigma, later on.

Before we go any further, we must understand and always remember one very important thing: any goods and services produced are the outcome of a process, and many inputs go into that process, such as raw materials, technical procedures, men, etc.

Hence, any variation in the inputs (x) to a given process causes a variation in the quality of the output (y).

[Figure: a process as y = f(x₁, x₂, …, xₙ)]

Another important aspect is that variance is additive: the variances from all the inputs add up to give the variance of the output.

σ²(y) = σ²(x₁) + σ²(x₂) + ⋯ + σ²(xₙ)

How does Six Sigma work?

Six Sigma works by decreasing the variation coming from the different sources, thereby reducing the overall variance in the system, as shown below. It is a continuous improvement journey.

[Figure: overall variance shrinking as source-wise variation is reduced]

Summary:
  1. The definition of quality has changed drastically over time; it is no longer just "fit for purpose" but also includes on time and in full (OTIF).
  2. In this world of globalization, the market determines the selling price, and manufacturers either reduce their COPQ or perish.
  3. There is a customer specification and a process capability; the aim is to bring the process capability well within the customer's specifications.
  4. The main culprit behind out-of-specification product is an unstable process, which in turn is caused by variability entering the process from different sources.
  5. Variance has an additive property.
  6. Lean is a tool to eliminate wastages from the system; Six Sigma is a tool to reduce defects in the process.

References

  1. To understand the consequences of a bad process, see Deming's red bead experiment on YouTube: https://www.youtube.com/watch?v=JeWTD-0BRS4
  2. For different definitions of quality, see http://www.qualitydigest.com/magazine/2001/nov/article/definition-quality.html#

 

7QC Tools: My Bitter Experience with Statistical Process Control (SPC)!


I just want to share my experience with SPC.

In general, I have seen people plotting the control chart of the final critical quality attribute of a product (a CQA). But the information displayed by these control charts is historical in nature: the entire process has already taken place. Hence, even if the control chart shows an out-of-control point, I cannot do anything about it except reprocessing and rework. We often forget that these CQAs are driven by certain critical process parameters (CPPs), and I cannot go back in time to correct those CPPs. The only thing we can do is start an investigation.

[Figure: a CQA control chart reveals a failure only after the batch is over]

HENCE, PLOTTING CONTROL CHARTS OF THE FINAL CQA IS LIKE DOING A POSTMORTEM OF A DEAD (FAILED) BATCH.

Instead, if we plot the control charts of the CPPs, and one of those charts shows an out-of-control point, WE CAN IMMEDIATELY FORECAST THAT THE BATCH IS GOING TO FAIL, or WE CAN TAKE A CORRECTIVE ACTION THEN AND THERE. This is because the CPPs and the CQA are highly correlated: if a CPP shows an out-of-control point on its chart, we can be fairly sure that the batch is going to fail.

[Figure: an out-of-control CPP point forecasting the CQA failure]

Hence the control charts of the CPPs help us forecast the output quality (CQA) of the batch, because a CPP fails before the batch fails. This also saves the time that goes into investigations, which is very important, as everyone in the pharmaceutical industry knows how much time and resource an investigation consumes!


I feel we need to plot the control charts of the CPPs along with the control chart of the CQA, with more focus on the CPP charts. This helps in taking timely corrective action (if one is available), or in scrapping the batch early and saving downstream time and resources (if no corrective action is available).

Another advantage of plotting the CPPs is that we can look for evidence that a CPP is trending and will cross the control limits in the near future, as shown below; this warrants timely corrective action on the process or the machine.

[Figure: a CPP trending towards its control limit]
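A minimal sketch of such a trend check on simulated CPP data, a simplified version of the Western Electric "run" rules: raise an alarm when several consecutive points move in the same direction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated CPP readings: stable at first, then drifting upward.
cpp = np.concatenate([rng.normal(50.0, 0.5, 20), 50.0 + 0.4 * np.arange(10)])

def trend_alarms(x, run=6):
    """Indices where `run` consecutive differences share the same sign."""
    d = np.sign(np.diff(x))
    return [i + run for i in range(len(d) - run + 1) if abs(d[i:i + run].sum()) == run]

print("trend alarms at points:", trend_alarms(cpp))
```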


CQA: Critical Quality attribute

CPP: Critical Process Parameter

OOS: out of specification


7QC Tools — The Control Charts


The Control Charts

This is the most important topic among the 7QC tools. To understand it, just remember the following points for the moment, as we cannot go into all the details right now:

  1. Two things we must understand beyond doubt:
    1. There are the customer's specifications, LSL & USL (lower and upper specification limits).
    2. Similarly, there is the process capability, LCL & UCL (lower and upper control limits).
    3. The process capability and the customer's specifications are two independent things; however, it is desired that UCL − LCL < USL − LSL. The only way to achieve this is by decreasing the variation in the process, as we can do nothing about the customer's specifications (they are sacrosanct).
    4. [Figure: specification limits (USL/LSL) versus control limits (UCL/LCL)]
  2. A stable process follows the bell-shaped curve called the normal curve. If we plot all the historical data obtained from a stable process, we get a symmetrical curve, as shown below. σ represents the standard deviation (a measure of variation).
    • [Figure: the normal curve of a stable process]
  3. The main characteristics of the above curve are shown below; for example, the area under ±2σ contains about 95% of the total data (verified in the short calculation after this list).
    • [Figure: areas under the normal curve within ±1σ, ±2σ and ±3σ]
  4. Any process is affected by two types of input variables or factors: those that can be controlled, which act as assignable or special causes (e.g., person, material, unit operation, machine), and those that are uncontrollable, called noise factors or common causes (e.g., fluctuations in environmental factors such as temperature and humidity during the year).
  5. From point 2 we can conclude that as long as the data lie within ±3σ, the process is considered stable, and whatever variation exists is due to common causes. Any data point beyond ±3σ is an outlier, indicating that the process has deviated, i.e., there is an assignable or special cause of variation that needs immediate attention.
    • [Figure: a data point beyond ±3σ indicating a special cause]
  6. The measures of the mean (μ) and σ used for calculating the control limits depend on the type and distribution of the data used for preparing the control chart.
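The coverage figures quoted in point 3 follow directly from the normal distribution and can be verified in one line each:

```python
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"P(within ±{k}σ) = {coverage:.4%}")
# ±1σ ≈ 68.27%, ±2σ ≈ 95.45%, ±3σ ≈ 99.73%
```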

Having gone through the above points, let us go back to point 2. In that graph the entire data set is plotted after all the data have been collected. But these data were collected over time! If we add a time axis to the graph and plot the data against time, we get a run chart, as shown below.

[Figure: the run chart, data plotted against time with ±3σ limits]

The run chart thus obtained is known as the control chart. It represents the data with respect to time, with ±3σ as the upper and lower control limits of the process. We can also plot the customer's specification limits (USL and LSL) on this graph if desired. Points 3 and 4 above can now be applied to interpret the control chart, or the Western Electric rules can be used for a more detailed interpretation.
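A minimal individuals-type control chart on simulated data (for simplicity the limits here use the overall sample σ; textbook charts usually estimate σ from the moving range):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
data = rng.normal(100.0, 2.0, 40)
data[25] += 9.0                                  # inject a special-cause point

mu, sigma = data.mean(), data.std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

plt.plot(data, marker="o", linestyle="-")
for y in (mu, ucl, lcl):
    plt.axhline(y, linestyle="--", color="gray")  # center line and ±3σ limits
out = np.flatnonzero((data > ucl) | (data < lcl))
plt.plot(out, data[out], "rs")                    # mark out-of-control points
plt.xlabel("batch number")
plt.ylabel("measured value")
plt.title("Run chart with ±3σ control limits")
plt.show()
```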

The Control Charts and the Continuous Improvement

A given process can only be improved if tools are available for the timely detection of an abnormality due to an assignable cause. This timely, online signal of an abnormality (an outlier) in the process is achieved by plotting the process data on an appropriate statistical control chart. These control charts can only tell that there is a problem in the process; they cannot tell anything about its cause. Investigating and identifying the assignable cause associated with the abnormal signal allows timely corrective and preventive action, which ultimately reduces the variability in the process and gradually takes the process to the next level of improvement. This is an iterative exercise, resulting in continuous improvement, until abnormalities are no longer observed in the process and whatever variation remains is due to common causes only.

Not all deviations on control charts are bad (e.g., an impurity trending towards the LCL, or reduced patient waiting time, is good for the process). Regardless of whether a deviation is good or bad for the process, the outlier points must be investigated. The reasons for good deviations should then be incorporated into the process, and the reasons for bad deviations eliminated from it. This iterates until the process comes under statistical control. Gradually, the natural control limits become much tighter than the customer's specification, which is the ultimate aim of any process improvement program like Six Sigma.

The significance of control charts is evident from the fact that they were introduced in the 1920s by Walter A. Shewhart; since then they have been used extensively across the manufacturing industry and have become an intrinsic part of the 6σ process.


To conclude, statistical control charts not only help in estimating the process control limits but also raise an alert when the process goes out of control. These alerts trigger investigation through root cause analysis, leading to process improvements, decreased variability and, finally, a statistically controlled process.


Why Do Continuous Improvement Programs like Lean & Six Sigma Fail?


Most of the time, continuous improvement programs in an organization gradually cease to exist after the consultants leave. This really disappoints me, because they fail despite the fact that everyone in the organization knows their benefits. The importance of these initiatives is well known across industry, as vetted by the number of vacancies for Lean and Six Sigma professionals on any job portal (check LinkedIn and the other job portals).


The main reasons I have experienced are the following:

  1. To drive a Lean or Six Sigma program, you either need to be an external consultant or you need to hold a position of authority within the organization (this ensures that you can get the job done). The main purpose is to have the backing of higher management.
      1. An external consultant is in direct touch with management; hence people cooperate.
      2. A higher position ensures that your message percolates well down the line.
      3. If you are in middle management, it is going to be difficult to implement these changes even with the backing of higher management (unless they are fully involved).


  2. The above scenario can be understood by an analogy with the stretching of a spring. As long as the consultants are there, the spring (the employees) remains stretched; as soon as they leave, the spring returns to its original position. Hence these initiatives should focus on changing the mind-set of the employees and securing their buy-in before the start of any initiative. The focus should be cultural change rather than short-term financial gain.

    "The quality of an organization can never exceed the quality of the minds that make it up." – Harold McAlindon

    It took Toyota 30 years to implement what is now called TPS!

  3. Usually these initiatives are not part of the business strategy; they are initiated during a crisis, and once the crisis is over and the consultants leave, it's over! The spring regains its original state!
  4. Another reason is the lack of trained manpower in Lean and Six Sigma. I remember when we were searching for a Six Sigma black belt: the HR team gave us a list of ~65 candidates claiming Six Sigma/Lean expertise. Believe me, we could find only two persons (the requirement was ~10-15) out of the 65 with the required skill set.
    1. Out of curiosity, we kept asking people where they had got their certification. Most answered that they had undergone 3-5 days of classroom training followed by an examination to get their black belt! That is true in most cases, but I wonder how a five-day course can qualify a person as a black belt unless you have really sweated on the shop floor with your team.
    2. There is also a lack of trained people within the organization who can really interview such candidates. Imagine that I want a black belt for my company to drive the initiative: either I have to believe that the candidate knows the concepts, or I have to hire someone who can really interview these people. The latter option is much better! These days QbD has become a buzzword in the pharmaceutical industry; just include it in your CV and you will get an immediate raise.
  5. But the main reason I experienced was the compartmentalized view of the organization, where the right hand does not know what the left hand is doing.

[Figure: an organization divided into departmental silos]

Let us assume the whole company is excited about the initiative; even then it fails! The major reason is the presence of many compartments/departments within the system, habituated to working in silos. They remain committed to their own KRAs and their own work-flow and do not know much about the processes of the department from which they receive their inputs, or how their processes affect the processes of the next department (their internal customer). These silos are becoming the vertical coffins of the organization. Before we go any further, let us understand: what is the business, and how is business carried out to generate revenue?

The central planning team, based on the monthly forecast, gives targets to all the vertical coffins for that month. All the vertical coffins then perform their duties in silos to complete their targets.

[Figure: targets flowing down into departmental silos]

Now, if we really look at the business, it is not the departments that make the product and generate the revenue; it is the culmination of a process flow encompassing the entire organization. For clarity, look at the following example.

[Figure: the value-adding process flowing horizontally across departments]

It is the flow of the process across departments that adds value to the raw material for the customers. The most important point is that these processes are performed by the shop-floor people, not by management. What I mean is that the material flows horizontally at the bottom of the pyramid, but the processes are managed vertically, in silos. As a result there is an information gap between the decision point and the execution point, and the shop-floor people are no better than robots busy meeting their targets. In this scenario we simply cannot implement continuous improvement unless these vertical coffins are dismantled and the gap between the information flow and the material flow diminishes. This can only be made possible through delegation and by empowering the shop-floor people.

Wait a minute! What are you talking about? If we delegate our duties, then what are we going to do? What will be our role? These are the thoughts that may pop up in the minds of higher management.


My dear friend, just leave the daily operations to the middle management; do something new, read something new, think something new, or make some new strategy for the company. Give the company a new direction with your vast experience. If you get involved in day-to-day operations, there is no difference between a shift in-charge and you! And if you act like that, ideally your CTC should be added to the overheads of the product, shouldn't it?

Get the right person into middle management and just take daily updates from him, intervening only when needed. I read somewhere (I can't recall where) that as you grow higher in management, you should distance yourself from day-to-day operations and focus more on mentoring and on drawing the future roadmap for the company.


Once this conducive environment is established, i.e., delegation and empowerment of the shop-floor people, it becomes easier to implement any continuous improvement initiative in the organization, because the real action (the process, the value addition) happens on the shop floor. If you look at most Lean and Six Sigma tools, you will find that they are implemented successfully at the shop floor, by the shop-floor people!

 

A Way to Establish a Cause & Effect Relationship: Design of Experiments (DoE)

 


Mostly, what happens during any investigation is that we collect a lot of data to prove or disprove our assumption. The problem with this methodology is that we can end up with false correlations between variables,

e.g., the increase in internet connections and deaths due to cancer over the last four decades!

Is there a relation between the two (internet connections and deaths due to cancer)? Absolutely not. To avoid such confusion, we need a rigorous way to establish such relationships. This is where DoE comes in: a statistical way of conducting experiments that establishes cause & effect relationships. The general sequence of events in DoE is shown below, followed by a small worked sketch.

[Figure: general sequence of events in a DoE study]
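To make the idea concrete, here is a minimal sketch (with hypothetical yields) of the smallest designed experiment, a 2² full factorial: both factors are varied together in a planned pattern, and the main effects and interaction are estimated by least squares instead of one-factor-at-a-time guessing:

```python
import numpy as np

# 2x2 full factorial in coded units (-1 = low level, +1 = high level).
temp = np.array([-1.0,  1.0, -1.0,  1.0])        # factor A: temperature
time = np.array([-1.0, -1.0,  1.0,  1.0])        # factor B: reaction time
y = np.array([78.0, 85.0, 80.0, 93.0])           # hypothetical yields (%)

# Fit y = b0 + b1*temp + b2*time + b12*temp*time by least squares.
X = np.column_stack([np.ones(4), temp, time, temp * time])
b0, b1, b2, b12 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"mean {b0:.1f}, temp effect {b1:.1f}, time effect {b2:.1f}, interaction {b12:.1f}")
```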

Why is DoE important at the R&D stage?

Just remember these two quotes:

“Development speed is not determined by how fast we complete the R&D but by how fast we can commercialize the process”

"Things we do before tech transfer are more important than what is in the tech pack!"

[Figure: learning curves during scale-up and commercialization]

To avoid unnecessary learning curves, and to keep COGS under control, we need to deploy QbD as shown below.

[Figure: QbD deployment across development and commercialization]

Details will be covered in the DoE chapter.

Is this information useful to you?

Kindly provide your feedback

How to Provide a Realistic Range for a CQA during Product Development to Avoid Unwanted OOS – 2: A Case Study


Suppose we are developing a 500 mg ibuprofen tablet (the actual specification is 497 to 502 mg). A tablet contains many other ingredients along with the 500 mg of the active molecule (ibuprofen). Usually these ingredients are blended and then compressed into tablets. During product development, three batches of 500 tablets each were prepared, with blending times of 15, 25 and 35 minutes respectively. A sample of 10 tablets was collected from each batch and analyzed for ibuprofen content. The results are given below.

[Table: ibuprofen content (mg) of 10 tablets sampled from each of the three batches]

Regression analysis of the entire data set (all three batches) provides a quantitative relationship between the blending time and the expected ibuprofen content for that blending time:

Expected ibuprofen content = 493 + 0.242 × blending time

Now we want to estimate the average ibuprofen content of the entire batch of 500 tablets, based on the sample of 10 tablets, for a given blending time (say 15, 25 or 35 minutes). Let us calculate the 95% and 99% confidence intervals (CI) for each blending time.

[Table: 95% and 99% confidence intervals for the mean ibuprofen content at blending times of 15, 25 and 35 minutes]

In reality, we can never know the average ibuprofen content of the entire batch unless we analyze the entire batch, which is not possible.

We can see that the 99% CI is wider than the 95% CI (I hope you are clear about what a 95% CI means). The 99% CI for a blending time of 35 minutes seems closest to the desired strength of 497 to 502 mg. Hence, in the development report, I would propose the widest workable assay range of 499.6 to 502.57 for a blending time of 35 minutes, at 99% CI.

This means that if we take 100 samples, the CIs calculated from 99 of them would contain the population mean.

Now look at this 99% CI, i.e., 499.6 to 502.57 mg, which is narrower than the specification (497 to 502 mg). Suppose I want to estimate a similar interval for a blending time of, say, 32 minutes (note: we have not conducted any experiment at this blending time!) to check whether we can meet the specification there. We can do it, because we have the regression equation. What we are doing is predicting an interval for a future batch at that blending time. As we are predicting a future observation, this interval is called the prediction interval (PI) of the response for a given value of the process parameter. Prediction intervals are usually wider than the corresponding confidence intervals.

Using the regression equation discussed earlier, the expected average strength for a blending time of 32 minutes is:

Expected ibuprofen content for a blending time of 32 minutes = 493 + 0.242 × 32 = 500.74

[Table: prediction interval for a blending time of 32 minutes]

So far we have learnt that a CI estimates an interval containing the average ibuprofen content of an entire batch already executed, for a given blending time, whereas a prediction interval estimates the interval that would contain the average response of a future batch for a given blending time.

In the present context:

For a blending time of 35 minutes, the 95% CI indicates that the average strength of the entire batch of 500 tablets (the population) would be between 499.99 and 502.18.

Whereas the 95% PI predicts that the average strength of the next batch would be between 499.6 and 502.57 for a blending time of 35 minutes.
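A minimal sketch of how both intervals are computed in practice. Since the raw data table is not reproduced here, the tablet assays below are simulated around the fitted line, so the numbers are illustrative only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Simulated stand-in for the 3 batches x 10 tablets assay data (mg).
blend = np.repeat([15.0, 25.0, 35.0], 10)
assay = 493.0 + 0.242 * blend + rng.normal(0.0, 0.8, blend.size)

model = sm.OLS(assay, sm.add_constant(blend)).fit()
print(model.params)                               # cf. 493 + 0.242 x blending time

# Intervals at blending times of 32 and 35 minutes.
X_new = np.column_stack([np.ones(2), [32.0, 35.0]])
frame = model.get_prediction(X_new).summary_frame(alpha=0.05)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",    # 95% CI of the batch mean
             "obs_ci_lower", "obs_ci_upper"]])            # 95% PI for a future batch
```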

Now the question is: can we propose any of these intervals (the 95% CI or the 95% PI) as the process control limits?

I think we cannot! These intervals do not tell me anything about the distribution of the population within them. What I mean is that we cannot assume that all 500 tablets (the entire batch) are covered by these intervals; it is only the mean of the entire batch that falls within them.

For me it is necessary to know the future trend of the batches when we transfer the technology for commercialization. We should know not only the interval containing the mean (of a CQA) of the population but also the proportion or percentage of the total population falling in that interval. This will help me determine the expected failure rate of future batches when all CPPs are under control (even a 6σ process has a failure rate of 3.4 ppm!). Once I know that, I can decide when to start investigating an OOS (once the number of failures crosses the expected failure rate). For this statement, I am assuming there is no special cause behind the OOS.

This job is done by the tolerance interval (TI). In general, a TI is reported as follows:

A 95% TI for the tablet strength (Y) containing 99% of the population of future batches, for a blending time of 35 minutes (X).

It means that the TI calculated at a 95% confidence level would encompass 99% of the future batches manufactured with a blending time of 35 minutes; in other words, 1% of the batches would fall outside it. Now I would start investigating an OOS only if there were two or more failures in the next 100 batches (assuming there are no special causes of OOS and all process parameters are followed religiously).

The TIs for batches at different blending times are given below.

Tolerance interval type: two-sided

Confidence level: 95%

Percentage of population to be covered: 99%

[Table: two-sided 95% tolerance intervals covering 99% of the population at each blending time]
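For a normally distributed CQA, a two-sided tolerance interval is commonly approximated with Howe's method. A minimal sketch (the sample below is simulated, so the interval is illustrative only):

```python
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(x, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's approximation."""
    n = len(x)
    df = n - 1
    z = norm.ppf((1 + coverage) / 2)
    k = z * np.sqrt(df * (1 + 1 / n) / chi2.ppf(1 - confidence, df))
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(5)
sample = rng.normal(501.0, 0.8, 10)   # hypothetical assays at 35 min blending
lo, hi = tolerance_interval(sample)
print(f"95%/99% tolerance interval: {lo:.2f} to {hi:.2f} mg")
```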


The above discussion can be easily understood through the following analogy.

You have to reach the office before 9:30 AM. Now tell me, how confident are you about reaching the office exactly between:

9:10 to 9:15 (hmm…, such a narrow range, I am ~90% confident)

9:05 to 9:20 (a-haa.., now I am 95% confident)

9:00 to 9:25 (this is very easy, I am almost 100% confident)

The point to be noted here is that as the width of the time interval increases, your confidence also increases.

It is difficult to estimate the exact arrival time, but we can be fairly certain that the mean arrival time to the office lies within:

average arrival time (over, say, 5 days) ± margin of error

Why Do We Have Out of Specification (OOS) and Out of Trend (OOT) Batches?


While developing a product, we are bound by the USP/EP/JP monographs for the product's critical quality attributes (CQAs), or by the ICH guidelines, and yet we see regular OOT/OOS results in commercial batches. True, every generic company has developed expertise in investigating and providing corrective & preventive actions (CAPA) for all OOT and OOS results, but the question that remains in our hearts and minds is:

Why can’t we stop them from occurring? 

The answer lies in the following inherent issues at each level of the product life cycle.

We assume that the customer's specification and the process control limits are the same thing during product development.

Let us assume a USP monograph gives an acceptable assay range for a drug product of 97% to 102%. The product development team immediately starts working on the process to meet this specification; the focus is entirely on developing a process that gives a drug product within this range. But we forget that even a 6σ process has a failure rate of 3.4 ppm. In the absence of statistical knowledge, we take the customer's specification as the target for product development.

The right approach is to calculate the process control limits required so that a given proportion of the batches (say 95% or 99%) falls between the customer's specifications.

Here I would like to draw an analogy in which the customer's specification is like the width of a garage and the process control limits are like the width of the car. The width of the car should be much less than the width of the garage to avoid any scratches. Hence the target process control limits for product development should be narrower than the specification.

For details, see the earlier blog on "car parking and 6sigma".

Inadequate statistical knowledge leads to a wrong target range for a given quality parameter during product development.

Take the above example once again: the customer's specification limit for the assay is 97% to 102% (= the garage width). Now the question is, what should be the width of the process (= the car's width) that we target during product development to reduce the number of failures during commercialization? One thing is clear at this point: we cannot take the customer's specification as the target for product development.

Calculating the target range for the development team

To keep things simple, I will start from the formula for Cp:

Cp = (USL − LSL) / (6 × σ) ≥ 1.33

where Cp = process capability, σ = standard deviation of the process, and USL & LSL are the customer's upper and lower specification limits. The value 1.33 is the minimum desired Cp for a capable process (roughly a 4 sigma process).

Solving for σ:

σ = (USL − LSL) / (6 × 1.33)

Calculating σ for the above process:

σ = (102 − 97) / (6 × 1.33) = 5 / 7.98 ≈ 0.63

The centre of the specification = (97 + 102)/2 = 99.5; hence the target range of the assay for the product development team is given by

Specification mean ± 3σ

  = 99.5 ± 3 × 0.63 = 99.5 ± 1.89 = 97.61 to 101.39

Hence, the product development team has to target an assay range of 97.61% to 101.39% instead of targeting the customer's specification.
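The same arithmetic in a minimal Python sketch (the spec limits and Cp target are taken from the example above; note that the figures in the text round σ to 0.63 before multiplying):

```python
# A minimal sketch: back out the development target range from the
# customer's specification and a desired minimum Cp of 1.33.
usl, lsl = 102.0, 97.0                 # customer's assay specification (%)
cp_target = 1.33                       # minimum Cp for a capable process

sigma = (usl - lsl) / (6 * cp_target)  # largest allowed process standard deviation
center = (usl + lsl) / 2               # centre of the specification
low, high = center - 3 * sigma, center + 3 * sigma
print(f"sigma = {sigma:.2f}; development target range = {low:.2f} to {high:.2f}")
# -> sigma = 0.63; development target range = 97.62 to 101.38
```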

There is another side to the coin: whatever range we take as a target for development, there is an implicit assumption that 100% of the population will fall within that interval. This is not true, because even a six sigma process has a failure rate of 3.4 ppm. So the point I want to make here is that we should also state the expected failure rate corresponding to the interval we have chosen to work with.
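That expected failure rate can be estimated from the normal distribution. A minimal sketch for a centred process, using the σ derived above:

```python
# A minimal sketch: expected out-of-specification rate (in ppm) for a
# normally distributed, centred process at the spec limits above.
from scipy.stats import norm

usl, lsl, mu, sigma = 102.0, 97.0, 99.5, 0.63   # values from the example
p_fail = norm.cdf(lsl, mu, sigma) + norm.sf(usl, mu, sigma)
print(f"expected failure rate: {p_fail * 1e6:.0f} ppm")  # roughly 70 ppm here
```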


For further discussion on this topic, watch for the forthcoming article on confidence, prediction, and tolerance intervals.

Not Giving Due Respect to the Quality by Design Principles and PAT Tools

Companies without in-house QbD capability may have an excuse, but even companies with QbD capability witness failures during scale-up, despite claiming to have used QbD principles. They often think that QbD and DoE are the same thing. For the readers, I want to highlight that DoE is just a small portion of QbD: there is a sequence of events that constitutes QbD, and DoE is just one of those events.

I have seen people start DoE directly on the process. Scientists used to come to me saying "these are the critical process parameters (CPPs)" and asking for a DoE plan. These CPPs are selected mostly on the basis of chemistry knowledge: moles, temperature, concentration, reaction time, etc. The thing is, these variables will seldom vary in the plant, because the warehouse won't issue more or less than the specified quantity of raw materials and solvents, and the temperature won't deviate that much. What we miss are the process-related variables: heating and cooling gradients, hold-up time of the reaction mass at a particular temperature, work-up time in the plant (usually much longer than in the lab), type of agitator, exothermicity, waiting time for analysis, and other unit operations. We don't appreciate their importance at the lab level, but these monsters raise their heads during commercialization.

Therefore, proper guidelines are required for conducting successful QbD studies in the lab (see the forthcoming article on DoE). In general, if we want successful QbD, we need to prepare a dummy batch manufacturing record of the process in the lab and then perform a risk analysis on the whole process to identify the CPPs and CMAs (a minimal risk-ranking sketch follows the diagram below). The brief QbD process is described below.

[Flow diagram: the sequence of steps involved in QbD]
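To make the risk-analysis step concrete, here is a minimal, hypothetical sketch of an FMEA-style ranking; the parameter names and scores are invented for illustration and are not from any real product:

```python
# A minimal, hypothetical sketch of FMEA-style risk ranking:
# RPN = severity x occurrence x detection, scored 1-10 by the team.
parameters = {
    "cooling gradient":      (8, 6, 7),   # (severity, occurrence, detection)
    "hold-up time at 60 C":  (7, 5, 6),
    "moles of reagent":      (8, 2, 2),
    "work-up time":          (6, 7, 5),
}

ranked = sorted(parameters.items(),
                key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2],
                reverse=True)
for name, (s, o, d) in ranked:
    print(f"{name:22s} RPN = {s * o * d}")
# Parameters with the highest RPN become candidate CPPs/CMAs for the DoE.
```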
Improper Control Strategy in the Development Report

Once the product is developed in the lab, there are some critical process parameters (CPPs) that can affect the CQAs. These CPPs are seldom deliberated in detail by the cross-functional team to mitigate the risk by providing adequate manual and engineering controls. This is because we are in a hurry to file the ANDA/DMF, among other reasons. We take action only once failures become a chronic issue. Because of this, CPPs vary in the plant, resulting in OOS.

Monitoring of CQAs Instead of CPPs During Commercialization

I like to call us "knowledgeable sinners". This is because we know that a CQA is affected by its CPPs, and yet we continue to monitor the CQA instead of the CPPs, even though if the CPPs are under control, the CQA has to be under control. For example, we know that if the reaction temperature shoots up it will lead to impurities; even then we continue to monitor the impurity levels using control charts, but not the temperature itself. We can ask ourselves: what do we achieve by monitoring the impurities after the batch is complete? The answer is nothing but a failed batch, an investigation, and loss of raw material, energy, manpower, and production time. To summarize, we can only do a postmortem of a failed batch and nothing else.

Instead of the impurity, if we had monitored the temperature, which was the critical parameter, we could have taken corrective action then and there. Knowing that the batch was going to fail, we could have terminated it, thereby saving manpower, energy, production time, etc. (a single OOS investigation requires at least 5-6 people working for a week, which is about 30 man-days). A minimal sketch of such monitoring is given below.
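As an illustration only (not the author's plant data), here is a sketch of an individuals control chart on reaction temperature, so that drift is caught while the batch is still running; the readings are hypothetical:

```python
# A minimal sketch: an individuals (I) control chart on reaction temperature.
# The 2.66 factor converts the average moving range into 3-sigma limits
# (2.66 = 3 / d2, with d2 = 1.128 for a moving range of span 2).
temps = [82.1, 81.7, 82.4, 82.0, 81.9, 82.6, 83.8, 84.9]  # hypothetical readings, deg C

mean_t = sum(temps) / len(temps)
moving_ranges = [abs(b - a) for a, b in zip(temps, temps[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean_t + 2.66 * mr_bar   # upper control limit
lcl = mean_t - 2.66 * mr_bar   # lower control limit

for i, temp in enumerate(temps, start=1):
    flag = "ok" if lcl <= temp <= ucl else "OUT OF CONTROL"
    print(f"reading {i}: {temp:.1f} C  [{flag}]  (LCL={lcl:.2f}, UCL={ucl:.2f})")
```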


Role of QA Mistaken for Policing and Auditing Rather than Continuous Improvement

The QA department in every organization is frequently busy with audit preparation! Their main role has become restricted to documentation and keeping the facility ready for audits (mostly in the pharmaceutical field). What I feel is that within QA there has to be a statistical process control (SPC) group whose main function is to monitor the processes and suggest areas of improvement. This group should have sound knowledge of engineering and SPC so that they can foresee OOT and OOS by monitoring CPPs on control charts. So the role of QA is not only policing but also assisting other departments in improving quality. I understand that at present SPC knowledge is very limited within QA and other departments, and we need to improve this.

Lack of empowerment of operators to report deviations

You will all agree that the best process owners for any product are the shop-floor people, the operators, but we seldom give importance to their contribution. The pressure on them is to deliver a given number of batches per month to meet the sales target. Because of this production target, they often don't report deviations in CPPs, because they know that doing so will lead to an investigation by QA and the batch will be cleared only once the investigation is over. In my opinion, QA should empower operators to report deviations; the punishment should not be for the batch failure but for not asking for help. It is fine to miss the target by one or two batches, as the knowledge gained from those batches with deviations would improve the process.

Lack of basic statistical knowledge across the technical team (R&D, Production, QA, QC)

I am not saying that everyone should become a statistical expert, but at least we can train our people on the basic 7 QC tools! That is not rocket science. This will help everyone monitor and understand the process; shop-floor people can themselves use these tools (or QA can empower them to, after training and certification) to plot histograms, control charts, etc. pertaining to the process and compile the reports for QA.

What are Seven QC Tools & How to Remember them?

Other reasons for OOT/OOS, which are self-explanatory, are as follows:
  1. Frequent vendor changes (quality comes at a price); someone has to bear the cost of poor quality.
    1. Not involving vendors in your continuous improvement journey; the variation in their raw material can create havoc in your process.
  2. Focusing on delivery at the cost of preventive maintenance of the hardware.

 Related Topics

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-1

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-2