7QC Tools: Why Do We Need to Plot X-bar and R Charts Simultaneously


Abstract:

The main purpose of control charts is to monitor the health of a process, and this is done by monitoring both the accuracy and the precision of the process. Control charts help us do so by plotting the following two charts simultaneously:
Control chart for the mean (for the accuracy of the process)
Control chart for variability (for the precision of the process)
e.g., the X-bar and R chart (also called the averages and range chart) and the X-bar and s chart.
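To make this concrete, here is a minimal sketch (not from the original article) of how the two charts' center lines and control limits could be computed together, using the standard X-bar/R constants for subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the data is synthetic:

```python
import numpy as np

# Synthetic data: 20 subgroups of 5 measurements each (hypothetical process)
rng = np.random.default_rng(seed=42)
subgroups = rng.normal(loc=52.5, scale=1.0, size=(20, 5))

xbars = subgroups.mean(axis=1)        # subgroup means  -> X-bar chart (accuracy)
ranges = np.ptp(subgroups, axis=1)    # subgroup ranges -> R chart (precision)

xbarbar, rbar = xbars.mean(), ranges.mean()
A2, D3, D4 = 0.577, 0.0, 2.114        # standard constants for subgroup size n = 5

# The X-bar chart limits are built from R-bar, which is one reason the two charts go together
print(f"X-bar chart: CL = {xbarbar:.2f}, "
      f"UCL = {xbarbar + A2 * rbar:.2f}, LCL = {xbarbar - A2 * rbar:.2f}")
print(f"R chart:     CL = {rbar:.2f}, UCL = {D4 * rbar:.2f}, LCL = {D3 * rbar:.2f}")
```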

Accuracy and Precision

Most of us are familiar with the following diagram, which explains the concept of precision and accuracy in analytical development.

Case-1:

Hitting the bull's eye every time is called accuracy, and having all your shots concentrated at the same point is called precision.


Figure-1: Accuracy and precision

Case-2:

You are off the target (inaccurate) all the time, but your shots are concentrated at the same point, i.e. there is not much variation (precision).

Case-3:

This is an interesting case. Your shots are scattered around the bull's eye but, on average, they are on target (accuracy). However, the shots are widely spread around the center (imprecision).

Case-4:

In this case all your shots are off target (inaccuracy) and the precision is also lost.

Before we correlate the above concept with a manufacturing process, let's look at the following diagram, which explains the characteristics of a given manufacturing process.

Figure-2: Precision and Accuracy of a manufacturing process

The distance between the average of the process control limits and the target value (the average of the specification limits) represents the accuracy of the process, i.e. how much the process mean deviates from the target value.

The spread of the process, i.e. the difference between the LCL and the UCL, represents the precision of the process, or how much variation there is in the process.
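As a rough sketch of these two quantities (synthetic data; the target value is assumed to be the midpoint of the specification limits):

```python
import numpy as np

data = np.array([99.2, 99.6, 99.4, 99.9, 99.5, 99.3, 99.7, 99.4])  # hypothetical measurements
target = 99.5                     # assumed target = center of the specification limits

bias = data.mean() - target       # accuracy: deviation of the process mean from the target
spread = 6 * data.std(ddof=1)     # precision: natural process width, UCL - LCL = 6 sigma

print(f"Process mean = {data.mean():.2f}, bias from target = {bias:+.2f}")
print(f"Estimated process spread (6 sigma) = {spread:.2f}")
```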

Having understood the above two diagrams, it would be interesting to visualize the control chart patterns in all four cases discussed above. But before that, let's look at the effect of time on a given process, i.e. what happens to the process over time.

As the process continues to run, there will be wear and tear of machines, changes of operators, etc., and because of these the process will shift and drift, as represented by the four scenarios in the following diagram.


Figure-3: Process behavior in a long run

A shift in the process mean away from the target value is a loss of accuracy, and a change in the process control limits is a loss of precision. A process shift of ±1.5σ is considered acceptable in the long run.

If we combine Figure-1 and Figure-3, we get Figure-4, which enables us to comprehend the control charts much better. It pictures the manufacturing process in the form of control charts for the four scenarios discussed above.


Figure-4: Control chart pattern in case of precision and accuracy issue

The above discussion helps in understanding the reasons behind the importance of control charts:

  1. Most processes don't run under statistical control for a long time. There are drifts and shifts in the process over time, hence the process needs adjustment at regular intervals.
  2. Process deviation is caused by assignable and common factors/causes. Hence a monitoring tool is required to identify the assignable causes. This tool is the control chart.
  3. Control charts help in determining whether an abnormality in the process is due to assignable causes or common causes.
  4. They enable timely detection of abnormalities, prompting us to take timely corrective action.
  5. They provide an online test of the hypothesis that the process is under control (see the sketch below):
    1. This helps in deciding whether or not to interfere with the process.
      1. H0: The process is under control (common causes)
      2. Ha: The process is out of control (assignable causes)


6. Helps in continuous improvement:


Figure-5: Control Charts provide an opportunity for continuous improvement
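As a minimal illustration of point 5 above (the online hypothesis test), here is a sketch on synthetic data: the limits are estimated from in-control reference data, and the injected shift stands in for an assignable cause.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
baseline = rng.normal(loc=100.0, scale=1.0, size=20)   # in-control reference data (H0 true)
center, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma      # acceptance region for H0

new_points = rng.normal(loc=100.0, scale=1.0, size=10)
new_points[6] += 4.5                                   # inject an assignable-cause shift

for i, x in enumerate(new_points, start=1):
    if lcl <= x <= ucl:
        print(f"Point {i} = {x:.2f}: within limits, do not interfere (H0 stands)")
    else:
        print(f"Point {i} = {x:.2f}: outside limits, investigate (reject H0)")
```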

 

 

Concept of Quality — We Must Understand this before Learning 6sigma!


Before we try to understand the 6sigma concept, we need to define the term “quality”.

 What is Quality?

The term “quality” has many interpretations, but by the ISO definition, quality is: “The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs”.

If we read between the lines, the definition varies with the reference frame we use to define “quality”. The reference frames used here are the manufacturer (who supplies the product) and the customer (who uses the product). Hence the definition of quality with respect to these two reference frames can be stated as follows:

[Image: quality defined from the manufacturer's and the customer's reference frames]

This “goal post” approach to quality is presented graphically below, where a product is deemed either pass or fail. It doesn't matter even if the quality is borderline (the football just missed the goalpost and luckily a goal was scored).

[Image: the goal-post view of quality - pass or fail]

This definition was applicable as long as manufacturers had a monopoly or faced limited competition in the market. Manufacturers were not worried about failures, as they could easily pass the cost on to the customer; having no choice, the customer had to bear it. This follows from the traditional definition of the selling price shown below.

Selling Price = COGS + Profit

Coming to the current business scenario, manufacturers no longer have the luxury of defining the selling price; the market is very competitive and the prices of goods and services are dictated by the market, hence it is called the market price instead of the selling price. This led to a change in the perception of quality: quality is now defined as producing goods and services meeting the customer's specifications at the right price. Manufacturers are forced to sell their goods and services at the market rate. As a result, profit is now defined as the difference between the market price and the cost of goods sold (COGS).

In the current scenario, if a manufacturer wants to make a profit, the only option is to reduce COGS. In order to do so, one has to understand the components that make up COGS, shown below. COGS consists of the genuine cost of goods and the cost of quality. The genuine COGS will be nearly the same for all manufacturers, so the real differentiator is the cost of quality. The manufacturer with the lowest cost of quality enjoys the highest profit and can influence the market price to keep the competition at bay. But in order to keep the cost of quality at its lowest possible level, the manufacturer has to hit the football right at the center of the goalpost, every time!

[Image: components of COGS - genuine cost of goods plus the cost of quality]

The cost of quality comprises the cost incurred to monitor and ensure quality (the cost of conformance) and the cost of non-conformance, or cost of poor quality (COPQ). The cost of conformance is a necessary evil, whereas COPQ is a waste, an opportunity lost.

Coming to the present scenario: with increasing demand for goods and services, manufacturers are required to fulfill their delivery commitments on time, otherwise their customers would lose market share to competitors. Manufacturers have realized that their business depends on the business prospects of their customers, hence timely supply of products and services is very important. This can be understood better using the pharmaceutical industry as an example.

The sole responsibility of any regulator (say, the FDA) towards its country is to ensure not only acceptable (quality, safety and efficacy) and affordable medicines, but also their availability (no shortages) in the country at all times. Even that is not enough: those medicines must also be easily accessible to patients at their local pharmacies. These may be called the 4A's and are the KRAs of any regulatory body. If regulators miss any one of the 4A's, they will be held accountable by their government for endangering the lives of patients. The point that needs to be emphasized here is the importance of TIMELY SUPPLY of medicines, besides other parameters like quality and price.

Hence, the definition of quality was again modified, to “producing goods and services in the desired quantity, delivered on time, meeting all the customer's specifications of quality and price.” A term used in operational excellence, OTIF, is an acronym for “on time, in full”, meaning delivering goods and services meeting the customer's specifications on time and in full quantity.

Coming once again to the definition of profit in the present-day scenario:

Profit = Market Price (MP) − COGS

We have seen that the selling price is driven by the market, and hence the manufacturer can't control it beyond an extent. So what can be done to increase the margin or profit? The only option is to reduce COGS. We have seen that COGS has two components: genuine COGS and COPQ. Manufacturers have little scope to reduce the genuine COGS, as it is a necessary cost of producing goods and services. We will see later, in LEAN manufacturing, how this genuine COGS can be reduced to some extent (wait till then!), e.g. if the throughput or yield of the process is improved, the resulting reduction in scrap decreases the raw-material cost per unit of goods produced.

But the real culprit for the high COGS is the unwarranted high COPQ.


The main reasons for a high COPQ are:

  1. Low throughput or yield
  2. More out-of-specification (OOS) products, which have to be either
    1. Reprocessed,
    2. Reworked, or
    3. Scrapped
  3. Inconsistent quality leading to more after-sales service and warranty costs
  4. Biggest loss of all: the customer's confidence in you, which is intangible.

If we look at the outcomes of COPQ (discussed above), we can conclude one thing: “the process is not robust enough to meet the customer's specifications”, and because of this manufacturers face the problem of COPQ. All these wastages are called “mudas” in Lean terminology and hence will be dealt with in detail later. But the important question remains:

What causes COPQ?

Before we can answer this important question, we need to understand the concept of variance. Let's take a simple example: say you leave home for the office at exactly the same time every day. Do you reach the office at exactly the same time daily? The answer will be a big no, or a better answer would be: it will take anywhere between 40 and 45 minutes to reach the office if I start exactly at 7:30 AM. This variation in arrival time can be attributed to many causes, like variation in the starting time itself (I just can't start at exactly 7:30 every day), variation in traffic conditions, etc. There will always be variation in any process, and we need to control it. Even in a manufacturing environment there are sources of variation, like wear and tear of machines, changes of operators, etc. Because of this, there will always be variation in the output (the goods and services produced by the process). Hence, we will not get a product with fixed quality attributes; instead each quality attribute will have a range (called the process control limits), which needs to be compared with the customer's specification limits (the goal post).

If my process control limits are close to the goal posts (the boundaries of the customer's specification limits), then my failure rate will be quite high, resulting in more failures, scrap, rework and warranty cost. This is nothing but COPQ.

Alternatively, if my aim (the process limits) is well within the goal posts (case-2), my success rate is much higher and I will have less scrap and rework, thereby decreasing my COPQ.

[Images: process limits near the specification boundaries vs. well within them]

Taguchi Loss Function

A paradigm shift in the definition of quality came from Taguchi, who gave the concept of producing products with quality targeted at the center of the customer's specifications (a mutually agreed target). He stated that as we move away from the center of the specification, we incur cost either at the producer's end or at the consumer's end in the form of rework and reprocessing; holistically, it is a loss to society. The concept also states that producing goods and services beyond the customer's specification is a loss to society, as the customer will not be willing to pay for it: there is a sharp increase in COGS as we try to improve quality beyond the specification.
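In symbols, the Taguchi loss for a quality characteristic y with target T is usually written as a quadratic, where k is a cost constant (this is the standard textbook form, stated here for convenience); averaging it over the process shows that the loss depends on both the spread and the off-target bias:

L(y) = k(y − T)²

Average loss = k[σ² + (μ − T)²]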

[Image: Taguchi loss function curve]

For example:

The purity specification of the medicine I am producing is > 99.5% (say). If I try to improve it to 99.8%, it will decrease my throughput, as we need to perform one extra purification, which results in yield loss and an increased COGS.

When buying a ready-made suit, it is very difficult to find one that perfectly matches your body's contours, hence you end up paying for alterations. Whereas if you get a suit stitched by a tailor to fit your body's contours (the specification), it does not incur any extra rework cost.

Six Sigma and COPQ

It is apparent from the above discussion that variability in the process is the single biggest culprit behind failures, resulting in a high cost of goods produced. This variability is the single most important concept in Six Sigma and needs to be comprehended very well. We will encounter this monster (variability) everywhere when dealing with Six Sigma tools like the histogram, normal distribution, sampling distribution of the mean, ANOVA, DoE, regression analysis and, most importantly, statistical process control (SPC).

Hence, the industry required a methodology to study variability and find ways to reduce it. The Six Sigma methodology was developed to fulfill this requirement. We will look in detail at why it is called six sigma, and not five or seven sigma, later on.

Before we go any further, we must understand and always remember one very important thing: any goods and services produced are the outcome of a process, and there are many inputs that go into the process, like raw materials, technical procedures, men, etc.

Hence, any variation in the input (x) to a given process will cause a variation in the output (y) quality.

[Image: inputs (x) → process → output (y)]

Another important aspect is that variance has an additive property, i.e. the variances from all the inputs add up to give the variance in the output:

σ²(output) = σ²(input 1) + σ²(input 2) + … + σ²(input n)   (for independent inputs)
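A quick numerical check of this property (a sketch with synthetic, independent inputs):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000

x1 = rng.normal(0.0, 0.5, n)   # e.g. raw-material variation (hypothetical sigma)
x2 = rng.normal(0.0, 0.8, n)   # e.g. machine variation
x3 = rng.normal(0.0, 0.3, n)   # e.g. operator variation

y = x1 + x2 + x3               # the output inherits variation from every input

print(f"Sum of input variances: {0.5**2 + 0.8**2 + 0.3**2:.3f}")   # 0.980
print(f"Variance of the output: {y.var():.3f}")                    # ~0.980
```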

How Does Six Sigma Work?

Six Sigma works by decreasing the variation coming from the different sources so as to reduce the overall variance in the system, as shown below. It is a continuous improvement journey.

[Image: reducing the overall variance source by source]

Summary:
  1. The definition of quality has changed drastically over time; it is no longer just “fit for purpose” but also includes on time and in full (OTIF).
  2. In this world of globalization, the marketplace determines the selling price, and manufacturers either have to reduce their COPQ or perish.
  3. There is a customer's specification and a process capability. The aim is to bring the process capability well within the customer's specifications.
  4. The main culprit behind out-of-specification product is an unstable process, which in turn is caused by variability in the process coming from different sources.
  5. Variance has an additive property.
  6. Lean is a tool to eliminate wastage in the system, and Six Sigma is a tool to reduce defects in the process.

References

  1. To understand the consequences of a bad process, see the red bead experiment designed by Deming on YouTube: https://www.youtube.com/watch?v=JeWTD-0BRS4
  2. For different definitions of quality, see http://www.qualitydigest.com/magazine/2001/nov/article/definition-quality.html#

 

7QC Tools: The Basis of the Western Electric Rules for Control Charts


We are all aware of these famous rules; for beginners, let's understand their basis. Each rule is applied to one half of the control chart, and each is constructed so that the probability of it being triggered by an in-control process is very small (of the order of 0.001-0.004, as computed below).

[Images: control chart zones A, B and C around the center line]

  1. A single point outside the 3σ control limits, i.e. beyond zone A.
    • Probability of finding a point in this region for an in-control (normal) process = 0.00135. Anything in this region is a case of assignable cause.
  2. Two out of three consecutive points in zone A (beyond 2σ).
    • Probability of getting 2 consecutive points in zone A = 0.0227 × 0.0227 = 0.00052
    • Probability of 2 out of 3 points in zone A = 0.0227 × 0.0227 × 0.9773 × 3 = 0.0015
  3. Four out of five consecutive points in zone B or beyond (beyond 1σ).
    • Probability of getting one point beyond 1σ on one side = 0.1587
    • Probability of 4 such points plus 1 point elsewhere on the chart = 0.1587⁴ × 0.8413 × 5 = 0.0027
  4. Eight consecutive points on one side of the center line.
    • Probability of getting one point on a given side of the center line = 0.5
    • Probability of 8 points in succession on one side of the center line = 0.5⁸ = 0.0039
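These probabilities can be reproduced from the standard normal tail areas; a minimal sketch (assuming scipy is available):

```python
from math import comb
from scipy.stats import norm

p3 = norm.sf(3)   # P(point beyond 3 sigma, one side) = 0.00135
p2 = norm.sf(2)   # P(point beyond 2 sigma, one side) = 0.02275
p1 = norm.sf(1)   # P(point beyond 1 sigma, one side) = 0.15866

print(f"Rule 1 (1 point beyond 3 sigma):  {p3:.5f}")
print(f"Rule 2 (2 of 3 beyond 2 sigma):   {comb(3, 2) * p2**2 * (1 - p2):.5f}")
print(f"Rule 3 (4 of 5 beyond 1 sigma):   {comb(5, 4) * p1**4 * (1 - p1):.5f}")
print(f"Rule 4 (8 in a row on one side):  {0.5**8:.5f}")
```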

7QC Tools — The Control Charts


The Control Charts

This is the most important topic among the 7QC tools. In order to understand it, just remember the following points for the moment, as we can't go into all the details right now.

  1. Two things that we must understand beyond doubt:
    1. There are the customer's specifications: LSL & USL (the lower and upper specification limits).
    2. Similarly, there is the process capability: LCL & UCL (the lower and upper control limits).
    3. The process capability and the customer's specifications are two independent things; however, it is desired that UCL − LCL < USL − LSL. The only way we can achieve this relationship is by decreasing the variation in the process, as we can't do anything about the customer's specifications (they are sacrosanct).
  2. If a process is stable, its data will follow the bell-shaped curve called the normal curve. This means that if we plot all the historical data obtained from a stable process, it gives a symmetrical curve as shown below. σ represents the standard deviation (a measure of variation).
  3. The main characteristic of this curve is that fixed fractions of the data fall within fixed multiples of σ; for example, the area within ±2σ contains about 95% of the total data (see the sketch after this list).
  4. Any process is affected by two types of input variables or factors. Input variables which can be controlled are called assignable or special causes (e.g., person, material, unit operation, machine), and factors which are uncontrollable are called noise factors or common causes (e.g., fluctuations in environmental factors such as temperature and humidity during the year).
  5. From point number 2 we can conclude that as long as the data is within ±3σ, the process is considered stable, and whatever variation there is, is due to common causes. Any data point beyond ±3σ is an outlier, indicating that the process has deviated, i.e. there is an assignable or special cause of variation which needs immediate attention.
  6. The estimates of the mean (μ) and σ used for calculating the control limits depend on the type and distribution of the data used for preparing the control chart.
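The coverage figures quoted in point 3 can be checked directly (a minimal sketch assuming scipy):

```python
from scipy.stats import norm

# Fraction of a stable (normal) process expected within +/- k sigma of the mean
for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"Within +/-{k} sigma: {coverage:.4%}")
# Within +/-1 sigma: 68.2689%
# Within +/-2 sigma: 95.4500%
# Within +/-3 sigma: 99.7300%
```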

Having gone through the above points, let's go back to point number 2. In that graph, the entire dataset is plotted after all the data has been collected. But these data were collected over time! Now if we add a time axis to this graph and plot the data with respect to time, we get a run chart, as shown below.

[Image: run chart of the data over time with ±3σ limits]

The run chart thus obtained is known as a control chart. It represents the data with respect to time, and ±3σ represent the upper and lower control limits of the process. We can also plot the customer's specification limits (USL & LSL) on this graph if desired. Now we can apply points 4 and 5 above to interpret the control chart, or use the Western Electric Rules for a more detailed interpretation.
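A minimal plotting sketch of such a chart (an individuals chart on synthetic data, with limits at mean ± 3σ; matplotlib assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=7)
data = rng.normal(loc=99.5, scale=0.3, size=40)   # hypothetical results in time order

center = data.mean()
sigma = data.std(ddof=1)

plt.plot(data, marker="o")
plt.axhline(center, color="green", label="Center line")
plt.axhline(center + 3 * sigma, color="red", linestyle="--", label="UCL (+3 sigma)")
plt.axhline(center - 3 * sigma, color="red", linestyle="--", label="LCL (-3 sigma)")
plt.xlabel("Sample number (time order)")
plt.ylabel("Measured value")
plt.legend()
plt.show()
```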

The Control Charts and the Continuous Improvement

A given process can only be improved if tools are available for the timely detection of an abnormality due to an assignable cause. This timely, online signal of an abnormality (or an outlier) in the process is achieved by plotting the process data points on an appropriate statistical control chart. But these control charts can only tell us that there is a problem in the process; they cannot tell us anything about its cause. Investigation and identification of the assignable causes associated with the abnormal signal allows timely corrective and preventive actions, which ultimately reduce the variability in the process and gradually take the process to the next level of improvement. This is an iterative process, resulting in continuous improvement until abnormalities are no longer observed and whatever variation remains is due to common causes only.

It is not necessarily true that all deviations on control charts are bad (e.g. an impurity trending towards the LCL, or reduced waiting time for patients, is good for the process). Regardless of whether a deviation is good or bad for the process, the outlier points must be investigated. The reasons for good deviations should then be incorporated into the process, and the reasons for bad deviations need to be eliminated from it. This is an iterative process that continues until the process comes under statistical control. Gradually, it will be observed that the natural control limits become much tighter than the customer's specification, which is the ultimate aim of any process improvement program like Six Sigma.

The significance of control charts is evident from the fact that, since their invention in the 1920s by Walter A. Shewhart, they have been used extensively across the manufacturing industry and have become an intrinsic part of the 6σ process.


To conclude, statistical control charts not only help in estimating the process control limits but also raise an alert when the process goes out of control. These alerts trigger investigation through root cause analysis, leading to process improvements, which in turn decrease the variability in the process, ultimately yielding a statistically controlled process.


Why Do We Have Out of Specification (OOS) and Out of Trend (OOT) Batches?


While developing a product, we are bound by the USP/EP/JP monographs for the product's critical quality attributes (CQAs), or by the ICH guidelines, and yet we regularly see OOT/OOS results in commercial batches. It is true that every generic company has developed expertise in investigating and providing corrective & preventive actions (CAPA) for OOT and OOS results, but the question that remains in our hearts and minds is:

Why can’t we stop them from occurring? 

The answer lies in the following inherent issues at each stage of the product life cycle.

We assume the customer's specification and the process control limits are the same thing during product development.

Let's assume that a USP monograph gives an acceptable assay range for a drug product of 97% to 102%. The product development team immediately starts working on the process to meet this specification; the focus is entirely on developing a process that gives a drug product within this range. But we forget that even a 6-sigma process has a failure rate of 3.4 ppm. In the absence of statistical knowledge, we take the customer's specification as the target for product development.

The right approach would be to calculate the required process control limits so that a given proportion of the batches (say 95% or 99%) falls within the customer's specifications.

Here I would like to draw an analogy: the customer's specification is like the width of a garage, and the process control limits are like the width of a car. The width of the car should be much less than the width of the garage to avoid scratches. Hence the target process control limits for product development should be the narrower of the two.

For details, see the earlier blog on car parking and 6sigma.

Inadequate statistical knowledge leads to a wrong target range for a given quality parameter during product development.

Take the above example once again: the customer's specification limits for the assay are 97% to 102% (= the garage width). Now the question is, what should be the width of the process (= the car's width) that we target during product development to reduce the number of failures during commercialization? One thing is clear at this point: we can't take the customer's specification as the target for product development.

Calculating the target range for the development team

To simplify, I will use the formula for Cp:

Cp = (USL − LSL) / 6σ

where Cp = process capability, σ = the standard deviation of the process, and USL & LSL are the customer's upper and lower specification limits. The value 1.33 is the minimum desired Cp for a capable process (roughly a 4-sigma process, since 3 × 1.33 ≈ 4).

Solving for σ:

σ = (USL − LSL) / (6 × Cp)

Calculating σ for the above process:

σ = (102 − 97) / (6 × 1.33) ≈ 0.63

The center of the specification = 99.5; hence the target assay range for the product development team is given by:

Specification mean ± 3σ

  = 99.5 ± 3 × 0.63 = 99.5 ± 1.89 = 97.61 to 101.39

Hence, the product development team has to target an assay range of 97.61 to 101.39 instead of targeting the customer's specification.
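The same arithmetic in a few lines of code (the specification limits and the Cp = 1.33 target are from the example above):

```python
USL, LSL = 102.0, 97.0   # customer's specification limits (assay %)
Cp_target = 1.33         # minimum desired process capability

sigma = (USL - LSL) / (6 * Cp_target)   # required process standard deviation
center = (USL + LSL) / 2                # 99.5, center of the specification

print(f"sigma = {sigma:.4f}")
print(f"target range: {center - 3 * sigma:.2f} to {center + 3 * sigma:.2f}")
# sigma = 0.6266; target range: 97.62 to 101.38
# (the article rounds sigma to 0.63 first, giving 97.61 to 101.39)
```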

There is another side to the coin: whatever range we take as a target for development, there is an assumption that 100% of the population will fall within that interval. This is not true, because even a 6-sigma process has a failure rate of 3.4 ppm. So the point I want to make here is that we should also state the expected failure rate corresponding to the interval we have chosen to work with.


For further discussion on this topic, watch for the forthcoming article on confidence, prediction and tolerance intervals.

Not Giving Due Respect to the Quality by Design Principle and PAT tools

Companies without in-house QbD capability may have an excuse, but even companies with QbD capability witness failures during scale-up, even though they claim to have used QbD principles. They often think that QbD and DoE are the same thing. For the readers, I want to highlight that DoE is just a small portion of QbD: there is a sequence of events that constitutes QbD, and DoE is just one of those events.

I have seen people start DoE directly on the process; scientists used to come to me saying “these are the critical process parameters (CPPs)” and ask for a DoE plan. These CPPs are selected mostly based on chemistry knowledge: moles, temperature, concentration, reaction time, etc. The thing is, these variables will seldom vary in the plant, because the warehouse won't issue more or less than the specified quantity of raw materials and solvents, and the temperature won't deviate that much. What we miss are the process-related variables: heating and cooling gradients, hold-up time of the reaction mass at a particular temperature, work-up time in the plant (usually much longer than the lab work-up time), type of agitator, exothermicity, waiting time for analysis, and other unit operations. We don't appreciate the importance of these at the lab level, but these monsters raise their heads during commercialization.

Therefore, proper guidelines are required for conducting successful QbD studies in the lab (see the forthcoming article on DoE). In general, if we want successful QbD, we need to prepare a dummy batch manufacturing record of the process in the lab and then perform a risk analysis of the whole process to identify the CPPs and CMAs. The QbD process is briefly described below.

[Images: outline of the QbD workflow]
Improper Control Strategy in the Developmental Report

Once the product is developed in the lab, there are some critical process parameters (CPPs) that can affect the CQAs. These CPPs are seldom deliberated on in detail by the cross-functional team to mitigate the risk by providing adequate manual and engineering controls, because we are in a hurry to file the ANDA/DMF, among other reasons. We take action only once failures become a chronic issue. Because of this, CPPs vary in the plant, resulting in OOS.

Monitoring of CQAs instead of CPPs during commercialization.

I like to call us “knowledgeable sinners”, because we know that a CQA is affected by the CPPs, yet we continue to monitor the CQA instead of the CPPs, even though if the CPPs are under control, the CQA has to be under control. For example, we know that if the reaction temperature shoots up it will lead to impurities; even then we continue to monitor the impurity levels using control charts, but not the temperature itself. Ask yourself: what do we achieve by monitoring the impurities after the batch is complete? Nothing but a failed batch, an investigation, and the loss of raw material, energy, manpower and production time. In short, we can only do a postmortem of a failed batch.

If, instead of the impurity, we had monitored the temperature (which was the critical parameter), we could have taken corrective action then and there. Knowing that the batch was going to fail, we could have terminated it, thereby saving manpower, energy, production time, etc. (a single OOS investigation requires at least 5-6 people working for a week, which equals about 30 man-days).


The role of QA is mistaken for policing and auditing rather than continuous improvement.

The QA department in every organization is frequently busy with audit preparation! Their main role has become restricted to documentation and keeping the facility ready for audits (mostly in the pharmaceutical field). What I feel is that within QA there should be a statistical process control (SPC) group whose main function is to monitor the processes and suggest areas of improvement. This function should have sound knowledge of engineering and SPC, so that they can foresee OOT and OOS by monitoring CPPs on control charts. So the role of QA is not only policing but also assisting other departments in improving quality. I understand that at present SPC knowledge is very limited in QA and other departments, and we need to improve this.

Lack of empowerment of operators to report deviations.

You will all agree that the best process owners for any product are the shop-floor people, the operators, but we seldom give importance to their contribution. The pressure on them is to deliver a given number of batches per month to meet the sales target. Because of this production target, they often don't report deviations in CPPs, since they know that doing so will lead to an investigation by QA and the batch will be cleared only once the investigation is over. In my opinion, QA should empower operators to report deviations; punishment should not be for batch failure but for not asking for help. It is fine to miss the target by one or two batches, as the knowledge gained from those batches with deviations will improve the process.

Lack of basic statistical knowledge across the technical team (R&D, Production, QA, QC)

I am not saying that everyone should become a statistical expert, but we can at least train our people on the basic 7QC tools; that is not rocket science. This will help everyone monitor and understand the process: shop-floor people can themselves use these tools (or QA can empower them to, after training and certification) to plot histograms, control charts, etc. pertaining to the process, and can compile the reports for QA.

What are Seven QC Tools & How to Remember them?

Other reasons for OOT/OOS are as follows, and are self-explanatory:
  1. Frequent vendor changes (quality comes at a price); someone has to bear the cost of poor quality.
    1. Not involving vendors in your continuous improvement journey: variation in their raw material can create havoc in your process.
  2. Focusing on delivery at the cost of preventive maintenance of the hardware.

 Related Topics

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-1

Proposal for Six Sigma Way of Investigating OOT & OOS in Pharmaceutical Products-2

 

 

Understanding 6sigma: Example-3 — Problem at a Soap Manufacturing Plant


Commodity products like soaps, detergents, potato chips, etc. face a lot of cost pressure. The manufacturer has to ensure the right quantity of product in each pack, both to protect margins (by avoiding overfilling) and to avoid legal issues from the consumer forum (in case less than the declared quantity is found in the pack).

Let’s take this example

A company is in the business of making soap, with a specification of 50-55 g per cake. Anything less than 50 g may invite litigation from the consumer forum, and anything beyond 55 g would hit the bottom line. They started manufacturing and found huge variation in the mean weight of the cakes week after week (see the January-February period in the figure below). They were making one batch per week, producing 250,000 soap cakes per batch, and from each batch they drew a random sample of 100 soaps for weight analysis. The average weight of the 100 samples drawn per batch for January-February is shown below.

[Figure: average weekly soap-cake weights, January-February]

In order to evaluate the performance of the process, a control chart is plotted showing the VOP (voice of the process) and the VOC (voice of the customer); see below. Presently it represents the case-1 scenario, where the VOP is wider than the VOC.

[Figure: control chart with VOP and VOC limits]

They started a continuous improvement program to reduce the variability in the process using the DMAIC cycle. They were able to reduce the variability to some extent, but the majority of the soap cakes were still out of specification (March-April period). They continued their endeavor and reduced the variability further, and for the first time the control limits of the process fell within the specification limits (May-June period). At this point the failure rate had dropped, with about 95% of the soaps meeting the specification.

Continuous Process Improvement

We could further reduce the variability to reach the 6-sigma level, where the failure rate would be 3.4 ppm. But at this point we need to do a cost-benefit analysis, as improvement beyond a limit requires investment. If a 5% failure rate is acceptable to the management, then we would stop here.
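As a rough sketch of how such a failure rate is estimated (the 50-55 g specification is from the example; the process mean and σ are hypothetical):

```python
from scipy.stats import norm

LSL, USL = 50.0, 55.0    # specification limits (grams)
mu, sigma = 52.5, 1.25   # hypothetical centered process, 2 sigma to each limit

p_fail = norm.cdf(LSL, mu, sigma) + norm.sf(USL, mu, sigma)
print(f"Expected out-of-spec fraction: {p_fail:.2%}")   # ~4.55%, i.e. ~95% in spec
```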

Comments:

It is not always desirable to achieve the 6-sigma level; a 3-sigma process may be good enough. But in cases where human life is involved, like passenger aircraft, automobile brakes and airbags, medical devices, etc., it is worth going to 6 sigma and beyond to ensure human safety.

Understanding 6Sigma: Example-1 — Child’s Counter Argument


We have seen the situation from the parent's angle (see the earlier blog); now consider the student as the customer and the parent as the supplier. The student (customer) tries to convince his father by arguing: “Look dad, all universities ranked 1 to 10 are the same; it is just a statistical rating done to attract students, and it changes every year. The universities you are talking about are best in science, but I want to study law, for which a particular university with 6th rank is the best.” As an understanding father (supplier or vendor), he finds the argument too strong to oppose any further and agrees to his son's specification, i.e. he modifies his current process (expectation). Here the student is setting the specification (VOC) and the father is accepting it (VOP). The process of convincing the father is the six-sigma that bridges the gap between them.


Let's be a little philosophical

All of us had dreams during our college days: I want to be this, I want to be that. What we did was provide ourselves with a specification for our future life, the VOC. We were also aware of our current capability (VOP), but we never took the pain of performing a gap analysis, and as a result we couldn't take the appropriate steps to reduce the gap between our desires and our capability, ultimately landing somewhere else in life. Our desires remain just desires.

Can we apply six-sigma to build our career? Or at least help our children in doing so?

 

 Related Blog

DMAIC

What Does the Taguchi Loss Function Have to Do with Cpm?


The traditional way of quality control can be called the “goal-post” approach, where the possible outcome is goal or no-goal. Similarly, QA used to focus only on the end product's quality, with two possible outcomes: pass or fail.


Later, Taguchi gave the concept of producing products with quality targeted at the center of the customer's specifications. He stated that as we move away from the center of the specification, we incur cost either at the producer's end or at the consumer's end in the form of rework and reprocessing. Holistically, it is a loss to society.

[Image: Taguchi loss function curve]

For example: when buying a ready-made suit, it is very difficult to find one that perfectly matches your body's contours, hence you end up paying for alterations. Whereas if you get a suit stitched by a tailor to fit your body's contours (the specification), it does not incur any extra rework cost.

Let's revise what we learned in the “car parking” example (see the links below). Cp only focuses on how far the process control limits (UCL & LCL) are from the customer's specification limits (USL & LSL); it doesn't take into account the deviation of the process mean from the specification mean. Hence we require another index that penalizes Cp for this deviation, and this new index is called Cpm.
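For reference, the two indices can be written side by side (T is the target, usually the center of the specification; these are the standard definitions, quoted here for convenience). The extra term under the square root is exactly the off-target penalty that the Taguchi loss function charges:

Cp = (USL − LSL) / 6σ

Cpm = (USL − LSL) / 6√(σ² + (μ − T)²)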

Related Posts

Why & How Cpm came into existence? Isn’t Cpk was not enough to trouble us?

Car Parking & Six-Sigma

What’s the big deal, let’s rebuild the garage to fit the bigger car!

How the garage/car example and the six-sigma (6σ) process are related?

Now Let’s start talking about 6sigma

What do we mean by garage’s width = 12σ and car’s width = 6σ?

Kindly provide feedback for our continuous improvement journey.