What Does the Taguchi Loss Function Have to Do with Cpm?


The traditional approach to quality control can be called the "GOAL-POST" approach, where the only possible outcomes are goal or no goal. Similarly, QA used to focus only on the end product's quality, with two possible outcomes: pass or fail.

[Figure: the goal-post view of quality, where a product either passes or fails]

Later, Taguchi introduced the concept of producing products with quality targeted at the center of the customer's specifications. He stated that as we move away from the center of the specification, we incur cost either at the producer's end or at the consumer's end in the form of rework and reprocessing. Holistically, it is a loss to society.

[Figure: the Taguchi loss function, with loss increasing as the quality characteristic moves away from the target]
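
Taguchi expressed this idea as a quadratic loss function, L(y) = k(y − T)², where T is the target (the center of the specification) and k is a cost coefficient. Here is a minimal Python sketch, with the target and k assumed purely for illustration:

```python
# Taguchi's quadratic loss: L(y) = k * (y - T)^2
# The target T and cost coefficient k below are illustrative assumptions.

def taguchi_loss(y, target, k):
    """Loss incurred when the quality characteristic y deviates from the target."""
    return k * (y - target) ** 2

T = 100.0   # target value (center of the customer's specification)
k = 0.5     # cost per squared unit of deviation

for y in (100.0, 102.0, 105.0):
    print(f"y = {y}: loss = {taguchi_loss(y, T, k):.2f}")
# The loss is zero at the target and grows quadratically as we move away from it,
# even while the product is still inside the specification limits.
```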

For example:

When buying a ready-made suit, it is very difficult to find one that perfectly matches your body's contours, so you end up going for alterations, which incurs cost. Whereas if you get a suit stitched by a tailor to fit your body contours (your specification), no extra rework cost is incurred.

Let's revisit what we learned in the "car parking" example (see links below). Cp only compares the spread of the process (the distance between the control limits, UCL and LCL) with the width of the customer's specification limits (USL and LSL); it does not take into account the deviation of the process mean from the specification target. Hence we require another index that penalizes Cp for this deviation, and this new index is called Cpm.
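
A minimal Python sketch of the two indices, using the textbook formulas Cp = (USL − LSL)/6σ and Cpm = (USL − LSL)/6√(σ² + (μ − T)²); the specification limits and process numbers are made up for illustration:

```python
import math

def cp(usl, lsl, sigma):
    """Cp compares the specification width to the process spread only."""
    return (usl - lsl) / (6 * sigma)

def cpm(usl, lsl, sigma, mu, target):
    """Cpm additionally penalizes deviation of the process mean from the target."""
    return (usl - lsl) / (6 * math.sqrt(sigma ** 2 + (mu - target) ** 2))

usl, lsl = 110.0, 90.0        # customer's specification limits (illustrative)
target = (usl + lsl) / 2      # specification center
sigma = 2.0                   # process standard deviation

for mu in (100.0, 103.0):     # on-target vs off-target process mean
    print(f"mu = {mu}: Cp = {cp(usl, lsl, sigma):.2f}, "
          f"Cpm = {cpm(usl, lsl, sigma, mu, target):.2f}")
# Cp is unchanged by the drift of the mean, while Cpm drops, which is exactly
# the penalty described above.
```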

Related Posts

Why and How Did Cpm Come into Existence? Wasn't Cpk Enough to Trouble Us?

Car Parking & Six-Sigma

What’s the big deal, let’s rebuild the garage to fit the bigger car!

How the garage/car example and the six-sigma (6σ) process are related?

Now Let’s start talking about 6sigma

What do we mean by garage’s width = 12σ and car’s width = 6σ?

Kindly provide feedback for our continuous journey

What are Seven QC Tools & How to Remember them?

Understanding and using hard-core statistics for continuous improvement is difficult for shop-floor people. To overcome this, it was felt necessary to present statistics in graphical form so that everyone can understand it.

The 7QC tools made quality control much simpler, so that it can be comprehended easily by all. Statistics is no longer the prerogative of a few experts in the company; it can easily percolate down the ranks, irrespective of whether someone has a statistical background or not.

The 7QC tools are a collection of statistical tools that need not be applied in a particular sequence. However, to understand and remember them, it helps to connect them with each other.

  1. Flow chart
  2. Cause & Effect diagram
  3. Control charts
  4. Check list
  5. Histogram
  6. Pareto Chart
  7. Scatter Plot

One can easily remember the list by using the following relationship between the tools (you can develop your own relationship).

[Figure: a sequence connecting the seven QC tools]

If you want to remember the 7QC tools, remember this sequence of events used in continuous improvement.

When starting any continuous improvement program, the first step is to define the problem (the quality characteristic 'Y' to be addressed). Once we have defined the problem, we need to understand the process in depth using a Process Flow Diagram to find the problem areas and non-value-adding steps.

From the process flow diagram, find the probable sources of variation (X) affecting the desired output (Y) using a Cause & Effect Diagram.

Once we have identified the probable causes (X), start monitoring 'X' and 'Y' using appropriate Control Charts. This will drop some of the 'X's that came from the cause-and-effect diagram. Make a note of the 'X's that really affect 'Y'.

Once you have the real 'X's that can affect 'Y', prepare a plan for data collection using a Check List to support the cause-and-effect relationship.

The data collected using the check list is then arranged in graphical form using a Histogram to get a quantitative, pictorial view of the effect of each 'X'.

The bars of the histogram constructed above are then rearranged in descending order to give a Pareto Chart. This arranges the causes (X) in descending order of their effect on 'Y'. Take the 'X's with the most prominent effect on 'Y' (usually the top three) for continuous improvement, as sketched below.
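
Here is a small sketch of that rearrangement in Python; the defect causes and counts are made up for illustration:

```python
# Made-up defect counts collected with a check list.
defect_counts = {
    "misalignment": 42,
    "porosity": 18,
    "scratches": 9,
    "wrong dimension": 57,
    "contamination": 4,
}

# Rearrange the histogram bars in descending order of frequency (a Pareto ordering).
pareto = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)

total = sum(defect_counts.values())
cumulative = 0
for cause, count in pareto:
    cumulative += count
    print(f"{cause:16s} {count:3d}   cumulative = {100 * cumulative / total:5.1f}%")

top_three = [cause for cause, _ in pareto[:3]]   # the X's to take up for improvement
print("Top three causes:", top_three)
```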

Finally, show a quantitative relationship between the top three 'X's and 'Y' using a Scatter Plot, either in the laboratory or by collecting more data from the plant, and propose the improvement strategy by providing the best conditions for 'X' so that 'Y' remains within the desired limits.

Related Blogs

7QC Tools: Flow Chart, Know Your Process Thoroughly

7QC Tools: Fish Bone or Ishikawa Diagram

7QC Tools: How to Extract More Information from the Scatter Plot?

7QC Tools: How to Draw a Scatter Plot?

7QC Tools: Scatter Plot — Caution! Misuse of Statistics!

7QC Tools: Scatter Plot

7QC Tools — How to Prioritize Your Work Using Pareto Chart?

7QC Tools — How to Interpret a Histogram?

7QC Tools — How to Draw a Histogram?

7QC Tools — Histogram of Continuous Data

7QC Tools — Histogram of Discrete Data

7QC tools — Check List

Excellent Templates for 7QC tools from ASQ

Kindly do provide feedback for continuous improvement

Why Is It So Important to Know the Monster “Variance”? — part-2

 


Variance occupies a central role in the Six Sigma methodology. Any process, whether in manufacturing or the service industry, has many inputs, and the variance from each input adds up in the final product.

Hence variance has an additive property, as shown below:

σ²(total) = σ²(input 1) + σ²(input 2) + … + σ²(input n)

Note: you can add two variances, but not their standard deviations.
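
A small simulation (with made-up parameters) makes the point: the variances of two independent sources add, but their standard deviations do not.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=3, size=100_000)   # variation coming from an input
y = rng.normal(loc=20, scale=4, size=100_000)   # variation added by the process

total = x + y
print("var(x) + var(y) =", round(x.var() + y.var(), 2))   # ~ 9 + 16 = 25
print("var(x + y)      =", round(total.var(), 2))         # ~ 25 as well
print("std(x) + std(y) =", round(x.std() + y.std(), 2))   # ~ 7
print("std(x + y)      =", round(total.std(), 2))         # ~ 5, not 7
```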

Consequence of variance addition for Six Sigma

Say a product or service is the output of some process, which in turn has many inputs. Then the variance from the inputs (σ²(input)) and from the process (σ²(process)) add up to give the final variance (σ²(final)) in the product or service.

The DMAIC methodology of Six Sigma tries to identify the inputs that contribute the most to the variance in the final product. Once identified, their effect is studied in detail so that the variance coming from those inputs can be minimized. This is done by reducing the variance in the input itself.

Example: if the quality of an input material used to manufacture a product is found to be critical, steps would be taken to reduce the batch-to-batch fluctuation in the quality of that input material, either by requesting (or threatening!) the vendor or by reworking the input material at your end.

Related articles:

Understanding the Monster “Variance” part-1

You just can’t knock down this Monster “Variance” — Part-3

Is this information useful to you?

Kindly provide your feedback

Understanding the Monster “Variance” part-1


Variance is one way of quantifying the variability in a data set. It helps us understand how the data are arranged around the mean. To calculate it, we first need the deviation of each observation in the data set from the mean.

For example, below are the times I took during the week to reach the office, along with the deviation of each observation from the mean time.

[Table: daily commute times and their deviations from the mean]

The next step is to calculate the average deviation from the mean using the well-known formula:

average deviation = Σ(xᵢ − x̄) / n

Note that the sum of all positive deviations equals the sum of all negative deviations, which indicates that the mean balances the data set into two halves. As a result, the sum of all deviations is zero, so we need some other way to express the average deviation about the mean.

To avoid this issue, a very simple idea was used:

negative deviation → square it → a positive number → take the square root later → back to the original scale

Hence the squares of all the deviations are calculated and summed up to give the sum of squares (simply SS) [1]. This SS is then divided by its degrees of freedom (n − 1 for a sample) to give the average variance s² around the mean [2]. The square root of this variance gives the standard deviation s, the most common measure of variability.

SS = Σ(xᵢ − x̄)²,   s² = SS / (n − 1),   s = √s²

Physically, this means that on average the data deviate by 7.42 units, or simply one standard deviation (±1s), in either direction from the mean of the given data set.
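
For readers who prefer code, here is a minimal sketch of the same calculation. The commute times below are invented for illustration, so they will not reproduce the 7.42 figure quoted above.

```python
# Made-up commute times (minutes) standing in for the table above.
times = [32, 25, 41, 28, 39]

n = len(times)
mean = sum(times) / n
deviations = [x - mean for x in times]
print("sum of deviations:", round(sum(deviations), 10))   # zero, apart from rounding

ss = sum(d ** 2 for d in deviations)   # sum of squares (SS)
s2 = ss / (n - 1)                      # sample variance s²
s = s2 ** 0.5                          # sample standard deviation s
print(f"SS = {ss:.2f}, s² = {s2:.2f}, s = {s:.2f}")
```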


The discussion above is about the sample standard deviation, represented by s. For a population, the variance is represented by σ² and the standard deviation by σ.

σ² = Σ(xᵢ − μ)² / N,   σ = √σ²

The sample variance s² is the estimator of the population variance σ². The standard deviation is easier to interpret than the variance because the standard deviation is measured in the same units as the data.
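
In numpy the only difference between the two is the divisor, controlled by the ddof argument (the data are the same made-up times as in the sketch above):

```python
import numpy as np

times = np.array([32, 25, 41, 28, 39], dtype=float)

print("population variance σ² (divide by N):    ", round(times.var(ddof=0), 2))
print("sample variance s² (divide by n − 1):    ", round(times.var(ddof=1), 2))
print("sample standard deviation s:             ", round(times.std(ddof=1), 2))
```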


[1] Popularly known as the sum of squares, this is one of the most widely used terms in ANOVA and regression analysis.

[2] SS divided by its degrees of freedom gives the mean sum of squares (MS, e.g. MSE for the error term); these concepts will appear again in ANOVA and regression analysis.

Related articles:

Why Is It So Important to Know the Monster “Variance”? — part-2

You just can’t knock down this Monster “Variance” — Part-3

Is this information useful to you?

Kindly provide your feedback

Discrete and Continuous Data


The data handled in statistics are of two types:

  1. Quantitative
  2. Qualitative

QUANTITATIVE DATA

These data are either countable or measurable. Countable means the data can take only certain predefined values, such as the outcome of a die throw (x = 1, 2, 3, 4, 5 and 6); this type of data is known as DISCRETE DATA.

Measurable data are those whose possible values cannot be counted and can only be described using intervals on the real number line, e.g. the height of children in a given school, 120 ≤ x ≤ 150. This type of data is known as CONTINUOUS DATA.

Discrete and continuous data can be understood through the following example. Suppose you are crossing a shallow river and there are two options available:

[Images: a stepping-stone bridge and a regular bridge across the river]

  1. Stepping stone bridge:
    • In order to cross the stream, you have to land on platform 1, then platform 2, … and finally platform 5. You can't land in between the platforms. Similarly, if a random variable can take only certain exact values and cannot take any value between two adjacent points, that type of data is called discrete data. Discrete data are countable, i.e. you can count the possible values of the random variable involved.
    • Number of customers x = 0, 1, 2, …….
    • Number of phone calls x = 0, 1, 2, …….
    • Outcome of a die throw x = 1, 2, 3, 4, 5 & 6
      • DISCRETE DATA ARE COUNTED
  2. A bridge on the same river:
    • Here you can place your feet anywhere on the bridge to cross the river; there is no restriction, and you can take as many steps of any size as you want, say one yard or half a yard at a time. Similarly, if a random variable can take any value between two given points, that data is called continuous data. Continuous data are uncountable, as they can take any value between any two points.
    • Average purchase by a customer 100 ≤ x ≤ 200 dollars
    • Duration of the phone call 10 ≤ x ≤ 30 minutes
    • Distance covered by a car in 5 minutes 3 ≤ x ≤ 6 km

CONTINUOUS DATA ARE MEASURED
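
A tiny sketch of the distinction (the scenarios are made up): a discrete variable is counted and can only land on certain values, while a continuous variable is measured and can land anywhere in an interval.

```python
import numpy as np

rng = np.random.default_rng(1)

die_throws = rng.integers(1, 7, size=10)        # discrete: only the values 1..6 can occur
call_durations = rng.uniform(10, 30, size=10)   # continuous: any value in [10, 30] minutes

print("discrete (counted):   ", die_throws)
print("continuous (measured):", np.round(call_durations, 2))
```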

 

Is this information useful to you?

Kindly provide your feedback

ANOVA by Prof. Hunter

for posts

We are excited about the quality of videos available on YouTube on almost every topic. Look at these videos on ANOVA by none other than Prof. Hunter himself. They were shot in 1966 in black & white, but the content is well worth experiencing.

ANOVA-1

ANOVA-2

6sigma is like a clamp that compresses the variability


6sigma is a clamp used to compress the variability

We have seen that we can't change the garage's width (the customer's specifications); the only way out is to adjust the process variability (the car's width) to the customer's specification. This is done by continuous improvement of the process using 6sigma tools.

The 6sigma toolset is like a clamp in which we gradually tighten the screw (continuous improvement) to compress the thing in its jaws (the variability in the process)!

Is this information useful to you?

Kindly provide your feedback

What do we mean by garage’s width = 12σ and car’s width = 6σ?

 


Right now we are not in a position to go into the details of the standard normal distribution, so for the time being let's assume that my manufacturing process is stable and is represented by the symmetrical curve shown below.

[Figure: a stable process represented by a normal curve, with LCL and UCL at ±3σ from the mean]

The main characteristic of this curve is that 99.73% of the product falls between the LCL and UCL, i.e. within ±3σ of the mean (μ). Only about 0.27%, or roughly 2700 ppm, of the product falls beyond ±3σ and is defective. So the width of the car is equivalent to the width of the process = UCL − LCL = voice of the process (VOP) = 6σ = ±3σ.

The second point is that the curve never touches the x-axis, which means there will always be some probability of failure no matter how far you move from the mean (the probability may be negligible, but it is there).

Now let's overlay the above process curve on the customer's specifications (width = 12σ, i.e. ±6σ), or the garage's width.

[Figure: the ±3σ process curve overlaid on the customer's specification limits at ±6σ]

We can see that there is a safety margin of 3σ on either side of the process control limits (LCL & UCL). In layman's terms, to produce a defective product my process has to drift by another 3σ, which is a very remote possibility. Statistically, with the LSL and USL sitting at ±6σ from the mean, the process would produce only ~3.4 ppm defectives once the customary 1.5σ shift in the mean is allowed for (don't bother about the calculation right now, just understand the concept). For this to happen, someone would have to disturb the process deliberately. Compare this 3.4 ppm failure rate at the ±6σ level with ~2700 ppm at the ±3σ level!

Even if the mean of the process deviates by ±1.5σ, there is enough margin of safety that quality is not affected, and in regular production this drift of ±1.5σ is quite common.

[Figure: the process mean shifted by 1.5σ, still comfortably inside the ±6σ specification limits]
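
For the curious, these ppm figures can be checked directly from the standard normal distribution; a minimal sketch of the arithmetic:

```python
from math import erfc, sqrt

def tail_beyond(z):
    """Two-sided probability of falling more than z standard deviations from the mean."""
    return erfc(z / sqrt(2))

print("beyond ±3σ:", round(tail_beyond(3) * 1e6), "ppm")     # ≈ 2700 ppm
print("beyond ±6σ:", round(tail_beyond(6) * 1e9, 1), "ppb")  # ≈ 2 ppb for a perfectly centered process

# With the customary 1.5σ shift of the mean, the nearer specification limit is
# only 4.5σ away; its one-sided tail gives the famous 3.4 ppm figure.
shifted = 0.5 * erfc(4.5 / sqrt(2))
print("beyond the nearer limit after a 1.5σ shift:", round(shifted * 1e6, 1), "ppm")
```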

Car Parking & Six-Sigma

What’s the big deal, let’s rebuild the garage to fit the bigger car!

How the garage/car example and the six-sigma (6σ) process are related?

Now Let’s start talking about 6sigma

Download full article

Is this information useful to you?

Kindly provide your feedback