Welcome to the Measure phase: validate the measurement system. I'm Catherine MacGyver, and today you're going to learn why it's important to validate the measurement system, and how to do it.
So in our last module, we talked quite a bit about the importance of doing pre-work planning when we're designing a data collection or measurement system. We clarified the goals: what are we looking to learn?
We clarified our operational definition: how do we know that everybody's on the same page about
what it is we're looking for and how frequently to measure it? And then we designed our methodology. So the question you may be asking yourself is: if I did all of this pre-work before, why do I want to then validate my methodology? I mean, I did a lot of thinking and prep already.
The reason is precisely because you did do a lot of thinking and prep, and the work that you're going to do further down depends on it. This
measurement is the foundation for your Analyze, Improve, and ultimately your Control phases, because you are capturing the baseline and capturing the information that you are ultimately going to use to make decisions about your process. Given that, inaccurate results are bad. So when we start talking about
measurement systems and data collection, we run the risk of what we call type I and type II errors: a false positive and a false negative. A false positive is something that was, in fact, false,
but that we reported as true.
A false negative is something that was, in fact, true, but that we reported as false. Why this is important to you: think about if you were measuring defects and you
measured something as a defect, so it became part of your baseline, which went into your
DPMO, your defects per million opportunities, but it wasn't actually a defect. You are now going to show a lower process performance level than you would have if the measurement had been correct. So the reason why we validate is because those inaccurate findings
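To make that concrete, here's a minimal sketch of the DPMO calculation and of how a few false positives inflate it. The unit counts and defect counts here are invented purely for illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Hypothetical inspection: 500 units, 4 defect opportunities each.
true_defects = 12
recorded_defects = 15  # three false positives crept into the count

print(dpmo(true_defects, 500, 4))      # 6000.0
print(dpmo(recorded_defects, 500, 4))  # 7500.0 -- the process looks worse than it really is
```

Three misclassified items out of 2,000 opportunities shifted the baseline by 25 percent, which is exactly the kind of distortion validation is meant to catch.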
can distort the results or, even more dramatically, make the data that you have collected unusable. If you find
too many errors in the data, you have decreased its credibility, and you're going to want to go back to the drawing board and completely redesign. If you remember our y = f(x) module, we talked about garbage in equals garbage out. So if you think about the DMAIC project itself as a process,
the data that we collect in our Measure phase
is, in fact, one of the inputs that goes into the output of a successful Lean Six Sigma project. So you want to invest the time to know that you are confident in the results that you collect. Also remember, your measurement system is going to be part of your control plan
as you wrap up the project and pass it off to your process owner.
So what's important in a measurement system? There are two things, and they are so important that they give us the technical terms for validating your measurement system.
There is what we call MSA, because, remember, Lean Six Sigma people love their acronyms, which is measurement system analysis.
And then there is also Gage R&R, where
R&R stands for repeatability and reproducibility. Repeatability is whether the same person can get the same results using the same methodology every time, and reproducibility is whether different people can get the same results every time. So when we're talking about repeatability, what we're talking about is
eliminating as much subjectivity as we can in it. So, for example, if you asked me
what my office temperature was today and we measured it with a thermometer, it would be
68 degrees: a little chilly. But if you asked me what my office temperature was and had me report how it feels, I might say it's a little chilly. Ask me again two days from now, when it is also 68 degrees, and I might say it's kind of nice. So you see that the same person with the same conditions is getting different results.
This is the reason why we want to rely on our numerical data
as much as possible: it is much easier to quantify compared to our categorical data. Reproducibility is that if I have a thermometer that says it's 68 degrees, and you come in here with your own thermometer and it also says 68 degrees,
then we know that the measurements aren't dependent on the operator, the person who's actually doing
the measurements. So reproducibility is the other aspect that is very important. If I design a data collection methodology and do a trial run or pilot of it, and I then transition it over to you, you can follow the same data collection methodology and we're still comparing apples to apples. That is one of the key
facets of, and reasons why we do, measurement system analysis:
to make sure that we are always comparing apples to apples.
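A rough way to put numbers on those two ideas: look at how much each person's own readings spread (repeatability), and how much the people's averages differ from each other (reproducibility). This is a simplified back-of-the-envelope sketch, not a full Gage R&R study, and the operator names and readings are invented:

```python
from statistics import mean, pstdev

# Hypothetical study: two operators each measure the same part three times.
readings = {
    "operator_a": [68.5, 68.4, 68.6],
    "operator_b": [68.4, 68.5, 68.4],
}

# Repeatability: how much the SAME person varies across repeated trials.
repeatability = mean(pstdev(trials) for trials in readings.values())

# Reproducibility: how much DIFFERENT people's averages vary from each other.
operator_means = [mean(trials) for trials in readings.values()]
reproducibility = pstdev(operator_means)

print(f"repeatability spread:   {repeatability:.3f}")
print(f"reproducibility spread: {reproducibility:.3f}")
```

A formal Gage R&R study partitions the variance more carefully (parts, operators, and interactions), but the intuition is the same: small numbers on both lines mean the measurement system, not the measurer, is driving the data.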
So when we're talking about how we validate our system, the most common way is to do comparison studies. You and I stand in my office with thermometers at the same time, and I say I have 68.5 and you say you have 68.4. We may have some conversation about the margin of error,
but we're going to say that it's 68 degrees and it's good.
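That side-by-side check can be written as a tiny rule. This is just a sketch, and the 0.5-degree tolerance is an assumed margin of error for the example, not a standard value:

```python
def readings_agree(a, b, tolerance=0.5):
    """Treat two side-by-side readings as agreeing if they differ by at most `tolerance`."""
    return abs(a - b) <= tolerance

print(readings_agree(68.5, 68.4))  # True  -- within the margin of error
print(readings_agree(68.5, 71.2))  # False -- time to question the instruments
```

Agreeing on the tolerance up front, as part of your operational definition, keeps the "is it close enough?" conversation from being re-litigated on every reading.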
So you can do that side-by-side measurement; when you're looking at categorical data, have multiple sets of people look at the same data and then take the average between them. Or you can do a blind study, where you have people measure things independently and then compare conclusions.
In a blind study, instead of you and I comparing results as we go, I come up with my results, you come up with your results, and then we look for where they align and overlap.
We generally see blind studies more when we're working with numerical data,
and that's because it is easier to quantify. We generally see side-by-side studies more when we're working with categorical data, and you're going to use those a lot with your Yellow Belts.
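Here's one hypothetical way to script the comparison step of a blind study on categorical data: two inspectors label the same items independently, and we then look at how often their conclusions align. The inspector labels below are made up for illustration:

```python
# Independent pass/fail calls from two inspectors on the same 8 items.
inspector_1 = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]
inspector_2 = ["pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass"]

matches = sum(a == b for a, b in zip(inspector_1, inspector_2))
agreement_rate = matches / len(inspector_1)

print(f"agreed on {matches} of {len(inspector_1)} items ({agreement_rate:.1%})")
```

The items where the inspectors disagree (here, item 4) are exactly where you'd revisit the operational definition to figure out why two trained people reached different conclusions.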
The other thing that you want to consider as you're planning your measurement system analysis is the training and the audits for the people actually doing the data collection. During your data collection
system design, so your previous module, when you're doing your methodology prep, make sure you also include aspects on how we are going to tell the people doing the data collection how to do it, and then how we go back and audit to make sure they're doing it the same way. Using that thermometer example again,
if we have an old-school mercury thermometer
and I look at it from the top, I do in fact get a slightly different number than if I look at it head on. So in my methodology, I want to say: look at it head on, at a 90-degree angle, not from the top. Or, even better, use a digital thermometer that everybody can read.
So when we're talking about validating the measurement system, the important takeaway is the credibility of your project results. If your analysis is based on inaccurate data, it means that any of your solutions
may, in fact, not be solutions at all. And remember, we use the DMAIC model
to ensure that our solutions do, in fact, solve the problems that we stated at the beginning. We also want to make sure that we're not wasting our team's time, so we want to be working with accurate data. We also talk about unstable data: this is data that is not repeatable or reproducible.
Thank you for joining me for validating the measurement system. Our next module is actually our Measure tollgate, so we're getting close to wrapping up Measure. Thanks, guys.