Hi, guys. Welcome back. I'm Katherine MacGyver, and in today's episode we're going to go over validating measurement systems.
So in our last couple of modules, we talked about our data collection systems: what do we need to think about, what are we going to design, how do we keep the person doing the collection in mind, and what are the things that we need to be explicit about in our data collection plan. In this module, we're going to talk about validating it. So how do we know that what we designed will, in fact, be able to give us the data that we're looking for for process improvement? This is actually referred to in two different ways. These terms are interchangeable; it just depends on what school you went to. So I wanted to give you both, so that when you're having these conversations you can sound spiffy and know both.
So the first one is MSA, or measurement system analysis. This term comes from traditional project management, so it tends to be used more on the transactional side, or by people who have more experience with traditional project management.
The other one is Gauge R&R, which stands for gauge repeatability and reproducibility.
This is more of a manufacturing term. So depending on where you picked up your Lean Six Sigma training, you will hear either MSA (more transactional, information-based) or Gauge R&R (more manufacturing). For those of you who don't know, a gauge is a tool for measuring something, usually length or width. So when we talk about gauge, we're talking about a size measurement.
In this case, it's going to be our tool's repeatability and reproducibility.
So I wanted to give you a real-life example as to why validating your data collection system makes sense. When you talk about measurement, I was going to tell you about my first job, where I was a 911 dispatcher. But then I thought about it for a second, and I realized that I'm still up to my old shenanigans.
When we talk about measurement system analysis, what we're talking about is whether or not our information is trustworthy. So Cybrary tells me that all of our modules have to be nine minutes.
They don't say nine minutes and zero seconds.
They don't say nine minutes and 59 seconds. So I am choosing to interpret this as nine minutes and 59 seconds, because the first digit is a nine and not a ten. My 911 dispatching example that I was going to give you is actually kind of along the same lines.
So I worked in an agency that was contracted to have an emergency vehicle on scene within eight minutes.
Eight minutes, not eight minutes plus 59 seconds. So the city who wrote this contract believed that meant eight minutes and 00 seconds, and the agency that I worked for
measured their accuracy rate against eight minutes and 59 seconds, because that was still "within
eight minutes." So the agency had a higher percentage of meeting their targets than the city did when reporting out. It's the same as with Cybrary: they want modules in nine minutes, and I tend to view that as nine minutes and 59 seconds.
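To make that concrete, here's a quick sketch of how the very same response times score differently under the two operational definitions. The times here are hypothetical, just for illustration:

```python
# Hypothetical on-scene response times, in seconds, for the same set of calls.
response_times = [455, 480, 510, 522, 539, 545, 610]

# City's reading of "within eight minutes": at or under 8:00 (480 seconds).
city_on_time = sum(t <= 480 for t in response_times) / len(response_times)

# Agency's reading: anything up to 8:59 (539 seconds) still counts.
agency_on_time = sum(t <= 539 for t in response_times) / len(response_times)

print(f"city definition:   {city_on_time:.0%} on time")
print(f"agency definition: {agency_on_time:.0%} on time")
```

Same data, two operational definitions, two very different compliance rates.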
With that, when we're talking about data accuracy, remember making sure that we have that same operational definition we talked about in our last module. So in my city example, the city said eight minutes and 00 seconds and the agency said eight minutes and 59 seconds. That means the agency thought they were far more accurate than the city thought they were. So when we do our measurement system analysis, we're doing a second check of data accuracy. There are two ways that we do this. So within Gauge R&R, we have reproducibility, which is where we have different operators measuring the same data. In my example, you have the city and the agency looking at the same times for completion, and they should come up with the same number. If they do not, there is something faulty in your data collection system and you need to go relook at it. Generally, it is operator error, meaning the people who are doing the data collection itself. The other way is repeatability. This is where you're going to look at operator consistency. So you're going to have the same operator,
but you're going to look at different yet similar data. An example of this would be, if we think back to our room service example: we have the same person answering the phone and running the food up to the room, but on different nights. You should see consistent answers in your data. So we're looking at processes that, assuming they are in control and don't have huge amounts of variability (we'll talk a little bit about process control later on, when we get to statistical process control), are arguably similar; processes that have not had interventions will have similar results. You should be able to make a pizza in about the same amount of time. So if you made a pizza in 20 minutes, but then the next time you made it in 40 minutes, the first question that you need to ask is: do you have consistency in the operator? Say, did you get an ambulance there in eight minutes, or in eight minutes and 59 seconds? The next question that you have to ask is: was there really significant variability?
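A minimal way to sketch that repeatability question in code. The pizza times and the 25% consistency cutoff here are illustrative assumptions, not a Six Sigma standard:

```python
# Hypothetical times (minutes) for the same operator making a pizza on five nights.
times = [20, 22, 19, 40, 21]

mean_time = sum(times) / len(times)

# Sample standard deviation as a simple measure of spread.
variance = sum((t - mean_time) ** 2 for t in times) / (len(times) - 1)
std_dev = variance ** 0.5

# Assumed rule of thumb: flag the operator as inconsistent when the spread
# exceeds 25% of the mean. The 40-minute pizza trips this check.
consistent = std_dev <= 0.25 * mean_time

print(f"mean={mean_time:.1f} min, std={std_dev:.1f} min, consistent={consistent}")
```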
So, to recap both reproducibility and repeatability: different operators would be if you and I measured the exact same thing. You answer the phone, note the time, run upstairs, and knock on the door; meanwhile I watch you answer the phone, look at my watch, and watch you knock on the door. We should get the same values. So, two ways: inter-operator agreement is reproducibility, and operator consistency is repeatability. These are going to be the ways that you measure your data accuracy. This is all very, very important, because the entire premise of your Lean Six Sigma project is based on your data.
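Here's a small sketch putting both R's side by side: two operators each time the same three room service deliveries twice. The numbers are made up, and the simple spread and gap summaries below stand in for the full ANOVA-based Gauge R&R calculation:

```python
from statistics import mean, pstdev

# Hypothetical data: each operator times the same three deliveries twice (minutes).
measurements = {
    "you": {"room_101": [12.0, 12.4], "room_202": [15.1, 14.8], "room_303": [9.9, 10.2]},
    "me":  {"room_101": [12.1, 12.3], "room_202": [15.0, 15.2], "room_303": [10.0, 10.1]},
}

# Repeatability: how much the SAME operator varies across repeat measurements.
within_spreads = [
    pstdev(trials)
    for per_item in measurements.values()
    for trials in per_item.values()
]
repeatability = mean(within_spreads)

# Reproducibility: how much DIFFERENT operators disagree on the same item.
between_gaps = [
    abs(mean(measurements["you"][item]) - mean(measurements["me"][item]))
    for item in measurements["you"]
]
reproducibility = mean(between_gaps)

print(f"repeatability (avg within-operator spread): {repeatability:.2f} min")
print(f"reproducibility (avg between-operator gap): {reproducibility:.2f} min")
```

Small numbers on both lines suggest the measurement system, not just the process, is behaving consistently.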
So if we feel like your data is not trustworthy, then your whole project is going to be undermined. The other thing is that if your data is not trustworthy, you cannot draw conclusions from it. If it's something where you're like, well, Bob says that this one was a five, but Kat says that this one was a ten, then with that subjectivity you're not going to be able to make strong decisions regarding your solutions, which will impact your objectives and benefits. So what you want to do is always have an independent verification of your data collection system. The way that I usually do this
is with reproducibility. I tend to favor going through with my data collection, then having someone else go through the data collection using the collection plan that I wrote, and comparing the results. Because you will have people completing the data collection plan for you, you're going to want to use the plan that you wrote and make sure it's clear, so that your operators understand what it is they're measuring.
And then, like I said, I tend to lean towards reproducibility, where you have different operators looking at the same data, more than repeatability. That said, there is a bias towards repeatability when you're starting to look at some of the inter-shift errors. So you ask yourselves: is that eight minutes, or is it eight minutes and 59 seconds? You want to look at the operator who's actually doing the measurement to make sure that the measures aren't moving. There tends to be a little bit of subjectivity in that as well.
So with that, today we went over validating our data collection systems. We know that there are two ways to do it: repeatability and reproducibility. And we want to do this because it is the basis for all of the activity in our projects moving forward. In our next module, we're going to jump into statistics, so I will see you guys there.