Hi, guys. Welcome back. I'm Katherine McKeever, and this is your Lean Six Sigma Green Belt course. Today we're gonna go over hypothesis testing.
So we have developed our hypotheses. You know we've gone through the root cause analysis, we're firmly in our Analyze phase, and based off of the outcomes from that root cause analysis, we've developed some hypotheses. So now we're going to test them, and we're going to see whether or not there is any statistical significance to those hypotheses.
So we're gonna pilot it a little bit, and that's how we're gonna get our comparison data; or we might already have data sets we can compare. Anyway, we'll talk through this, and you're going to be able to understand type one and type two errors. I'm going to really drill in on those, because this is one of the most common mistakes I see Green Belts making.
Hypothesis testing tells us if there is a statistical significance between two data sets. This is very important, because there can be an operational significance, and that might be good enough. What we're looking for is: did what we changed have an impact?
Did implementing that solution actually give us the results that we're looking for?
And when we talk about the difference between statistically and operationally significant, here's what we mean:
statistical significance is whether or not we can mathematically prove that what we did changed the outcome, and that what we are seeing is in fact a result of that change: causation rather than random variation.
So statistical significance is the enemy of the Hawthorne effect. We've talked about it often throughout the course: the Hawthorne effect is when people tend to perform better because their boss is watching.
Any benefits from the Hawthorne effect are not statistically significant. We wouldn't be able to say,
"Because my boss is spying on my calendar, I am working more." We would say that that is operationally significant. It is important for you as a Lean Six Sigma practitioner to be really clear with your project sponsor and/or your champion
about whether or not operationally significant is good enough.
My recommendation to you is yes. If you start seeing shifts toward your project objectives, regardless of whether or not you have enough of a change mathematically to conclusively say
"yes, this was because of our intervention" or "no, it wasn't," if you are seeing an improvement that is sustained,
I think that improvement is an improvement for your organization.
That being said, what you are looking at here are the most commonly used statistical significance tests.
You'll notice a couple of things in here. For discrete and categorical data, we look at chi-squared. For continuous data, we have a lot of options. We're not going to do all of these; that's going to be for your statisticians, your data analysts, your research scientists.
There are a lot of options out there because these are all still commonly used,
but we're gonna use just these few. You will get really, really, really good at them in your Black Belt. As a Green Belt, I'm going to tell you: remember the Data Analysis ToolPak we added a few lessons ago? All of these are available in there. So what is really important for you
is to understand which test you are looking for.
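As a quick way to keep that slide straight, here is the pairing of data situation to test as a lookup table. This is just a study aid I'm sketching, with a hypothetical function name, using only the pairings described in this lesson:

```python
# Illustrative cheat sheet: which test goes with which data situation.
# The function name `which_test` is made up for this example.
TEST_GUIDE = {
    ("discrete/categorical", "actual vs. expected"): "chi-squared",
    ("continuous", "difference between two groups"): "t-test (paired or unpaired)",
    ("continuous", "difference across 3+ groups"): "ANOVA",
    ("continuous", "relationship between X and Y"): "regression",
}

def which_test(data_type, question):
    """Look up the usual test; punt to an expert for anything else."""
    return TEST_GUIDE.get((data_type, question), "ask your statistician")

print(which_test("discrete/categorical", "actual vs. expected"))  # chi-squared
```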
Discrete data tends to not be very easy to test, so you're going to do a chi-squared. And what a chi-squared test is, is it's looking for a difference between your actual and your expected. So if you think about your binomial distribution where we said our probability is 0.5:
if we actually ran it and we saw that we were at, say, 0.3,
a chi-squared test would say we were expecting 0.5, we saw 0.3, and there is some statistical significance there. As we switch over to continuous data, we're looking at a couple of different options: what is the relationship, and what is the difference?
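Sticking with that chi-squared example for a moment, here is a minimal sketch in Python using only the standard library. The 30-versus-70 counts are my 0.3-observed / 0.5-expected scenario scaled to 100 trials, and the `erfc` p-value shortcut applies only to this two-category (one degree of freedom) case:

```python
import math

def chi_squared_1df(observed, expected):
    """Goodness-of-fit statistic for two categories (1 degree of freedom).
    The p-value uses the 1-df chi-squared survival function, erfc(sqrt(x/2))."""
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Expected a 0.5 rate over 100 trials (50 successes / 50 failures),
# but actually observed a 0.3 rate (30 successes / 70 failures).
stat, p = chi_squared_1df(observed=[30, 70], expected=[50, 50])
print(stat, p)  # stat = 16.0, and p is far below 0.05: significant
```

A p-value below the usual 0.05 cutoff says the gap between actual and expected is very unlikely to be random variation.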
So "what is the relationship?" is what we're going to focus on as a Green Belt, because y = f(x),
and Lean Six Sigma is primarily in the relationship game. "What is the difference?" tells us the magnitude of the shift. So paired and unpaired t-tests: these are going to be kind of your go-to when you're comparing two data sets. ANOVA is your analysis of variance, and what it looks at is
whether several data sets really differ, by comparing the variance within each group to the variance between the groups. ANOVA is the go-to of the Black Belt, because remember, Six Sigma
is the war on variance. So ANOVA tells us whether or not we did, in fact, change things beyond what random variation would explain. For you as a Green Belt, the question is: what is the relationship, and can we change it?
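To make the unpaired comparison concrete, here is a minimal sketch of Welch's two-sample t statistic using only the Python standard library. The cycle-time numbers are invented for illustration; in practice the Data Analysis ToolPak or a statistics package computes this, plus the p-value, for you:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Unpaired (Welch's) two-sample t statistic: the difference in means
    scaled by the combined standard error of the two samples."""
    se = math.sqrt(variance(sample_a) / len(sample_a)
                   + variance(sample_b) / len(sample_b))
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical process cycle times (days), before and after an improvement.
before = [12, 14, 13, 15, 14, 13]
after = [10, 11, 9, 10, 11, 9]
t = welch_t(before, after)
print(round(t, 2))  # about 6.22, far beyond the roughly-2.0 critical value
```

A |t| much larger than about 2 (for samples this size) means the shift in means is unlikely to be random variation, so the improvement looks statistically significant, not just operationally significant.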
All right, Type one and type two errors.
A type one error is a false positive: we reject the null hypothesis when there is no statistical significance. So we say this actually did something when it didn't. A type two error is a false negative: we fail to reject the null hypothesis
when there is a statistical significance.
So we said, "No, there's nothing going on," but there really was something going on.
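To see a false positive in action, here is a small simulation with made-up numbers and a rough |t| > 2 cutoff. Both groups are drawn from the same process, so the null hypothesis is true by construction, yet the test still "finds" a difference roughly 5% of the time; those alarms are type one errors:

```python
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Unpaired two-sample t statistic."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

random.seed(42)  # fixed seed so the run is repeatable
trials, false_positives = 1000, 0
for _ in range(trials):
    # Both samples come from the SAME distribution: the null is true.
    group1 = [random.gauss(100, 10) for _ in range(20)]
    group2 = [random.gauss(100, 10) for _ in range(20)]
    if abs(welch_t(group1, group2)) > 2:  # roughly the 5% significance cutoff
        false_positives += 1

rate = false_positives / trials
print(rate)  # type one error rate, lands near 0.05: alarms with no wolf
```

That 5% is exactly what the significance level promises: even when nothing changed, about one test in twenty will cry wolf.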
I say this is very important because it overlaps very closely with the idea of operational significance. When you see a false positive, we say, "Oh my God, we saved the world!" but really, it's not sustainable. So remember I mentioned watching whether your improvement is sustained or trending;
this is what you're gonna want to look for.
A false positive is "We saved the world! Oops, no, my bad." A false negative is "No, we're not really that effective," when you are, in fact, effecting change. So keep these in mind when you're thinking about statistical significance versus operational significance.
This is how I remember type one and type two errors, so I'm gonna pass it on to you guys.
The boy who cried wolf made both a type one and a type two error. If you remember the story, he was the one that sat upon the hillside, got really bored, and decided it would be funny to yell "Wolf!" and have everybody run to try and save their sheep. He did this a few times, and then there really was a wolf,
and nobody believed him.
So both type one and type two errors: a false positive, where we said there was a wolf when there wasn't, and a false negative, where there was a wolf but nobody acted on it.
With that being said: today we went through hypothesis testing in depth. You understand the difference between statistical and operational significance, you know that regression is gonna be your thing, and you can recognize type one and type two errors; they're in the forefront of your mind.
And with that, we're going to move into probability,
so I will see you guys there.