# 10.6 Hypothesis Testing


Video Transcription

00:00

Hi, guys. Welcome to improve phase hypothesis testing. I'm Catherine MacGyver. Today you're going to gain an understanding of the purpose of hypothesis testing and an awareness of the statistical tools for hypothesis testing. To orient yourself within the course: if you remember back to our analyze phase, we had a lesson on hypothesis development

00:20

where we explore the idea of

00:22

creating questions around the relationships between our independent and our dependent variables, so our Xs and our Ys: if we do X, what do we see in Y, or what do we think we will see in Y? If you remember, I briefly mentioned that the way you write your hypothesis helps dictate which test you do.

00:40

We're not going to get into that in this module; we're going to look exclusively at hypothesis testing.

00:45

Is there a change?

00:47

There is no change, which is our null hypothesis, and there is a change, which is our alternative hypothesis. If you need a quick refresher on this language, you're going to want to go back to the analyze phase lesson on hypothesis development.

01:00

All right, so before we start talking about statistical significance and hypothesis testing, I want to specifically call out that there is a difference between practical significance and statistical significance.

01:14

I think this is really important to hear, because a lot of organizations want to push toward being a world-class organization at these higher sigma levels, these 4.5s, these fives, these sixes, and with that,

01:25

you're going to want to show statistical significance. But practical significance is as meaningful, if not better, than statistical significance. Let me explain: practical significance is when the improvements that you make have a demonstrable,

01:42

applicable change to your process. These can be things that can't be measured, like employee satisfaction with 5S. It's very hard to get a statistical significance level off of

01:55

"are our employees more content when they come to a clean, well-organized workplace?" When we start talking about some of the other intangibles, so the impact of visual management and what that does for your workplace and the smooth operation of your processes, these are things that cannot be shown

02:14

with statistical significance, because of the way that you do your measurement. With that said, they're still important. You still see positive benefits to your organization from doing these improvement efforts, but you can't necessarily show statistical significance. So, conversely,

02:32

statistical significance is where you are looking at groups of data

02:38

to see whether or not, at a certain confidence level, you can determine, or strongly determine, that for the relationship in your hypothesis you either fail to reject the null hypothesis or reject the null hypothesis.

02:53

We talk about "failing to reject" the null hypothesis rather than "accepting" it, because we can't ever definitively say that this is the only relationship and that there's not some other factor we're not considering. So from a terminology standpoint: for practical significance, you're going to say yes, we know this works, we're happy.

03:15

From a statistical significance standpoint, we are either rejecting, which is good, meaning our alternative hypothesis is more than likely accurate, or we're failing to reject, which means we are not seeing a change in the data.

03:28

So with hypothesis testing, we are confirming the statistical significance of those observations. That's different from practical significance, where the change does have a positive impact, but for whatever reason we either don't need a statistical significance interval or we can't get one because of the types of measurements we're doing.

03:46

But what we're looking for is whether or not the observations that are made. So the measurements that we capture

03:53

could have happened by any cause other than the improvement or intervention that we made to the process. We talk a lot about how large data differences may be exceptions rather than the norm, while subtle data differences can show process shifts, where we start seeing trending.

04:14

The reason why we do this is, of course, that we want to validate that our improvement work is working, or potentially not working. So what we're looking for is: are we seeing a repeatable impact on our process? And we want to do this as

04:29

early as possible, but definitely before full implementation or rollout if we're going to show statistical significance. So we're going to want to do this with our pilot data.

04:38

That being said, keep in mind that practical significance is just as meaningful. The expectation of statistical significance comes in with higher-level Lean Six Sigma practitioners; for black belts specifically, there is an expectation that they're able to show it for all of their project results.

04:58

So if you remember back to our hypothesis development module, we talked about the null and the alternative hypotheses, where we compare two or more sets of data that relate to the hypothesis statement we developed. We're going to be looking at either multiple groups within the same process, so either different shifts or different locations,

05:16

or even potentially different people, when we start talking about

05:23

internal benchmarking. Then we want to look at before-and-after changes: this is what our baseline was, so what we captured in our measure phase, and this is what we're re-measuring as we start to play around with our solutions and improvements. But remember, the null hypothesis is that nothing changed

05:43

By doing this intervention,

05:45

there's no change in the process performance. The alternative hypothesis is that our intervention changed the results in some way. Remember, we're not looking at positive or negative right now, just whether there is a delta, a difference.

06:00

So when we talk about test selection, and I promised this was going to stay stats-light, but as a project team member you want to be at least semi-informed about which tests are being used: if you're doing a before-and-after, so you took a baseline, did an intervention, and are now re-measuring, you're going to do a paired t-test.

06:19

This says that these sets of data have some sort of relationship

06:24

to each other.
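As a sketch of what this looks like in practice, here is a paired t-test in Python using `scipy.stats` on made-up pilot data (the cycle-time numbers and the 0.05 threshold are illustrative assumptions, not from the lesson):

```python
# Hypothetical pilot data: cycle time in minutes for the same ten orders,
# measured before and after the process intervention (illustrative numbers).
from scipy import stats

before = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4, 12.2, 12.6, 12.0]
after  = [11.2, 11.0, 11.6, 11.3, 11.5, 11.1, 11.4, 11.2, 11.7, 11.1]

# Paired t-test: each "after" measurement is linked to its "before" partner.
t_stat, p_value = stats.ttest_rel(before, after)

# A common convention is to reject the null hypothesis when p < 0.05.
if p_value < 0.05:
    print("Reject the null hypothesis: the intervention changed the process.")
else:
    print("Fail to reject the null hypothesis: no detectable change.")
```

Because each row is the same order measured twice, the pairing removes order-to-order variation and makes the test more sensitive to the shift.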

06:26

If you are comparing two sets of data, so, say, a baseline and a different location, you're going to want to do an unpaired t-test. What that says is that these sets of data

06:39

do not have a relationship to each other; they're two completely different groups. You'd see this if, for example, you were comparing boys and girls in a school and their performance. That's going to be an unpaired t-test.
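A sketch of the unpaired case, again with scipy and invented numbers (two independent plants with no row-by-row pairing; all data is illustrative):

```python
# Hypothetical: daily defect counts at two locations running the same process.
# The groups are independent, so there is no pairing between rows.
from scipy import stats

plant_a = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3]
plant_b = [4.0, 4.2, 3.9, 4.1, 4.3, 3.8, 4.0]

# Unpaired (independent-samples) t-test.
t_stat, p_value = stats.ttest_ind(plant_a, plant_b)

if p_value < 0.05:
    print("Reject the null: the two locations perform differently.")
else:
    print("Fail to reject the null: no detectable difference.")
```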

06:49

If you are looking at three or more sets of data, so if you are implementing multiple solutions, you're going to want to do an ANOVA, or analysis of variance. You need to have continuous data for paired t-tests, unpaired t-tests, and ANOVA.
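For the three-or-more-groups case, a one-way ANOVA with scipy might look like this (the three solutions and their timings are hypothetical):

```python
# Hypothetical: processing times (minutes) under three candidate solutions.
from scipy import stats

solution_1 = [5.1, 5.3, 5.0, 5.2, 5.4]
solution_2 = [6.0, 6.2, 5.9, 6.1, 6.3]
solution_3 = [5.1, 5.2, 5.0, 5.3, 5.1]

# One-way ANOVA: the null hypothesis is that all group means are equal.
f_stat, p_value = stats.f_oneway(solution_1, solution_2, solution_3)

if p_value < 0.05:
    print("Reject the null: at least one solution performs differently.")
else:
    print("Fail to reject the null: no detectable difference between solutions.")
```

Note that ANOVA only tells you that some group differs, not which one; a follow-up comparison would be needed for that.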

07:06

So if you remember back to types of data, we talked about how continuous is the gold standard.

07:12

You must have this. You can run these tests without it, but you will get some wacky, unreadable results, or hopefully you will recognize that they are wacky, unreadable results and not make decisions from them. Conversely, if you're working with categorical data, so either ordinal or

07:30

nominal (nominal is a little bit different, because nominal truly is attribute data), so if you're working with ordinal or categorical data,

07:35

you're going to do a chi-square test, which is a comparison of the expected counts and the actual counts. So if you're looking at how many complaints we received because the product was the wrong color: how many did we expect based off of historical data,

07:51

And how many have we now received based off of our improved process data?
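The expected-versus-actual comparison is a chi-square goodness-of-fit test; a sketch with scipy and invented complaint counts (categories and totals are assumptions for illustration, and this test requires the observed and expected totals to match):

```python
# Hypothetical complaint counts by category after the improvement, versus the
# counts we would expect from historical data (same total in both lists).
from scipy import stats

observed = [30, 40, 30]   # e.g. wrong color, late delivery, damaged
expected = [50, 30, 20]   # scaled from historical proportions to the same total

# Chi-square goodness-of-fit: compares actual counts against expected counts.
chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)

if p_value < 0.05:
    print("Reject the null: the complaint pattern has shifted.")
else:
    print("Fail to reject the null: counts match historical expectations.")
```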

07:56

So those are the types of tests you're going to listen for as your facilitator leads you through statistical testing, because your organization is working toward being a world-class organization.

08:07

So with that, remember that hypothesis testing is a statistical exercise. We're not as focused on practical solutions, so whether or not there is a positive impact for our organization, as compared to whether or not there is a

08:26

numerical difference between the two datasets or measures. You need to understand what type of data you are collecting to help you select the appropriate statistical test, or hypothesis test.

08:37

So if you remember back to our module on types of data, it's important to understand that and be really clear on what you're looking at, because if you run those tests with the wrong type of data, you get wacky results that will give you inaccurate outcomes. Hypothesis testing really is a black belt area, so if you are

08:56

super passionate about this idea and want to do this for your organization, continue doing your training.

09:03

Get to your black belt.
