Lesson three: measure what matters.
How do we know when we're winning?
When we first started this course, I said that the overall objective of any security education program should be focused on one thing.
Building the threat recognition capabilities of our colleagues throughout the organization. So how do we know when we are winning at this?
You probably thought, "Well, it's about being able to measure whether we're traveling in the right direction in order to meet our objective,"
and you'd be correct. That's the subject of this lesson: designing and capturing metrics that provide assurance that our security education program is delivering on the objectives that we set for it.
This is about more than just capturing attendance statistics.
It's about metrics that create actionable intelligence.
Metrics that create actionable intelligence enable us to make informed decisions about our organization's training programs, based on measurable outcomes related to the behaviors that are being adopted and the skills and know-how that are demonstrably present in our organization.
Gathering meaningful metrics from our security education program requires more than measuring the level of participation in scheduled security education sessions, or how many colleagues clicked on a fake phishing email. There are many ways in which metrics like these can be misleading. For example,
measuring employee participation in security education sessions just gives an indication of who turned up and who did not.
It provides no indication of whether or not they actually learned anything, much less whether or not they can actually apply that learning on a day-to-day basis.
Phishing tests can be compromised on several levels.
Using the number of users who clicked on a fake email versus the number of emails sent can be highly misleading. Firstly, the email may have been ignored by the majority of people in the organization for any number of reasons.
And what if the one person who did click on a phony email then alerted dozens of their colleagues, along the lines of "Hey guys, InfoSec are at it again with those phony emails"?
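To make the point concrete, here is a minimal sketch of why a raw click rate is ambiguous. All figures and variable names are hypothetical, purely for illustration: a low click rate looks reassuring, but it cannot distinguish colleagues who recognized the threat from colleagues who simply never opened the email.

```python
# Hypothetical results from one simulated phishing campaign.
# All numbers are illustrative, not from a real exercise.
emails_sent = 500
clicks = 5        # users who clicked the link
reported = 15     # users who actively reported the email as suspicious
ignored = emails_sent - clicks - reported  # no signal either way

click_rate = clicks / emails_sent
print(f"Click rate: {click_rate:.1%}")  # looks great on its own

# The 480 "ignored" emails tell us nothing: recognized threat, or just unread?
recognition_signal = reported / emails_sent
print(f"Demonstrated recognition: {recognition_signal:.1%}")
```

The contrast between the two percentages is the point: only the reporting figure is evidence of a recognized threat.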
Users often don't learn much from phishing tests.
Think about what happens to someone who does click through on a link in one of these phony emails.
First, they hit a landing page that tells them they have been caught by a fake phishing email.
And often, they then have to click on a link to enroll in further training.
What did they learn from this?
Not much, in my experience. The initial landing page contains no information about where the red flags were in the email that was clicked on by that user,
and then the user is directed away to enroll in remedial training. The experience has taught them very little.
When we design metrics for a program, any program, we need to make sure that the metrics we gather are representative of the objectives that we're aiming for.
So we need to be sure that the metrics we capture are not ambiguous. In the context of information security education, we need to ensure that our metrics are giving an unambiguous picture of two things: our colleagues' behavior,
by capturing metrics that reflect the actions they take,
and their threat recognition capability, by measuring what they really know.
Measuring actions taken on a day to day basis gives a powerful insight into the way in which our colleagues are behaving.
Remember, one of our key aims for security education is to encourage safe behaviors amongst our colleagues.
Measuring things like reports of potential phishing emails and attempts at social engineering gives real insight into what people are actually doing.
It shows two things. Firstly, it shows that they can actually recognize a threat when it manifests itself.
Secondly, it shows that they know how to select the right response when they see the threat.
This is one way in which you can obtain meaningful metrics from a simulated phishing attack.
Rather than focusing on the users who click on the fake email, why not measure the number of reports you get from this exercise?
This is a much more meaningful measure because it shows your colleagues can recognize the threat and know what to do when they see it.
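As a rough sketch of what this report-focused measurement could look like in practice: the record structure and field names below are assumptions for illustration, not the output of any particular phishing simulation platform.

```python
# Hypothetical per-user outcomes from a simulated phishing exercise.
# Field names are illustrative, not a real platform's API.
results = [
    {"user": "alice", "clicked": False, "reported": True},
    {"user": "bob",   "clicked": True,  "reported": False},
    {"user": "carol", "clicked": False, "reported": True},
    {"user": "dave",  "clicked": False, "reported": False},
]

total = len(results)
# The metric we care about: who recognized the threat AND took the right action.
report_rate = sum(r["reported"] for r in results) / total
# The traditional (and more ambiguous) metric, kept for comparison.
click_rate = sum(r["clicked"] for r in results) / total

print(f"Recognized and reported: {report_rate:.0%}")
print(f"Clicked: {click_rate:.0%}")
```

Tracking the report rate over successive exercises gives a trend line for threat recognition capability, rather than a one-off pass/fail snapshot.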
When combined with the micro-learning schedule, which was designed to reinforce these recognition capabilities, you have a powerful set of metrics. The experiential learning approach provides an environment to hone your colleagues' recognition skills and keep them apprised of current threats,
and by using your simulated phishing attack tool, you have the basis to measure whether your colleagues are likely to take the correct action when they do encounter a threat.
In this topic, I want to discuss how we can get metrics that measure the threat recognition capability of our colleagues, so that we can assess our overall vulnerability to cyber threats where end users are the chosen attack vector.
Metrics like these give us an organizational vulnerability assessment to specific threats, moving us way beyond statistics based on training course attendance and taking us towards actionable intelligence.
In lesson two of this module, we looked at the concept of reinforcement: small interventions that reinforce the knowledge and principles covered in the main security education content.
We can capture the metrics we need from these micro-learning reinforcers, drawing actionable intelligence about the capabilities of our colleagues and the organization as a whole.
We can also adapt our use of simulated phishing attack platforms to generate complementary metrics.
Let's now take a walk through this topic of metrics, first using a quick reprise of our micro-learning exercises to show our journey from providing an experiential learning exercise to generating meaningful metrics from it.
Here we're sending out an email to our colleagues with a link to an exercise that we have created; it may have been based on some new threat intelligence that we have acquired.
When the user receives the email, they participate in the short learning exercise, in the case shown, identifying red flags in the simulated email.
At the back end, we capture the overall results of the exercise.
As you can see, we've captured the results of our colleagues' interaction with the exercise, and we have a view of who recognized the red flags with their own skill and who needed some help in this area. By capturing these metrics, we can generate an accurate picture of our colleagues' capability to recognize cyber threats.
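The back-end capture described above could be summarized along these lines. This is a minimal sketch under assumed data: the result fields (flags found, hints used) are hypothetical stand-ins for whatever a micro-learning platform actually records.

```python
# Hypothetical back-end results from a red-flag identification exercise.
# Field names are illustrative, not a real platform's data model.
exercise_results = {
    "alice": {"flags_found": 4, "flags_total": 4, "hints_used": 0},
    "bob":   {"flags_found": 2, "flags_total": 4, "hints_used": 2},
    "carol": {"flags_found": 4, "flags_total": 4, "hints_used": 1},
}

# Split colleagues into those who recognized every red flag unaided
# and those who needed some help, mirroring the view described above.
unaided = [u for u, r in exercise_results.items()
           if r["flags_found"] == r["flags_total"] and r["hints_used"] == 0]
needed_help = [u for u in exercise_results if u not in unaided]

print("Recognized all red flags unaided:", unaided)
print("Needed some help:", needed_help)
```

Aggregating this split across the whole organization, exercise after exercise, is what turns a single learning activity into an ongoing capability measure.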
Let's turn now to a rather different use for a simulated phishing attack platform. Instead of focusing on those colleagues who clicked on a link in one of these emails, why don't we measure who did the right thing?
So we construct and distribute a fake email in the normal way.
But what we are concerning ourselves with is how many of our colleagues recognized the email as spurious and then used the phishing reporting button.
The metrics we gain from this are again highly relevant to our objective of measuring the intrinsic threat recognition capability within our organization. The metrics that we gather from this approach tell us how many of our colleagues recognize the threat when they see it
and then adopt the correct response.
Okay, now just a quick post assessment question on the topics that we've covered in this lesson.
What are the two key metrics that we use to measure the cyber security capabilities of end users?
That's right: threat recognition and response.
Let's now move on to the lesson summary.
In this lesson, we have covered the metrics that we need to generate so that we can obtain assurance that our colleagues are capable of recognizing and responding correctly to cyber threats when they encounter them.
We looked at leveraging our experiential learning approach to maintain users' skills and knowledge and to help them recognize emerging threats.
In the next video, we're going to look at some practical steps that you can take to embed security consciousness in the culture of your organization.
That's it for this lesson of Making It Stick.
Thank you for watching, and I look forward to seeing you in lesson four.