Time
3 hours 49 minutes
Difficulty
Beginner
CEU/CPE
3

Video Transcription

00:00
Welcome to the ITIL framework updated course from Cybrary.it.
00:05
My name is Daniel Reilly, and I'm your subject matter expert on the ITIL framework.
00:09
In this video, I'm going to talk about the service transition phase: KPIs and monitoring.
00:17
So under the change management process, we're going to keep
00:21
a count of the major changes that have been assessed by the Change Advisory Board,
00:27
and then we're going to keep a count of the number of meetings that the Change Advisory Board has held.
00:34
We're also going to want to keep a count of the number of open requests for change (RFCs) that we have in our environment.
00:42
And then we will want to keep a count of the mean days that those RFCs stay open.
00:48
Now, this is just the sum of the days
00:51
that a request for change is open, divided by the total number of RFCs. So you take the sum of all the days for all the RFCs that are open and divide it by the total number of RFCs that are open. This gives you an average,
01:10
an idea of how
01:11
quickly you're responding to RFCs in your environment.
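A quick sketch of that calculation in Python, with made-up numbers for illustration:

    # Mean days open: sum of the days each open RFC has been open,
    # divided by the number of open RFCs (sample values are hypothetical).
    open_rfc_days = [3, 10, 7, 21]
    mean_days_open = sum(open_rfc_days) / len(open_rfc_days)   # 10.25 days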
01:18
Then we'll keep a count of the accepted RFCs. These are ones that have been closed and approved for development, and on the inverse, we're going to keep a rejected RFC count.
01:30
Now, you can derive the total count of closed RFCs by adding these two together.
01:38
And finally, for this we're going to take a count of the emergency changes that we have allowed through our environment. Hopefully, this number is going to be close to zero.
01:49
Now for project management KPIs, we're going to keep a count of the major projects that we're planning to release.
01:57
And then we're going to keep a count of the projects that we've closed out as well.
02:01
Of those, we're going to take a mean lifetime, which is based on the time a project takes to move through the entire life cycle. We're going to sum all of those lifecycle times up, and then we're going to divide by the number of projects we've closed out.
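Sketched the same way in Python, with hypothetical lifecycle times:

    # Mean project lifetime: total lifecycle time across closed projects,
    # divided by the number of closed projects (values are hypothetical).
    closed_project_days = [120, 90, 200]
    mean_lifetime = sum(closed_project_days) / len(closed_project_days)   # ~136.7 days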
02:21
So the number of days that a project is delayed is a measure of the estimated completion date versus the actual completion date.
02:31
If you take the actual completion date and subtract out the expected completion date,
02:38
you'll get a number of days. If it's less than zero, you were able to deliver your project early; congratulations. If it's zero, that means you hit your target exactly. And if it's over zero, that means your project was late.
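A small worked example in Python, using hypothetical dates:

    # Days of delay: actual completion date minus expected completion date.
    # Negative = early, zero = on time, positive = late (dates are hypothetical).
    from datetime import date
    expected = date(2020, 6, 1)
    actual = date(2020, 6, 8)
    delay_days = (actual - expected).days   # 7, so this project was a week late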
02:53
The mean days of delay is related. You will take the sum
02:58
of those days across the different projects and then divide it by the number of closed projects as well.
03:07
So the dollar adherence to the budget is similar, in that it's the difference between the expected and actual spend.
03:15
If you get a number that is lower than zero, that means you overspent on your budget. Zero means you hit your budget exactly. And anything over zero means you went under budget on your project.
03:31
Now, the mean budget adherence is going to be the sum of those above values
03:38
across the different completed projects,
03:43
and then we're going to divide by the number of completed projects that we have.
03:47
Now, it's important to understand that if you're consistently under budget, it's okay, and it's possible to get a negative mean value here.
03:59
And to find a variance, which is simply the distance away from zero, you can take the absolute value here. Even if it's a negative number, this will simply tell you how far away from your target you were.
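Here is a rough Python sketch of the budget adherence, its mean, and the variance described above, with hypothetical spend figures:

    # Budget adherence per project: expected spend minus actual spend,
    # so negative = over budget, zero = on budget, positive = under budget.
    expected_spend = [10000, 5000, 8000]    # hypothetical figures
    actual_spend = [11000, 4500, 8000]
    adherence = [e - a for e, a in zip(expected_spend, actual_spend)]   # [-1000, 500, 0]
    mean_adherence = sum(adherence) / len(adherence)                    # about -166.7
    # Variance as distance from target, using the absolute value.
    mean_variance = sum(abs(x) for x in adherence) / len(adherence)     # 500.0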
04:15
So under the release and deployment process, we're going to keep a count of the releases that we've finished, and a lot of times we'll see this come up as a version tag or version info.
04:28
The example here is version 1.3.6-x64.
04:34
This is fairly common, called a major.minor.patch version scheme,
04:40
and I've added the tag, which a lot of companies use if they have multiple architectures that they support.
04:48
So this is a count of the different types of releases we see for this service. It has had one major release, three minor releases, six patch releases, and it's tagged for the x64 architecture.
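As a rough illustration, splitting a version string like this apart in Python (the exact format varies between organizations):

    # Break a major.minor.patch-tag version string into its parts.
    version = "1.3.6-x64"
    number, _, tag = version.partition("-")
    major, minor, patch = (int(part) for part in number.split("."))
    print(major, minor, patch, tag)   # 1 3 6 x64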
05:06
And this is just a way of keeping track of how far along in a life cycle
05:14
a service might be.
05:17
So the time that it takes to
05:23
release
05:25
is a count of the days, or in some instances hours,
05:31
that it takes between the time a service is submitted for deployment and the time that the deployment is ready for verification.
05:46
We're going to take the sum of those times, and we're going to divide it by the number of the releases that we've done and that will give us a mean time to release, which just describes the average amount of time that it takes to go from a request for deployment to deployment,
06:05
verification.
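A minimal sketch of that average in Python, assuming we recorded each release's duration in hours (the values are hypothetical):

    # Mean time to release: total time from deployment request to
    # "ready for verification", divided by the number of releases.
    release_hours = [6, 12, 9, 3]
    mean_time_to_release = sum(release_hours) / len(release_hours)   # 7.5 hours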
06:09
Then we're going to keep a count of the number of release rollbacks that you've had to perform.
06:13
And again, we're hoping that this is going to be close to zero with our controls in place.
06:19
Then we're going to keep a percent of the automated release procedures. And this is just a count of the number of automated steps that we have in our release life cycle
06:32
over the total number of steps in the life cycle.
06:38
And I think a good balance for this one is probably around 80%.
06:42
You wouldn't want to go to 100% automated release. You do want to have
06:48
expert eyes on the situation.
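The percentage itself is a simple ratio; a hedged Python sketch with hypothetical step counts:

    # Percent of automated release steps: automated steps over total steps.
    automated_steps = 16
    total_steps = 20
    percent_automated = automated_steps / total_steps   # 0.8, near the suggested balance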
06:53
So in the validation and testing phase, we're going to keep a count of the tests per release component.
07:00
And then we're going to take a percent of failed component tests over our total number of tests. That's the count of failed tests divided by the total number of tests. This is a constrained percentage between zero and one, where we're hoping to get closer to zero failed tests.
07:20
We're going to take the mean failed test percentage, which is basically the failed test percentage summed over releases and divided by the number of times we've performed releases.
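A short Python sketch of both percentages, with hypothetical test counts:

    # Failed test percentage per release, and its mean across releases.
    failed_per_release = [2, 0, 1]
    total_per_release = [40, 35, 50]
    failed_pct = [f / t for f, t in zip(failed_per_release, total_per_release)]
    mean_failed_pct = sum(failed_pct) / len(failed_pct)   # ~0.023, ideally close to zero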
07:35
Then we're going to get the number of failures after releases, which is: after we release some code, if some
07:45
system failure is reported that is linked back to the component release that we've just performed, that would go towards this metric.
07:59
We're going to have the mean time to release fix, which is: when there's a failure reported, we're going to keep track of the time from when the failure is reported until the time we get the release fixed and back out for verification.
08:16
For all the failures that we've had, we're going to take that time, sum it up, and then divide by the number of failures that we have.
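Sketched in Python with hypothetical fix times:

    # Mean time to release fix: time from a failure report until the fix is
    # back out for verification, averaged over the number of failures.
    fix_hours = [4, 10, 2]
    mean_time_to_fix = sum(fix_hours) / len(fix_hours)   # about 5.3 hours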
08:26
In the asset and configuration management KPIs, we're going to keep a count of the weeks between the verification cycles in our configuration management.
08:37
Normally, you want to verify at least once a year,
08:43
but,
08:43
depending on the size of your organization, you might verify more frequently than that in sections.
08:54
So we're going to keep a count of the number of verification cycles completed, like I was discussing just a moment ago,
09:01
and then we're going to keep a count of the hours that we've spent on component verification.
09:05
Ideally, we would like to be able to verify more of our environment without spending
09:13
more hours on the verification. So any way we can be more efficient in our configuration verification will help this out.
09:24
Now we're going to keep a count of the incidents where the root cause that we identify is misconfiguration, and these could be operational or security incidents.
09:35
And we really want to make sure that we're only counting misconfigurations, meaning settings or setups that we had documented
09:46
to be done one way, but that are not
09:50
configured that way
09:50
in reality.
09:54
Now, we can keep a percentage of our environment that's covered by our configuration management.
10:00
And that's just the number of components that we have covered over the total number of components that we have
10:07
listed in our environment.
10:09
And this is a constrained percentage again, between zero and one. On this one, we're hoping to get closer to 100% of our components covered by some kind of configuration management.
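A last quick Python sketch, with hypothetical component counts:

    # Configuration management coverage: components under configuration
    # management over the total components listed in the environment.
    covered_components = 950
    total_components = 1000
    coverage = covered_components / total_components   # 0.95, aiming for close to 1.0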
10:24
With that, we've come to the end of this video. I'd like to thank you for watching. And as always, if you have any questions, you can contact me on Cybrary.it; my username is twarter, T-W-A-R-T-E-R.

Up Next

Axelos ITIL Foundations

This ITIL Foundation training course is for beginners and provides baseline knowledge for IT service management. It is taught by Daniel Reilly, one of the many great cybersecurity instructors who contribute to our digital library.

Instructed By

Daniel Reilly
Information Security Manager
Instructor