Time
8 hours 35 minutes
Difficulty
Intermediate
CEU/CPE
9

Video Description

This lesson discusses gathering audit evidence and focuses on two types:
- Direct: proves the existence of a fact
- Indirect: more circumstantial and based on inference

This lesson also discusses statistical sampling techniques:
- Random sampling
- Cell sampling
- Fixed interval sampling

Participants also learn about what types of evidence are typical for IS audits and how to use computer-assisted audit tools (CAATs). This lesson also discusses electronic discovery and how it shapes the scope of the audit process. Participants also learn about ways to grade the evidence discovered in an audit, as well as about the lifecycle of evidence. [toggle_content title="Transcript"] Okay, so now I'm going to talk about how we gather the audit evidence. We need to understand what's considered tangible evidence and what constitutes reliable evidence. If you think about the basic function of evidence, it's to either prove something or disprove something, which is also to say that if we don't have evidence, we don't have proof. Just like it would apply to a criminal investigation of any type, or trying to find a fact related to some audit objective, the same concept applies here. Another important thing to point out is that an auditee starts with zero points and then builds up to their final score. As each point gets proven, it accumulates toward the final report.

We need to understand the different types of evidence. Direct evidence doesn't actually require an explanation; it proves its existence. It's self-explanatory, if you will. Then we have the concept of inference, which is related to direct evidence. This means that you can draw a conclusion based on the evidence given. Then we have indirect evidence. This means that we have a hypothesis, a guess, or a theory about what the evidence actually means, so that might include inference, and it might include presumption.
This is still based on circumstances and whatever facts can be gathered, but it's not as strong as direct evidence. Another phrase for indirect evidence is circumstantial evidence. It's a reasonable bit of proof, but it doesn't prove things as conclusively as direct evidence would.

Then we have to think about statistical sampling. What does this really mean in the context of doing an audit? It's basically a mathematical technique that the auditor employs in order to get enough data to satisfy a requirement for something that's being tested or investigated. Typically, these statistics are presented as percentages. That's the norm, but they could be in other types of units. We have the idea of random sampling, which is self-explanatory: you've got a large body of data and you're just picking some samples from various areas without any rhyme or reason. It's as random as you can make it. Then we have the concept of cell sampling. This is where a sample is taken at some interval: a pre-defined interval. Maybe you're looking for certain types of data once a minute, once an hour, or once a day, whatever the case might be. Then we have fixed interval sampling, which is very similar to cell sampling in that you're using an incremental interval in order to get a similar distribution of samples throughout a body of data. There's also non-statistical sampling. This means that the auditor's judgment, experience, and opinion come into play. That's why this is also called judgmental sampling. The auditor picks the sample size and the methodology for extracting the data, as well as how many items are actually going to be looked at.

Now we have to think about the evidence types that would typically be used in an information systems audit. We can start with documentary evidence, which includes things like transaction logs, regular system logs, financial transaction information, and receipts.
This could be a lot of different things, but it's some kind of documented hard evidence that some event took place. Then we have data extraction. If you've got large data sets, whether it's a log file, data coming out of a database, or some kind of application, you might use certain tools to pull out data that meets certain criteria. Basically, you create some kind of filter, and the filter is applied to all the data and matches only those things that the auditor is interested in. Then we have auditee claims. The client or the auditee says that this is so and puts that down in a written statement. This has its own value as evidence, of course, but it may not be considered as strong a type of evidence as direct evidence would be. You can also look at other documented evidence such as plans, policies, and procedures. These are all things that we discussed earlier, so we should know the differences between a policy, a standard, a procedure, and a guideline, but having these in documented form helps as well, because if these are items that are actually in use within the organization, then they provide valuable evidence. We also have to think about doing testing: compliance testing or substantive testing. This gives direct evidence to say that we looked at something, we performed some operation, or we observed someone else performing that operation. We saw the input, we saw the behavior, and then we saw the output. The last item is the auditor observing someone in the performance of their duties; maybe they perform a particular process, so the auditor watches them do it, perhaps more than once, to make sure that it works the same every time.

Again, we see the mention of our CAATs: computer-assisted audit tools. These can help in a lot of different ways. You might have tools that can help with understanding system configuration, or that can do certain kinds of vulnerability scanning, network scanning, running sniffers, or intrusion detection.
They even apply to systems that allow you to trace the functions of software by putting a tracer on certain operations to see how it goes through the operations that the software performs. There are also tools to analyze the configuration of applications. This goes into the area of application pen testing, fuzzing, that kind of thing. We can also use tools to get an inventory of all of our software licenses in a given environment. That's an important thing to think about. Maybe you test your password policy. This is just a sampling of some of the capabilities of computer-assisted audit tools. Obviously, there are more features available depending on which vendor's tool is chosen.

Then we have to think about the possibility of using CAATs for continuous auditing. This is analogous to continuous monitoring, where you're looking at your security controls on a continuous basis to find problems as soon as possible and be able to take action as soon as possible. We can start with online event monitors. These look at transaction logs; event management tools such as ArcSight and things of this nature can give you alerts at a moment's notice to let you know that somebody tried to log in as administrator, or that someone has changed a network setting on one of the production servers. These are great tools for alerting you to different events in the environment, and they typically fall into the same group as intrusion detection systems and intrusion prevention systems. Maybe you have a dashboard that shows you the events as they occur. We can also think about embedding audit hooks into software. This is something that the software developers need to do so that they can look for certain events, and when an event happens in the software, they can generate an alert that can be sent to an auditor. This is a way of putting some monitoring within the functionality of the software itself to ease the auditing process.
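The audit-hook idea described above can be sketched in a few lines. This is a hypothetical illustration, not a real product's API: the function and field names are invented for the example. The application calls a hook on every transaction, and transactions matching a watched condition generate an alert for the auditor.

```python
# Hypothetical sketch of an embedded audit hook. The application calls
# audit_hook() on each transaction; transactions matching a watched
# condition are flagged for the auditor. All names here are illustrative.

alerts = []  # in practice this might be a queue or a log shipped to the auditor

def audit_hook(transaction):
    """Flag transactions that look suspicious or unusual."""
    if transaction.get("amount", 0) > 10_000 or transaction.get("user") == "admin":
        alerts.append({"event": "suspicious_transaction", "detail": transaction})

def process(transaction):
    audit_hook(transaction)  # monitoring embedded in the software itself
    # ... normal transaction processing would continue here ...
    return transaction

process({"id": 1, "user": "alice", "amount": 250})
process({"id": 2, "user": "admin", "amount": 50})   # triggers an alert
```

The key design point is that the hook lives inside the application's own processing path, so the auditor sees events as they happen rather than reconstructing them later from logs.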
Then, once an alert gets generated, that transaction can be looked at in more detail because it might be considered suspicious or unusual. The alert is generated so that the event can be investigated more fully. Then we can think about continuous and intermittent simulation, or CIS. That's another audit tool. This means that you can set up certain criteria for events or transactions, and once those criteria are met, an alert gets generated so that the auditor can get some information for further investigation. We can also do snapshot audits. This looks at a series of data capture events, sort of like taking snapshots of something that's moving. We look at the data from this moment, then we take another snapshot and look at the data at that moment, and so on. This shows the sequence that a transaction goes through in order to go from initiation to completion. We also have the embedded audit module. This is another way of interfacing with an application. The auditor can create some dummy transactions and put those into the stream of live data transactions in order to see what the output looks like. If the dummy transaction gets processed in the expected way, it should appear to be correct when it's completed, and then that can be compared against transactions performed with live data to look for any differences in the way that the transactions were processed. Then we have the systems control audit review file with embedded audit modules, also known as SCARF/EAM. This selectively picks audit modules within some application software and samples transactions as needed, depending on the objectives of the auditor.

Alright, so we'll summarize some of the CAAT methods here. We can see we've got online event monitors: these read logs and generate alarms, with very low complexity. The audit hooks in our programs will flag transactions: again, low complexity. Then we've got our continuous and intermittent simulation.
With defined criteria, when a transaction meets those criteria, an alert is generated. This is medium complexity because there's a little bit more involved in setting up a tool like this. Then we have snapshots, which capture data through the various stages of its processing: again, a medium complexity solution. Then the embedded audit module, EAM: producing the dummy transactions, processing them alongside live data, and comparing the results. That is high complexity because it's a little bit more involved to do that. There's more planning, of course, and more analysis as a result. Then we have the last one, which is the systems control audit review file with embedded audit modules. With this, we program various modules to do different audit functions, and, of course, that's the most complex, so we have a high complexity for this one as well. So there are a lot of good choices; depending on your resources and your requirements, you can pick one that suits your needs best.

Alright, so let's move on to electronic discovery. What we're talking about here is the difference between what the auditee and the auditor expect to be discovered during the process of doing the audit. The scope is very important to set so that the discovery process is appropriate in its level of effort: it doesn't go beyond what's required and doesn't fall short of what's required. So it's important to think about the limitations on the scope. For instance, if the scope is too large, it could produce a burden on your production systems. It might include things like recovered deleted data. It could address email records. It could address things that were saved on a backup tape or a backup solution. So the scope could be very far-reaching, and that makes sense: you wouldn't want to limit the scope to an individual system if the data has been moved, archived, logged, or backed up somewhere else. The scope has to adjust accordingly in order to capture the information that's required.
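As a rough illustration of keeping discovery within an agreed scope, the scope can be thought of as filter criteria applied to the collected records. This is a simplified sketch, and the fields, sources, and custodian names are hypothetical:

```python
# Hypothetical sketch: an e-discovery scope expressed as filter criteria,
# so the collection effort stays within what was agreed with the auditee.
from datetime import date

records = [
    {"source": "email",  "custodian": "alice", "date": date(2020, 3, 1)},
    {"source": "backup", "custodian": "bob",   "date": date(2019, 1, 15)},
    {"source": "email",  "custodian": "alice", "date": date(2021, 6, 9)},
]

scope = {
    "sources": {"email", "backup"},   # e.g. include backup tapes, exclude chat logs
    "custodians": {"alice"},          # whose data is in scope
    "start": date(2020, 1, 1),        # date range agreed for the audit
    "end": date(2021, 12, 31),
}

def in_scope(record):
    """Return True only for records inside the agreed discovery scope."""
    return (record["source"] in scope["sources"]
            and record["custodian"] in scope["custodians"]
            and scope["start"] <= record["date"] <= scope["end"])

discovered = [r for r in records if in_scope(r)]   # 2 of the 3 records match
```

Widening the date range or the custodian set widens the scope, which is exactly the trade-off the lesson describes: a broader scope captures moved or archived data but puts more burden on production systems.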
We also have to think about this idea of a claim of privilege. Formulas and business secrets might fall under this category, so there could be some exceptions depending on the situation that's being investigated or audited.

Now, once we've got some evidence, we have to decide how to grade it. We can start with material relevance. This is the logical relationship between the item that's being investigated and the evidence that's gathered. We have to consider the objectivity of the evidence. If the evidence is objectively true, then we don't need to spend much time doing analysis or exercising judgment, because it's already been proven that it's objectively accurate. If more judgment is required, then the evidence becomes less objective. So that makes sense. We have to think about who's providing the evidence. How competent are they? What is the source of this information? The source could be an individual. What is their expertise? What is their experience? We like to get information directly from a client, not through a second-hand or third-hand source that potentially taints the information and dilutes its value. Then we also have to consider the independence of the evidence. Just like an auditor is expected to be independent, the provider of evidence should not have anything to gain or lose by providing the information. If they do have something to gain or lose, then that evidence is not considered as independent as it might otherwise have been. So if we look at the way to grade the evidence, we can start with material evidence: if it's unrelated, it's considered poor evidence; if it's indirect, it's considered good evidence with a low relationship; and the best evidence is direct, so there's no explanation or judgment required, as we talked about a minute ago. Thinking about the objectivity of evidence: if it's subjective, it requires some facts to explain its meaning.
That's pretty standard, but if it's in the best evidence category, it doesn't require any explanation. Then we think about evidence sources: a third party with no involvement constitutes poor evidence; good evidence would be indirect involvement by a second party, a little closer to the source; and direct involvement by the first party constitutes the best evidence. So the closer you get to the source, or to the source directly, the better the quality of the evidence. Then think about the competency of the provider of the evidence. With poor evidence, the person is probably biased. They may be unbiased in the case of good evidence, and they might be unbiased and independent in the case of best evidence: nothing to gain, nothing to lose. If we're analyzing the evidence, poor evidence is analyzed by a novice, an experienced analyst provides good evidence, and an expert analyst provides our best evidence. Then, lastly, we have trustworthiness of low, medium, and high. Obviously, best evidence is the ultimate goal in every case, but that may not always be possible, depending on what the situation is.

Now we think about the life-cycle of evidence. It does go through a life-cycle, and it's important to consider these individual phases. First we start off with identification. We know that some evidence has been discovered, or identified, and it's lending its support to the objectives of the audit. Then that evidence gets collected according to the procedures that were agreed upon and according to the goals of securing the information, respecting confidentiality requirements, and so on. Then we have to preserve this evidence, keeping it in its original state. In the case of a forensic investigation, that's a little bit more complicated. We have to consider things like chain of custody, proper gathering techniques, proper documentation, and so on.
Then, once that's done, the evidence can be analyzed. This can be done with scientific tests, observation, or substantive tests; qualitative, semi-quantitative, or quantitative methods could also be used. Then we think about returning the evidence after it's been analyzed. You have to return the evidence to where it was removed from once the analysis has been done, sort of like the idea that you get some evidence out of the evidence locker, do some experiments or tests with it, and then have to return it to where it was. Now that you've got more information, you think about presenting the evidence. This should support the auditor's report and the auditor's opinion, and, depending on what type of evidence it is, it might have to be returned to the owner when all of this work is finally completed. [/toggle_content]
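The three statistical sampling techniques covered in this lesson (random, cell, and fixed-interval) lend themselves to a short code sketch. This is a simplified illustration over a toy population, not an auditor's tool:

```python
# Sketch of the three statistical sampling techniques from the lesson,
# applied to a body of 100 numbered transaction records.
import random

population = list(range(1, 101))   # records numbered 1..100

# Random sampling: items picked without any pattern or "rhyme or reason".
random_sample = random.sample(population, 10)

# Fixed-interval sampling: every Nth item from a starting point, giving an
# even distribution of samples throughout the body of data.
interval = 10
fixed_interval_sample = population[::interval]   # records 1, 11, 21, ..., 91

# Cell sampling: the population is split into equal-sized cells and one
# item is picked at random from inside each cell (a predefined interval,
# but with a random pick within it).
cell_size = 10
cell_sample = [random.choice(population[i:i + cell_size])
               for i in range(0, len(population), cell_size)]
```

Non-statistical (judgmental) sampling has no formula to sketch: there the auditor's experience drives the sample size and the selection method.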


Up Next

Certified Information System Auditor (CISA)

In order to face the dynamic requirements of meeting enterprise vulnerability management challenges, the CISA course covers the auditing process to ensure that you have the ability to analyze the state of your organization and make changes where needed.

Instructed By

Dean Pompilio
CEO of SteppingStone Solutions
Instructor