Gathering Audit Evidence
This lesson discusses gathering audit evidence and focuses on two types:

- Direct: proves the existence of a fact
- Indirect: more circumstantial and based on inference

This lesson also discusses statistical sampling techniques:

- Random sampling
- Cell sampling
- Fixed interval sampling

Participants also learn about which types of evidence are typical for IS audits and how to use computer-assisted audit tools (CAATs). This lesson also discusses electronic discovery, which concerns the scope of the audit process. Participants also learn about ways to grade the evidence discovered in an audit, as well as about the lifecycle of evidence. [toggle_content title="Transcript"] Okay, so now I'm going to talk about how we gather the audit evidence. We need to understand what's considered tangible evidence and what constitutes reliable evidence. Think about the basic function of evidence: to either prove something or disprove something, which is also to say that if we don't have evidence, we don't have proof. The same concept applies to a criminal investigation of any type, or to finding a fact related to some audit objective. Another important thing to point out is that an auditee starts with zero points and then builds up to their final score. As each point is proven, it accumulates toward the final report. We need to understand the different types of evidence. Direct evidence doesn't actually require an explanation. It proves a fact on its own; it's self-explanatory, if you will. Then we have the concept of inference, which is related to direct evidence. This means that you can draw a conclusion based on the evidence given. Then we have indirect evidence. This means that we have a hypothesis, or a guess, or a theory about what the evidence actually means. So that might include inference. It might include presumption.
This is still based on circumstances and whatever facts can be gathered, but it's not as strong as direct evidence. Another phrase for indirect evidence is circumstantial evidence. It's a reasonable bit of proof, but it doesn't prove things as conclusively as direct evidence would. Then we have to think about statistical sampling. What does this really mean in the context of doing an audit? It's basically a mathematical technique that the auditor employs in order to get enough data to satisfy a requirement for something that's being tested or investigated. Typically, these statistics are presented as percentages. That's the norm, but they could be in other types of units. We have the idea of random sampling, which is self-explanatory. You've got a large body of data and you're just picking samples from various areas without any rhyme or reason. It's as random as you can make it. Then we have the concept of cell sampling. This is where sampling is done at a pre-defined interval: the body of data is divided into cells, and a random sample is drawn from within each one. Maybe you're looking at certain data once a minute, or once an hour, or once a day, whatever the case might be. Then we have fixed interval sampling, which is very similar to cell sampling, except that you select every Nth item at a fixed increment in order to get an even distribution of samples throughout a body of data. There's also non-statistical sampling. This means that the auditor's judgment, experience, and opinion come into play. That's why this is also called judgmental sampling. The auditor picks the sample size and the methodology for extracting the data, as well as how many items are actually going to be looked at. Now, we have to think about the evidence types that would typically be used in an information systems audit. We can start with documentary evidence, which includes things like transaction logs, regular system logs, financial transaction information, and receipts.
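The three statistical sampling techniques just described can be sketched in a few lines of Python. This is a minimal illustration only; the "transactions" are made-up IDs and the interval sizes are arbitrary:

```python
import random

def random_sample(items, n, seed=None):
    """Simple random sampling: every item has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(items, n)

def fixed_interval_sample(items, interval, start=0):
    """Fixed-interval (systematic) sampling: take every Nth item."""
    return items[start::interval]

def cell_sample(items, interval, seed=None):
    """Cell sampling: split the data into fixed-size cells, pick one random item per cell."""
    rng = random.Random(seed)
    cells = [items[i:i + interval] for i in range(0, len(items), interval)]
    return [rng.choice(cell) for cell in cells]

transactions = list(range(1, 101))  # 100 hypothetical transaction IDs
print(fixed_interval_sample(transactions, 10))  # [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
```

Note that both cell and fixed-interval sampling give an even spread across the data; the difference is that cell sampling keeps a random element inside each interval, while fixed-interval sampling is fully deterministic once the start point is chosen.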
Documentary evidence could be a lot of different things, but it's some kind of documented hard evidence that some event took place. Then we have data extraction. If you've got large data sets, whether it's a log file, or data coming out of a database, or some kind of application, you might use certain tools to pull out data that meets certain criteria. Basically, you create some kind of filter; the filter is applied to all the data and matches only those items that the auditor is interested in. Then we have auditee claims. The client or the auditee says that something is so and puts that down in a written statement. This has its own value as evidence, of course, but it may not be considered as strong a type of evidence as direct evidence would be. You can also look at other documented evidence such as plans, policies, and procedures. These are all things that we discussed earlier, so we should know the differences between a policy, a standard, a procedure, and a guideline, but having these in documented form helps as well, because if these are items that are actually in use within the organization, then they provide valuable evidence. We also have to think about doing testing: compliance testing or substantive testing. This gives direct evidence to say that we looked at something, we performed some operation, or we observed someone else performing that operation. We saw the input, we saw the behavior, and then we saw the output. The last item is the auditor observing someone in the performance of their duties. Maybe they perform a particular process, so the auditor watches them do it, perhaps more than once, to make sure that it works the same every time. Again, we see the mention of our CAATs: computer-assisted audit tools. These can help in a lot of different ways. You might have tools that help with understanding system configuration, or that do vulnerability scanning, network scanning, running sniffers, or intrusion detection.
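The data-extraction idea described above amounts to running a filter across a data set and keeping only the records the auditor cares about. Here is a minimal Python sketch; the log fields and the 10,000 threshold are purely illustrative assumptions:

```python
import csv
import io

# A hypothetical transaction log; the field names are illustrative only.
LOG = """user,action,amount
alice,transfer,12000
bob,login,0
carol,transfer,300
dave,transfer,9500
"""

def extract(rows, predicate):
    """Apply the auditor's filter to every record and keep only the matches."""
    return [row for row in rows if predicate(row)]

rows = list(csv.DictReader(io.StringIO(LOG)))
# Assumed audit criterion: transfers above a 10,000 threshold.
flagged = extract(rows, lambda r: r["action"] == "transfer" and int(r["amount"]) > 10000)
print([r["user"] for r in flagged])  # ['alice']
```

In practice the same pattern scales from a four-line CSV to a database query or a dedicated extraction tool; only the predicate changes.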
CAATs even apply to systems that allow you to trace the functions of software by putting a tracer on certain operations to see how the software steps through the operations it performs. There are also tools to analyze the configuration of applications. This goes into the area of application pen testing, fuzzing, that kind of thing. We can also use tools to get an inventory of all of our software licenses in a given environment. That's an important thing to think about. Maybe you test your password policy. So this is just a sampling of some of the capabilities of computer-assisted audit tools. Obviously, there are more features available depending on which vendor's tool is chosen. Then we have to think about the possibility of using CAATs for continuous auditing. This is analogous to continuous monitoring, where you're looking at your security controls on a continuous basis to find problems as soon as possible and be able to take action as soon as possible. We can start with online event monitors. These look at transaction logs and event management tools, such as ArcSight; things of this nature can give you alerts at a moment's notice to let you know that somebody tried to log in as administrator, or someone has changed a network setting on one of the production servers. These are great tools for alerting you to different events in the environment. They typically fall into the same group as intrusion detection systems and intrusion prevention systems. Maybe you have a dashboard that shows you the events as they occur. We can also think about embedding audit hooks into software. This is something that the software developers need to do so that they can look for certain events, and when an event happens in the software, they can generate an alert that can be sent to an auditor. This is a way of putting some monitoring within the functionality of the software itself to ease the auditing process.
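An embedded audit hook can be as simple as a call that developers wire into sensitive code paths. The sketch below is a toy illustration with invented function and event names; alerts are collected in a list where a real system would forward them to an auditor or a SIEM:

```python
# Alerts collected for the auditor; a real system might send these to a SIEM.
ALERTS = []

def audit_hook(event, **details):
    """Embedded audit hook: record a flagged event for later review."""
    ALERTS.append({"event": event, **details})

def login(user, as_admin=False):
    """Ordinary application function with an audit hook wired in by the developers."""
    if as_admin:
        audit_hook("admin_login_attempt", user=user)
    return f"session for {user}"

login("alice")
login("root", as_admin=True)
print(ALERTS)  # [{'event': 'admin_login_attempt', 'user': 'root'}]
```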
Then, once an alert gets generated, that transaction can be looked at in more detail because it might be considered suspicious or unusual. The alert gets generated so that the event can be investigated more fully. Then we can think about continuous and intermittent simulation, or CIS. That's another audit tool. This means that you can set up certain criteria for events or transactions, and once those criteria are met, an alert gets generated so that the auditor can get some information for further investigation. We can also do snapshot audits. This looks at a series of data capture events, sort of like taking snapshots of something that's moving. We look at the data at this moment, then we take another snapshot and look at the data at that moment, and so on. This shows the sequence a transaction goes through in order to get from its initiation to its completion. We also have the embedded audit module, or EAM. This is another way of interfacing with an application. The auditor creates some dummy transactions and puts those into the stream of live data transactions in order to see what the output looks like. If the dummy transaction gets processed in the expected way, it should appear to be correct when it's completed, and then it can be compared against transactions performed with live data to look for any differences in the way the transactions were processed. Then we have the systems control audit review file with embedded audit modules, also known as SCARF/EAM. This selectively picks audit modules within some application software and samples transactions as needed, depending on the objectives of the auditor. Alright, so we'll summarize some of the CAAT methods here. We've got online event monitors. That reads logs and generates alarms. Very low complexity. The audit hooks in our programs. These flag transactions; again, low complexity. Then we've got our continuous and intermittent simulation.
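The continuous and intermittent simulation idea can be sketched as a monitor that checks each live transaction against auditor-defined criteria and sets matches aside for investigation. The criteria, thresholds, and transaction fields below are assumptions for illustration:

```python
# Auditor-defined criteria; the thresholds here are assumed for illustration.
CRITERIA = [
    lambda t: t["amount"] > 5000,   # unusually large transaction
    lambda t: t["hour"] < 6,        # posted outside business hours
]

def cis_monitor(transactions):
    """Flag any transaction that meets at least one criterion, for follow-up."""
    return [t for t in transactions if any(rule(t) for rule in CRITERIA)]

stream = [
    {"id": 1, "amount": 100,  "hour": 10},
    {"id": 2, "amount": 9000, "hour": 14},
    {"id": 3, "amount": 50,   "hour": 3},
]
print([t["id"] for t in cis_monitor(stream)])  # [2, 3]
```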
You define criteria, and when a transaction meets those criteria, it generates an alert. This is a medium complexity because there's a little bit more involved in setting up a tool like this. Then we have snapshots that capture data through various stages of its processing; again, a medium complexity solution. Then the embedded audit module, or EAM, producing the dummy transactions, processing them alongside live data, and comparing the results. That is a high complexity because it's a little bit more involved to do that. There's more planning, of course, and more analysis as a result. Then we have the last one, which is the systems control audit review file with embedded audit modules. Here we program various modules to do different audit functions, and, of course, that's the most complex, so we have a high complexity for this one as well. So there are a lot of good choices; depending on your resources and your requirements, you can pick the one that suits your needs best. Alright, so let's move on to electronic discovery. What we're talking about here is the difference between what the auditee and the auditor expect to be discovered during the process of doing the audit. The scope is very important to set, so that the discovery process is appropriate in its level of effort. It shouldn't go beyond what's required, and it shouldn't stop short of what's required. So it's important to think about the limitations on the scope. For instance, if the scope is too large, it could place a burden on your production systems. It might include things like recovered deleted data. It could address email records. It could address things that were saved on a backup tape or a backup solution. So the scope could be very far-reaching, and that makes sense: you wouldn't want to limit the scope to an individual system if the data has been moved, archived, logged, or backed up somewhere else. The scope has to adjust accordingly in order to capture the information that's required.
We also have to think about this idea of a claim of privilege. Formulas and business secrets might fall under this category. So there could be some exceptions depending on the situation that's being investigated or audited. Now, once we've got some evidence, we have to decide how to grade it. We can start with materiality. This is the logical relationship between the item that's being investigated and the evidence that's gathered. We have to consider the objectivity of the evidence. If the evidence is objectively true, then we don't need to spend much time doing analysis or exercising judgment, because it's already been proven that it's objectively accurate. If more judgment is required, then the evidence becomes less objective. So that makes sense. We have to think about who's providing the evidence. How competent are they? Where is the source of this information? The source could be an individual. What is their expertise? What is their experience? We like to get information directly from a client, not through a second-hand or third-hand source that potentially taints the information and dilutes its value. Then we also have to consider the independence of the evidence. Just like an auditor is expected to be independent, the provider of evidence should not have anything to gain or lose by providing the information. If they do have something to gain or something to lose, then that evidence is not considered as independent as it might otherwise have been. So if we look at the way to grade the evidence, we can start with materiality. Evidence unrelated to the item under investigation is considered poor evidence. If it's indirect, with a low relationship, it's considered good evidence. The best evidence is direct: no explanation or judgment is required, as we talked about a minute ago. Then think about the objectivity of evidence. If it's subjective, it requires some facts to explain its meaning.
That's pretty standard, but if it's in the best evidence category, it doesn't require any explanation. Then we think about evidence sources: a third party with no involvement constitutes poor evidence. Good evidence in this case would be indirect involvement by a second party, a little closer to the source, and direct involvement by the first party would constitute the best evidence. So the closer you get to the source, or to the source directly, the better the quality of the evidence. Then think about the competency of the provider of the evidence. In the case of poor evidence, the person is probably biased. Maybe they're non-biased, in the case of good evidence, and they might be non-biased and independent in the case of best evidence: nothing to gain, nothing to lose. If we're analyzing the evidence, poor evidence is analyzed by a novice, good evidence by an experienced analyst, and best evidence by an expert analyst. Then, lastly, we have trustworthiness of low, medium, and high. Obviously, best evidence is the ultimate goal in every case, but that may not always be possible, depending on the situation. Now we think about the life-cycle of evidence. It does go through a life-cycle, and it's important to consider these individual phases. First we start off with identification. We know that some evidence has been discovered, or identified, and it lends its support to the objectives of the audit. Then that evidence gets collected according to the procedures that were agreed upon and according to the goals of securing the information, respecting confidentiality requirements, and so on. Then we have to preserve this evidence, keeping it in its original state. In the case of a forensic investigation, that's a little bit more complicated. We have to consider things like chain of custody, proper gathering techniques, proper documentation, and so on.
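The evidence-grading rubric walked through above can be captured as a simple lookup table. This is only a sketch; the wording of each cell paraphrases the lesson rather than quoting any standard:

```python
# The grading rubric from the lesson, paraphrased into a lookup table.
GRADES = {
    "materiality": {"poor": "unrelated", "good": "indirect, low relationship", "best": "direct"},
    "objectivity": {"poor": "subjective", "good": "requires facts to explain", "best": "no explanation required"},
    "source":      {"poor": "third party, no involvement", "good": "second party, indirect involvement", "best": "first party, direct involvement"},
    "competency":  {"poor": "biased provider", "good": "non-biased provider", "best": "non-biased and independent"},
    "analysis":    {"poor": "novice analyst", "good": "experienced analyst", "best": "expert analyst"},
    "trust":       {"poor": "low", "good": "medium", "best": "high"},
}

def grade(attribute, level):
    """Look up what a given grade means for a given attribute."""
    return GRADES[attribute][level]

print(grade("source", "best"))  # first party, direct involvement
```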
Once the evidence has been preserved, it can be analyzed. This can be done with scientific tests, observation, substantive tests, and qualitative, semi-qualitative, or quantitative methods. Then we think about returning the evidence after it's been analyzed. You have to return the evidence to where it was removed from once the analysis has been done, sort of like the idea that you check some evidence out of the evidence locker, do some experiments or tests with it, and then return it to where it was. Now that you've got more information, you think about presenting the evidence. This should support the auditor's report and the auditor's opinion, and, depending on what type of evidence it is, it might have to be returned to the owner when all of this work is finally completed. [/toggle_content]
To meet the dynamic requirements of enterprise vulnerability management challenges, the CISA course covers the auditing process to ensure that you have the ability to analyze the state of your organization and make changes where needed.