Capabilities and Technology in Place

Video Transcription
Hello and welcome to another Penetration Testing Execution Standard discussion. Today we're looking at capabilities and technology in place within the pre-engagement interactions section. This will be the last discussion we have on this particular section.
So with that in mind, let's touch on our disclaimer real quick.
Be reminded that PTES videos do cover tools that could be used for system hacking.
Any tools that are discussed or used during any demonstrations should, of course, be researched and understood by the user. Please research your laws and regulations regarding the use of such tools in your given area. So with that in mind, let's go ahead and jump over to our objectives.
Now, today we're going to look at what a benchmark is, just by definition, at a high level. We're going to look at an organization's ability to detect and respond to footprinting, with respect to how we would report on it and what things we may look for.
We're going to touch on responding to and detecting scanning and vulnerability analysis,
response to infiltration, and response to data exfiltration. Each of these areas is really just a standard definition, as well as some key examples of things we may
have an organization look for, or that they should be looking for, to discover these types of attacks. Now, what is benchmarking?
Well, by definition, it is to evaluate or check something by comparison with a standard. The client may or may not have established benchmarks, and that's really going to determine whether or not testing these different areas
would even be valuable or beneficial to the client. So it comes back to the discussion we've had on
maturity level and the overall maturity of the security program you're testing against. And so,
if a client's never done a penetration test,
if they don't have any benchmarks in place for their security team, or if they use a third party, all of those things may need to be considered with respect to whether or not this type of reporting and review would even be necessary. But if they use a third party for security services or for response services,
then there should be tools and systems in place to detect
these types of attacks,
as long as it would not violate any terms and conditions of the service. As long as you're not directly testing provider equipment without permission, et cetera, then it's up to the client and their ideas of what would be valuable out of that service.
And so some incident response benchmarks would be things like cost per incident: how much an organization spends per incident, and whether or not they're at the appropriate threshold. Automatic detection versus manual detection is another: how many of the incidents are detected using
automatic methods versus manual methods, and vice versa?
Then there's the percentage of incidents investigated versus volume. So, you know, if it needs to be 90%,
and your team (your team here being the business's security team, not so much the tester's team) is only evaluating 50%,
then that means you're not meeting that benchmark, that you're not where you should be. Or if you're always at 50% and your benchmark is essentially 50% of the volume, and you want to get to 90%, you either have to hire new employees or make adjustments to the system so that it's more efficient and effective.
The ratio of investigations to responses also ties into investigated versus volume. So if you've got a high volume,
that could drive up
how big your backlog is, as well as
whether or not you're addressing these things in a timely manner. So if you investigate, um, 80%
of all incidents, and you're only responding to that 80% in a timely manner; meaning, if I evaluate 100 things and only respond to 80% of them, what is that other 20%? What's going on there? So
that investigation-to-response ratio is beneficial, and then rate of response, being time to a decision, is going to be key there as well. The industry average, last I looked, was 21 minutes; that's how long it takes the average actor to get into and exfiltrate data from a system.
And so if it takes you
35 minutes to respond, make a decision, and get something addressed,
then there could be the potential for roughly 14 minutes of exfiltration activity. These are all things to take into consideration. So when you're benchmarking
a client, looking at these areas, what their current averages are, what they're currently doing,
again, that would dictate whether or not this type of testing would be beneficial. If they count on a robust response effort and a robust security team,
then it may be beneficial for them.
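The benchmark comparisons described above can be sketched as a few simple calculations. This is a minimal illustration with made-up numbers and hypothetical threshold values, not a real reporting tool:

```python
# Minimal sketch: comparing incident response metrics against benchmarks.
# All counts, costs, and benchmark thresholds here are hypothetical examples.

incidents = 200            # total incident volume for the period
investigated = 100         # incidents actually investigated
responded = 80             # investigated incidents that got a timely response
auto_detected = 150        # incidents found by automated tooling
cost_total = 50_000.00     # total incident response spend for the period

metrics = {
    "cost_per_incident": cost_total / incidents,
    "auto_detection_pct": 100 * auto_detected / incidents,
    "investigated_pct": 100 * investigated / incidents,    # 50% in this example
    "response_ratio_pct": 100 * responded / investigated,  # 80% in this example
}

# Hypothetical client benchmarks to measure against.
benchmarks = {"investigated_pct": 90, "response_ratio_pct": 95}

for name, target in benchmarks.items():
    status = "meets" if metrics[name] >= target else "misses"
    print(f"{name}: {metrics[name]:.1f}% {status} benchmark of {target}%")
```

With these numbers, the team investigates only half of the incident volume and responds to 80% of what it investigates, so both benchmarks are missed; that gap is exactly the kind of finding the tester would document.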
So one of the areas we could look at would be detection and response activity with respect to footprinting. And this is just a small example of the types of activities, but it could be
zone transfer attempts, Nmap scanning, brute forcing of DNS, web application discovery activities;
really, anything where we're actively engaging a system with the intention of getting additional data on that system.
And so, if you run Nmap,
you know, doing an Nmap scan of a system,
then time to detect that, and the same thing for a web application: the amount of time it takes to respond, or the ability to positively identify what the attacker is doing. And so if you can see that something's being scanned,
but you're not 100% sure what's going on or what activity is taking place, well, how can you
start to determine whether or not that activity is valid?
Maybe web application scan attempts are being made, but you don't have any web applications in the environment; then that wouldn't be anything to be concerned with, right? But
those types of things would be pertinent. So if you do an Nmap scan
and it takes, you know, the organization an hour to pick it up, then that could be something worth noting. But
it could be that the organization does not respond to
Nmap scans,
so it may not be a valid concern; but they may respond to web application scans, or direct scans on particular systems, or other direct methods. So it would be good to understand how they should respond to those activities and whether or not their benchmarks meet those expectations. Again,
whether or not it provides value to the organization is directly dependent upon whether or not they're actively engaged in monitoring and addressing those types of activities.
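Measuring that "hour to pick it up" is just a timestamp difference. A minimal sketch, assuming the tester records the scan start time and the client supplies the alert time from their SIEM or ticketing system (both timestamps here are invented):

```python
# Sketch: measuring an organization's time-to-detect for a test activity.
# In practice the scan start comes from the tester's notes and the alert
# time from the client's SIEM or ticketing system; these values are made up.
from datetime import datetime

scan_started = datetime(2020, 5, 1, 9, 0)   # tester launches an Nmap scan
alert_raised = datetime(2020, 5, 1, 10, 2)  # the SOC's alert finally fires

time_to_detect = alert_raised - scan_started
minutes = time_to_detect.total_seconds() / 60
print(f"Time to detect: {minutes:.0f} minutes")
```

A 62-minute detection time for active scanning would be the kind of data point worth noting against the client's stated benchmark, if they have one.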
Now, where I would expect organizations that have security programs to pay attention and start to respond is when we're looking at vulnerability scanning and analysis.
Okay, so unlike information-gathering activities, where we may be doing some footprinting, vulnerability scanning, depending on the type of scan (I guess it sits in the information-gathering phase), is typically a little more intense and direct, with the intention of potentially exploiting the system.
And so, um, it would be beneficial to, one,
understand when someone is doing a vulnerability scan versus an Nmap scan; there are typically indicators, whether that scan is automated or manual in nature. If an individual were to, let's say, take certain parameters and put them into a front-end web page with the intention of testing for SQL injection
vulnerabilities, it would be important
to identify those types of entries. All in all, you would hope that those types of entries would be excluded from input, but that's not always the case. And so, you know, whether it was something that was overlooked, or it is just something that is not being addressed,
the security team should be responding to those types of activities. And of course, response time and detection time are going to be huge when it comes to addressing the benchmark of the organization against the actual response time and detection time.
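To make the idea of "identifying those types of entries" concrete, here is a deliberately naive sketch that flags SQL-injection-style strings in form input. Real detection lives in a WAF or IDS; this short, hypothetical pattern list is illustrative only and would miss obfuscated payloads:

```python
# Sketch: a naive check for SQL-injection-style input in web form parameters.
# The pattern list is a toy example, not a production WAF ruleset.
import re

SUSPICIOUS_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",   # UNION SELECT probes
    r"(?i)\bor\b\s+1\s*=\s*1",      # classic "OR 1=1" tautology
    r"--",                           # SQL comment used to truncate a query
    r"(?i)\bdrop\b\s+\btable\b",     # destructive statement in input
]

def looks_like_sqli(value: str) -> bool:
    """Return True if the input matches any known-suspicious pattern."""
    return any(re.search(p, value) for p in SUSPICIOUS_PATTERNS)

print(looks_like_sqli("bob"))          # benign username
print(looks_like_sqli("' OR 1=1 --"))  # textbook injection attempt
```

The point for the tester is not the detection logic itself, but whether the client's security team notices and responds when entries like the second one start appearing in their logs.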
Now, detection of and response to infiltration is
just as big as, probably, the next section we'll talk about, which is data exfiltration.
And so infiltration is essentially when the attacker finally has access to the client's system or network, and is doing so without authorization. Some attacks that may lead to this would be things like buffer overflows, successful phishing, successful password cracking, injection attacks, and logs being cleared from a system.
Being able to detect those types of attacks is going to be
pretty big, I mean, because if a phishing attack is successful, or if injection attacks are successful,
again, that could lead to data exfiltration, which we'll talk about next, as well as the ability for an attacker to get into a system and then move laterally to other systems. So
if, um,
you know, there is a security team involved and they can't pick up password-cracking activity, or they don't detect injection attacks, or they don't see when system logs have been cleared,
those things could be troublesome. Now, an entity could attempt injection attacks against an external website; that may happen all the time, and there may be controls in place to block that.
Therefore, it may not be directly reported on. But
if an entity gets into a network by one of those means and then attempts to attack other systems by moving laterally using these methods, we'd definitely want to delineate between internal and external activity in these types of things, because then we're not just talking about malicious
external entities. We could be testing for malicious insider
activities, or a threat actor that's, you know, a disgruntled employee or a technical resource that's attempting to get unauthorized access to a system. So it's not always the things external to a network that you have to worry about; there could be some internal threat actors as well that are unknown to the organization, and
knowing, you know, when these things are happening would be beneficial in kind of combating that threat.
Now, data exfiltration is definitely something we want to be able to detect. But
where I've seen this done with security teams before is really figuring out which
data sets are sensitive or critical to the organization. Because, let's say I've got an individual that uploads
five gigs' worth of photos to the cloud, and then we go in and investigate and find that those five gigs of data were, in fact, just soccer pictures from their son's game, and they've been, you know, storing them on their desktop.
OK, better safe than sorry, but do we really need to monitor the photos folders and things of that nature on the system in question? Whereas if we've got activity happening on, like, HR folders or accounting folders, we'd want to have some mechanism to alert us when an unauthorized person has accessed those,
you know, folders,
and then maybe even when an unauthorized person is moving rapidly through those folder structures; potentially,
that could be done through automation or some sort of compromise, so being able to detect anomalous activity and behavior there would be beneficial to the client. And then detection and response to large file transfers; again, talking about, like, five gigs' worth of photos being moved down a network, or,
you know, a gig's worth of Word documents being moved out of the network;
and seeing when attempts were made to access file-sharing websites, or maybe command-and-control or known bad IPs, to transfer data, things of that nature. So
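The two triggers just described, large outbound transfers and traffic to known bad destinations, can be sketched as a simple rule pass over flow records. Everything here (thresholds, addresses, record format) is a hypothetical example, not a real detection product:

```python
# Sketch: flagging potential exfiltration from hypothetical flow records.
# Threshold, IPs, and record format are made-up illustrative values.

LARGE_TRANSFER_BYTES = 1 * 1024**3       # flag anything over ~1 GB outbound
KNOWN_BAD_IPS = {"203.0.113.50"}         # e.g. a C2 address from threat intel

flows = [
    {"dst": "198.51.100.7", "bytes": 5 * 1024**3},  # 5 GB to a cloud host
    {"dst": "203.0.113.50", "bytes": 10_000},       # small, but to a bad IP
    {"dst": "192.0.2.10",   "bytes": 4_096},        # ordinary traffic
]

def flag(flow):
    """Return the list of reasons a flow looks like possible exfiltration."""
    reasons = []
    if flow["bytes"] > LARGE_TRANSFER_BYTES:
        reasons.append("large outbound transfer")
    if flow["dst"] in KNOWN_BAD_IPS:
        reasons.append("known bad destination")
    return reasons

for f in flows:
    reasons = flag(f)
    if reasons:
        print(f"{f['dst']}: {', '.join(reasons)}")
```

Note that the 5 GB transfer trips the size rule even though, as in the soccer-photos example, investigation might show it was benign; the small transfer to the bad IP trips the other rule, which is why volume alone is not a sufficient signal.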
understanding how long it takes a team to detect and respond to those types of activities is going to be critical to, again, benchmarking the organization and making recommendations for reducing risk. And for things you may be asking yourself, like, how is this
beneficial for me as the penetration tester? Why would I want to do these things? I mean, to be honest with you, your main goal, again, is the risk reduction effort.
And so this is applicable to the overall effort that you provide as a consultant or as a tester. And so, knowing how to document these things, again, taking into account the time the test is run versus the time it takes to respond: that difference is the overall
detection window. So if they take 30 minutes to actually detect it, and then another 10 to respond,
okay, it takes them 10 minutes to respond after 30 minutes; that's 40 minutes total. What could happen in the meantime? How can we tighten up on that? What can we do to improve the capabilities of the organization and reduce risk overall? Those are primary components of being a penetration tester and identifying risk. Now,
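The detection-window arithmetic from that example is worth writing out, since it is the core number a tester reports. This sketch just restates the figures used in the talk (the 21-minute attacker average is the speaker's cited industry figure, not something the code derives):

```python
# Sketch: the detection-window arithmetic from the example above.

time_to_detect = 30     # minutes from test activity start to detection
time_to_respond = 10    # minutes from detection to action
attacker_average = 21   # minutes an average actor needs, per the cited figure

detection_window = time_to_detect + time_to_respond  # total exposure window
exposure = detection_window - attacker_average       # time beyond the average

print(f"Detection window: {detection_window} min; "
      f"margin beyond attacker average: {exposure} min")
```

Here the 40-minute window leaves the attacker 19 minutes of headroom past the average breakout-and-exfiltrate time, which is the gap the recommendations would aim to close.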
let's do a quick check on learning. True or false:
testing client response benchmarks is not a part of penetration testing.
Well, given that I just indicated that it is a primary part of pen testing and that you should be aware of how to measure a client response to those things and understand those things,
that is a false statement. Testing client response benchmarks is a part of penetration testing with respect to their security team and their security solutions or technical controls.
In summary,
we discussed what a benchmark is and why it is important to an organization, and that it is definitely a part of our responsibility as a penetration tester to evaluate, if it's in scope. We discussed the ability to detect and respond to footprinting, some examples of that, and why it's important;
the same for scanning and vulnerability analysis;
um, the ability to detect and respond to infiltration and data exfiltration. So again, if the client does not have a security program in place and has no such tools,
this is probably not relevant to their particular organization. But if a client does have an established security program, with benchmarks and response times and solutions in place, then it is likely relevant for at least an initial discussion and potential scoping into their penetration test.
So with that in mind, I want to thank you for your time today, and I look forward to seeing you again soon.