Vulnerability Disclosure Program Part 2

Video Transcription
Hi, I'm Matthew Clark, and this is Lesson 6.5, Vulnerability Disclosure Programs, Part 2.
In this lesson, we'll begin by talking about definitions and building a vulnerability disclosure program.
We'll discuss the nature of disclosures and also building bug bounty programs. So let's get started. Let's run through some quick definitions. A security program is a group of related policies, processes, standards, and guidelines used to achieve a specific outcome. It uses people, process, and technology.
A risk management program or a security awareness program
are examples of security programs.
A vulnerability disclosure program is the overall enterprise program for handling disclosures; it encapsulates how disclosures are captured or received by the organization,
and it uses a couple of different methods to achieve its purpose. You can use either a bug bounty program or a responsible disclosure program.
A bug bounty program is a method to receive vulnerability notifications that involves a bounty. So think of a pirate here, where the pirate has a get-out-of-jail-free card as long as they do everything right. In other words, no plundering villages.
A responsible disclosure program is also a method to receive vulnerability disclosures, but it encourages security researchers to self-report vulnerabilities, flaws, and errors, and it usually offers a get-out-of-jail-free card as well, if you agree to specific terms and conditions.
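Whichever method you choose, researchers need a well-known place to send what they find. One common convention for advertising that channel is a security.txt file (RFC 9116) served at /.well-known/security.txt. The sketch below uses hypothetical placeholder addresses and URLs, not any particular organization's file:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
# Points researchers at the disclosure policy and any safe-harbor terms
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/hall-of-fame
Preferred-Languages: en
```

The Policy field is where you would link the terms and conditions the researcher is agreeing to.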
From a technical standpoint, building a vulnerability disclosure program is easy.
Politically, it's another matter.
Building a vulnerability disclosure program requires support from multiple individuals within the organization, including senior management support. Attempting to initiate this without having senior management on board is just about impossible.
This is a program that can become very visible, especially if a significant vulnerability is found,
and that cuts both ways, because the vulnerability will be found regardless of whether you have a program or not.
But having a program that functions shows that you're prepared for it.
Executive sponsorship is also important; it's critical for success. You need to have someone who's going to be active and visible within the program. The CISO is also an important individual.
This is where hard skills meet soft skills. She will need to understand the technology well enough to know who within the organization to get involved, and, probably most importantly, she'll need to know how to navigate difficult waters, which takes political skills, knowledge of stakeholder concerns,
and the ability to work across boundaries.
Stakeholder buy-in is very important as well. There may be a perception that by standing up a vulnerability disclosure program, the organization is signing up for additional work. The truth is that this is just about correcting flaws
that have made it past all of your organization's previous attempts to find them.
Or, in other words, this is work that the organization failed to do the first time around.
Dan Wheeler and Sarah White gave a talk at OWASP London in October 2019 called Responsible Disclosure, and I put a link to this talk in the reference material. In it, they outlined something that I thought was very helpful: they called it the principles of disclosure.
They showed a triad of communication, integrity, and transparency, and this was from the viewpoint of the researcher. But I think these principles are equally applicable at the organizational level as well.
There are different types of disclosures. There's full disclosure, which is also known as public disclosure, where the security researcher just tells the world exactly what they found. There's partial disclosure, where enough information is made public that the company can't deny there is a problem, but the information is limited so criminals can't abuse it.
Typically, this occurs when there's a breakdown in communication between the security researcher and the organization, or the company is slow to act.
We have responsible disclosure, which hopefully translates into a coordinated disclosure where you work together: the security researcher discloses the vulnerability either directly to the vendor and works closely with them, or to a third party like a CERT.
We have no disclosure, which, if you're the company, is probably the second-worst outcome, with full disclosure being the first, because with this one you don't even know that there is a problem.
And we have nondisclosure, also known as "they paid me off" or "they made me an offer I couldn't refuse."
This is an agreement not to say anything.
There are two ways to ingest vulnerabilities that we've talked about. The first is a bug bounty program, which is kind of like, "Look all you want, we'll give you money, but you agree to our terms," and the second is responsible disclosure, which is like, "We're not encouraging you to look, but if you find something, let us know."
So bug bounties come in all kinds of different shapes and sizes. Here are four examples of bug bounties from HackerOne, Bugcrowd, and a self-directed program, and you can see the average bounties, top bounty range, and total bounties paid for each one of these.
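The metrics on that slide are just simple aggregates over a program's payout history. A minimal sketch of how they're computed, using made-up payout figures rather than the numbers from the slide:

```python
# Hypothetical payout amounts in USD for one program's history.
# These are illustrative figures, not data from any real bug bounty program.
payouts = [150, 250, 500, 500, 1500, 5000]

average_bounty = sum(payouts) / len(payouts)  # mean payout per vulnerability
top_bounty = max(payouts)                     # largest single payout
total_paid = sum(payouts)                     # lifetime spend on bounties

print(f"average: ${average_bounty:,.2f}, top: ${top_bounty:,}, total: ${total_paid:,}")
# → average: $1,316.67, top: $5,000, total: $7,900
```

When you compare programs, make sure the figures cover the same time window; a program's "total paid" naturally grows with its age.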
So how is a bug bounty like a pen test? That's a great question, because in a traditional pen test you authorize individuals to come in and simulate a cyber attack, and then they evaluate the security of your organization and possibly even the response of your security team.
You selected a firm. You set boundaries.
You signed a contract, and you feel fairly comfortable about who's going to be there.
Crowdsourced bug bounty programs take that point-in-time penetration test and spread it out over a period of time, where multiple individuals can come in and perform the same types of tests under the same types of boundaries that you set up. The main difference is that
you're paying per vulnerability found,
and these vulnerabilities are things the bug bounty participants have actually found and, in some cases, actually been able to exploit. So these are issues you know you have and that you need to resolve.
When it comes to bug bounty programs, there are different platforms you can utilize. You have commercial platforms like HackerOne, Bugcrowd, Synack, and Cobalt, and these programs are largely self-contained. They've already worked out the payment methods, the program policies,
the procedures, and so forth. They're the ones responsible for finding the crowd of individuals
to conduct tests against your systems. You can go self-directed, where you basically are the HackerOne or Bugcrowd: it's up to you to develop the policies, and it's up to you to come up with a way to handle the payment systems and so forth.
You could also go nonprofit, and there's one platform out there that I know of: Open Bug Bounty. That's the platform, and it's nonprofit.
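If you go self-directed, you own the intake and triage workflow yourself. As a minimal sketch of what tracking that might look like (the status names and record fields here are assumptions for illustration, not any standard schema):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Status(Enum):
    """Lifecycle of a report in a simple self-directed program."""
    RECEIVED = "received"    # researcher submitted, not yet reviewed
    TRIAGED = "triaged"      # confirmed valid, fix in progress
    FIXED = "fixed"          # remediated, bounty can be paid
    DISCLOSED = "disclosed"  # coordinated public disclosure complete


@dataclass
class Report:
    """One vulnerability report from an outside researcher."""
    reporter: str
    summary: str
    received: date
    status: Status = Status.RECEIVED
    bounty_paid: float = 0.0


def open_reports(reports):
    """Reports still awaiting a fix or coordinated disclosure."""
    return [r for r in reports if r.status not in (Status.FIXED, Status.DISCLOSED)]
```

A real self-directed program also needs the pieces the commercial platforms handle for you: deduplication of repeat reports, severity scoring, and the actual payment mechanics.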
And then there's this concept of safe harbor that we need to discuss.
Safe harbor is kind of like a get-out-of-jail-free card for security researchers. It's basically an agreement in writing stating that the organization will not hold security researchers liable, and won't try to prosecute them for criminal behavior related to the
research that they're conducting, as long as they stay within certain guidelines
and agreed-upon limits.
Well, that's it for this lesson. In this lesson, we continued our journey into the mysterious world of vulnerability disclosure. We defined words and identified roles, we learned about disclosure, we investigated bug bounties, and we talked about a broad range of concepts, including safe harbor. I'll see you next time.