Vulnerability Disclosure Program Part 3

Video Transcription
Hi, I'm Matthew Clark. This is Lesson 6.6, Vulnerability Disclosure Programs, Part Three. In this lesson, we'll continue our conversation about bug bounty programs and talk about their pros and cons. We'll also discuss more about safe harbor and responsible disclosure programs.
And finally, we'll discuss what it takes to build a vulnerability disclosure program.
So we've been talking about bug bounty programs, which involve crowdsourcing. So let's enumerate some of the pros and cons.
One of the pros is that it is proactive: people are potentially looking for problems all the time. It's also cost-effective,
because you're only paying for actual issues, not potential ones.
You're not paying for how long it took someone to find a problem, and you're not paying for them to learn on the job, either.
Another pro is that organizations get to limit impact,
though because of this, bug bounty programs have been accused of buying silence. In fact, I've left a reference to an article called "Bug Bounty Programs Are Being Used to Buy Silence" in the reference section, and I encourage you to read it.
The cons are that crowdsourcing is not really designed for inside the perimeter. I was watching the movie The Hobbit with my son the other day, and Bilbo Baggins is astounded when twelve dwarves come knocking at his door and expect dinner. They eat everything that Bilbo has in his kitchen.
There's even one dwarf that eats, like, three wheels of cheese.
And just like Bilbo Baggins, you don't invite people inside your perimeter that you don't know, because you can't always be certain that they're going to sing as they throw around your priceless antique china, either.
The reality is that most bug researchers don't make a lot of money doing this.
In the CSO investigative report about bug bounties by J.M. Porup, he found that the number of people who made over $100,000 across the entire time they did bug bounties was actually in the low hundreds.
And that's $100,000 over the entire time they were doing bug bounties, not annually.
And bug bounty programs can have restrictive terms. In his March 31st, 2020 article for Fortune.com, David Morris details some of the difficulties that security researchers were having while trying to use bug bounties.
MIT researchers found bugs in Voatz's online voting application.
Voatz was offering a bug bounty. However, researchers almost immediately ran into trouble because the bug bounty terms set by Voatz restricted their ability to conduct proper testing.
The terms stated that security researchers could not test on the actual application but had to use a test app provided by Voatz. The test app did not function the same as the actual app.
There's a link to the article in the resources, although it's behind a paywall.
This is the article I spoke about by J.M. Porup. I included a link to it in the resources section. It's a great article because it sheds light on the topic of crowdsourced penetration testing. The crux of his article is that security researchers are being bought off. And how does the author say that's happening?
He says it's simple: they're being encouraged to trade safe harbor for a non-disclosure agreement.
Porup uses PayPal's terms of service to illustrate the Catch-22 that security researchers often find themselves in.
PayPal reserves the right to withhold safe harbor at its own sole discretion, and the only way for researchers to achieve safe harbor is if they agree to PayPal's NDA.
So going back to our previous lesson, can you now see why "they made me a better offer" looks so tempting?
Porup sums up this type of conditional safe harbor as: sign this NDA to report a security issue, or we reserve the right to prosecute you under the Computer Fraud and Abuse Act and put you in jail for a decade or more.
So what is responsible disclosure? Well, responsible disclosure programs give companies a mechanism to accept disclosures. They're generally a formal program with policies and process. It may include safe harbor in exchange for an NDA. It is a mechanism to limit impact and to control outcomes.
And one of the main differences between this and bug bounties is the lack
of a bounty, and disclosure is performed after a reasonable amount of time, usually about 90 days.
Which takes us to Google and Microsoft. Google has a 90-day policy to disclose vulnerabilities. So in January of 2015, Google published details about a Microsoft Windows vulnerability, including the exploit code, one day before Patch Tuesday. Microsoft then
publicly blasted Google for reckless behavior and irresponsible disclosure, but it had hit its 90-day window.
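As a rough illustration of how a fixed disclosure window like this works, here's a short Python sketch. The report date and function names are hypothetical, chosen just to show the mechanics; this is not Google's actual tooling or timeline.

```python
from datetime import date, timedelta

# A fixed-window policy: the publication date is set the moment the
# report is filed, regardless of whether a patch ships in time.
DISCLOSURE_WINDOW_DAYS = 90

def disclosure_deadline(reported_on: date) -> date:
    """Date on which details may be published under the policy."""
    return reported_on + timedelta(days=DISCLOSURE_WINDOW_DAYS)

def may_disclose(reported_on: date, today: date) -> bool:
    """True once the window has elapsed, patched or not."""
    return today >= disclosure_deadline(reported_on)

# Hypothetical report date for illustration only:
reported = date(2014, 10, 13)
print(disclosure_deadline(reported))  # → 2015-01-11
```

The point of the fixed window is exactly the friction in the Google/Microsoft story: the clock does not pause for a vendor's patch schedule.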
There are a lot of different frameworks for building a vulnerability disclosure program. We're not going to go into each one of these, but feel free to pause the video to review.
So let's talk about what it takes to build a vulnerability disclosure program. It's probably not surprising that it takes the same things we use to build any other successful security program: policies and process, people and technology.
On the policy side, what are the internal expectations? Can everyone agree on what you're trying to accomplish? Is legal going to write the policy, or do you need to bring in external counsel? And it will probably be at least more than one discussion with senior management.
And the process: regardless of how you ingest a disclosure, whether through a bug bounty program or a responsible disclosure program, you have to have a process for what you're going to do next.
Is there a RACI involved? Who's responsible? Is there at least one person who's accountable? Well, that's only one person.
Who's consulted? Who's informed? Does engineering just get the disclosure to verify it? If so, who in engineering gets it? How long do they have to review it? Are they simply verifying that it's a possibility? Are they trying to recreate the error? Can you recreate the conditions that were reported? What if the error goes beyond what was reported?
What if there are actually many conditions where this behavior could be observed?
Who's prioritizing one disclosure over another? What priority does this initial investigation have over normal work?
If it is verified, what's the communication out to the security researcher? How do you communicate? What do you communicate?
Who decides who's going to communicate?
How long will it take to draft the communication? Who's going to research the fix? How does the patch process work, and how is it prioritized against other enhancements?
Is there a scale to determine which bug fixes have priority, and how long will that patch process take? Is it 90 days?
How long will you work with the security researcher, and how are you going to communicate with them?
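To make the RACI questions above concrete, here's a minimal sketch of a disclosure-tracking record in Python. All the names, fields, and role assignments are illustrative assumptions, not a real tool or the course's recommended design.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    RESPONSIBLE = "R"
    ACCOUNTABLE = "A"
    CONSULTED = "C"
    INFORMED = "I"

@dataclass
class DisclosureTicket:
    """Minimal record for tracking a reported vulnerability through triage."""
    summary: str
    reporter: str
    raci: dict = field(default_factory=dict)  # person -> Role
    verified: bool = False
    priority: int = 0  # e.g. 1 (critical) .. 4 (low); 0 = not yet triaged

    def assign(self, person: str, role: Role) -> None:
        # A RACI allows exactly one accountable party per activity.
        if role is Role.ACCOUNTABLE and Role.ACCOUNTABLE in self.raci.values():
            raise ValueError("only one person may be accountable")
        self.raci[person] = role

# Hypothetical usage:
ticket = DisclosureTicket("auth bypass in login flow", "external-researcher")
ticket.assign("ciso", Role.ACCOUNTABLE)
ticket.assign("eng-lead", Role.RESPONSIBLE)
ticket.assign("legal", Role.CONSULTED)
```

Even a sketch like this forces the questions the lesson raises: someone must be the single accountable party, and every disclosure needs a verification state and a priority relative to normal work.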
And what's the culture going to be?
On the technology side, do you have the right workflows to even get the right people involved and to track the process?
On the people side, do you have the right people involved? Are enough people assigned to this? How does this get prioritized against people's normal day jobs that they have to keep up with?
Hopefully, you can see the purpose of having a dedicated, talented, professional CISO to help lead and guide this process.
During a vulnerability disclosure event, one of the most important roles of the CISO is going to be their ability to work with the business. This is where having a defined process for handling incident management is so crucial.
You have to shield your developers and engineers from people in the business who are constantly asking, "Is it done yet?" and "How much longer?"
If this sounds like "Are we there yet?" coming from the back seat of the car during a road trip, it kind of is, but the effects are much more damaging. Whenever you break the concentration and effort of the people thinking through a solution, you literally halt their progress for longer than it takes to hear and answer that question.
That's the way your train of thought works.
Well, that's it for this lesson. In this lesson, we completed our journey into the mysterious world of vulnerability disclosures. We peered into the underbelly of bug bounty programs, finding safe harbor and responsible disclosure programs as well. And we asked ourselves, how do I make my own vulnerability disclosure program?
Well, we decided we needed a framework, and policy and process, and a great CISO.
I'll see you next time.