What would you do if devs refused to fix security holes?

Moderator note 07/09/2015:
Recently zombiehacker brought up a really good question on the forum (see below) that got us talking in the Bugcrowd office:

What do you do when a company refuses to fix a bug that you KNOW is a big problem?

Share your thoughts below.


Let’s say, hypothetically, that a company has its developers handle every security report you submit, but you also have a line to the owner. If the developers refuse to fix security holes that would negatively impact their users, would you try to explain the issue to the owner, who isn’t familiar with security terms, or try to explain to the devs that “it’ll take too long” or “we’ll have to rewrite too much code” isn’t a valid reason for not fixing a security issue? There’s also the third option of just letting it go, but I’d like to hear how others would handle this type of scenario.

Hey Zombiehacker!

Honestly, this has happened to me many times during my career. Your first scenario, going over the heads of your technical contacts at the company, is one way to draw attention to a disputed bug. I’d just be very careful about how you word your communications here:

(Some notes)

  • Be very clear that you think the decision to accept the risk on that bug is dangerous, but that ultimately you respect your client’s decision.
  • Explain in layman’s terms why the bug can have significant impact, and provide a written exploit scenario to the technical team.

In the end, as stated, it’s their application and their decision to accept the risk. I hate to say it but sometimes you just have to let bugs go. Move on to the next app and if things get really bad, don’t work with that client anymore.

As for hearing those types of excuses you mentioned, that’s sad. It tells me they see the risk in the vulnerability but just aren’t in the mindset of fixing hard problems. Unfortunately, the remedy is the same as above: move on.

These things bother me too, friend. Learning to frame your position on a bug while keeping your cool and accepting the outcome of the debate is what makes us professionals. In the long run you will win more than you lose.

just my 2c


Disclaimer: I’ve been out of enterprise security & senior management for 12 months now.

If the organization hasn’t transformed into one where security issues are “just bugs to fix,” there is little you can do. My previous gig used to be that way: business-unit security teams chasing and berating developers and IT app owners with 1,000-page Nessus/Fortify reports to get fixes in. It really doesn’t work unless it’s a compliance shop and a severe bug.

When we finally managed to get security issues weaved into the bug fix (e.g. JIRA ticket) process and transparent reports about all the risks (security or otherwise) to business app owners with regular review sessions, things got much better. This still enabled the business to accept risk and/or delay fixes, but at least it was with visibility.
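To make the “security issues weaved into the bug fix process” idea concrete, filing a finding can be as lightweight as creating an ordinary ticket in the tracker the devs already live in. This is a hypothetical sketch of building a JIRA-style issue payload; the project key, labels, and field names are illustrative assumptions, not anything from this thread or a specific JIRA instance:

```python
import json

def security_ticket(summary: str, description: str, severity: str) -> dict:
    """Build a JIRA-style issue payload that files a security finding
    as an ordinary bug, labeled so it surfaces in the same backlog
    reviews as every other risk."""
    return {
        "fields": {
            "project": {"key": "APP"},            # assumed project key
            "issuetype": {"name": "Bug"},         # a security issue is "just a bug"
            "summary": f"[security] {summary}",
            "description": description,
            "labels": ["security", f"severity-{severity}"],
        }
    }

payload = security_ticket(
    "Reflected XSS in /search",
    "See attached PoC and written exploit scenario.",
    "high",
)
print(json.dumps(payload, indent=2))
```

In a real setup this dict would be POSTed to the tracker’s issue-creation endpoint; the point of the sketch is only that the finding travels through the same pipeline as any other bug, keeping risk acceptance and deferrals visible to the business.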

If you’re coming up against the former, I’d actually suggest working with management to adopt new development practices. Share contact info for (mid-to-senior) folks at other companies you’ve worked with who take a more modern approach to development and security, and let them compare notes. You’ll be seen as a valuable partner who cares holistically about the org, rather than a “security dude who has to be right.”

Just document all your findings (as folks here most likely do) and document your communication attempts and try to not let it bother you.


Thanks so much for the reply, Jason. Yeah, sometimes you just have to let bugs go for a while. Sorry I didn’t get back to you sooner. Devs can be hard to communicate with, but some of that is on us: we use words and phrases that anyone in security or business says every day, without giving a second thought to whether the other person knows what we mean. That’s where communication fails the most.

One thing I missed here is providing a lot of evidence for the bug.

Some things that work include:

  • CVSS score.
  • Demonstrating that competing companies patched the bug or bug class.
  • Apprising them that automated vulnerability-scanner or exploit-tool checks are already available for the bug.
  • Demonstrating that popular frameworks have patched similar bugs.
  • Providing as many references on the bug as possible, including other consultancies’ write-ups and views on the matter.
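On the first bullet: a CVSS base score isn’t just a number you quote, it’s one you can derive from the published metric weights, which makes it harder to argue with. This is a minimal sketch of the CVSS v3 base-score formula for the scope-unchanged case only (no vector parsing, no scope-changed weights), using the weights from the FIRST.org specification:

```python
import math

# CVSS v3 metric weights from the FIRST.org specification.
# This sketch handles the Scope:Unchanged case only.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (scope unchanged)
UI  = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # Confidentiality/Integrity/Availability

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3 base score, Scope:Unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # Round up to one decimal place, as the spec requires.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

Walking a skeptical dev through which metric drives the score (e.g. “no privileges and no user interaction is what pushes this to Critical”) tends to land better than quoting the final number alone.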

Repeat After Me:

If you have done your job completely and explained the situation correctly, and the company has decided not to prioritize the bug fix, then it is their right to make that decision.

Unless you can argue fact-for-fact against their backlog and the cost (true and opportunity) to the company of stopping what they’re doing to fix the bug, you have very little to stand on. If fixing an XSS bug pushes back a project release and causes the company to miss a contractual release date, are you deep enough in the business to know that?

But the number one rule is always PoC||GTFO!

@jgamblin nice shirt :wink:

Nice shirt for sure!

To play slight devil’s advocate: do we as researchers have any responsibility to go that extra mile to convince them they are making a risky decision, or do we just present what we have and throw up our hands if they don’t take the advice?

Also, is there any situation that WOULD necessitate not throwing our hands in the air?

Just look at Project Zero for an example: patch within 60–90 days or face public release. That may not be ethically correct for some people, but it works every time.

Do you guys think that’s correct? Should researchers do that?

I’m not quite sure… but I think Google can get away with a bit more since they’re Google, with a huge amount of money and lawyers behind them. That at least removes some of the risk to the individual, with Google taking the brunt of any legal fallout there could be.

As an individual, I’d be very concerned about legal retribution or fallout from uncoordinated public disclosure. Ninety days or several months is a good starting point, though I would personally strongly encourage folks to do everything they can to work with a company to address the security issue they have found.


“Ethics is knowing the difference between what you have a right to do and what is right to do.”

…but that line is different for everyone.

There is a huge gap between not understanding the advice and not accepting it.

You are under an obligation to make sure they understand the vulnerability completely but under none to make them act on it (outside of your ethics).


As far as I remember, there are currently laws in place in the EU that, after some time, give you the right to publicly disclose even unpatched vulnerabilities affecting current systems, as long as they have first been properly disclosed to the vendor.

Additionally, to this day, I am not aware of any lawsuit against Google for the Project Zero 0-days that it dropped.

I’m not saying this is the right way or what I would do, it’s just another opinion that I’d like us to discuss here today.

Totally. I’m not making a judgement on whether it’s legal to do this; I’m not a lawyer and I don’t know enough about this particular subject, so I’ll look to others to weigh in on the particulars. The legal side of things is just a concern I have, as I would hate for anyone to be punished for (safely) disclosing a bug publicly.

I think the 90 day window is a good baseline and I’m glad that Google is helping push that conversation along. It’s very important that companies take security seriously and recognize the work done by security researchers.