This is why companies are afraid of bug bounties

A researcher was told his bug was a duplicate and that he wouldn’t be rewarded anything, so he went full disclosure and talked to the press about the lack of response: http://ow.ly/UWQbM

Put my thoughts in this blog:

Thanks for sharing your thoughts @jgamblin. I think there are trust issues at play here, and you’ve definitely highlighted the customer side that can cause concerns. The slow times to patch, marking reports as dupes, etc. are often the reasons that researchers aren’t happy with some bug bounty programs.

It seems to me that there’s work to be done on both sides. What do you think Randy should have done to better handle this issue?

@rwestergren - Randy, thanks for sharing your write-up with everyone. Do you have any suggestions for how United (and other companies) could better approach this issue?

I think it’s a very interesting problem. The netsec sub-reddit is full of comments from folks that are voicing the frustrations of the researcher side and Jerry here is bringing up valid concerns from the customer side.

Like I said… I don’t think @rwestergren did anything “wrong”; he just did exactly what companies worry about, and the very thing companies cite as the reason they are afraid of adopting a bug bounty program. It was mostly an observation.


Fair enough. For sake of discussion, do you have any thoughts on what could have been done to help address the issues on both sides?

After looking through the disclosure timeline, it looks like United may be overloaded by the number of submissions, and they may not be set up to process and patch everything that’s coming through their bug bounty.


United could have had a better communication plan. I think if you are running a bug bounty program you should get some help from your project management and marketing team to help track and communicate.

Randy could have been less “emotionally invested” in his bug. When he was told it was a duplicate in July he could have moved on to the next thing but obviously he was interested in seeing it fixed and stuck with it.


I wouldn’t do it (and I haven’t done it… I’ve found some GOOD dupes).

Rant:
I think it’s underhanded. If you are participating in a bug hunt, two things are happening. First, you are invited to participate, play, hax… and in return for your time and effort you are rewarded in a few or many ways (status, swag or cash money… and if you’re really good, a lap dance from Casey) for whatever security concerns/bugs you find on the platforms you are invited to (public, private and everything in between). Second, the company feels it’s mature enough to take scrutiny from some of the best bug hunters in the world (and some who are learning)… the company is saying: we think we have done everything we can, but we’d love you to prove us wrong.

The guy has seen his arse because he found something good that someone else already found, and he doesn’t have the integrity to continue playing the game. It reflects badly on him.

He’s like the kid that punches someone else’s cake at their birthday party because it’s not his birthday.

Moving forward:
A few incidents like this could destabilize confidence in big companies that don’t work as fast as lean startups where no one sleeps… EVER.

You really need people like Casey, Kate and co to deliver their views on how bug hunters should behave (if they want to participate in the programs run by the likes of BC, H1, SA, etc.). Otherwise you’re going to get a load of professional consultants frowning on it, because they know how hard it is to get things done in large environments, and then you’ll have the other category of ‘f*%k um bro’, hax da matrix, FSociety, etc… who haven’t got a scooby what’s involved in delivering, fixing, updating, refactoring and maintaining a product.

United Airlines assumed they were ready for a bug hunt, possibly because vuln scans and crap pentests told them they were okay, or possibly because they saw bug bounties doing the rounds and jumped in too early.

I’d like to see the emails to form a better judgement, but from what I have seen, I wouldn’t want him on my team; trust is important.

I have more respect for UA for setting up a bug bounty program. This action demonstrates that they give a shit and want to improve, and when issues are found (and that’s the purpose), they get burned…

** Someone should get their arse kicked for letting this slip by the testing phases, but isn’t that the point of a bug hunt, finding shit people missed?


The United program was extremely slow to respond to anything so it is understandable that a researcher would be frustrated by that. Unfortunately, C-level will point to examples of researchers going rogue as to why bug bounty programs can’t be trusted.


@jgamblin makes some fair points. I’ll mention that though I view bug bounty programs as a very positive step, my priority is not “selling” them to the community – it’s getting serious vulns patched in a reasonable timeline.

The insinuation that I was in some way disheartened due to my report being a duplicate (and that being why I subsequently sought out the press) is demonstrably false. If you read through most of the posts on my blog, you’ll see that I generally work with companies who do NOT operate an official bug bounty program (with no expectation of a reward). These are overwhelmingly positive experiences. In fact, the existence of a bug bounty program is largely irrelevant to a number of the points made here. Bug bounty or not, I think it’s in the interest of the security community to set reasonable deadlines. I handled United in the same way I would any other vendor.

With that said, I think communication could have been much better on United’s part. It does seem like they still have a few kinks to work out. Certainly, they should consider a third-party service like Bugcrowd. But I stand by my decision to pressure UA into getting the vuln fixed, as I would have with any other company (again, regardless of bug bounty program).

I want to apologize for that insinuation, but that is the way it read to me at first blush. As I said, I have not had the pleasure of meeting you, and I hope we can grab dinner together at some point.

I am extremely interested in your “reasonable deadline” statement, though, as I think that is a much meatier point for conversation.

Do researchers hold some kind of responsibility and/or right to drive a development schedule when they are completely disconnected from a company, its development cycle, and its business pressures?

Thanks, Jerry. I appreciate your feedback.

I personally do think I have that right, but mostly not as a researcher. My general sense of responsibility/right to challenge a vendor’s status quo comes more as a customer (or, specifically, when my own information is at risk).


How do you know what is reasonable?

How many people does United have working on their mobile app?
How many cards are in their backlog?
How many times a year do they release code?
What is their promotion and test & target strategy?

If you don’t know the answers to those questions (plus a bunch more), how can you make that judgment?

This is a valid argument @rwestergren made. I do not see myself as a security researcher only, but as a customer too. In most of the bounties on Bugcrowd currently, I don’t use the service provided. There are others, though, that I use, or started to use after learning about them through Bugcrowd.

I also have a few questions that we could discuss, in which I can “defend” both sides, but I’d like to know other people’s opinions.

  • What if a researcher found a bug in a car that allowed remote control of the car over the Internet and the vendor did not fix that issue six months after disclosure?
  • What if a researcher found a bug in an Airline system that revealed passenger information for every flight without authentication and it wasn’t patched six months after disclosure?

We may not know how many developers a company has, how much they pay them, whether they use git or svn, or whether they deploy software updates every day or every year, but there are issues that need to be fixed. Vendors need to at least prioritize correctly.

If United Airlines did this right, the bug would have been fixed within a month. That, or they had 7 million Remote Code Execution reports and it took them six months to get through them. In a bug bounty program, the researcher and the vendor need to work together. They need to have some common reference points. The vendor has to be ready to face the reports, and the researchers have to be ready to face the responsibility.


It took Jeep 10 months (October 2014 to July 2015) to patch the bugs that Charlie found.
http://illmatics.com/Remote%20Car%20Hacking.pdf

Exactly. And if they know they cannot meet researchers’ demands, “it is better not to have a bug bounty program and keep the legal cover in place” is what they will tell you.

There’s no objective measure as to what would be considered reasonable, so it’s case-by-case for me. I can tell you how I personally got there for this issue – I think I mostly explained that in the blog post.

I come from the software development side of this issue primarily. This doesn’t mean I know exactly how UA’s system was designed, but it does mean I know what is involved in fixing this particular class of vulnerability (IDOR). I can tell you it’s very low on the scale of difficulty (probably a one-line fix).
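To make the “one-line fix” point concrete, here is a minimal sketch of what fixing an IDOR typically looks like. All names and data here are illustrative, not United’s actual code: the vulnerable version returns any record whose ID the client supplies; the fix is a single ownership check before returning it.

```python
# Hypothetical in-memory store standing in for a real database.
RECORDS = {
    101: {"owner": "alice", "miles": 54000},
    102: {"owner": "bob", "miles": 1200},
}

def get_mileage_record(record_id, current_user):
    """Fetch a record by ID, but only for its owner."""
    record = RECORDS.get(record_id)
    if record is None:
        return None
    # The "one-line fix" for this IDOR class: verify the caller
    # is authorized for the object they referenced, instead of
    # trusting the client-supplied record_id.
    if record["owner"] != current_user:
        raise PermissionError("not authorized for this record")
    return record
```

Without the ownership check, any authenticated user could enumerate `record_id` values and read other customers’ data, which is exactly the class of issue being discussed.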

Frankly, as a customer, knowing the answers to the questions you’ve posed doesn’t change much for me (and I don’t really care about the answers). In my opinion, the class of vulnerability identified, the potential damage it could result in, the communication with the vendor (or lack of), and the length of time it went unpatched are the main factors in considering such a “deadline” as reasonable.

But given that explanation, I’d be interested in learning what timeline you would have viewed as reasonable, and at what point you believe pressure would have been warranted (if at all).

I don’t know what is reasonable, and that is the point of the argument. There is no way for a person not in a company’s development stream to know the answer to that.

That is the main difference between a bug bounty program and a regular web app consultant.

A consultant finds a bug, puts it in his report, and forgets about it. It is then 100% up to the company to triage and fix it in their chosen manner.

When you run a bug bounty program, there is a sense of personal ownership of the bug by the researcher, who wants to see it fixed. There is nothing wrong with that; it is just a different business model that most companies are not ready for.


The first example was of course this, just without the company name and the researcher name…

That, however, was a far more complicated fix than this one.

There are many factors that need to be taken into account, and ease of fixing is one of them.


There is another topic here in the forum: “What would you do if devs refused to fix security holes?”

In addition to this, I’d like to point out that Bugcrowd shows an “Average Response Time” for each program to give a rough idea of what you should expect from it.

Do you, researchers, believe it needs to be more detailed? Would it be a good idea to include an “Average Response Time” for every category P{1…n}?

Do you think it would be useful to have an average time between state changes? This could be “Average Time until Triaging”, “Average Time until Fix”, …
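As a rough illustration of the per-state metrics suggested above, here is a sketch (with entirely made-up timestamps, not real Bugcrowd data) of how an “Average Time until Triaging” and an “Average Time until Fix” could be computed from each report’s state-change timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical reports with their state-change timestamps.
reports = [
    {"submitted": datetime(2015, 7, 1),
     "triaged": datetime(2015, 7, 3),
     "fixed": datetime(2015, 8, 1)},
    {"submitted": datetime(2015, 7, 10),
     "triaged": datetime(2015, 7, 14),
     "fixed": datetime(2015, 9, 1)},
]

def average_delta(reports, start, end):
    """Average elapsed time between two states across all reports."""
    deltas = [r[end] - r[start] for r in reports]
    return sum(deltas, timedelta()) / len(deltas)

avg_triage = average_delta(reports, "submitted", "triaged")  # 3 days
avg_fix = average_delta(reports, "submitted", "fixed")       # 42 days
```

Broken out per priority (P1…Pn), the same calculation would give researchers a much more realistic expectation than a single program-wide average.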

A fix was made by Chrysler for this issue and can be found in version 15.26.1. We did not extensively study this patch although the net result is that the vehicle now no longer accepts incoming TCP/IP packets.

http://illmatics.com/Remote%20Car%20Hacking.pdf

I think that is the definition of “not complicated”.

This does not necessarily mean they executed iptables -A INPUT -p tcp --dport 80 -j DROP.

Completely blocking a service is not security. If it were, they would just remove all the electronics and have a normal car with a manual gearbox; no hacking could affect it.

They may have simply implemented some firewall rules, port knocking, or who knows what else. Or they had to redesign a part of their system, and probably not a small part, since it concerned the vehicle’s communication with other entities. They should have done this before releasing the vehicle, but still… they did it.

Google’s Project Zero has set the bar at 90 days, FYI.