Payouts: what's a bug actually worth these days?


#21

This is awesome so far!

I’m noticing a lot of the comments are focussed on the ROI to the researcher… How do you folks think about the value of a vulnerability from the perspective of the program owner?

e.g.

  • I know of a company that built out an incredibly complex bug valuation schema, which factored in annualized loss expectancy, the revenue impact of every possible target, the likelihood of a threat actor exploiting a vuln (i.e. desirability to a threat, not just risk + likelihood), etc… They ran their program publicly for a few months, then basically threw the whole thing out.
  • For Google, they have a very simple but clear schema which factors in the priority of the target as well as the vulnerability (e.g. bugs in google.com or Gmail pay more than bugs in an acquired, integrated company's products).

I’d be very interested to hear what researchers and customers alike think about the program owner side of the equation.


#22

The current researcher pay-out in most Bug Bounty Programs is 10-50% of what a researcher would make in the black market.

Although this should not be encouraged, bug bounty programs must be more competitive in terms of pricing. Most of the programs that do pay out award the minimum amount to most researchers, unless they manage to root the server or change the entire database. This should not happen. I believe a more even distribution of rewards would benefit both programs and researchers.

In general, programs that advertise $10 - $1,000 rewards just to get more reports than the programs giving nothing, but don’t actually want to spend any serious money and end up giving everyone $10 - $100, don’t benefit the community in my opinion.


#23

Hey Daknob,

How do you guys estimate “black market prices”? Are you using a particular darknet site as a gauge?


#24

Well, mostly from experience. There are several sites on Tor and on the open Internet that will buy 0-days for cash or BTC. There are sites where you know your code will be used to harm people, but also sites that are legitimate (even HP’s Zero Day Initiative). I try to lean towards the second category, legitimate or semi-legitimate websites, and leave the rest out.


#25

DaKnOb - When you’re saying a researcher can make more on the black market, do you mean in general as a career choice, or per vulnerability?

If you’re talking in general then I tend to agree, but that’s kinda like saying “my day job should pay more because I’d make more money as a drug dealer”.

If you’re talking about per vuln, it depends a lot on whether your target is installable software or vulns in websites/networks… The two are priced very differently on the black market (and hosted vulns are far more difficult to sell), because hosted vulns tend to have a single point of remediation and a generally short TTL once they are being exploited.

I’m definitely not arguing against companies valuing, and therefore paying more for, vulns – but the reason should be the loss value created by the vuln itself, and the significant effort and intelligence you lot apply to rooting them out (not any sense of having to “compete” in the market with a black market buyer for a particular vuln).

Regardless, I’m keen to understand a little better what you meant there.


#26

Well… So far I have only studied this on a per-vulnerability basis. As a career choice it really depends on how many bugs you can find and where.

Let me clarify that I broaden the term “Black Market” to include any non-official channel to which a researcher can submit a bug. This could include a program buying 0-days (HP ZDI, VUPEN, etc.) as well as Tor sites or 1337-Day.

To counter your drug-dealer argument: from the researcher’s point of view, it’s just discovering and reporting vulnerabilities. The difference is who they are reporting them to. Of course, you could argue that being an ice-cream seller in a truck and a drug dealer also have similarities. Yes, this is indeed true, but one thing is legal and the other is not. In the exploit world, this is (as yet) different. Selling bugs to anyone is perfectly legal since they are considered products (past Jan 1st). Of course there may be export controls, but to date those are the only limits.

Regarding the per-vulnerability question: in the black market you get paid (as a rule of thumb) depending on how many people the bug can impact. If it is a bug in a web application uniquely written for a website with medium traffic, then yes, it may be worth submitting it to the vendor. If it’s not, and it’s in something generic, for example WordPress / Joomla, then this web app bug is worth way more. If you find a bug in installable software, then the differences can be huge. For example, a year or two ago, Mozilla was offering a T-shirt for Firefox sandbox escapes and remote code execution. The black market was offering several thousand dollars. People who do this for the money seem to have no choice here… You can take the money and then buy the T-shirts from the e-shop… :wink:

In terms of payments through official channels, an even distribution, as mentioned before, would be good to have in all programs. That’s not about me, I donate most of the money I get, but that’s about the researchers in general. You need to find a way to attract more people to submit through these programs, and for better or worse, most researchers are after the money.

One final note regarding the “Black Market”… Buyers there usually tend to have cash so if they want to buy bugs for an Exploit Pack or for a specific attack they have in mind (corporate, govt., etc.) they are not just buying one bug. They buy 10 or 20. They will usually get paid much more for their work than the cost of these bugs and they are in need of them desperately.


#27

Any plans of summarizing this post? I lost track after a while… It would be nice if this could be converted into a blog as well? @samhouston


#28

Good idea @anshuman_bh ! Lots of great discussion going on here, will see if there’s a way we could put a good summary together for folks :smile:


#29

@anshuman_bh Absolutely. We have plans to summarize it into a blog post, as well as to possibly continue the discussion in a 30-45 minute video chat with @jhaddix.
I’ll keep everyone posted when those things are coming out. :smile:


#30

I don’t mean to necro an old post but this kinda hit home this week in an interesting way.

I can’t go into too many specifics, as the particular program (not here, somewhere else) has all sorts of square hoops you have to wiggle through to say anything – so let’s just keep this very, very, very generic…

Came across a situation where, in the span of about 5 hours, I had verified that I could seize control of quite a lot of the program owner’s infrastructure:

  • Write access to public-facing website(s) (check)
  • Write access to public-facing applications (check, and check)
  • mysqladmin on database servers (check)
  • Mailgun API keys (check)
  • iOS code-signing cert (check)
  • AWS keys (check)

I literally could have laid waste to a company that (a) had no backups of anything, and (b) was dependent on their mobile application to make their money.

So now that we have the backstory – let’s work towards where the issue lay:

  1. Time I spent to find the bug – 2hrs
  2. Time I spent verifying that everything I had was indeed correct. – 3hrs
  3. Time I spent verifying everything was within scope, crafting PoCs, creating justification for everything outside of the PoC – 3hrs

So I have now burned an 8-hour day – and I haven’t even submitted…

  1. Submission #1 – 1hr
  2. Time arguing with an analyst over their interpretation of what I had submitted and its inherent value – 8hrs of back-and-forth emails.

– Let’s frame this. All the while, I explained the grave situation the client was in, as I did above. Unfortunately the analyst decided to mansplain to me how AWS worked, and then lectured me on how to write a good vuln report. Apparently ‘Change all the things… NAO!’ wasn’t business language.

So we are now at 17hrs. The program goes offline, I get a message confirming acceptance, and I get my payout. This payout was dramatically less than a similar payout I had gotten for almost the exact same vuln (though I didn’t have nearly the control or business impact on that one) on a different program not two weeks prior.

The response I got: “I wasn’t on the platform, and just took a guess at what we paid last time – my guess was wrong”. So there you have it – for this particular program, I was at the mercy of a man pulling numbers out of his arse. And in this case (not here – mind you, but in another program) – the dude was phoning it in.

So how much did I get? Right around 15% of the remaining budget. So, for a total ‘game over’ scenario – I handed over a super-critical-you-probably-should-fix-this-nao and walked away with about 15% of the kitty, where the alert(1) guys were walking away with around 3.5-5% per report. So, my game over was worth around 3 very nicely written alert(1)s.

My thoughts?

  1. As an industry, I think the bounty providers need to come clean with their products. What are the budgets, and how exactly do they decide what to pay out? Does the bounty provider do it? Does the customer do it? Who decides what our vulns are worth?
    • Values for bugs: The formulas that are used can be secret sauce – but the values need to be there. I have seen everything from ‘We just guess at what we give you’, to Bugcrowd’s own ‘Between $0 and Eleventy Billion’, to another program’s ‘handful of bitcoin and a cool shirt’.
  • Offer accelerators for ‘business impact’ – on the order of how much it would cost the business if the unfortunate event actually happened. Take my case above:
    • Total loss of application: Revenue lost per hour x number of hours the application was down. Take a percentage of that (say 5%), and that is what is paid out. That recognizes the culpability of the client and recognizes the researcher at the same time. It will sting, but these things are meant to sting – they are not a crutch.
    • Breach of customer database with PII: Use DataBreachCalculator – it can give you the sobering numbers to come up with an idea of the approximate cost if somebody (who wasn’t a security researcher) had succeeded in finding the same issue. In this case:
      Your average cost per record is $208
      Your average cost per breach is $3,641,944
    • There is a scene in Gone in 60 Seconds where Nicolas Cage is trying to explain to Raymond Calitri how simple the math is on the value of Eleanor – it goes something like ‘take the value of the car, subtract the amount to fix it – and you come up with its current present value’. In this case, a simple percentage (say 5%) off the average cost of the breach is paid to the researcher for accomplishing this task. Why? Because that is 95% cheaper than what would happen in the real world. That’s what we are trying to simulate, no?
  • Budgets: I have seen some programs that list how much is in the kitty and how much has been expended. This is so important as a researcher – if I know a program is low on gas – I’m not going to submit a high dollar vuln – I may not get paid.
  2. We need an “oh shit” button – we can call it “GameOver” – when you find something of that magnitude, you get your stuff together, you PoC – and you hit the button.
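The two accelerator formulas above can be sketched in a few lines. This is just a back-of-the-envelope illustration – the 5% rate and the breach-cost figure are taken from the examples in this post, and nothing here reflects any real program’s payout logic:

```python
# Illustrative sketch of the "business impact" accelerator idea above.
# The 5% rate and the breach-cost figure come from the examples in this
# post; they are assumptions, not a real pricing model.

ACCELERATOR = 0.05  # the "say 5%" share paid to the researcher

def downtime_payout(revenue_per_hour: float, hours_down: float) -> float:
    """Payout for a total-loss-of-application finding:
    a fixed share of the revenue that would have been lost."""
    return ACCELERATOR * revenue_per_hour * hours_down

def breach_payout(avg_breach_cost: float) -> float:
    """Payout for a PII-breach finding: a fixed share of the
    estimated breach cost (e.g. from DataBreachCalculator)."""
    return ACCELERATOR * avg_breach_cost

# Using the average breach cost quoted above ($3,641,944):
print(f"${breach_payout(3_641_944):,.2f}")     # 5% of the breach cost
print(f"${downtime_payout(50_000, 24):,.2f}")  # hypothetical 24h outage
```

Even at 5%, the breach-cost payout lands around $182k – which is the point of the Nic Cage analogy: it stings, but far less than the real event would.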

The Oh Shit Button

What happens?

  • The rest of the available budget gets frozen
  • the program goes offline
  • the analyst starts verifying

If indeed it is game over – the remaining budget is cashed out to the researcher. If it is bogus – some punitive action takes place and we go back to business as usual.

What does this do:

  • It creates a race condition between the alert(1) “slow-loris” guys who are farming vulns that have little to no real business impact – and the guys who are applying practical exploit techniques to bring value to these programs. (Not saying the guys finding XSS-for-days don’t provide value – but I don’t think Chase was breached because of alert(‘XSS’);)
  • It incentivizes PoC || GTFO – no longer are we penalized for taking the time to verify, create a PoC, and walk through a complex chain of exploits to show real business impact. Today, stuff like that falls through the floor:
    • Getting through the analyst can be tough on complex multi-step pwnage, and not worth the time invested
    • Scope shields (more on that later)
    • alert(1) is easy, and pays – why do anything more? It just takes time – and you may get paid less.
  • Get rid of scope shields. I find it laughable at times that the exclusions and ‘out of scope’ items in some briefs are longer than the brief itself. Sometimes they don’t even make sense:

‘all hosts inside *.foo.bar.io are in scope’
‘all hosts inside *.qa.bar.io are out of scope’
Seems legit, right? Except –
every host inside *.foo.bar.io is a CNAME to a host in qa.bar.io
So what’s in scope?
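A literal reading of those two rules can be mechanized to show the contradiction. The hostnames and CNAME records below are made up for illustration – this is a sketch, not a real scope checker:

```python
# Sketch of the scope contradiction described above. All hostnames
# and CNAME records here are hypothetical.

IN_SCOPE = ".foo.bar.io"     # 'all hosts inside *.foo.bar.io are in scope'
OUT_OF_SCOPE = ".qa.bar.io"  # 'all hosts inside *.qa.bar.io are out of scope'

# Hypothetical DNS: every in-scope name is a CNAME into the
# out-of-scope zone - exactly the situation in the brief.
CNAMES = {
    "app.foo.bar.io": "app01.qa.bar.io",
    "api.foo.bar.io": "api01.qa.bar.io",
}

def scope_verdict(host: str) -> str:
    """Apply the brief's two rules literally; return 'in', 'out',
    or 'ambiguous' when both rules end up matching the same host."""
    resolved = CNAMES.get(host, host)  # follow the CNAME, if any
    named_in = host.endswith(IN_SCOPE)
    lands_out = resolved.endswith(OUT_OF_SCOPE)
    if named_in and lands_out:
        return "ambiguous"  # the brief contradicts itself here
    return "in" if named_in else "out"

print(scope_verdict("app.foo.bar.io"))   # ambiguous
print(scope_verdict("db01.qa.bar.io"))   # out
```

Every “in scope” host comes back ambiguous, which is why the brief needed 3 days and 12 emails to untangle.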

That particular vexing issue took 3 days and about 12 emails to sort out. Why? I can only say that a lot of people on those threads were making decisions without an understanding of the technology.

The same can be said for bounties that exclude certificate pinning as a valid vuln, but then won’t accept any vuln relating to personal/private/protected data being sent in GET over TLS.

You can’t have it both ways.

It seems the more selective you make your pool of researchers, the more you should relax the list of OOS items. They should be items that you consider a business-accepted risk – not a list of junk you ‘don’t want to pay for / fix’.

So…uh…in short…lol…
Pay a bounty commensurate with the realistic business impact and the overall craft that went into producing it. Give us a jackpot button for ‘oh shit’ moments. And by all means stop the stupid scope shielding…


#31

Hey geekspeed,

First, who cares how old the thread is. Post away. This was really interesting and super relevant to the topic.

I did have a few questions for you, mainly about transparency of budget. I totally agree and understand that budget should be very transparent, which is why we make sure that for our bounties the full pool and/or high reward, etc., are made clear from day 1. We want to make sure you know what you’re working on.
You also mentioned that if you see the budget is smaller, you actually wouldn’t submit a high-value (possibly high-priority) bug because you wouldn’t get paid as much – I’m assuming as much as on another, higher-budget program:

Budgets: I have seen some programs that list how much is in the kitty and how
much has been expended. This is so important as a researcher – if
I know a program is low on gas – I’m not going to submit a high
dollar vuln – I may not get paid.

Wouldn’t this actually incentivize programs NOT to disclose their budget and be transparent?

Thoughts?

K


#32

@krodzon it’s just the opposite – it gets the business guys on their heels to make sure that there is always money in the pot. It’s a lot easier of a conversation up front with “Hey, you’re under the waterline, you might want to top up – you can only afford X number of high quality submissions before you are out…” – versus “Guy has a really awesome 0day – but you don’t have the money in the account to pay him – so we need you to pony up before we release it” – now your conversation has turned.


#33

Since the last post we’ve launched a bunch of public programs (with praise, criticism, and meh as reactions to the starting rewards for basically every single one of them), and released and updated the Defensive Vulnerability Pricing Model and the Bugcrowd Vulnerability Rating Taxonomy – both designed to push this conversation between companies and researchers towards better alignment. That, and we’ve started to see a trend of more critical categories joining the bug bounty movement; vehicles and financial services being two of them.

Is it helping? What has changed out there since then? It’d be interesting to see how opinions have evolved/changed/stayed the same here over the past few months.


#34

There needs to be better transparency on these payouts. I just had a full RCE paid out at an astonishing 4x a non-persistent XSS. This RCE was on an IoT project with working code, fully documented, and chained multiple vulnerabilities together across a cloud app and an IoT device. How is this worth 4 alert(1)'s? I’m so annoyed I had to sign up for these forums just to post this…


#35

Hi popeax,

Without knowing the program it is hard for me to address your specific scenario, but I’m happy to talk with you directly via support@bugcrowd.com about your rewards in this instance.

Generally speaking, rewards are based on severity. A P1 in Bugcrowd’s Vulnerability Rating Taxonomy will pay significantly more than a P4, but both amounts are ultimately determined by the customer’s reward range – you’ll see a greater difference between P1 and P4 rewards in a program with a wide reward range like $100-$5000 than in a program with a range of $100-$1000.

Another factor that comes into play is whether this was an ongoing or a flex program – the flex reward model is documented in a blog post, so I won’t go into huge detail here, but again, happy to answer questions.
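One way to picture the range effect described above: spread the four priority tiers evenly across a program’s reward range. This is a hypothetical sketch of that point, not Bugcrowd’s actual payout formula:

```python
# Hypothetical sketch: place priorities P1..P4 evenly inside a
# program's reward range, P1 at the top. Not a real payout formula.

def tier_reward(priority: int, low: float, high: float) -> float:
    """Linear interpolation across the four tiers (three gaps)."""
    if not 1 <= priority <= 4:
        raise ValueError("priority must be 1..4")
    step = (high - low) / 3
    return high - (priority - 1) * step

# Compare a wide range ($100-$5000) to a narrow one ($100-$1000):
for low, high in [(100, 5000), (100, 1000)]:
    rewards = [round(tier_reward(p, low, high)) for p in (1, 2, 3, 4)]
    print(f"${low}-${high}: {rewards}")
```

Under this toy model the wide range pays a P1 fifty times a P4, while the narrow one pays only ten times – same taxonomy, very different incentives.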


#36

I think there are many factors when determining payouts. My answers are:

  • Does the size or reputation of the company (or the particular target being exploited within a company) play a role in the value of a bug?
    A) I think size and target do matter. Size of the company matters because the big guys (i.e. Samsung and the likes) have more capital to pay out for bounties as well as there are MANY Samsung devices in the wild. More consumers means more risk means a bug could impact a large number of end users vs. a small time company. The for similar reasons, the target device does matter. Typically if flagship devices are vulnerable i think it should payout more than older uncommon devices that might not have received any updates for example. People tend to buy NEW devices.Myself for example seems to have a new device at least twice a year sometimes more and thats only counting my personal cell phone. I’m sure theyd want to squash bugs right away too so they can patch it if its serious. I think higher payouts for devices within the last 2 years is fine then little less for older than that. Thats at least two generations of a device seeing as how new ones are released every year.

  • Should a researcher get paid more for a bug that took hours or days to find than one that only took minutes?
    A) Unfortunately, I don’t think time spent should impact the payout at all. It should be about the impact the exploit/vulnerability has. I wouldn’t pay less for a zero day impacting millions of devices if they found it in an hour vs. a year.

  • Does a quality writeup increase payout? What about identification vs. full exploit of an issue?
    A) I don’t think the quality of the write-up should have an impact on the payout itself. I do feel they can deny or reject the bug if it’s not properly disclosed. Typically you should also provide a PoC for them to test; that’s just common courtesy. I feel the bug itself and its severity/impact are what the payout should be based on. If you can’t explain the bug or provide a PoC, then I feel they could just reject it and you will have to get everything together properly and resubmit.

I don’t know exactly how every company comes up with their payouts. I just submitted my first to Samsung. It is a high severity. Basically, I can root or have anything executed in INIT context on boot, lol. I mean, control INIT, control everything. But it was only for msm8996 Android 7.0 Sammy devices, so it is limited in scope. They are paying me $2,500. I’d like to think that if it was on 7.0/7.1/8.0/8.1 and msm8996/msm8998/sdm845, etc., they’d pay out more because it’d be a larger impact.

That’s just my 2 cents :slight_smile: