Payouts: what's a bug actually worth these days?

This is a hot-button issue that we see discussed on Twitter, forums, and blogs a lot, and one that we have been discussing internally ever since Bugcrowd first started:

“What is a bug actually worth, and why?”

Share your response below

Bug bounties are evolving rapidly, and your input can help shape the way companies that run them approach this topic. In our industry there are several successful bounty programs that are high reward, but the reality of any marketplace is that the seller (the researcher) will want higher payouts, and the buyer (the target) will want to pay less. The goal here is to have both sides understand the “what” and the “why” of bug pricing, and for us all to learn a few things.

Some things to ponder when replying:

  • Does the size or reputation of the company (or the particular target being exploited within a company) play a role in the value of a bug?

  • Should a researcher get paid more for a bug that took hours or days to find than one that only took minutes?

  • Does a quality writeup increase payout? What about identification vs full exploit of an issue?

It’s one thing to drop an idle comment on Twitter that “$COMPANY’s payouts are too low/high”… It’s another thing entirely to participate in a conversation that helps make sure everyone is on a level playing field.

Let us know what you think!

2 Likes

Definitely a fantastic question. I don’t know that I have the right answers, and I’d be willing to discuss (and potentially even have my mind changed), but here are my thoughts:

  • Does the size or reputation of the company affect the bounty? This comes down to how much the company truly cares about security, but I would expect that BoA would pay more for a SQLi vulnerability that allows me to dump a million customer records than a mom-and-pop shop with 50 customers total would.

  • Should a researcher get paid more for a bug that took hours or days to find than one that took only minutes? No, there are too many factors involved to judge time/effort/etc… You are paying for results.

  • Does a quality writeup increase payout? Yes, but not by a significant amount. Not everyone is a native English speaker or is great at putting thoughts/actions into words.

  • Identification vs Full Exploit? No, if a researcher thinks they will get more money by exploiting the vulnerability to the fullest, you may be encouraging them to go beyond identification. Do you really want me going MSSQL SQLi -> Command Injection -> Escalate to SYSTEM -> Hashdump -> Lateral movement -> DA -> ntds.dit? I didn’t think so :slight_smile: Or what if I just get command execution, download the source code, and then find more bugs (perhaps that is in scope, perhaps it’s not).

Anyway, just my thoughts, and I’m interested in others’ as well.

6 Likes

Essentially, I think it should always be up to the program holder to decide.

However, I would be more inclined to test in a program that:

  • States rewards per bug type or gives me an approximate figure some other way.
  • Doesn’t have too low a minimum bounty. In my opinion, <$150 isn’t worth writing a report for (keeping in mind the risk of a dupe).
  • Doesn’t randomly overpay/underpay. What I mean by that is that some programs don’t seem to know how to assess risk properly.

If I had a program I would:

  • State that the best report (not the first) will get a reward
  • Start with a lower bounty to weed out the low-hanging fruit (and before even running a program, have tested through everything and have incident response, logging, etc. in place)
  • Raise amounts to a level where a lot of people will get interested (Facebook, Google etc. did this)
  • Have optional swag/other. Many think it’s just silly but personally I think it’s fun to have something unique like Facebook’s Bug Bounty credit card or Tesla’s coin. Also, with more and more programs popping up, we need something special to stand out and attract people, right?
  • Answer in a reasonable amount of time. If that wasn’t possible, I would cap the number of researchers and tell whoever gave me the budget that I need more moneys.

8 Likes

I think a bug’s worth is something that is ultimately set by the program owners, so having this conversation after a bug has been rewarded doesn’t make a whole lot of sense to me in the first place.

Having said that, what really irritates me is how programs don’t honor the minimum bounty they set up front and constantly try to lower or change it once the program begins. That to me shows that the program owners didn’t think through the consequences and weren’t prepared to begin with. Not to name anybody, but I am sure we have all seen this happen quite often.

Coming back to the topic, I think the following factors significantly increase a bug’s worth:

  • Quality write-up as mentioned. This should include the Description, Risk Impact, Steps to Reproduce and anything else that a layman/executive would need to understand the technicality of the bug and determine that the bug is worth fixing.

  • Showing the impact by demonstrating a real-world exploit. I think this sort of falls under the first point, but it is important to highlight because it means a lot of back and forth between the researcher and the program owners is avoided. I personally try to record videos with a voiceover; I have only heard good feedback, and I have not had to communicate more than once after the initial report.

  • How mature the company/program is. Facebook is the perfect example here: it pays a $500 minimum and is probably one of the best bounty programs out there. The kind of responses/clarifications I have received on some of my submissions really shows that the folks there know what they are talking about.

  • The research done. This can be argued both ways. A bug found within a few minutes can be as impactful as a bug that took days to research. What eventually matters to the company is how big an impact that bug can cause so if a bug is rewarded based on the amount of research done, I feel that should be done by the company as a token of appreciation more than anything else.

  • One important point is also to be consistent in how a particular program rewards bugs. I have seen cases where two similar bugs were rewarded differently. I have also seen cases where the minimum was set to $50 for a single bug; I reported at least 5-6 different bugs in one report and was still rewarded only $50. Since researchers are at the mercy of the programs to be rewarded and treated fairly, inconsistent rewards can be really demotivating.

I am sure there are more factors that go into determining a bug’s worth, but the above are some things that come to mind when I think about this topic.

Cheers!

3 Likes

Only in so far as the larger the company, the more cash they likely have to put towards the program. I’d expect large banks to be paying out more than smaller start-ups. As long as the value of payouts is clear at the start, people can choose which campaigns they take part in. I guess most would go for the high value/large firm, but then there may be more cash to be made going for the smaller firms, who may have more issues due to a lack of in-house or previous testing.

No, getting paid more money for more work is like a salary; this is a different model of making money. This is why I don’t really participate in bug bounties: I personally like to know that I’m getting paid for the work I do regardless of the number of findings.

It shouldn’t, but I bet it does. I’d expect that if someone sent in a nicely written, step-by-step proof of concept, it would be looked on more favourably than a badly written few lines and a lump of code.

Depends on the program and what they ask for, the vulnerability, and what you mean by full exploit. The program should dictate what is expected and testers shouldn’t go beyond that. If the vulnerability is that files can be deleted, then even with full exploitation allowed, common sense should rule. Finally, you can argue about what a full exploit is: with SQLi, some would claim a SQL error when a ’ is dropped in a field is a PoC, some would say you have to extract data or at least run commands before it is a PoC, and some would go as far as saying that using the SQLi to get command execution on the box is what is needed. It comes back to the rules of the program and what it says it pays out for. If it isn’t clear, then some testers may expect to get more cash for command execution through SQLi than if they just got the error message, when the client is only prepared to pay a single price regardless.

Overall, I think that the value of a security vulnerability is based on the impact that it has against an organization. I think my answers to the questions below can more accurately describe my thoughts.

Does the size or reputation of the company (or the particular target being exploited within a company) play a role in the value of a bug?

Absolutely. A technical flaw doesn’t have inherent value – it’s the impact that the bug has that creates worth. For example, finding SQL injection in a computer science student’s first-year web application is a lot less valuable (to anyone) than finding the exact same flaw on a major financial institution’s admin page.

Should a researcher get paid more for a bug that took hours or days to find than one that only took minutes?

I believe researchers should be paid by the impact of the finding in relation to the organization. If you have Heartbleed on your web servers, it’s going to take seconds to find – but that’s still more valuable to the organization than reflected XSS on an internal-only application. If organizations don’t want to pay for “low hanging fruit” findings, perhaps they should remediate them!

Does a quality writeup increase payout? What about identification vs full exploit of an issue?

Quality write-up vs. a write-up that is bad enough that the issue can’t be reproduced? Sure. Identification of, say, blind SQL injection vs. full exploitation is a little trickier. I think that as long as the impact of the vulnerability is demonstrated (or described) accurately, the pay should be the same.

1 Like

Here’s some info I’ve personally communicated to folks when they ask for payout guidance on PUBLIC programs. Still tweaking, and this is unofficial, so feel free to rip this apart.

RESEARCHER TIERS:

Tier 1 researchers: Regularly find high-priority issues (RCE, blind SQLi, XXE, etc.), and will also report moderate issues as identified
Tier 2 researchers: Occasionally find high-priority issues (RCE, blind SQLi, XXE, etc.), report many moderate issues
Tier 3 researchers: Infrequently find high-priority issues (RCE, blind SQLi, XXE, etc.), mostly report moderate and low-impact issues (XSS, CSRF)

GUIDANCE:

To attract Tier 1+2+3 researchers: (if you’re also doing best practice, regular security audits, internal reviews, etc)
Aim for a payout of $600+ avg
Suggested payout range: $500 - $2500

To attract Tier 2+3 researchers: (if you’re also doing best practice, occasional security audits)
Aim for a payout of $350 avg
Suggested payout range: $100 - $1500

To attract Tier 3 researchers: (if you’re at the beginning of the security maturity model, you might have an annual pentest)
Aim for a payout of $250 avg
Suggested payout range: $50 - $500

Also note that this leaves out the high end of programs (Google, Facebook, etc) which want to attract the best of the best. Probably best to plan for around $1000 average if you want to get those folks interested.
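
If it helps, here’s the same guidance collapsed into a rough lookup table in Python. The tiers and dollar figures are the ones above; the dict shape, names, and helper function are purely my own illustration, not anything official:

    # Unofficial sketch: map the researcher audience you want to attract to the
    # payout guidance above. Figures come from the guidance; structure is mine.
    PAYOUT_GUIDANCE = {
        "tier 1+2+3": {"target_avg": 600, "range": (500, 2500)},  # mature: regular audits, internal reviews
        "tier 2+3":   {"target_avg": 350, "range": (100, 1500)},  # occasional security audits
        "tier 3":     {"target_avg": 250, "range": (50, 500)},    # early maturity: annual pentest only
    }

    def suggested_payouts(audience: str) -> dict:
        """Return the rough target average and payout range for a researcher audience."""
        return PAYOUT_GUIDANCE[audience]

    print(suggested_payouts("tier 2+3"))  # {'target_avg': 350, 'range': (100, 1500)}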

1 Like

When it comes to public bounties I don’t spend a lot of time on them, because even though they usually pay, the amount can be insulting. I know, I know, if you’re getting paid you shouldn’t be insulted, but think about it. If you demand I make two separate exploits for you, but say you are only paying for one of them, then I am only making one proof of concept. If you went to a restaurant and said you only have enough money for two pancakes but demanded four, they’d only make two or they’d kick you out.

The bounty that demanded two proofs of concept for the price of one only paid $50. The issue wasn’t CSRF; rather, any signed-up user of this product could make malicious iframes or upload malicious images. I think that’s worth more than $50.

I feel that it’s pointless to look for bugs that have a greater impact, like RCE, SQLI, etc. if you don’t even know if the company will pay an adequate amount. Also, it’s extremely insulting when companies add in terms and conditions saying they will own your exploit if you submit it and you can’t use it anywhere else. A lot of bugs can be used across the board, especially when it comes to open source bugs. It’s a slap in the face when the bounty doesn’t pay anything, not even points. If it doesn’t pay well, then I just ignore it.

It would be extremely useful for companies to have a minimum amount for each type of bug, that way you know the minimum you can make per bug and if it’s worth your time looking for more severe bugs. It would also be useful to have a ratio of how often they actually pay.

The problem with what I proposed above is that the company may try to downgrade the bug. In the one issue I talked about above, the company thought Same Origin Policy would protect them from malicious iframes. It took a lot of explaining to get through to them, and refining the proof of concept over and over, until I just made it so that anyone who accessed their test site was redirected to a picture I made of a skull and bones saying “Argh, I love bounties” that’s hosted on my company’s site. I still believe that’s worth more than $50, since any user could do that, but I just won’t work with that company again.

What I am trying to say is that when you don’t know how much you will get paid, you don’t have as big an incentive to work on the project. I view private bounties differently than public bounties.

When I work on private bounties I work a lot harder than on public ones. There was one project that had a high minimum payout. I knew the company well, and their reputation for not paying. I decided to make three proofs of concept and put a lot of time into one of them. While every exploit met the conditions of the bounty and broke the security exactly as they asked, only one was worth the amount they were offering. Sadly, the company didn’t want to pay for any of them. I wasn’t thrilled with that, but the bounty still had a good ending thanks to Bugcrowd. I can’t say any more than that, because what happens in a private bounty stays private.

Also, XSS can sometimes be quite bad for a company, especially if it defaces the entire site. If an XSS defaces the entire site or a subdomain, it’s worth more than the average $100, since a defacement is just a simple example; you could have malicious code load on every page, etc. Also, how do we rank how much chained exploits are worth? Is each part of the chain worth the minimum for that exploit, added together? Or are chains worth more?

These are just a few things I have to say about payment per bug. I may add a longer post later.

1 Like

The larger companies often have more users or customers reliant on their products’ security, so to me it makes sense that they should spend more on securing their apps for their customers. I imagine news of your company being breached, scattered across the web, has a detrimental effect on profits or trust. If you’ve more to lose, you’ve more reason to incentivize security researchers to test it.

I think there should be a slight bonus for bugs that have scope beyond a particular bounty program, as in unknown vulnerabilities in products that are used throughout the web and not just by the target organisation. I’m sure there are some researchers who have discovered high impact 0days in products they didn’t know were being used. It is likely more valuable for a researcher to sell this via other means instead of disclosing it as part of a bounty.

I think for bug bounties a fixed minimum is a good thing, be that X amount for an XSS, X amount for an RCE, etc., regardless of how long it took to reach a PoC.

It can be hard to categorize some of the issues that are found, so there should be a model of sorts, developed and shared with researchers, that fairly determines an expected reward: a means of valuing the criticality of an issue. I’ve seen some bugs that affect other users (IDOR, for example) get low payouts, whereas if an attacker were to script/automate attacks like this they could potentially destroy a whole platform, so they should be considered more critical and worthy of greater rewards.

It’s understandably hard to come up with fixed figures that account for all the variables; it’s a balancing game between the information at risk, the business impact, the difficulty or complexity of the bug, and the likelihood of it being abused.

Quality write-ups help move things towards a fix quicker. I do like the idea of slight rewards for those who take the time to thoroughly report the harder-to-grasp issues, but at the same time, if it’s a common bug, say reflected XSS or a sensitive file being exposed, I don’t think the report needs to be anything special; “HEY LOOK HERE” should suffice.

1 Like

I’m going to put both my hats on in a response to my own post! Ha!

As an organization:

Hey Jason! Great post!

To answer your bullets:

  1. I think that externally a lot of people confuse reputation with budget (talked about more below). As for the target, I do think issues impacting systems that give away PII should be considered worth more. They are definitely where we want researchers to poke.

  2. No, I think impact is more suited to rate bugs.

  3. YES, I do think it should increase payout, or even count as “1st in” when an inferior write-up has been submitted. There is the competition side of these programs, but being the “best dressed” reduces costs for the team validating the bugs and therefore impacts the bottom line. Identification vs full exploit? I think identification is fine as long as it’s demonstrated to prove the fullest impact (that way I can justify paying the researcher more).

(Just some more general notes below)

In my past experience, simply put, a bounty payout is a function of security budget. Not only is security budget limited, but it is very hard to justify in terms of return on investment.

For smaller companies this usually means lower payouts. You can think of these companies as the type who would use only an automated scanner to do their security assessments. They just don’t have the money to contract a respected pentest firm yet. They do, however, recognize that this is a better model for security testing and also want to engage the security community responsibly. I feel for these companies when I see complaints about their payouts because of this. Unless you are Google or Facebook, your brand or product may be top notch, but that doesn’t always mean your budget is large.

Even with large companies like the aforementioned, think of how many assets they control and have to secure. Sometimes hundreds, even thousands, with multiple tiers of attack surface. This makes scope and minimum payout a really tough decision. You really don’t want to open up a goodwill program to a community of researchers hacking you, and then not be able to pay them because you are over budget. On top of this there is also staffing, and possibly needing a platform to validate the bugs and manage the program efficiently (if you don’t use a service like Bugcrowd).

Let’s talk numbers:

I’ve seen some companies break it down from a financial aspect. Their equation is pretty simple:

  • Average high quality consultant rate in the US = $60-$80/hr (that is a
    $125k-$175k salaried employee working at 40hrs/week before taxes)
  • Average time spent on a bug = 1-2 hours
  • So for them that’s where the minimum starts, $120-$160.
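
To make that math concrete, here’s a tiny back-of-the-envelope sketch in Python. The rate and hours are the figures quoted above (the $120-$160 floor corresponds to roughly 2 hours at either end of the rate range); the variable names are just mine, not an official formula:

    # Back-of-the-envelope sketch of the "consultant rate x time per bug" floor.
    # The figures are the ones quoted above; nothing here is an official formula.
    rate_low, rate_high = 60, 80   # USD/hr, average high-quality consultant rate
    hours_per_bug = 2              # assumed average time spent on a bug

    min_low = rate_low * hours_per_bug    # 60 * 2 = 120
    min_high = rate_high * hours_per_bug  # 80 * 2 = 160

    print(f"suggested minimum payout: ${min_low}-${min_high}")  # $120-$160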

Then they go into risk classifications of bugs for more severe issues and let the engineers roundtable on the payout (like Google and Facebook do). I don’t have this data but our internal structure at Bugcrowd (and our recommendations to clients) looks something like this priority listing (P1 being the highest payout):

  • P1 - CRITICAL Vulnerabilities that cause privilege escalation on the platform from unprivileged to admin, allow remote code execution, enable financial theft, etc. Examples: Remote Code Execution, Vertical Authentication bypass, XXE, SQL Injection, User authentication bypass, some IDORs, severe logic flaws, etc.

  • P2 - HIGH Vulnerabilities that affect the security of the platform including the processes it supports. Examples: Lateral authentication bypass, Stored XSS, some CSRF depending on impact.

  • P3 - MED Vulnerabilities that affect multiple users and require little or no user interaction to trigger. Examples: Reflected XSS, Direct object reference, URL Redirect, some CSRF depending on impact.

  • P4 - LOW Issues that affect singular users and require interaction or significant prerequisites (MitM) to trigger. Examples: Common flaws, Debug information, Mixed Content, Lack of mobile encryption, etc.

  • P5 - BIZ ACCEPTED RISK Non-exploitable weaknesses in functionality and “won’t fix” vulnerabilities. Examples: Best practices, mitigations, issues that are by design or deemed acceptable business risk to the customer such as use of CAPTCHAS, Code Obfuscation, rate limiting, SSL Pinning, etc.
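
For what it’s worth, here’s how that priority listing might look as a simple data structure that scales a program’s minimum payout by priority. The P1-P5 buckets mirror the listing above, but the multipliers are made-up placeholders for illustration, not Bugcrowd’s actual numbers:

    # Illustrative only: the P1-P5 buckets mirror the listing above; the
    # multipliers are hypothetical placeholders, not an official reward scale.
    PRIORITY_MULTIPLIER = {
        "P1": 10.0,  # critical: RCE, SQLi, vertical auth bypass, ...
        "P2": 5.0,   # high: stored XSS, lateral auth bypass, ...
        "P3": 2.0,   # medium: reflected XSS, direct object reference, URL redirect, ...
        "P4": 1.0,   # low: debug info, mixed content, lack of mobile encryption, ...
        "P5": 0.0,   # accepted business risk / won't fix: typically no payout
    }

    def suggested_reward(priority: str, minimum_payout: float = 100.0) -> float:
        """Scale a program's minimum payout by a (hypothetical) priority multiplier."""
        return minimum_payout * PRIORITY_MULTIPLIER[priority]

    print(suggested_reward("P1"))  # 1000.0 with a $100 floor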

Taking the leap:

There are also issues pertaining to handing over trust to these types of programs. I like @avlidienbrunn’s mention of starting small (or private, imo) to get the low-hanging fruit and test incident response, etc. Then, after that, you can increase payouts. Most large bounty programs have done this since their inception. That said, I still think a $100 minimum price floor should be in place.

My second reply, as a researcher:

Hey Jason! Hope you are well!

Hot button issue indeed. To cover your questions:

  1. On reputation, not usually, not unless it’s Google or Facebook. There is, however, some stigma attached to being a successful company in the public eye and offering low bounty rewards. On targets, yes, PII is king. If I can get PII, it should impact payout.

  2. No.

  3. No, “1st in” is the current model, as long as the bug report identifies the minimum needed to reproduce the bug. You have to remember, I’m competing with some pretty 1337 hackers in an epic race.

Bug payout, for me, is all about impact. If I can get really sensitive data about multiple people through an easy IDOR, I still expect a higher payout. If I have an RCE on a mostly marketing site, I’d like the chance to show them I can pivot from that system to other more critical systems, or demonstrate higher impact.

I do have strong feelings about participation though.

If you are in it for the money: (which I am def not, it’s more sport to me. I play because I enjoy it)

  • Participation in a BB program is a time investment on my side. I could find absolutely nothing, so the minimum payout would be an important number to me. As a gut feeling, anything lower than $100 seems not worth it to me.

  • One other consideration would be how much competition I expect in the program. In a private program I know there are fewer hackers, so I can feel better about investing time. There is a hurdle to get invited to these, but showing quality reports and finding a few bugs for free (a time investment on my side) seems worth it in the long run.

  • In bounties that offer 1st, 2nd, and 3rd place prizes (Bugcrowd calls these Flexes), I am even more interested, as I can get a Google- or Facebook-level payout. It’s nice to see these at around $2-5k.

  • This is an important one. Skillsets matter. Hacking an IoT device, evading a firewall product, in-depth mobile hacking, code audits, bin reversing, etc., are all harder (and require more time to set up) than normal web application hacking. I’d urge anyone running a program to consider this. From the consulting side, you can usually charge double or triple your normal consultant rate for this type of work.

Those are my notes for now, I might come back later with some more.

1 Like

That’s very unfortunate. I would make sure to submit them all individually or explicitly tell the program owner that it is a chain of vulns and you are submitting those 5-6 in one report but expect them to be treated as individual submissions (the reason for using one report being to show the highest impact).

I don’t think this is a good equation at all. First, I think $60-80 is low for a high quality consultant (note: not salary). Second, average time per bug is just a baseless assumption? Or where does 1-2 hours come from? Third, I think it’s unfair to compare a subscription model (consulting/salary) with a one-time-payment (bounty) model like that.

To give an example: it’s as if paying for one hour of Spotify would have the same cost as a 30-day Premium subscription divided by 720 (the number of hours in 30 days).

For me, one of the reasons to do bug bounties is the fact that the $/minute could be lucrative. It’s a gamble, and if companies start out with the mindset of having equal $/minute in bug bounty and consulting, I might as well stick to consulting and remove all risk of dupes/not enough bugs found.

Here’s what I think would be a better equation (albeit harder to implement IRL):

  • Our last pentest cost $5600 ($70/hr for 80 hours) and 11 issues were discovered
  • 5600 / 11 = 509
  • We should pay on average $500
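
Here’s the same idea in a couple of lines of Python, using the exact numbers from the list above (the variable names are just mine):

    # Sketch of the "price off your last pentest" idea above; the figures are
    # the ones quoted in the list, the variable names are illustrative.
    hourly_rate = 70        # USD/hr charged on the last pentest
    hours_billed = 80
    issues_found = 11

    pentest_cost = hourly_rate * hours_billed   # 5600
    avg_payout = pentest_cost / issues_found    # ~509

    print(f"target average payout: ${avg_payout:.0f}")  # ~$509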

PS. Not criticising you, just the mindset of consulting==bugbounty DS.

No problem at all Mathias, this is exactly why I started this thread. That equation isn’t mine btw, just something that was relayed to me.

As for the 1-2 hours, I’m not sure. As a researcher I can say that I usually have submitted a few bugs within the 1-2 hour window, though usually not the most critical ones. The longer I’m exposed to the app, the more likely I am to find a critical bug.

I agree here.

I’m not sure this works well either. The quotes were minimums, meant to scale greatly depending on impact. In order to cover that budget ($5600), how would you break it down between awards for low-level issues vs critical vulns? Do you have a model in your head?

Obviously critical > medium > low, but other than that I don’t really know. Another thing that both suggestions miss is the cost of hours spent triaging/paying for a platform. As with many things, it’s easier to say how it shouldn’t be than to say how it should be :slight_smile:

Actual contract work here varies (I think; I actually have little experience on that side), I’d say $90-140/hr? I’d be interested to hear more from people who do a lot of contract work here and abroad. Would def change that equation.

I think $90 - $140 is more accurate.

Are we talking about what the consultant makes or the billing rate their consultancy charges the customer? I have seen billing rates as high as $325/hour for experienced (8-10 years exp) pen testers. :moneybag:

I’m talking purely freelance.

This is a great point @kymberlee. I totally forgot that contractor hourly is NOT the same as what is billed to the client. $200-$375/hr are numbers that sound right to me there.