I don’t mean to necro an old post, but this kinda hit home this week in an interesting way.
I can’t go into too many specifics as the particular program (not here, somewhere else) has all sorts of square hoops you have to wiggle through to say anything – so let’s just keep this very very very generic…
Came across a situation where, in the span of about 5hrs, I had verified that I could seize control of quite a lot of the program owner’s infrastructure:
- Write access to public facing website(s) (check),
- Write access to public facing applications (check, and check),
- mysqladmin on Database servers (check),
- Mailgun interface api keys (check),
- iOS Code Signing Cert (check)
- AWS Keys (check)
I literally could have laid waste to a company that
a. had no backups of anything, and
b. was dependent on their mobile application to make their money.
So now that we have the backstory – let’s work toward where the issue lay:
- Time I spent to find the bug – 2hrs
- Time I spent verifying that everything I had was indeed correct – 3hrs
- Time I spent verifying everything was within scope, crafting PoCs, and creating justification for everything outside of the PoC – 3hrs
So I have now burned an 8hr day – and I haven’t even submitted…
- Submission #1 – 1hr
- Time arguing with an analyst over their interpretation of what I had submitted and its inherent value – 8hrs of back-and-forth emails.
– Let’s frame this. All the while, I explained, as I did above, the grave situation the client was in. Unfortunately the analyst decided to mansplain to me how AWS worked, and then lectured me on how to write a good vuln report. Apparently ‘Change all the things…NAO!’ wasn’t business language.
So we are now at 17hrs. The program goes offline, I get a message confirming acceptance, and I get my payout. This payout was dramatically less than a similar payout I had gotten not two weeks ago for almost the exact same vuln (though I didn’t have nearly the control or business impact on that one) on a different program.
The response I got: “I wasn’t on the platform, and just took a guess at what we paid last time – my guess was wrong.” So there you have it – for this particular program, I was at the mercy of a man pulling numbers out of his arse. And in this case (not here – mind you, but in another program) – the dude was phoning it in.
So how much did I get? Right around 15% of the remaining budget. So, for a total ‘game over’ scenario – I handed over a super-critical-you-probably-should-fix-this-nao and walked away with about 15% of the kitty, whereas the alert(1) guys were walking away with around 3.5–5% per report. So my game over was worth around three very nicely written alert(1)s.
- As an industry, I think the bounty providers need to come clean about their products. What are the budgets, and how exactly do they decide what to pay out? Does the bounty provider do it? Does the customer? Who decides what our vulns are worth?
- Values for bugs: The formulas that are used can be secret sauce – but the values need to be there. I have seen everything from ‘We just guess at what we give you’ to BugCrowd’s own ‘Between 0 and Eleventy Billion’, or another program’s ‘handful of bitcoin and a cool shirt’.
- Offer accelerators for ‘business impact’ – on the order of how much it would cost the business if the unfortunate event actually happened. Take my above case:
- Total loss of application: revenue lost per hour × number of hours the application was down. Take a percentage of that (say 5%), and that is what gets paid out. That recognizes the culpability of the client and rewards the researcher at the same time. It will sting, but these things are meant to sting – they are not a crutch.
- Breach of customer database with PII: use DataBreachCalculator – it can give you the sobering numbers needed to approximate the cost if somebody (who wasn’t a security researcher) had succeeded in finding the same issue. In this case:
Your average cost per record is $208
Your average cost per breach is $3,641,944
- There is a scene in Gone in 60 Seconds where Nick Cage is trying to explain to Raymond Calitri how simple the math is on the value of Eleanor – it goes something like ‘take the value of the car, subtract the amount to fix it – and you come up with its current present value’. In this case, a simple percentage (say 5%) off the average cost of the breach, paid to the researcher for accomplishing this task. Why? Because that is 95% cheaper than what would happen in the real world. That’s what we are trying to simulate, no?
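The accelerator math above is simple enough to sketch. Everything here is an illustrative assumption – the 5% cut, the hourly revenue figure, and the function names are mine, not any program’s actual policy:

```python
# Illustrative only -- the 5% accelerator and the revenue figure are
# assumptions, not real program data.
ACCELERATOR = 0.05  # researcher's cut of the simulated loss

def downtime_payout(revenue_per_hour: float, hours_down: float) -> float:
    """Total-loss-of-application case: (revenue/hour x hours down) x 5%."""
    return revenue_per_hour * hours_down * ACCELERATOR

def breach_payout(avg_breach_cost: float) -> float:
    """PII-breach case: a cut of the average breach cost -- still 95%
    cheaper for the client than the real-world event."""
    return avg_breach_cost * ACCELERATOR

# A hypothetical $10k/hr app down for 48 hours, and the
# DataBreachCalculator average quoted above ($3,641,944 per breach):
print(downtime_payout(10_000, 48))   # ~$24k
print(breach_payout(3_641_944))      # ~$182k
```

Tune the percentage however you like – the point is that the number is derived from the client’s simulated loss, not pulled out of someone’s arse.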
- Budgets: I have seen some programs that list how much is in the kitty and how much has been expended. This is so important as a researcher – if I know a program is low on gas, I’m not going to submit a high-dollar vuln – I may not get paid.
- We need an “oh shit” button – we can call it “GameOver” – when you find something of that magnitude – you get your stuff together, you PoC – and you hit the button.
The Oh Shit Button
- The rest of the available budget gets frozen
- The program goes offline
- The analyst starts verifying
If indeed it is game over – the remaining budget is cashed out to the researcher. If it is bogus – some punitive action takes place and we go back to business as usual.
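The flow above could be sketched roughly like this – all the names, states, and the bogus-claim handling are my own invention, not a real platform API:

```python
# Rough sketch of the proposed "GameOver" button. States and rules are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Program:
    remaining_budget: float
    online: bool = True
    frozen: bool = False

def hit_game_over_button(program: Program) -> None:
    """Researcher hits the button with PoC in hand."""
    program.frozen = True    # the rest of the available budget gets frozen
    program.online = False   # the program goes offline
    # ...the analyst starts verifying, out-of-band...

def resolve(program: Program, verified: bool) -> float:
    """Analyst finishes verifying: game over cashes out the remaining
    budget; a bogus claim goes back to business as usual (any punitive
    action is left unspecified here)."""
    program.frozen = False
    if verified:
        payout = program.remaining_budget
        program.remaining_budget = 0.0
        return payout
    program.online = True
    return 0.0
```

The freeze is the important part – it stops the remaining kitty being drained by lower-value reports while the game-over claim is being verified.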
What does this do:
- It creates a race condition between the alert(1) “slow-loris” guys who are farming vulns that have little to no real business impact, and the guys who are applying practical exploit techniques to bring value to these programs. (Not saying the guys finding XSS-for-days don’t provide value – but I don’t think Chase was breached because of alert(‘XSS’);.)
- It incentivizes PoC || GTFO – no longer are we penalized for taking the time to verify, create a PoC, and walk through a complex chain of exploits to show real business impact. Today, stuff like that falls through the floor:
- Getting through the analyst can be tough on complex multi-step pwnage, and not worth the time invested
- Scope shields (more on that later)
- alert(1) is easy, and it pays – why do anything more? It just takes time, and you may get paid less.
- Get rid of scope shields. I find it laughable at times that the exclusions and ‘out of scope’ items in some briefs are longer than the brief itself. Sometimes they don’t even make sense:
‘all hosts inside *.foo.bar.io are in scope’
‘all hosts inside *.qa.bar.io are out of scope’
Seems legit, right? Except –
every host inside *.foo.bar.io is a CNAME to a host in qa.bar.io
So what’s in scope?
That particular vexing issue took 3 days and about 12 emails to sort out. Why? I can only say that a lot of people on those threads were making decisions without an understanding of the technology.
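The contradiction is mechanical enough to demonstrate. A toy scope checker – the hostnames and the CNAME map are made up to mirror the brief’s wording, and a real check would resolve live DNS rather than use a hardcoded table:

```python
# Toy illustration of the scope conflict described above. Hostnames and
# the CNAME map are invented; a real tool would resolve DNS.
IN_SCOPE_SUFFIX = ".foo.bar.io"
OUT_OF_SCOPE_SUFFIX = ".qa.bar.io"

# Per the brief's setup: every host under foo.bar.io is a CNAME to a
# host under qa.bar.io.
CNAMES = {
    "app.foo.bar.io": "app01.qa.bar.io",
    "api.foo.bar.io": "api01.qa.bar.io",
}

def in_scope(host: str) -> bool:
    """Literal reading of the brief: the host is in scope by name, but
    out of scope once you follow its CNAME -- both rules fire at once."""
    target = CNAMES.get(host, host)
    named_in = host.endswith(IN_SCOPE_SUFFIX)
    resolves_out = target.endswith(OUT_OF_SCOPE_SUFFIX)
    return named_in and not resolves_out

for host, target in CNAMES.items():
    print(host, "->", target, "in scope?", in_scope(host))
```

Under a strict reading, every single host comes back out of scope – which means the ‘in scope’ line was effectively empty, and nobody who wrote the brief noticed.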
The same can be said for bounties that exclude certificate pinning as a valid vuln, but then also won’t accept any vuln relating to personal/private/protected data being sent in a GET over TLS.
You can’t have it both ways.
It seems the more selective you make your pool of researchers, the more you should relax the number of OOS items. The list should be items that you consider a business-accepted risk – not a list of junk you ‘don’t want to pay for / fix’.
Pay a bounty commensurate with the realistic business impact and the overall craft that went into producing it. Give us a jackpot button for ‘oh shit’ moments. And by all means, stop the stupid scope shielding…