$100M in bounties paid via HackerOne to ethical hackers (bleepingcomputer.com)
227 points by badRNG on May 27, 2020 | 120 comments


> $100M in bounties paid by HackerOne to ethical hackers

Not by HackerOne per se but the companies using the platform.

A better title would be “$100M in bounties paid to ethical hackers by companies via HackerOne”.

To be fair, the original message on Twitter reads much better than the title of the article:

> HackerOne is proud to announce that hackers have earned $100 Million in bug bounties by hacking for good on our platform.

I was on both sides of this: leading the security team at a company paying bug bounties via HackerOne, and also reporting security problems to other companies as a freelancer. To be honest, the experience was bad in both cases. I wasted several hours triaging bugs reported by “hackers” who often disregarded the conditions of our bug bounty program. People would report the most trivial things, and we would have to pay them anyway just to move on; otherwise they would end up ranting for days.

On the other side, as a bug bounty hunter, the experience is also awful. One of the biggest problems is that you have no way to know if another person has reported the same issue, so you spend hours if not days documenting a vulnerability and creating proofs of concept (PoCs), and it is only after your submission that you get a message saying “closed: duplicate issue”. Add to that all the back-and-forth trying to justify more complex issues that are slightly more difficult to prove without damaging the system you are testing.

I am glad so many companies and people are still onboard with this service, but I wouldn’t blame anyone for closing their account after all the bad experiences I had.


I've started handling the security inbox at Buffer, and we use a normal email approach. I can honestly say that the experience is pretty much the same, and I feel like the issues you describe are independent of HackerOne (though perhaps it increases the scale?).

From my end, there are a lot of trivial, low-effort reports I have to go through (I've even had researchers send us automated mass emails, assuming we'd have the same issue as another company because of a similar HTTP header or the like). Thankfully I've gotten faster at filtering these out, but it still takes up more time than it should.

From the researcher end, I assume it's frustrating to put in the effort to craft a well-documented mail only to be told it's a duplicate, or a known issue we are already working on a fix for. These are judgement calls, and hard ones. It's made harder still when an issue that has existed for a long time is suddenly reported by a single researcher, and then, within two days, 3 to 4 other researchers pop up with the exact same issue. That leads me to believe there are either different accounts under the same name, or some kind of researcher group that shares findings (and maybe bounties).

Basically, I'm not sure how much this is a HackerOne issue vs a general bounty program pain ¯\_(ツ)_/¯


As a bug bounty hunter, I can attest to having an awful experience at times. The three companies I have worked with @ HackerOne have all taken forever to payout or fix bugs. Currently, I have been waiting 4 months to be paid by a company on HackerOne, for a pretty dang high impact bug that leaves all their customers vulnerable; I checked today, & they still haven't fixed it, let alone paid out.

You also have to be aware of policy changes. I've noticed companies remove language that said how much they'd pay out. Some companies have a mandatory payout within 7-14 days, but they are rare; with everyone else, you just have to hope they pay you, and they do, I guess... whenever they feel like it.


> you have no way to know if another person has reported the same issue, so you spend hours if not days documenting a vulnerability and creating proofs of concept (PoCs), and it is only after your submission that you get a message saying “closed: duplicate issue”.

Yes. I would say 2/3 of my reports were resolved this way. Sadly, I can't fault either HackerOne or the company -- I don't see a viable alternative.


One alternative is to allow a report without full documentation and a PoC. If it’s a dupe, mark it as a dupe, otherwise ask for the full writeup.

As for the problem of spammy submissions: charge a small nominal fee of $10 or so to submit, with a full refund if it’s a legit issue (even if it’s a duplicate).


Isn't that a potential information leak, though? A potential black-hat attacker could sit around and brainstorm a big list of potential vulnerabilities without going through to figure out if they exist and are exploitable. Then they can start submitting them, and if they get closed as duplicate, they can quickly dive into that one, figure out how to exploit it, and make use of it for nefarious purposes.

Maybe that's a little farfetched, though.


Mozilla just recently changed their program to divide bounty amongst submissions of same vuln within 3 days.

Not a perfect fix, but I think it helps.
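A rough sketch of how such a split might work; the 3-day window is the only detail taken from the Mozilla change described above, and everything else (equal shares, late reports getting nothing) is an assumption for illustration:

```python
from datetime import datetime, timedelta

def split_bounty(bounty, report_times, window_days=3):
    """Divide a bounty equally among all reports of the same vulnerability
    that arrive within `window_days` of the first one. Reports that come in
    after the window closes get nothing. Returns one share per report,
    in the same order as `report_times`."""
    first = min(report_times)
    cutoff = first + timedelta(days=window_days)
    eligible = sum(1 for t in report_times if t <= cutoff)
    share = bounty / eligible
    return [share if t <= cutoff else 0.0 for t in report_times]

# Two reports land within the window, a third arrives too late:
times = [datetime(2020, 5, 1), datetime(2020, 5, 2), datetime(2020, 5, 10)]
print(split_bounty(3000.0, times))  # [1500.0, 1500.0, 0.0]
```

Note that shares always sum to the full bounty, so the company's cost per bug is unchanged; only the distribution among reporters moves.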


>Mozilla just recently changed their program to divide bounty amongst submissions of same vuln within 3 days.

One potential issue: someone on the Mozilla team could pass some of these on to a few friends who then claim some of the money.

It's not a major likelihood, unless the bounties are numerous or especially large.


>A potential black-hat attacker could sit around and brainstorm a big list of potential vulnerabilities without going through to figure out if they exist and are exploitable. Then they can start submitting them, and if they get closed as duplicate, they can quickly dive into that one, figure out how to exploit it, and make use of it for nefarious purposes.

So long as there is a bit of lag before confirmation (which, in practice, there already is), the vendor would know about the issue at least several days before the black hat even got a hint, and could hopefully patch it in that time.

For especially big holes, just take a bit longer than usual to get back to people until it's fixed.


No, it's exactly the sort of thought process people have.

Another scenario: someone who has found a real vulnerability does what you suggest to waste the team's time while they exploit it.


Honestly, neither of these things would ever happen. It just doesn’t work that way and wouldn’t make sense for them to go about it that way.


If it's a dupe, then that means you are sitting on an unpatched live bug that I found. F you, pay me.


>If it's a dupe, then that means you are sitting on an unpatched live bug that I found. F you, pay me.

The problem is that you can just tell all your friends to file the same bug, and now they have to pay everyone?


If the vulnerability is still there, and you find and report it, you should be paid the bounty regardless of whether or not you were first. This should give companies a further incentive to close the vulnerability as quickly as possible. Isn’t the point supposed to be to discourage black hats from acting on vulnerabilities they find? If only the first person to find the bug is awarded the bounty and all others are told “tough shit” this really doesn’t solve that problem.


As a hacker, I see a very easily exploitable flaw in your proposed solution :)


Sadly, the solution to Sybil attacks is generally either exclusivity or proof-of-work…


Most companies don't do bounties at all (and arguably they shouldn't). The idea that the companies that do are going to pay the same bounty two, three, or four times over on a bug collision is unrealistic.

The "fair" thing here is probably to split the bounty among the discoverers, but that's not going to happen either.


If you do that, you're merely trading one grievance for another: "evil company marked my bug as duplicate to avoid paying" for "evil company claimed to have gotten duplicate reports to weasel out of paying the full amount". More people upset, although individually, maybe to a lesser extent.

The core issue is not the reward division algorithm, it's the inherent lack of visibility. One solution here would be to just open all reports after a while, but this creates problems of its own. One is that it gives ammo to people engaging in dishonest or clueless PR. Another is that some researchers don't actually want visibility, because their employers have murky rules around such engagements, or because they have some far-off disclosure timeline in mind (as a part of a presentation at a conference, or whatnot).


How about combining reward splitting with some way to show a count of recent submissions (no details, just a count), so people know before submitting that there is a chance someone else already submitted the issue.

Or a mechanism for companies that use email to register researchers' submissions in HackerOne. The details would be sealed and non-public, with researchers having no way to know the record exists unless the company provides a link to it as proof of the prior report. HackerOne would thus act as a kind of notary against accusations from researchers that it wasn't really a duplicate.


How about dividing among reporters, bounty increases the longer it's not fixed since first report, and those paid must be publicly acknowledged.

Probably also need stiff penalties for insiders who might conspire to notify others of bugs and split the pay out.
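A minimal sketch of the escalating-bounty half of this proposal. The daily growth rate and the cap are made-up numbers for illustration, not anything a real program uses:

```python
def escalated_bounty(base, days_unfixed, daily_rate=0.02, cap_multiplier=5.0):
    """Bounty grows by a fixed percentage for every day the bug stays
    unfixed after the first report, up to a hard cap so the company's
    liability stays bounded."""
    return min(base * (1 + daily_rate) ** days_unfixed, base * cap_multiplier)

print(round(escalated_bounty(1000.0, 30)))  # ~1811 after a month unfixed
```

The cap matters: without one, a forgotten report compounds without limit, which no company would agree to.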


Nobody is going to do anything like this. Bug fixes take time to coordinate and deploy, and nobody is going to make themselves and their schedules accountable to some random bug bounty submitter. At the point where you're doing this, you might as well just engage professional pentesters; they don't give a shit when you ship fixes --- you just pay them to find bugs and write them up.


The trouble with your first point is that companies won't go for it; no point in having HackerOne around if no one will use their platform. It's a tricky problem; let's solve it with AI and Blockchain!


Why not divide the payout? The companies paying will pay the same amount, just divided among all the reporters. They already do the work of identifying duplicate reports. Maybe it could be weighted to pay more to the first reporter.

As for companies refusing to do it: at some point, critical industries may have to be regulated to force them to behave responsibly.
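The weighted variant suggested above could look something like this sketch; the halving scheme (each later reporter gets half the weight of the previous one) is purely illustrative:

```python
def weighted_split(bounty, n_reporters):
    """Split a fixed bounty so earlier reporters get more: reporter k
    gets weight 1/2**k, and weights are normalized so the shares
    always sum to the full bounty."""
    weights = [1 / 2**k for k in range(n_reporters)]
    total = sum(weights)
    return [bounty * w / total for w in weights]

print(weighted_split(1000.0, 3))  # first reporter gets the largest share
```

The company still pays the same total; duplicates only dilute the later reporters, which preserves the incentive to report early.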


Do that and a bad actor would just use multiple accounts to report to take a bigger chunk of the pool


Someone's going to create multiple bank accounts to fight for a bigger slice of the $50 they're contending for on an open redirect finding? This sounds like a concern that is plausible mostly on a message board.


I have multiple bank accounts. And I can get my friends on it.


That sounds like a super good use of your time and I wish you the best of luck.


This sarcasm seems misplaced. How much effort does it take to share the details of a bug with friends? Not to mention that some bounties are much greater than $50.


Particularly when you have a syndicate. You find a bug? Share it with the group, they find one and share it back. Now you have two smaller pieces of the pie that cumulatively are bigger than your initial piece.

And some payouts are 10k. I've never even heard of $50 minimums; I thought it was either $100 or swag.


> Most companies don't do bounties at all (and arguably they shouldn't)

Out of curiosity, why shouldn't they? Is it because they then end up with a lot of garbage/spam submissions to sort through from people hoping to trick the company into paying out a bounty?


Yes, and they're a less time-efficient way to get a serious read-out of application security than a pentest is. A lot of companies run bounties because they think they're "supposed to", but really, no.


Same here. 90% of reports are from folks who didn't bother to read the terms and send in useless reports. As a researcher, I'd spend days chasing something down just to get $50 for my finding months later.


Wow, that's like a 90% discount from what is presumably the actual going rate. I don't know how you could live in the Bay Area and possibly afford to do this, other than as a sort of beer money game.


I find on BugCrowd that most of these spammy or useless reports come from researchers in developing countries who have a limited grasp of English.

They simply don't take no for an answer.


HackerOne has people screening reports who don't seem very technical.

They closed one of my reports for being a "denial of service" attack when it was a crash caused by malformed input. I've also heard of others having the same issue.


"a crash caused by malformed input" absolutely does sound like a DoS, though.

Sounds like you have a crash that you think might be exploitable; in that case, consider saying "memory corruption triggered by malformed input" to make it clearer that the crash isn't the point.


We can fix that issue with the title with s/by/via/. Thanks!


As contentious as reporting bugs can be during the development process, while working at the same company as the developers, I'm not at all surprised that ethical disclosure is terrible, especially with a monetary motivation.


> People reporting the most trivial things and we would have to pay them anyway just to move on, otherwise they would end up ranting for days.

sounds like these "hackers" have an incentive to be stubborn over non-issues


Agreed. There's also a clear incentive for companies offering the bounties to downplay valid bounties.


No, there isn't. Bounty dollars are immaterial to the companies offering them. In fact: the opposite incentive exists. These programs justify themselves by uncovering significant bugs.


When a bug has been submitted, and the company is trying to figure out how much to reward?

The companies that consistently publish their bugs have a good track record of giving appropriate bounties. I don't see a clear incentive for a company that doesn't allow its bugs to be published to be anything other than stingy. Computer security is a cost center for a business, and the folks managing the bug bounty program have a clear incentive to minimize their costs while maximizing the bugs found.

I think a potential solution would be increased bug disclosure on platforms like HackerOne. Currently, a company has to agree to disclose the details, and most never do. Openness lets hackers vote with their feet and spend more time on the companies that are easier to work with, both in bounties awarded and in ease of the reporting process.


I've read this a couple times and don't really follow it. In reality, there is a tacit consensus scale for vulnerabilities by severity; a real XSS, for instance, in a platform where XSS isn't somehow sev:crit like it is on a large social network, is worth "hundreds" of dollars. Most companies have a scale worked out ahead of time. HackerOne will give you advice on scales if you want to take it.

Whatever the scale is, companies have no incentive to avoid paying them, because even at the high end of the scale the amounts are immaterial. Remember, we're talking about H1 bounties here, not the Apple and Google platform bounties, which are totally different animals.

The real risk companies that run bounties face is that their programs won't generate any real bugs at all, but will absorb costs from both the platform and all the nonsense bugs they have to triage. New serious bounties are good news, not bad news, for most bounty programs.

(I've managed several, continuing until recently; before I did, I went around talking to people who ran them to get the lay of the land. I'm pretty confident in my answers here.)


Speaking as a former client of H1: exactly. Our problem was how to get more critical reports to throw money at, not how best to chisel the hackers. We weren’t able to sufficiently motivate hackers to dig into the places we felt deserved better coverage, but would have happily paid for critical bugs in those areas if anyone submitted them.


I agree with the points you made.

The issues I've personally experienced have been with impact, for bugs outside of the very traditional XSS/SQLI/RCE. I've gotten things along the lines of "yes, our _______ is seriously broken, but it's not/barely a security issue," with an explanation that stretches plausibility. Maybe I'm full of crap, maybe they are.

I'm sure those running bounty programs would have all sorts of folks contesting things that aren't actually real bugs. I think the only real good solution is increased visibility on all sides. That way each of our technical arguments can stand on their merits, whoever is full of crap can get called on it, and others looking for bugs can choose where to invest their time (glossing over that solution missing a bunch of thorny implementation details, I'm sure).


I'm not going to name the billion-dollar company on HackerOne I have an issue with, but they routinely downplay bug reports and take over a year to deal with some of them (if ever). And just recently, one report that HackerOne screeners closed as out of scope blew up in their face when it was exploited in the wild, and they fixed it only after numerous public complaints.

I've heard rumors of people selling exploits for this company on the black market for more money now.


A company can ignore a report, then get the reporter banned on HackerOne if the reporter decides to no longer follow the HackerOne guidelines and publicly discloses a vulnerability. This gives companies an opportunity to turn HackerOne into a black hole where they get to not pay any bounties and simultaneously keep the vulnerability secret.


I don't believe any serious company does this. There are lots more companies running bounties than I could ever talk to, but I can't even fathom how the cost/benefit of this is supposed to work. It's a negligible amount of money, and a non-negligible reputation risk. I find it a lot more likely that the people who feel this happened to them either reported out-of-scope bugs, or collided with someone else.


Is there a feasible way to ask an organization if they're aware of a vulnerability before you put in too much work on it? Can the query be ambiguous enough to avoid a free service rendered?


>“closed: duplicate issue”

cool, posting my PoC on pastebin then


> HackerOne is proud to announce that hackers have earned $100 Million in bug bounties by hacking for good on our platform.

(Tangent) Can anyone recommend a coherent interpretation of that statement?

I know what it means for an individual person to be proud. And I could see an argument for extending that notion to a group of persons if every person in the organization was proud.

But I assume HackerOne has had employees / members come and go over time. And I also assume that some of the past or current members don't share that feeling of pride for this particular milestone.

So the only interpretation I can think of is that the person writing the PR was being sleazily vague. I.e., trying to get the audience to take a sentiment that's only meaningfully applied to individual humans / animals, and getting them to unwittingly apply it to a brand name instead.

Is there a better explanation that eludes me?


Not really sure what you’re getting at here. Without getting into the boilerplate corporate “we’re helping the world”, it’s a perfectly coherent sentence.


Expressing sentiment on behalf of collectives isn't a new idea from a PR writer, it's an ancient figure of communication.


Idiomatic English might be eluding you.


This is a tangent, but it's been fascinating watching the infosec community grow on Twitter. It makes me feel super out of the loop with a huge part of my field (I'm more dev/architecture, but of course security is important). I dove into it a tiny, tiny bit over the last week. I figure with all this time at home for quarantine, I might as well start playing in CTFs and hone a skill I'm used to exercising only reactively.

I'm honestly envious of their community and all of the tools they've created and tutorials and everything for newcomers. They've done a great job getting anyone who is remotely curious the ability to dabble.

When I was coming up as a sysadmin, the answer was "rtfm."

Anyway, are people making lucrative careers out of bug bounties? What do these "infosec CEO" twitter people do day to day? Their goal is to hit bounties and sell pentesting/exploits I assume?


$100M is 1000 people getting paid $100k. $100k isn't a particularly good programmer salary, and is a very high bug bounty salary. From what I've heard, HackerOne/BugCrowd high earners basically just end up spamming as many companies as they can with web vulnerabilities like clickjacking that are low impact but everyone has, and live in very low cost-of-living areas like South America. It's a lot of repetitive hustle and not secure.

Infosec twitter people aren't the ones doing bug bounties. A lot of them are blue team cybersecurity, doing network engineering and IT, or do penetration testing for networks which is a different wheelhouse. Some of them might be reporting exploits to Zerodium/ZDI/other exploit brokers, but they keep a low profile and it is very different from "bug bounties".


> $100k isn't a particularly good programmer salary

It's 95th percentile for a lead developer in the UK... and probably everywhere outside some areas in the US.


It's been my experience that a good chunk of the researchers on HackerOne are exactly what this point would lead you to predict. I recall working with some in Morocco, Pakistan, and Romania.

This was admittedly a few years ago. No idea if that's still accurate.


> Anyway, are people making lucrative careers out of bug bounties?

Not really, no.

Sure, the top 20 hackers on H1 do make a very decent living (you can listen to the story of Dawgy-G on Darknet Diaries ep60 about that). But realistically, if you are that good at it you can get paid much more doing a 'real' infosec job.

Bug bounty hunting platforms like H1 do give you the freedom to work whenever you want, wherever you want on your own terms. Basically gig economy.

Personally, I do bounty hunting every now and then because I enjoy the learning experience. But looking at the time it takes me to discover a bug and write a detailed report, only to receive a couple hundred USD for it, it really isn't worth my time in a professional sense.

I'd say I make about 25USD/hour from it. Of course this is highly dependent on your skills. And it is also highly dependent on where you live whether 25 USD/hour is actually enough to make a living.

Sometimes you get lucky, stumble upon a valuable bug, and make a couple of grand for only an hour's worth of work, but most of the research you do will yield knowledge, not money.


I've gone with a similar approach, kinda treating it like a CTF. I work on unique targets, with the goal of learning something along the way, which has worked well thus far.


> Anyway, are people making lucrative careers out of bug bounties?

I'm pretty sure a reasonable number are doing quite well for themselves after currency conversions. It would be quite challenging to replace a typical Bay Area security architecture salary with bug bounties, though. Few people are equipped to find and claim $10k+ bounties twice a month, every month.


>They've done a great job getting anyone who is remotely curious the ability to dabble.

Any suggestions where to start? I'm several years out of the scene and would love to be reacquainted.


I've only done a few hours of research so far, plus some of the little hacktivities (https://tryhackme.com/hacktivities).

Then there's hackthebox https://www.hackthebox.eu/ I haven't used this yet.

Mostly I just grab stuff off of Twitter/Reddit. I don't really know who to recommend right now; I kind of just followed a ton of relatively random open-source infosec people organically, but none really scream "has a guide for diving in" or "is a great intro person" off the top of my head (maybe @troyhunt). There's also https://www.reddit.com/r/netsecstudents/

Hopefully someone else can chime in because I'd like to know more as well.


I've not seriously started but am considering it. I've only done a few toy problems myself.

On the YouTube side for people who are a bit more casual like me, John Hammond and Live Overflow both seem pretty good for beginners.

John has a bunch popping up on my feed where he runs through a CTF or hacking room and steps through his process, like this SQLite timing attack (https://www.youtube.com/watch?v=DYLDG_2Vs3E) which I found interesting since I knew the concept but hadn't seen it in action before.

Live Overflow also explains things pretty clearly and has covered a variety of topics like using Ghidra, hacking an intentionally hackable MMO, or recreating patched XSS flaws.

I think it's really good to pick a niche first and stick to it, reverse engineering code looks fun but I'm not sure how commercial that skill is compared to busting open web apps.

If you want something to poke and prod at there's also Damn Vulnerable Web Application (DVWA) at http://www.dvwa.co.uk/


Hack the box is good fun, I started with it the other day.


Not sure what you’re looking to get into specifically, but here are some resources you can look at, in no particular order:

Hack the Box, Try Hack Me, Pentester Lab, Hacker101, Portswigger Web Security Academy, Linux Academy, Bugcrowd University, Hacksplaining, Cybrary, Malware Unicorn, bugbountyguide.com, CTFtime, root-me.org, MalwareTech, Nahamsec, The Cyber Mentor


> Anyway, are people making lucrative careers out of bug bounties?

There's a strong power law distribution - the top folks make a lot of money (say, > $500k/yr) and then it quickly drops off from there.


You know people making 500k/yr on HackerOne? That seems way higher than what I would estimate from anecdotes.

Unless you’re talking about private exploit sales, which is totally different.


Found some data about this: https://www.hackerone.com/press-release/six-hackers-break-bu...

As of that date, 6 people made over $1M total since it launched (not per year). So maybe 1 or 2 have hit 500k in a year? Some of those names have been doing it for over 6 years.

According to a previous report, only 0.3% have made over $5k in their lifetime on the platform.

https://duo.com/decipher/taking-hype-out-of-bug-bounty-progr...


There are numerous Synack Red Team member millionaires too. The first in hacker history was an SRT member in mid-2018, but didn't want publicity. - Rajesh Krishnan, Synack


It's definitely possible. A few of the researchers on BugCrowd that we foster good relationships with (e.g. by paying out quickly and with sizeable bounties) easily make 5 figures USD from us in 12 months.

And I'd hope we are only one of their many clients. We might be an outlier, though; my company does very well at addressing the bugs that are brought to us fairly. We understand the need to give the top 100 researchers on the platform an incentive to look over our stack. That's where the value is.

We do get a lot of spammy notifications too, which sucks, but it's part and parcel of the issue mentioned here. We leverage BugCrowd to help separate the chaff from the precious metals. :)


I've seen and experienced that too! I follow a bunch of members of the community and seeing all the support they have for newcomers and people of color makes the field even more attractive to me. Kudos to them!


Seems a bit funny: the top scorers didn't have a few massive bounties, but many, many little ones. Both of these accounts made most of their hits on Verizon. To get those kinds of rates, it's probably the same type of flaw present in many places in the system.

It's questionable if these companies are getting massive value for money if most of the bugs are oversights rather than intricate flaws in a bespoke process.

https://hackerone.com/try_to_hack?filter=type:bounty-awarded https://hackerone.com/mlitchfield?filter=type:bounty-awarded


I feel that bug bounties are still undervalued, and the market is still inefficient because the prices are unilaterally set by companies.

The only other publicly disclosed signals for market price come from third party companies and state actors.

The other signals are not public and hard to quantify; they come from trying to weaponize and monetize exploits yourself. This potentially incurs various forms of liability, which can be reduced by selling information to a broker, who will eventually find someone to weaponize or monetize a piece of the exploit. This part is a much more efficient market, but it is not vertically integrated.

The prime bug bounties seem to be trending upwards in value, with the bottom being crowded and with non-serious companies testing the waters.

Does anyone have any ideas for making the value of bug bounties more dynamic and elastic, trending upwards towards their true value in line with the growth of the sector?


I see bug bounty programs as just another example of InfoSec charlatanism. People do bug bounties because InfoSec people say you should do bug bounties, and not doing what InfoSec people say implies some sort of malfeasance. But their value is remarkably questionable. For starters, they are absolutely not a replacement for pen testing; having a bug bounty is not a sufficient substitute for any of the pen testing that you should be doing. The bugs that get reported are also typically of incredibly low value. Most of the reports you get through bug bounty programs are just the output of open-source scanning and static analysis tools. You get no-effort reports for things like frame-able content and “your mobile app has a dependency, which has a dependency, which was compiled with marginally sub-optimal flags”. Actually valuable reports do make it through, but I seriously doubt having a bug bounty program is more effective than publishing a security email address on your website. They’re mostly just spam-generating services that invite people to try to pressure you into coughing up some money for largely trivial nonsense.


Bounty submissions are mostly scanner spam; maybe almost entirely. But every once in a while you get a serious and important report, and you wouldn't have gotten it without the bounty. Retain 3 different pentesting firms to hit an app and their reports will overlap maybe 70-80%; similarly, bounty people find different stuff as well.


> But every once in a while you get a serious and important report

Yes this is true.

> and you wouldn't have gotten it without the bounty

Of this I am highly skeptical. Most companies that I’ve worked at who don’t have bug bounty programs get the occasional serious report. These reports were submitted to companies before bug bounty programs were even thought up. Bug bounty services just seek to capture value from something that was happening long before they were invented, and I’m highly skeptical of them having a significant impact on generating serious reports that otherwise wouldn’t exist.

The other issue is that everybody has a finite security budget, and dealing with the spam is going to (perhaps significantly) eat into that. In those cases, it’s taking money that could be spent on something with a decent ROI, and redirecting it to essentially just creating busy work for your security team (or spending their budget on paying somebody else to do that busy work).

I think they really only exist because a lot of security professionals don’t really know what they’re doing, and a lot of their employers don’t really know what they’re supposed to be doing.


I watched it happen, several times, on projects where I not only had 6 weeks of undivided time to run an app assessment before I took over their bounty program, but the client had contracted assessments from other firms prior to me being involved.

It's also a refrain I got from almost everyone I asked about running bounty programs.

A fundamental truth I don't think anybody who does appsec assessments professionally can escape: multiple audits of the same target will turn up different bugs. It happens even if the same people do both assessments! I've never talked to a software security professional who pushed back on this; maybe you'll be the first.


> A fundamental truth I don't think anybody who does appsec assessments professionally can escape: multiple audits of the same target will turn up different bugs.

I completely agree with this.

> I watched it happen, several times

I don’t doubt you have, but this is anecdata. I wouldn’t want to draw any conclusions from this, especially because it conflicts with a lot of my own anecdata.

Auditing/security testing is something you could devote incrementally more budget to indefinitely, and indefinitely receive incremental returns from. While I question whether it provides as much benefit as you’re describing, it can certainly provide some. My issue with it is that, in my experience, it has always had a bad ROI. The resources you have to devote to dealing with the spam make extracting any value from bounty programs a very expensive exercise. I’ve personally seen people devoting more than one day a week to managing bounty programs that they were lucky to get one good report per year out of. Imagine how much more value they could have gotten from a security engineer spending that full day per week threat modeling with the product engineers.

Perhaps the ROI is different if your department is over-funded, or if you have a big brand that people want to write self-promotion blogs about. But any time I hear a security professional parroting the importance of bug bounty programs, the cynic in me wonders whether they’re just looking for some low-skill busy work for themselves.


I think we'd have to struggle to find a clear debate between the two of us. I just wanted to make the point that people run bounty programs because they do in fact turn up black swan bugs, even after the repeated pentests that most of these firms have already engaged.


There are several other important price inputs to this market, including scanning services (thousands quarterly) and consulting penetration testing (tens of thousands annually). Most companies that offer bounties have also had pentests done; nobody believes that pentesters are underpaid (though maybe they should).

The reality is probably that 80% of the vulnerabilities disclosed on bounty platforms just aren't worth that much. Certainly: companies that lack security expertise but manage bounties tend to radically overpay bounties; I'd of course be curious to see a breakdown of that $100MM by bug class.


> Companies that lack security expertise but manage bounties tend to radically overpay bounties

Could you share your reasoning on this? Are you saying they're overpaying for a low threat type of vulnerability, have a large number of vulns that should have been caught earlier, or something different? Genuinely curious! Thanks


I'm saying that people are submitting sev:info's and getting paid $500 for them.


Makes sense, thanks for the clarification.


> reducing that by selling information to a different broker, who will eventually find someone to weaponize or monetize a piece of the exploit

This has become the standard way of monetisation for the more capable ones who don't want to, or don't see the point of, doing the crime itself: ATM skimming kits, carjacking kits, etc. The problem is that security is not rewarded at all. Even a top-level expert with very deep knowledge gets peanuts in comparison to the cost businesses incur when they get hacked.


Obviously there's a way to make the market more efficient: let black hats compete to purchase exploits on the same platforms... Of course that's likely illegal, for good reason. Instead we have two markets with only a few people playing in both, so price signals are likely weak.


I don't think it's entirely fair to say there is no competition in this market. In the end companies are still competing for a limited supply of ethical hackers. Except that it is a bit of an unusual market, since both sides are operating with very limited information.

Perhaps it will become more stable when people realise you can sell negative results as well (either implicitly by cooperating, or through an actual market). Anything that can bridge the gap between hackers wanting to get paid for their efforts and companies wanting to pay for results would help.

More volume would also help, in the end it's hard to sustain a whole industry on just tens or hundreds of millions per year.


> I don't think it's entirely fair to say there is no competition in this market

Good thing I never said that. The market is inefficient. I like that idea of monetizing negative results. More transaction volume generally helps; I think external market signals and their actual value are what's really missing.

Like, how much someone would pay for an exploit should dictate how much a company should pay for letting a crowd try to find it, where only one person/group gets the payment. If there were a way for additional participants to get paid for bothering to look at all, that would be helpful too, and different people could focus on handling how people try to game that instead of worrying about the mere idea of people gaming it.


Aren't negative results already monetized, through standard pen-testing services? The usefulness of a negative result depends on the thoroughness and skill of the attacker, while a positive result is useful no matter who submits it.

An unknown person finding a vulnerability has clear value, but a negative result from an unknown person doesn't mean much.


A negative result has value as long as it's clear what vulnerabilities have been tested. Of course it's pointless if you don't trust the person testing it, but that's always going to be a problem. I guess you might need to have reproducibility to make it truly useful.


That seems to me to be mostly limited to automated tools, or checklist level evaluation?

The good/dangerous bugs require creativity and deep system knowledge to find, which is hard to trust that an unknown person has.


Then risk management comes in, and I would tell you that "good bugs that require creativity and deep system knowledge to find" are not dangerous.

It is sexy and all to sell "we hacked you using emacs through sendmail via the moon-phase exploit that happens every 2 months". But really, how many people will be able to exploit that? My company is not a target of state-sponsored actors. Automated tools, checklist evaluation, and decent administrators will be good enough for most companies. Ordering pentests from time to time is good practice to check that administrators are doing their job, as in "trust but verify".


It's not. Running scanners and reporting scan results won't get you paid, so it's a complete waste of time.

Trust is easy. You either have a proof of concept and can show impact in your report or gtfo basically.


As a company you should do a bug bounty, but in addition to, not as a replacement for, doing your own testing.

The reason: bug bounties fundamentally favor the companies that sign up. They pay next to nothing to get a lot of eyes on their site, and here and there a valuable find will be made.

Rewards should probably be much higher, like 10x I think, to attract better researchers.

Also, the latest invention of private programs, where testers aren't even allowed to talk about the program or share finds afterwards, is a joke as well; it's all just in favor of the companies. They basically buy the researchers' silence, e.g. they can dismiss a find, not pay, and just say "oops, duplicate".

For someone skilled and interested in infosec there are better ways to make money.


Make your program private, increase your bounties 3-5x, put in a $1k+ first-bug-report bonus, and invite in people who have previously reported good and/or well-documented bugs, and you'll save time and get a lot more out of it.

These free-for-all bug bounty programs are a drain on resources that could be better spent elsewhere

The real value HackerOne and similar platforms should be providing is filtering out the time-wasting reporters and the vendors who slow-roll on reports, but they do neither of those things.


This is correct in theory but not in practice. Commodity bug bounties have become somewhat of a failed experiment. For most companies, they cost more than they are worth, and that cost doesn’t even come from the reward payout.


That's a good point, but if I understand correctly, HackerOne staff triage bugs before a company even responds or is required to look at them? So it's all offloaded, or is that incorrect?


The missing figure is how many hours those hackers spent. Most of the companies on HackerOne, when I last checked, are using the platform as a kind of substitute for contracting with a pen test firm: things like "will you break my website". With bug bounties, the payout is only for vulnerabilities found, not for analysis effort, so one has to carefully weigh the expectancy. I think vulnerabilities found is the wrong metric for the industry, because it is really downstream of the true goal. If anything, the metric should be complexity removed. Prove that you made the system simpler.


"ethical hacker" is such a trash term.


Agree. And the term "hacker" has been driven through the ground and back up so many times it's lost and changed meanings. Now "I got hacked" is akin to "the dog ate my homework" or "I gave someone my password".


For what it’s worth, nobody who actually works in security uses it, and they mostly cringe when others do. That and “cyber” are pretty good smell tests that you’re interacting with someone who is very out of touch.


Yeah, old (by today's standards anyway) infosec hacker here. "Cyber" has always been a fed/consultant tip-off.


Yes. You should generally think less of companies that use it.


It's funny, since this term is rising in popularity in recent years at exactly the same time that "hack" seems to be becoming more popular in everyday use and losing some of its stigma: "IKEA hacks", "life hacks", etc.


Well in the philosophy problem space, ethics and morals are clearly a complex matter haha.

If someone’s ethics is to maximise chaos, then a full disclosure on 4chan is _technically_ the _most_ ethical action for this person.


curious to know, what's the term which people in the infosec community use then? pen-tester? cybersecurity researcher?


Penetration tester, or application security auditor. Not cyber.


Here's a link to "The 2020 hacker report" by HackerOne

https://www.hackerone.com/lp/resources/2020-hacker-report

I've looked through it, and there's some nice information on how the hacker industry emerged and grew into what it is now; it talks about money earned through ethical hacking as well.


Paid through HackerOne not by HackerOne.


What if I introduce security bugs only to be paid a bounty on them later?


Are you asking what would happen if you defraud the company you work for?


What if you put anthrax in a birthday cake?

It's still illegal. Besides, planting a bug and solving it would still involve faking version control records to insert the bug.

You might be able to get away with it once but the bean counters wouldn't let you fool them twice in that regard. Unless you had a guy on the inside etc. but it's turtles all the way down.


I believe that most platforms have restrictions barring you from submitting bug reports if you're affiliated with the company offering them.


The platform doesn't really care, it is the company with the bugs that pays out. It is probably the client companies that have those policies.


Easy loophole, work with a friend.


And risk losing your 100s of k a year career for a $250 Amazon gift card. Knock yourself out.


You would be an unethical hacker I suppose


It would be more sensible to just report existing bugs via the platform instead of reporting them to the project manager.


I can only hope you're not self-employed.


Worth every penny



