> And then our hypothetical attacker can figure out how you generate the honeypot URL and embed an image with that URL for their visitors.
So add a salt, just like you would when hashing passwords. You then make it more time-consuming to crack the hash than it would be to perform a more typical denial-of-service attack.
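As a rough sketch of what that could look like (the function names and key handling here are hypothetical, and in practice the secret would be loaded from server config rather than generated at startup):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret ("salt"); persist this in real use,
# otherwise tokens change on every restart.
SECRET_KEY = secrets.token_bytes(32)

def honeypot_token(session_id: str) -> str:
    """Derive a per-session honeypot path segment. Without the server-side
    secret, an attacker can't forge a valid token for someone else."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_honeypot_hit(session_id: str, token: str) -> bool:
    # Constant-time compare so response timing doesn't leak the valid token.
    return hmac.compare_digest(honeypot_token(session_id), token)
```

Using an HMAC keyed on a server secret (rather than a bare hash of guessable inputs) is what makes the "figure out how you generate the URL" attack impractical.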
> you need to make sure that they can't obtain robots.txt via GET requests from a visitors browser (No Access-Control-Allow headers).
If an attacker already has control over the victim's browser (to pull the robots.txt file), then they really don't need to bother with this attack.
> Also don't forget a single visit from a network, sharing an IP would still ban all the network.
Multiple users of the same site behind the same NAT really isn't that common unless you're Google / Facebook / etc. And when you're talking about those kinds of volumes, you'd have intrusion detection systems and possibly other, more sophisticated, honeypots in place to capture this kind of stuff.

Also, some busier sites will have to comply with PCI data security standards (and similar, such as the Gambling Commission audits), which require regular vulnerability scans (and possibly pen tests as well, depending on the strictness of the standards / audit) that will hopefully highlight weaknesses without the need to blanket-ban via entrapment. And in the extremely rare instances where someone is innocently caught out, it's only a temporary ban anyway.
> Maybe a login form can be served from that URL, and any attempts to login would then get the visitor banned via a session cookie / browser fingerprint combo (Easy to get around but at least then you're not blocking IP addresses).
You can do this same method of banning with the honeypot you're arguing against!
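To illustrate that point: the session-based ban works identically whether the trap is a login form or a honeypot URL. A minimal sketch (the handler and status codes are hypothetical, and a real implementation would persist bans with an expiry rather than hold them in memory):

```python
# Ban by session token rather than IP when the honeypot URL is hit,
# so users behind a shared NAT aren't all blocked together.
banned_sessions: set[str] = set()

def handle_request(path: str, session_id: str) -> int:
    """Return an HTTP status code for a request from the given session."""
    if session_id in banned_sessions:
        return 403  # already banned
    if path.startswith("/honeypot/"):
        banned_sessions.add(session_id)  # ban this session, not the IP
        return 403
    return 200
```

The only difference from the login-form variant is what triggers the `banned_sessions.add()` call; the banning mechanism itself is shared.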
While I don't disagree with any of your points per se, I do think you're being a little overdramatic. :)
Well, I was trying to point out how a small and not-so-useful feature can cause unnecessary headaches. I know that it can work; it's just not worth it, in my opinion. However, on second thought, I guess this could be implemented in less time than we used to write these comments :)
The login-form method would be a bit less silly, I thought, because it can be a POST. But, well...