Re: How Hackers Hid a Money-Mining Botnet in Amazon’s Cloud
We need to recognize that we're in the last days of "people-moderated processes," i.e., systems in which things can't happen too fast because they depend on individuals' actions. We're well into an age where the right tail of "smart software" overlaps the left tail of "humans" in the ability to respond to various tests, e.g., CAPTCHAs, or even to carry on a simple conversation. Given the keyhole of "text over the Internet," it's getting easier and easier for bots to pass. (And yet the tests can't be made harder, lest more and more average humans fail as false negatives.)

Any system that depends on mapping obligations to individuals, without accounting for the fact that bots can masquerade as individuals, is asking for trouble. The trouble here is that the resulting harm falls mostly on third parties. So I think we also ought to pay a good deal more attention to the economics and liability side of security. I attended the UC Berkeley workshop organized by Hal Varian, Ross Anderson, Bruce Schneier, et al., more than a decade ago ( http://ift.tt/1lFz54U ), and more of that would be a good thing.

We are seeing lots of problems from start-ups (and not-so-young companies, too) wildcatting "undervalued" resources (e.g., throwing a bunch of servers into a cloud to dramatically reduce the cost of cycles) while failing to pay full price for the consequences (e.g., bearing the cost of strong authentication).
from The RISKS Digest http://ift.tt/1nHYCjt