Saturday 18 October 2003
Eric Lippert engages in a paranoid fantasy: The Malware of Ultimate Destruction. I think he makes a good point here: our digital machines are vulnerable, and much worse stuff than Blaster can happen. But along the way he seems to take the Microsoft marketing materials at face value, even as he proves their illogic.
In his ultimate worst nightmare scenario, he imagines a virus that replaces your operating system with a cleverly written emulator (emphasis his):
The net result: you are not even running Windows anymore so nothing is trustworthy. The emulator could be logging every keystroke, sending your files to Brazil, whatever the emulator writer wants. ... You don’t own that box anymore. The chain of trust has been broken.
This paragraph implies that the essential damage would be that Windows isn’t taking care of you any more, that your protecting white knight has been vanquished.
But later in the same post, he comments on the theft and posting of the Half Life source code:
... what seems likely is that attackers exploited a known vulnerability in Outlook, and a high-ranking Valve employee was vulnerable to the attack. The malware installed a key press logger, and from that point, it was pretty much game over, so to speak.
In other words, forget the baroque fantasy of an emulator taking Windows out of the picture and leaving you vulnerable. Windows (and Outlook) left you vulnerable in the first place!
I’m not Microsoft bashing here: I don’t think Windows is more vulnerable than other operating systems. I think Windows’ higher rate of attack and compromise is due to its higher market share and interest for virus-writers. But to pretend that Windows is somehow enforcing a pristine chain of trust is clearly absurd.
Comments
Are there problems with that chain of trust due to mistakes? Oh yes. Can we do better? Oh yes! But this is a very, _very_ hard problem. The secure Windows initiative is an investment of billions of dollars that will take a long time and a lot of work to fulfill, but if I didn't believe we were on the right track, well, I wouldn't be here.
We'll have fewer examples of Half Life situations if we continue to educate people about how to behave securely, rather than conning them into thinking that somehow the technology is going to solve it all.
Look at cars: we educate drivers from the start that accidents happen, and that their potential is lurking everywhere. Then we tell people to drive carefully and defensively. We haven't tried to sell them on the idea that Chryslers are inherently accident-proof, partly because no one would believe it. Cars work according to the laws of physics, and everyone has an intuitive sense that you can't keep things from knocking into each other.
But in the virtual world, we somehow believe that the technology can solve everything, from virus attacks to music piracy. Computers and operating systems are just machines, and the people building and using them are still just people. Shit will happen. Be careful out there.
Saying "be careful out there" is good advice, but if that's _all_ we can give people then it is also an abdication of responsibility, an admission of failure, and a "blame the victim" mentality, and I won't go there.
Think about the phone system, for example. I know next to nothing about how the phone system works, but I have confidence that bored teenagers in Asia are not going to take over my phone, and if they do, that there are processes in place to deal with it. Why? Because I trust the phone company. Is their system perfect? Probably not, but that doesn't stop me from being a customer.
Right now, computer security is stopping people from being customers and is causing existing customers great pain. But just because we can't make a system perfect isn't reason to give up hope. Maybe a 95% solution is no solution -- but what about a 95% solution with a dedicated team of experts who are trying to turn it into a 99.9999% solution?