The first thing to note when discussing computer viruses is that viruses are just programs, much as spam is just email. Computer users demand the ability to run any program they want without restrictions, to never get a virus, and to never be asked questions before running a program. That combination is impossible. I’m going to discuss four common approaches to the problem of protecting computers from unwanted programs.
The first approach is antivirus software. Antivirus software scans every program against a database of known virus signatures. Attackers defeat this by continuously mutating their viruses so that no signature matches. Antivirus software also looks for behavioral patterns that resemble known viruses, much as spam filters look for keywords to decide whether an email is spam. Antivirus software is not very effective, it often causes more problems than it solves, and it fails completely against customized, targeted attacks.
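A toy sketch makes it concrete why mutation defeats signature matching; the hash set, `is_known_virus`, and the stand-in payload bytes are all invented for illustration, not how any real scanner is implemented:

```python
import hashlib

# Hypothetical block list: SHA-256 hashes of known-bad binaries.
KNOWN_VIRUS_HASHES = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def is_known_virus(program_bytes):
    """Signature scan: flag a program only if its exact hash is listed."""
    return hashlib.sha256(program_bytes).hexdigest() in KNOWN_VIRUS_HASHES

virus = b"EVIL_PAYLOAD_v1"      # stands in for a known virus binary
mutated = virus + b" "          # one-byte change: a trivially repacked variant

print(is_known_virus(virus))    # True  -- exact match is caught
print(is_known_virus(mutated))  # False -- any change evades the signature
```

Real scanners layer heuristics on top precisely because exact matching is this brittle, but the heuristics inherit the same cat-and-mouse problem.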
Also known as a “closed platform,” the walled garden approach is to run only programs approved by the OS vendor. This is what iOS does: you can only run Apple-approved programs on your iPhone. This approach is the most secure, but it also severely limits what the device can do.
An approach that works well for sophisticated users is to have programs request permission the first time they attempt a sensitive action. Imagine you download a program to zip and unzip files. If that program attempts to make an internet connection, you know to be suspicious and deny permission. This is the approach Microsoft took with User Account Control in Vista. The unfortunate result was that users complained about constant popups and ended up blindly clicking “Allow” for everything.
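The first-use prompt model can be sketched as a small broker that asks once per program-and-capability pair and remembers the answer; `PermissionBroker` and the `ask` callback are hypothetical stand-ins, not the actual UAC mechanism:

```python
# Hypothetical first-use permission broker, in the spirit of UAC:
# each capability triggers one prompt, and the answer is remembered.
class PermissionBroker:
    def __init__(self, ask):
        self.ask = ask            # callback standing in for the user prompt
        self.decisions = {}       # (program, capability) -> bool

    def request(self, program, capability):
        key = (program, capability)
        if key not in self.decisions:      # prompt only on first use
            self.decisions[key] = self.ask(program, capability)
        return self.decisions[key]

# A zip tool asking for network access should raise eyebrows,
# so this simulated user denies only the "network" capability.
broker = PermissionBroker(ask=lambda prog, cap: cap != "network")
print(broker.request("ziptool", "read_files"))   # True  -- expected behavior
print(broker.request("ziptool", "network"))      # False -- suspicious, denied
```

The failure mode described above corresponds to an `ask` callback that always returns `True`: once users decide prompts are noise, the broker protects nothing.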
Instead of blocking programs from running, systems can require the author or source of any program to present a verified identity. Such accountability deters distribution of software that does anything illegal. Publishers can be verified by having them sign their programs with certificates issued by a third party, or by requiring them to distribute software through a repository such as the Google Play Store for Android or Download.com. This way, if a program turns out to steal bank account information or attack government websites, there’s an audit trail to track down who wrote it.
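The audit-trail idea can be illustrated with a toy registry that records a verified publisher identity against each program’s hash; real systems use certificate-based signatures rather than a lookup table, and the `Registry` class and names here are invented for illustration:

```python
import hashlib

# Toy software registry: publishing a program records a verified identity
# alongside the program's hash, so malware can later be traced to its author.
class Registry:
    def __init__(self):
        self.publishers = {}   # program hash -> publisher identity

    def publish(self, publisher, program_bytes):
        digest = hashlib.sha256(program_bytes).hexdigest()
        self.publishers[digest] = publisher
        return digest

    def trace(self, program_bytes):
        """Audit trail: who published this exact binary, if anyone?"""
        return self.publishers.get(hashlib.sha256(program_bytes).hexdigest())

store = Registry()
store.publish("Acme Software GmbH", b"banking-app binary contents")
print(store.trace(b"banking-app binary contents"))  # Acme Software GmbH
print(store.trace(b"unknown program"))              # None -- unsigned, untraceable
```

The second lookup shows the system’s limit: anything distributed outside the registry has no identity attached, which is why enforcement matters.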
Imagine living in an apartment building with a security guard at the main entry. Tenants would like to leave their unit’s doors open all the time and have a perfect security guard control access to the building to prevent theft. The analogous security approaches are:
Antivirus: The security guard turns away any visitor who is on a block list or who even looks suspicious. That doesn’t seem secure enough to justify leaving apartment doors open, right? This is why you get viruses even with antivirus software installed.
Walled Garden: The security guard only admits people who have a building badge. “You have a visitor for the weekend?” “Too bad, no badge, can’t visit; here’s an application form and a filing fee.” That’s why iOS is frustrating. http://en.wikipedia.org/wiki/IOS_app_approvals
Request Permission: The security guard lets everyone in, but your unit’s door automatically shuts and locks, so you have to open it for each visitor. People who complain about permission popups are like tenants who hate getting up to open the door each time. They end up letting every visitor in until they get robbed, and then they blame the guard.
Track Identities: The security guard scans the driver’s license or passport of each visitor. If you get robbed, the police can track down who did it.
I prefer a combination of request permission and track identities. One problem with tracking identities is that knowing they can be traced doesn’t stop software vendors from doing malicious things, as long as those things are still legal. Sony once installed copy-protection software that essentially left back doors open on computers. Apple had iTunes install Safari in an update without asking permission. Lenovo shipped computers that injected advertising into browsers. Having software request permissions gives me the ability to monitor and control activity on my own computer.
It’s probably unreasonable to expect all users to understand computer security properly. It’s hard enough to get them to look for the green lock symbol and verify the URL in a browser. But OS vendors could do a better job of implementing identity tracking. They could configure systems to refuse unsigned software while making the signing process easy and free. Right now, many software vendors don’t bother to sign their software because users bypass the warnings and run it anyway. If bypassing the warning weren’t an option, vendors would be forced to sign their software, and they probably wouldn’t mind much if there were a simple, inexpensive way to do so. It seems this is what Microsoft had in mind, but the plan failed. “…annoying users had been part of a Microsoft strategy to force independent software vendors (ISVs) to make their code more secure, as insecure code would trigger a prompt, discouraging users from executing the code” — http://news.cnet.com/Microsoft-Vista-feature-designed-to-annoy-users/2100-1016_3-6237191.html
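The refuse-unsigned policy amounts to a loader check with no bypass path: unsigned means not run, full stop. A minimal sketch, with `can_run` and the program/signer representation invented for illustration:

```python
# Sketch of a no-bypass loader policy: a program runs only if it carries
# a signer identity that the system trusts. There is no "run anyway" button.
def can_run(program, trusted_signers):
    signer = program.get("signer")        # None means the program is unsigned
    return signer is not None and signer in trusted_signers

trusted = {"Example Corp"}
print(can_run({"name": "tool.exe", "signer": "Example Corp"}, trusted))  # True
print(can_run({"name": "tool.exe", "signer": None}, trusted))            # False, no override
```

Compared with a warning dialog, the hard refusal moves the burden from users (who click through) to vendors (who must sign), which is the shift argued for above.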
Java code that runs in browser plugins is a particularly big threat because it doesn’t involve downloading an executable and running it, so to users it doesn’t feel as dangerous as running a program. Oracle has accordingly tightened Java security to do what I describe, making the default behavior to completely block unsigned code. This is good progress, but every developer releasing Java code needs to participate…
This warning demonstrates how broken the identity-verification system is. LiveMeeting, a Microsoft product, didn’t sign its Java code. I don’t know whether it’s Oracle’s fault for making code signing difficult or LiveMeeting’s for not following Microsoft’s own standard procedure. Microsoft even knows about the problem, and instead of signing the code, it describes how to bypass the security warning. “This issue occurs because versions of the Java console later than 1.16.18 contain a security feature that prevents mixed signed and unsigned code from lauching [sic] in Java. The Live Meeting web-based console is an unsigned code.” — https://support.microsoft.com/en-us/kb/2079285