Security is broken when you leave it to end-users

ctodx
06 August 2008

And before you think I'm slamming those damn lusers, think again. I'm including you and me in this.

A couple of weeks ago, I took my wife's car in for a recall. Since I work at home, she works downtown, and the dealer is on the other side of downtown, I said I'd drop her off at work, drop the car off for its service, and drive home in the loaner car. No problem; everything went as planned until I got home...

...When I realized that I didn't have a front door key, and I didn't have the garage door remote. It was pointless going back to the dealer since my wife's garage door opener is built into her car (it's a feature of some Acuras - a programmable remote). I was stuck outside my own house.

Needless to say, I managed to get in. No, I'm not telling you how, but it scared me that I was able to do it without any expensive damage, and that no one saw me do it either, even though I consider our neighborhood to be safe and crime-free. The whole episode made me think about security, how much we take it for granted, and how easily it can be subverted.

Another story. At the end of June, we were visiting my parents-in-law when my father-in-law asked me some questions about viruses; the computer kind, not the biological kind. This led to a demonstration of why Vista's UAC (User Account Control) was such a good idea. It heartened me that he had hardly ever come across the "dreaded" UAC dialog, meaning that for many people it's pretty much invisible. I showed him how the UAC dialog comes up whenever the system detects that something is about to alter the system itself, such as installing a program. I drilled it into him that he should always click Cancel if the dialog ever appeared unexpectedly during his normal interactions with his computer, but that if he had initiated the event that caused it, he was free to click Allow, although he should think about it first.

But there's a big problem with UAC and any other method of asking the end-user permission to do something when the end-user doesn't have the expertise needed to properly assess the risks: social engineering. Social engineering is the technique of fooling people into revealing secret information or of making them do something they shouldn't do. There are many examples of this:

  • Phishing emails. You get an email from your bank, you click on the link, you go to a site that looks exactly like your bank's website, and you enter your userid and password. You get a page saying that, due to high workload, the site is temporarily down, please try again later; but of course the baddies are already making off with your savings.
  • CNN Top Ten lists. (Man, I'm getting sick of these.) The latest scam email purports to come from CNN.com and lists some kind of Top Ten set of videos. Each item is a link. Click on the link and you go to a page with a video, but, alas, it seems your video player is out-of-date, so could you install this latest version? Ta. Oops, the install seems to have failed, but we did manage to install a bot without you knowing. Welcome to the Storm botnet.
  • You get a phone call, an automated recording, saying that your car's warranty is about to expire and you only have a very short amount of time to buy an extended warranty. Press 1 to talk to an operator. You break out in a sweat, press 1, talk to someone, pass over your credit card details, and so on, and only after you put the phone down do you realize that they didn't know what car you had, how old it was, or whether it even had a warranty.
  • You can't afford some software, so you go looking for cracked versions on warez sites. You find a zip of the application you want and download and install it. Well, we know what happens next. Pwned!

The common theme in these examples is that the point of failure is the human being. We are conditioned to be trusting. In general, the people we meet and talk to are not trying to fleece us, so when someone acts trustworthy towards us, we can easily be duped. It also seems that we are unable to evaluate risks properly: if something has low friction (clicking on a link), we'll ignore the risks, even when we know we should go the longer, but less risky, way round (typing the URL into the address bar).

Security is hard to get right. Not only that, it's downright difficult to patch on afterwards. If we write a program, we should think about the security issues right up front. We should build in security so that the end-user doesn't have to think or worry about it. For example, are you going to have an auto-update option in your software? How can the user be sure that the update is coming from you? A digital signature might be the answer, or maybe something else entirely, but you should think about this first and not tack it on when the software is complete. Consider doing a threat analysis.
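To make that concrete, here's a minimal sketch of what a signature check in an auto-updater might look like. This isn't anyone's real updater and the names are my own invention; it assumes RSA signatures and the third-party Python "cryptography" package, but the shape is the same whatever language or library you use: the public key ships inside the application, and an update that doesn't verify simply never gets installed, with no dialog asking the user to make a judgment call.

```python
# Minimal sketch, not production code: verify a downloaded update against a
# detached signature before installing it. Assumes RSA / PKCS#1 v1.5 signatures
# and the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def update_is_authentic(update_bytes: bytes, signature: bytes,
                        pem_public_key: bytes) -> bool:
    """Return True only if the update was signed with our private key."""
    public_key = serialization.load_pem_public_key(pem_public_key)
    try:
        public_key.verify(signature, update_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# Hypothetical usage inside the updater: a failed check aborts and logs,
# rather than asking the user an unanswerable "Allow or Cancel?" question.
# if not update_is_authentic(package, package_sig, EMBEDDED_PUBLIC_KEY):
#     raise RuntimeError("Update failed signature verification; not installing.")
```

The point isn't the particular algorithm; it's that the decision is made by the software, designed in from the start, instead of being delegated to the person least equipped to make it.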

Security is also about education and risk-assessment. Educate your users on what to expect from your software and from your company, and train them to contact you if something else happens. Keep it simple. If you trade using the foobar.com domain, don't suddenly send your users emails from foobar-thatsus.ru (and if you do, and some users respond, then someone wasn't listening). Learn how to assess risks as well. This is a much harder lesson to learn, and, to be honest, taking airport security as an example, you shouldn't feel bad that you may get it wrong some of the time.

And make sure you carry your front door key with you at all times.
