The Lesson of Lavabit

An implied promise of undeliverable security painted a bullseye…

On Thursday, August 8th, Ladar Levison, the owner and operator of the semi-secure Lavabit.com eMail system, shut down his nearly ten year old service rather than be forced to continue to comply with United States law enforcement demands for the disclosure of personal and private information belonging to his service’s clients. The Lavabit web site now simply displays this notice:

My Fellow Users,

I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot. I feel you deserve to know what’s going on–the first amendment is supposed to guarantee me the freedom to speak out in situations like this. Unfortunately, Congress has passed laws that say otherwise. As things currently stand, I cannot share my experiences over the last six weeks, even though I have twice made the appropriate requests.

What’s going to happen now? We’ve already started preparing the paperwork needed to continue to fight for the Constitution in the Fourth Circuit Court of Appeals. A favorable decision would allow me to resurrect Lavabit as an American company.

This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would _strongly_ recommend against anyone trusting their private data to a company with physical ties to the United States.

Sincerely,
Ladar Levison
Owner and Operator, Lavabit LLC

Defending the constitution is expensive! Help us by donating to the Lavabit Legal Defense Fund here.

What is the lesson of Lavabit?

When news first surfaced about Edward Snowden’s presumptive use of Lavabit’s eMail service for his eMail communication, the assumption was that it was somehow “secure.” So I researched the nature of the service that was being offered, and I was not impressed. The trouble was, it was making a lot of noise about security, but as an eMail store-and-forward service it didn’t (and couldn’t) really do anything that was very useful from a security standpoint: Ladar had arranged to encrypt and store incoming eMail to a user’s inbox in such a fashion that his service could not then immediately decrypt the eMail. It would not be until the user logged in that the Lavabit servers would be able to derive the decryption key in order to forward the then-decrypted eMail to the user.

As you can see, while this did offer somewhat useful encryption of data-at-rest, it didn’t actually offer his users any real protection because both incoming and outgoing eMail would necessarily be transmitted in the clear.
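The essential shape of such a login-deferred decryption scheme can be sketched in a few lines of Python. This is only a toy model: tiny textbook RSA and a SHA-256 keystream stand in for real ciphers, and the variable names are my own invention, since Lavabit’s actual implementation details beyond the description above are not public.

```python
import hashlib

# Toy model only: the point is the shape of the scheme. The server can
# ENCRYPT incoming mail at any time, but can only DECRYPT once the
# user's password arrives at login.
P, Q = 61, 53
N, E = P * Q, 17                      # public key (n, e)
D = pow(E, -1, (P - 1) * (Q - 1))     # private exponent

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Illustrative hash-based keystream (NOT a real cipher).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Account creation: the private exponent is stored only wrapped under
# a key derived from the user's password, which the server discards.
wrap_key = hashlib.pbkdf2_hmac("sha256", b"user password", b"salt", 100_000)
wrapped_d = xor_stream(wrap_key, D.to_bytes(2, "big"))

# Mail arrives while the user is offline: a per-message key is
# RSA-encrypted with the PUBLIC key; the server cannot reverse this.
msg_key = 42                                  # would be random
stored_key = pow(msg_key, E, N)
stored_body = xor_stream(msg_key.to_bytes(2, "big"), b"hello Ed")

# User logs in: the password re-derives wrap_key, unwraps d, and only
# now can the stored message be decrypted and forwarded.
d = int.from_bytes(xor_stream(wrap_key, wrapped_d), "big")
body = xor_stream(pow(stored_key, d, N).to_bytes(2, "big"), stored_body)
assert body == b"hello Ed"
```

Note that nothing in this arrangement protects the message while it is in transit to or from the server, which is exactly the weakness at issue.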

This architecture would, therefore, inherently expose the Lavabit service, its servers, its owners, and thus its users’ data to law enforcement demands. Which, it seems clear, is exactly what happened. Ladar made his service a target by offering “security” that wasn’t actually secure. (And how very wrong is it that he cannot even share the exact nature of the demands that were made upon him?!)

I am impressed that Ladar chose to shut down his service rather than continue to promise something that he now unequivocally knew was no longer secure in the face of law enforcement’s quasi-legal incursions. It would have probably been better if he hadn’t attempted to offer security that was beyond his ability to provide.

During my weekly Security Now! podcast with Leo Laporte, we use the acronym “TNO” (Trust No One) to refer to any system where readily available cryptographic technology is properly employed in such a fashion that it is not necessary to trust the behavior of any third party. Unfortunately, without going to extraordinary lengths (e.g. S/MIME, PGP, GnuPG, etc.), today’s eMail technology is resistant to the TNO principle.

In coming weeks our Security Now! podcast will be delving deeply into the ways and means of producing true TNO eMail security.

Steve's Sig

Posted in Uncategorized | 94 Comments

IronMan 3 was “Unbelievable”… but not in a good way.

My two-cent take on IronMan 3:

This was a Disney/Marvel collaboration. Perhaps one problem was that it was too much Disney and insufficient Marvel.

The thing I was conscious of at many points throughout the movie was that in ridiculously violent fights between unarmored and unprotected simple flesh-and-blood humans… no one gets hurt. In Road Runner cartoons, when the anvil flattens the Coyote, it’s quite funny due to its ludicrous overstatement. But the real parts of a movie involving humans — which are intended to be believable — really need to remain believable… or it’s asking too much of a mature audience.

As a Science Fiction lover, I am more than willing to suspend my disbelief for the sake of immersion into a new idea. I loved the first IronMan, and have watched it many times. So I will gleefully imbue a robotic suit with any levels of strength and power the story may require. That’s fine. Bring it on. Thrill me. But I know the limitations of an unaided human body. We all have one. And what I saw far too much of, against human flesh, was a level of coyote-flattening violence that was utter nonsense.

Despite the fact that I have no doubt IronMan 3 will break US domestic box office records, as it already has overseas, I think that “Oblivion” was the far better movie so far this summer.

/Steve. (@SGgrc and http://www.grc.com)

Posted in Uncategorized | 31 Comments

Reverse Engineering RSA’s “Statement”

Responsible Disclosure?  Ummm, not so much…

On March 17th, 2011, Art Coviello, RSA Security‘s Executive Chairman, posted a disturbingly murky statement on their website disclosing their discovery of an “APT” (Advanced Persistent Threat). In other words, they discovered that bad guys had been rummaging around within their internal network for some time (persistent) and had managed to penetrate one of their most sensitive and secret databases.  Here is the most relevant piece of Art Coviello’s disclosure (you can find the whole piece here):

[...] Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations. [...]

As you can see, it would have been difficult for any bureaucrat to be less clear about what they know. But science is science, and the simple realities of what must be going on don’t accommodate much bureaucratic wiggle-room:

RSA’s SecurID devices are known to be designed around a cipher keyed with a 64-bit secret. The 64-bit secret is used to encrypt a realtime counter, which generates an effective 22-bit value. While that is not many bits of time, the clock is incremented slowly, only once every 30 or 60 seconds, so 22 bits (4,194,304 values) is sufficient to outlive the expected life of the device, and the timer would never be expected to wrap around.
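To make the arithmetic concrete, here is a hedged sketch of this style of time-based token. The real SecurID cipher is proprietary, so HMAC-SHA256 stands in for it here; the 60-second interval and 6-digit display are for illustration.

```python
import hashlib, hmac, struct

def token_code(secret: bytes, unix_time: int, interval: int = 60) -> str:
    # The slow-moving counter: one tick per minute means the counter
    # space (22 effective bits) outlives the device itself.
    counter = unix_time // interval
    # The real SecurID cipher is proprietary; HMAC stands in for it.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(mac[:4], "big") % 1_000_000)

secret = b"\x00" * 8          # the device's 64-bit internal secret

# The displayed code is stable within one interval...
assert token_code(secret, 960) == token_code(secret, 1019)
# ...and anyone holding the secret can compute any FUTURE code, which
# is exactly why the serial-to-secret mapping database is so sensitive.
future = token_code(secret, 960 + 365 * 24 * 3600)
assert len(future) == 6
```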

RSA SecurID Token

One of several forms of the RSA SecurID Token

Each SecurID has an external serial number (printed on the back) that is used to identify and register it with an authentication service.  Hopefully, there is no “algorithm” of any sort for determining the internal secret key from the device’s serial number, since the discovery of such an algorithm would instantly kill the security of the entire system.

In the absence of a mapping algorithm, at the time of manufacture individual SecurID devices would be assigned a secret internal random or pseudo-random 64-bit key and a database would be maintained to forever map the device’s externally visible serial number to its internal secret 64-bit key.

This public-serial-number-to-secret-key mapping database then becomes “the keys to the kingdom”. It is RSA’s biggest secret to keep, since a full or partial disclosure of the database would potentially allow attackers to determine a device’s current and future display values and would therefore, of course, break any authentication protection.

To carry out a successful attack, an attacker would need to obtain its target device’s public serial number as well as one or more current output samples, at a known time, to determine the current state of the device’s 22-bit realtime clock. From that point on, an attacker could reliably determine the device’s output at any time in the future.

What can be deduced from what (little) RSA has disclosed?

  • If “the keys to the kingdom”—the public serial number to secret key mapping database—had NOT been compromised, there would be zero danger to users of RSA’s SecurIDs.  But we know at least that the danger is not zero.  Therefore, the most reasonable conclusion to reach is that RSA believes that at least some of  “the keys to the kingdom” have been compromised. (Because that’s their system’s only real vulnerability.)

  • Users of SecurID, and other multifactor authentication systems, typically do not provide the device’s public serial number when they are using it for authentication … though neither is that number intended to be kept secret (since it is printed on the back of every device).  This means that an attacker would need either to have brief physical access to a device to obtain its serial number (which would also presumably allow them to obtain a few output samples to determine the clock counter’s current realtime position), or to have also compromised RSA’s authentication account registration database, which presumably maps user accounts to their devices’ SecurID serial numbers.  Unless RSA discloses more, we won’t know how much more than the secret key mapping database may have been compromised.  Thus, it’s not possible to assemble a comprehensive threat model.

  • RSA may not want to do the responsible thing, because it would be very expensive for them. But given the only deductions possible from what little RSA has said in light of the technology, any company using RSA SecurID tokens should consider them completely compromised and should insist upon their immediate replacement.

RSA is understandably embarrassed.  And mistakes do happen.  If employees of a security company are using today’s incredibly insecure desktop toy operating systems, bad guys are going to be able to find a way to penetrate even the most carefully guarded connected networks.

RSA therefore needs to step up to the plate and take responsibility for what has happened. That means recalling every single SecurID device and replacing them all.  No company can consider RSA’s existing deployed SecurID devices to be secure.

You may CLICK THIS LINK to view this blog posting by itself so you can see replies and add your own.


Posted in Uncategorized | 122 Comments

Why Firesheep’s Time Has Come

This is what it takes to effect change…

At Noon on Sunday, October 24th, 2010, during the final day of the 12th annual Toorcon Security Conference held in San Diego, two Seattle, Washington-based hackers, Eric Butler and Ian Gallagher, brought web session hijacking to the masses with their release of “Firesheep” … and the world was changed forever.

In case you’ve been somewhere off the grid, and have somehow missed the news, Firesheep is an incredibly easy to use add-on for the Firefox web browser that, when invoked while connected to any open and unencrypted WiFi hotspot, lists every active web session being conducted by anyone sharing the hotspot, and allows a snooping user to hijack any other user’s online web session logon with a simple double-click of the mouse. The snooper, then logged on and impersonating the victim, can do anything the original logged on user/victim might do.
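Mechanically, “hijacking” is nothing more than replaying the sniffed session cookie in requests of one’s own. A minimal sketch (the hostname and cookie value are invented for illustration):

```python
# Once a session cookie has been sniffed off the open WiFi, the
# hijacker simply attaches it to his own requests. The server cannot
# tell the difference. (Hostname and cookie below are made up.)
sniffed_host = "social.example"
sniffed_cookie = "sessionid=abc123def"

request = (
    "GET /home HTTP/1.1\r\n"
    f"Host: {sniffed_host}\r\n"
    f"Cookie: {sniffed_cookie}\r\n"   # the victim's logged-on state
    "Connection: close\r\n"
    "\r\n"
)
assert "Cookie: sessionid=abc123def" in request
```

Firesheep simply automates the sniffing and the double-click replay; everything else is ordinary HTTP.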

Firesheep’s creators will be the first to tell you that what it is doing is not rocket science. The hacking capability to do this has been known and freely available within the hacking community for many years, while the security community’s warnings about the need to fix the easily remedied configuration problems that make this possible have fallen on deaf ears. But thanks to Firesheep, reports are now coming in of people seeing other people using Firesheep in public WiFi hotspot settings.

Foreseeably, we will soon be hearing reports — many reports — of all sorts of mischief befalling the accounts of innocent users after they logged onto their accounts from open and unencrypted WiFi hotspots. At that point the implications of these long-standing security issues will finally hit home… and loud end-user complaints will drive the long-awaited changes the security community has been seeking for years.

The ease and simplicity of using Firesheep has transformed web session hijacking from a mysterious command-line driven black art into something for the masses. This is huge.

To get some sense for just how huge this is, check out the current download count of Firesheep at its download page: http://github.com/codebutler/firesheep/downloads After half a minute, press your browser’s page refresh to see how many more copies were downloaded just while you were looking at that first count.

I said above “the world was changed forever” because I can’t see how it could remain the way it has been in the face of point-and-click web session hijacking.

What needs to change? … Exactly two things:

  • 1. WiFi hotspots must encrypt. Period. They can still remain free and open, but they must use WPA encryption to protect their users from casual eavesdropping. As I wrote in my previous blog posting, this is not difficult. The hotspot’s WPA password does not need to be secret in order for all of the hotspot’s users to be protected from casual passive eavesdropping by each other and any other outsiders. For example, Starbucks could simply adopt the password “starbucks” throughout their entire coffee shop chain and have it known to all users. Users get the benefit of knowing that their traffic is encrypted in return for the minor one time burden of entering the “starbucks” password when prompted by their computers.

    Is this perfect protection? No. Because robust endpoint authentication will always be missing from any public-access WiFi system, complex active “man in the middle” attacks can still be mounted, but simply switching to encrypted WPA protocol raises the attack bar very much higher with near zero effort. And, importantly, switching to WPA encryption can be done immediately to offer significant protection to ALL users of such encrypted hotspots, not only just those who might be targeted by Firesheep. It’s just the right thing to do, and it’s SO simple.

  • 2. The bigger change that must also be made is for all vendors of web services to switch their connections over to using the SSL/TLS protocol exclusively. Only inertia and laziness have prevented this from being done long ago. It is my hope that the appearance of a tool as popular and easy-to-use as Firesheep will provide the incentive that has been missing for so long. The mischief it will cause should lead end users to demand this enhanced security from their web service vendors.

    Even when a user is not in the process of logging on, they have a reasonable expectation that their interactions with a remote server will be relatively private, not literally broadcast to anyone with an antenna … like a passing Google mapping car. And when those interactions contain the user’s logged on state cookies, as they must for the user to be recognized as currently logged on, a user’s unencrypted session becomes readily hijackable and hackable, making the situation even worse.

Isn’t switching over to SSL/TLS difficult and expensive?

No. The belief that switching to using pure SSL/TLS is any burden was obsoleted years ago with the addition of SSL/TLS Session Resume. Session Resume allows a particular client and server to perform the high-overhead public key negotiation just once (which they always need to do during the secure SSL/TLS logon anyway) and to then reuse those negotiated credentials for all future SSL/TLS connections being made. Since the credential reuse duration is typically 24 hours, very little additional burden is placed upon either the client or the server as a consequence of using SSL/TLS pervasively across a web site … always and for everything.
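The effect of session resumption can be sketched with a simple cache model. This is not the real TLS state machine — just an illustration of why the expensive public-key step happens only about once per day per client:

```python
# Illustrative model only: a server-side session cache lets a
# returning client skip the expensive public-key negotiation.
session_cache = {}            # client -> (master_secret, expiry time)
RESUME_LIFETIME = 24 * 3600   # typical credential-reuse duration

def handshake(client: str, now: float) -> str:
    cached = session_cache.get(client)
    if cached and cached[1] > now:
        return "resumed"      # symmetric crypto only: cheap
    # Full handshake: the costly public-key work happens here, once.
    session_cache[client] = ("master-secret", now + RESUME_LIFETIME)
    return "full"

assert handshake("alice", 0.0) == "full"        # first visit: pay once
assert handshake("alice", 600.0) == "resumed"   # every later connection
assert handshake("alice", 90000.0) == "full"    # cache expired: pay again
```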

ALWAYS Authenticated & Encrypted – It’s WAY past time.

The idea of using SSL/TLS pervasively has been growing slowly but has, until now, failed to catch on due to inertia more than anything else. Various client-side add-ons, such as the Electronic Frontier Foundation’s (EFF) HTTPS-Everywhere add-on, or Force-TLS, attempt to induce the client to push for SSL/TLS from its end. And the emerging HTTP Strict Transport Security (HSTS) extension would allow web sites to enforce their own intention to only accept secure connections from clients.
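The strict-transport mechanism is simple enough to show: the server sends one response header, and a conforming browser refuses insecure connections to that site thereafter. Here is a small parser for the header’s directives (the syntax follows the specification as later standardized; the values are illustrative):

```python
# Sketch: a site opts in to strict transport security with one
# response header, e.g.:
#   Strict-Transport-Security: max-age=31536000; includeSubDomains
def parse_hsts(value: str) -> dict:
    directives = {}
    for part in value.split(";"):
        name, _, val = part.strip().partition("=")
        directives[name.lower()] = val or True
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains")
assert policy["max-age"] == "31536000"     # honor HTTPS-only for a year
assert policy["includesubdomains"] is True
```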

This is all good, but someone needs to light a fire under the WiFi hotspot providers and web service vendors to make this happen … which is precisely why I am so pleased that “Firesheep” has finally happened.

The ground has already been prepared for the move to pervasive authentication and encryption. Let’s hope that the user, press, and provider communities will become upset enough over the appearance of Firesheep that these long-awaited security changes will finally be made. If that could happen, the world wide web will be a far better and more secure place to hang out and play.


Posted in Uncategorized | 66 Comments

Instant Hotspot Protection from “FireSheep”

What any open hotspot can do to protect its users

Amid all the fury over the release of Firesheep, no one else seems to have noticed, or at least mentioned, that the only thing any WiFi hotspot needs to do to protect its users is activate WPA encryption using any simple publicly-known password.

For example, Starbucks could simply set their password to “starbucks”, Peets Coffee to “peets”, Panera Bread to “panera” … and every user of those free wireless hotspots would be protected from the Firesheep threat … and from much more. Or, by general agreement, all free and open WiFi access points could simply use the password “free”, which would work just as well.

As long as the universally supported WPA encryption protocol is used, each individual user receives their own private “session key” that absolutely prevents eavesdropping between users, even though they are all using the same WiFi password.  It’s just that simple.
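Why does a shared password still yield private per-user keys? Because WPA’s handshake mixes fresh per-session random nonces into each client’s pairwise key. The sketch below follows the published WPA2-PSK derivation shape (PBKDF2 for the shared master key, an HMAC-SHA1 expansion for the pairwise key), simplified, with made-up MAC addresses:

```python
import hashlib, hmac, os

def pmk(passphrase: str, ssid: str) -> bytes:
    # Everyone who knows the password derives the SAME master key.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def ptk(pmk_: bytes, ap: bytes, sta: bytes, anonce: bytes, snonce: bytes) -> bytes:
    # ...but each client's session key also mixes in random nonces
    # exchanged during the 4-way handshake. (Simplified PRF-512 shape.)
    data = (b"Pairwise key expansion\x00"
            + min(ap, sta) + max(ap, sta)
            + min(anonce, snonce) + max(anonce, snonce))
    out = b""
    for i in range(4):   # four HMAC-SHA1 rounds -> 512 bits, keep 64 bytes
        out += hmac.new(pmk_, data + bytes([i]), hashlib.sha1).digest()
    return out[:64]

shared = pmk("starbucks", "Starbucks")
ap = bytes.fromhex("aabbccddeeff")
alice = ptk(shared, ap, bytes.fromhex("111111111111"), os.urandom(32), os.urandom(32))
bob   = ptk(shared, ap, bytes.fromhex("222222222222"), os.urandom(32), os.urandom(32))
assert alice != bob   # same password, yet no user can read another's traffic
```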

Hotspots only need to switch from “no encryption” to WPA and post or publish any static WPA password … and a large part of the problem, and more, is solved.

I have posted a follow-up to this blog posting with a detailed look at Firesheep, and why I think it is such a fantastic thing to have happened. But before I wrote that I wanted to quickly publish the idea of simply encrypting with WPA under any simple static password, since that will instantly lock down any public WiFi hotspot.


Posted in Uncategorized | 84 Comments

iPhone 4 External Antenna Problem

What the evidence indicates is going on

On Friday, June 25th, I tweeted a link to a YouTube video created and posted by one of my Twitter followers — @antio — whom I have every reason to believe is legitimate and well meaning. In this brief (53 second) video we see a convincing and rather horrifying demonstration of what appears to be a serious design flaw in the brand new iPhone. For your reference, here’s the YouTube video link:

iPhone 4 Antenna Problem is Caused By a Design Flaw, Not Signal Blockage

Mentions back to me from new iPhone 4 owners were mixed, with some confirming Anthony’s demo and others unable to confirm it and being suspicious of the results.

However, as an engineer I can propose a useful theory to explain what everyone is seeing, and not seeing — and even why Apple shipped the iPhone as it is — as follows:

Simply stated, Apple’s “5-bars” cellular signal strength display is not showing the full range of possible, or even typical, received cellular signal strength. It is only showing the BOTTOM END of the full range of possible reception strength.

In other words, say for example that the iPhone is able to deliver a good clear conversation when receiving only 5% of the signal strength that you might have when standing in the shadow of a cell tower. Even though 5% signal strength is far less than 100%, if it delivers a strong and clear conversation, it’s enough. So Apple’s engineers calibrated their digital “5-bars” display to show all 5-bars at any signal strength from 100% all the way down to 5%. It’s only when the received signal strength begins to drop below 5% that conversations suffer, calls get dropped, and Apple starts to take bars away from their 5-bar display.

Now imagine that “bridging” the cellular and WiFi antennas by placing one’s hand across the black insulating antenna gap causes a 5% drop in received signal strength.  If you initially had, say, 80% strength, now you would be down to 75%… and you’d still have all five bars, since you still have way more than the 5% required for clear calls.  Thus, you would see and hear no effect from either deliberate or inadvertent antenna bridging.  But if you only had 5% incoming signal strength with the antenna completely in the clear — thus no remaining signal strength margin even though you were seeing 5-bars — and you then bridged the antenna, dropping the signal strength by 5% down to 0% … you would see exactly what Anthony’s video demonstrates.
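My theory can be captured in a few lines. The 5% floor and the bar spacing below it are my own illustrative assumptions, not Apple’s actual calibration:

```python
# Assumed calibration: all five bars above a small "floor" of true
# signal strength; bars fall away only below it. (The 5% figure is
# an illustrative guess, not Apple's number.)
def bars(strength_pct: float, floor: float = 5.0) -> int:
    if strength_pct >= floor:
        return 5
    return max(0, round(5 * strength_pct / floor))

BRIDGING_LOSS = 5.0   # assumed drop from bridging the antenna gap

# Strong signal: the drop is invisible -- still five bars.
assert bars(80.0) == 5 and bars(80.0 - BRIDGING_LOSS) == 5
# Marginal signal: the SAME drop takes you from five bars to none.
assert bars(5.0) == 5 and bars(5.0 - BRIDGING_LOSS) == 0
```

The display saturates at the top, so identical attenuation is either invisible or catastrophic depending entirely on where you start — which is exactly the pattern the mixed reports describe.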

It’s unfortunate that we don’t have a useful “full range” signal strength display showing us the true received power throughout its entire possible range from 100% all the way to 0% — because I believe there would be much less confusion if people could see what was actually going on.  But for now we don’t.

Whatever the case, it does appear that Apple’s latest phone, with its externalized and perhaps too accessible antennas, should be wrapped in an insulating case of some sort in order to not only keep it safe from bumps and bruises, but also to allow its antennas to operate without the attenuation created by direct contact with the phone’s owner’s body.

UPDATE: Don’t miss the comment to this posting by Simon Byrnand who adds some great real world numbers and confirms my engineering theory.


Posted in Uncategorized | 1,221 Comments

HCP 0-Day Quick Fix

ONLY NECESSARY for Windows XP and Server 2003

UPDATES:

  • As predicted, very soon after news of this new vulnerability became public, exploits began appearing on the Internet. We have no way of knowing how long Microsoft will take to fix this through their automatic update system, especially considering that news of this unfortunately coincided with their most recent “patch Tuesday.” So fixing this yourself is even more important.
  • Microsoft has produced one of their quick “FixIt” buttons that will perform the Help Center neutering functions (originally described below) automatically. We recommend doing this sooner rather than later: Help Center Vulnerability FixIt.

A bit of background:
On Saturday, June 5th, Tavis Ormandy, a security researcher employed by Google, provided acknowledged proof to Microsoft of a previously unpublished and unknown vulnerability affecting the XP and Server 2003 versions of Windows (neither Vista nor Windows 7.)

Then, five days later, breaking from the “Responsible Disclosure” tradition of providing a software publisher time to research and repair the problem prior to disclosing its existence to the world, Tavis did just that in a high visibility posting on Thursday, June 10th.

A predictable fracas has arisen because Tavis’ employer, Google, and Microsoft are increasingly seen as competitors in “the race to the cloud” as personal and corporate computing move from the desktop and into “the cloud” of the Internet and the Web.

For his part, Tavis appears to be no big fan of the Responsible Disclosure paradigm, preferring the “Full Disclosure” approach. Tavis suggests that anyone interested consider the published opinion of the much-respected security researcher and cryptographer, Bruce Schneier:
http://www.schneier.com/essay-146.html
http://www.schneier.com/crypto-gram-0111.html#1

Tavis attempts to explain that he performed this research — and made this disclosure — on his own behalf and not under the auspices of his employer, Google. But neither he nor Google is getting off so easily. (It occurs to me that he could have easily made the disclosure anonymously if he had wanted the information out there without dragging Google into the controversy. But, for whatever reason, he chose to employ his public persona.) Microsoft has also gone public with their unhappiness, making it clear that Tavis is a Google security researcher.

Why does any of this matter to us?
Unfortunately, the surprising amount of noise created by the details of this disclosure has lifted “just another 0-day vulnerability” (which would be bad enough all by itself) well into the spotlight, making it all the more likely to be exploited. Google News (note the irony) currently finds 207 separate articles on this topic! How can malicious hackers resist this one? They won’t.

And the second bit of bad news is that this is the worst sort of vulnerability: it is trivial to cause malicious code to run on a user’s computer, and a public, very complete and thorough description, including sample code, is available. Since Microsoft was given very little notice, and since their monthly “Patch Tuesday” occurred just two days before the vulnerability disclosure, it’s unclear whether the world of XP users will need to wait a month, more than a month, or less … but it could be a while.

Therefore, XP users may wish (and would probably be well advised) to immediately disable their system’s “hcp” protocol handler simply by renaming its key in the Windows registry. (I prefer renaming; Microsoft offers several more complex workarounds. See the link under “Workarounds”.)

If you choose to follow my simple renaming suggestion, do the following:

  1. Run XP’s “Regedit” registry editor by clicking on “Start” then choose “Run”, enter “regedit” in the Open field, then click “Ok.”
  2. Find the “HCP” protocol key by searching the registry: Using the Regedit application, select “Edit” from the menu, then “Find…” As shown in the sample below, enter “HCP” into the “Find what:” field, then uncheck “Values” and “Data” and check “Match whole string only”. With the “Find” dialog set as shown below, click the “Find Next” button…

    Find the HCP Key…some time will pass while Windows searches through the registry to locate the “HCP” key…

  3. Once the search stops, you should see the “HCP” key highlighted as shown below:

    Found the HCP Key

    Verify that the correct “HCP” is highlighted by checking the lower-left status line which should show “My Computer\HKEY_CLASSES_ROOT\HCP” just like the sample above.

  4. Right-click on the “HCP” key, choose “Rename” from the pop-up menu, then change the key’s name to “HCP-OFFLINE” (or whatever you like other than “HCP”).

Following the simple instructions above will immediately (no reboot required) eliminate your system’s ability to launch the vulnerable and defective Help Center application in response to an “hcp://” style URL link — now you’re safe. That’s what you want until Microsoft updates and repairs the newly public vulnerability in Windows Help Center.
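To see why the rename works, consider how scheme dispatch behaves: Windows looks up a registry key matching the URL’s scheme, and if no key of that name exists, nothing launches. A toy model in Python (the dict stands in for HKEY_CLASSES_ROOT, and the handler command line is invented for illustration):

```python
# Toy model of protocol-handler dispatch: the dict stands in for
# HKEY_CLASSES_ROOT; the handler command line is made up.
registry = {"HCP-OFFLINE": "helpctr.exe -url %1"}   # key renamed

def launch(url: str):
    scheme = url.split("://", 1)[0].upper()
    handler = registry.get(scheme)
    return handler.replace("%1", url) if handler else None

# With the key renamed, hcp:// links resolve to nothing -- safe.
assert launch("hcp://services/malicious") is None

# Restoring the original key name re-enables the handler later.
registry["HCP"] = registry.pop("HCP-OFFLINE")
assert launch("hcp://services/search") is not None
```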

You can test it too!
If you’re a belt & suspenders sort of person (as I am) you can test your system’s vulnerability to the exploit both with the “HCP” key named “HCP” and also “HCP-OFFLINE” (or whatever you may have named it). Under the “Consequences” section of Tavis’ original posting to seclists.org, he provides proof-of-concept links for users having IE7 and IE8 (and the IE8 link was effective with my Firefox system).

But please remember: this is admittedly a horrendous kludge that you will need to remember to “undo” — by restoring the renamed HCP key back to “HCP” — once Microsoft repairs their code. Still, it’s all we have for now, and it’s arguably better than having our machines taken over remotely.


Posted in Uncategorized | 64 Comments