Yes… TrueCrypt is still safe to use.

So opens the short editorial I wrote this morning and placed at the top of GRC’s new TrueCrypt Final Version Repository page.

[Image: TrueCrypt logo]

The impetus for the editorial was the continual influx of questions from people asking whether TrueCrypt was still safe to use, and if not, what they should switch to, and so on. By this time, one of the TrueCrypt developers, identified as David, had been heard from, and that exchange confirmed the essential points of my theory about the events surrounding the self-takedown of TrueCrypt.org.

Rather than repeating that entire editorial here, I’m posting this as a pointer to it, since folks here have thanked me for maintaining a blog and not relying solely upon Twitter. This venue also supports feedback and interaction, which GRC’s current read-only format cannot.

Peace.

/Steve.


An Imagined Letter from the TrueCrypt Developer(s)

As I wrote yesterday, we know virtually nothing about the developer(s) behind TrueCrypt. So any speculation we entertain about their feelings, motives, or thought processes can only be a reflection of our own. With that acknowledgement, I’ll share the letter I think they might have written:

TrueCrypt is software. Frankly, it’s incredibly great software. It’s large, complex and multi-platform. It has been painstakingly designed and implemented to provide the best security available anywhere. And it does. It is the best and most secure software modern computer science has been able to create. It is a miracle, and a gift, and it has been a labor of love that we have toiled away at, thanklessly, for a decade to provide to the world… for free.

TrueCrypt is open source. Anyone could verify it, trust it, give back, contribute time, talent or money and help it to flourish. But no one has helped. Most just use it, question it and criticize it, while requiring it to be free, and complaining when it doesn’t work with this or that new system.

After ten years of this mostly thankless and anonymous work, we’re tired. We’ve done our part. We have what we want. And we feel good about what we have created and freely given. Do we use it?  Hell yes.  As far as we know, TrueCrypt is utterly uncrackable, and plenty of real-world experience, including drives that remain ruthlessly protected to this day, backs up that belief.

But hard drives have finally exceeded the traditional MBR partition table’s 32-bit sector count: with 512-byte sectors, 2^32 sectors tops out at about 2.2 terabytes, and 2.2 terabytes is no longer enough. So the world is moving to GPT. But we’re not. We’re done. You’re on your own now. No more free lunch.

We’re not bitter. Mostly we’re just tired and done with TrueCrypt. As we wrote above, as far as we know today, it is a flawless expression of cryptographic software art. And we’re very proud of it. But TrueCrypt, which we love, has been an obligation hanging over our heads for so long that we’ve decided to not only shut it down, but to shoot it in the head. If you believe we’re not shooting blanks, you may want to switch to something else. Our point is, now, finally, it’s on you, not us.

Good luck with your NSA, CIA, and FBI.

Please also see Brad Kovach’s blog posting about this topic. Very useful.

/Steve.


Whither TrueCrypt?

My guess is that the TrueCrypt self-takedown is going to turn out to be legitimate.

We know NOTHING about the developers behind TrueCrypt.

Research Professor Matthew Green, the Johns Hopkins cryptographer who recently helped launch the TrueCrypt audit, is currently as clueless as anyone. But his recent tweets indicate that he has come to the same conclusion that I have:

  • I have no idea what’s up with the Truecrypt site, or what ‘security issues’ they’re talking about.
  • I sent an email to our contact at Truecrypt. I’m not holding my breath though.
  • The sad thing is that after all this time I was just starting to like Truecrypt. I hope someone forks it if this is for real.
  • The audit did not find anything — or rather, nothing that we haven’t already published.
  • The anonymous Truecrypt dev team, from their submarine hideout. I emailed. No response. Takes a while for email to reach the sub.
  • I think it unlikely that an unknown hacker (a) identified the Truecrypt devs, (b) stole their signing key, (c) hacked their site.
  • Unlikely is not the same as impossible. So it’s *possible* that this whole thing is a hoax. I just doubt it.
  • But more to the point, if the Truecrypt signing key was stolen and the TC devs can’t let us know — that’s reason enough to be cautious.
  • Last I heard from Truecrypt: “We are looking forward to results of phase 2 of your audit. Thank you very much for all your efforts again!”

I checked out the cryptographic (Authenticode) certificate used to sign the last known authentic version (v7.1a) of TrueCrypt, signed on Feb. 7th, 2012:

[Image: the Authenticode certificate that signed TrueCrypt v7.1a]

You’ll notice that nine months after being used to sign the v7.1a Windows executable, the signing certificate expired (on November 9th, 2012).

The just-created Windows executable version of TrueCrypt, v7.2, was signed on May 27th, 2014 with THIS certificate:

[Image: the Authenticode certificate that signed TrueCrypt v7.2]

You’ll notice that the certificate which signed it was minted on August 24th, 2012, a few months before the previous certificate was due to expire, just as we’d expect, and by the same CA (GlobalSign), though with a longer (4096-bit) public key. All of this passes the smell test.

In a comment below, Taylor Hornby of Defuse Security noted that “The GPG signatures of the files also check out. The key used to sign them is the same as the one that was used to sign the 7.1a files I downloaded months ago.” So, again, this speaks of either a willful and deliberate act by the developers, or a rather stunning compromise of their own security. While, yes, the latter is possible, it seems much more likely, if also much less welcome, that TrueCrypt has been completely abandoned by its creators.
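For anyone who wants to repeat that check, here is a minimal sketch (in Python, shelling out to the standard GnuPG command-line tool) of verifying a detached signature. The filenames are illustrative assumptions rather than the exact names of the distributed files, and your keyring must already contain the TrueCrypt Foundation’s public key:

# A minimal sketch of verifying a detached GnuPG signature, as described above.
# Assumes the gpg command-line tool is installed and the signer's public key has
# already been imported into the local keyring.
import subprocess
import sys

def verify_detached(signature_file: str, signed_file: str) -> bool:
    result = subprocess.run(
        ["gpg", "--verify", signature_file, signed_file],
        capture_output=True, text=True,
    )
    print(result.stderr.strip())          # gpg reports its verdict on stderr
    return result.returncode == 0         # exit code 0 means a good signature

if __name__ == "__main__":
    ok = verify_detached("TrueCrypt Setup 7.1a.exe.sig", "TrueCrypt Setup 7.1a.exe")
    sys.exit(0 if ok else 1)

Comparing the key fingerprint gpg reports for the v7.2 files against the one that signed v7.1a is exactly the comparison Taylor describes.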

So, given the scant evidence, I think it’s much more likely that the TrueCrypt team – whoever they are – legitimately created this updated Windows executable and the other files, which would imply that they also took down their long-running TrueCrypt site.

Which, of course, leaves us asking why?  We don’t know, because we don’t know anything about them or their motives. They might be in Russia or China, where Windows XP is still a big deal (with a greater than 50% share), and be personally annoyed with Microsoft for cutting off support for XP.  Or it might be anything else.

What’s creepy is that we may never know.

/Steve.


A quick mitigation for Internet Explorer’s new 0-day vulnerability

The Internet industry press has been milking the news of the end of Windows XP support for much more than it’s worth. Now, over the weekend, we get news of another in a continuing series of 0-day flaws in Internet Explorer.  (Oh My God! It’s the XPocalypse!!)

Or maybe not quite yet.

May 1st, 2014: Microsoft has decided to patch everyone’s versions of Internet Explorer, v6 through v11… even on XP. So nothing changes yet.  Stay tuned. (And update your copies of IE!)

Web browsers are growing incredibly complex. It’s pretty clear that they will be our next-generation operating platforms. And as the last annual “Pwn2Own” contest showed, none of them can currently withstand the focused attention of skilled and determined attackers, especially when some prize money is dangled on the other side of the finish line.

With most recent exploits, the path to exploitation is convoluted and complex, and this one is no exception. In this case it depends upon encountering malicious Web content with IE’s ActiveScripting and ActiveX enabled (which is the default in both cases). That content loads an Adobe SWF (Shockwave Flash) file which first prepares the machine for exploitation, then uses JavaScript against the vulnerable version of IE (presently all versions of IE) to exploit a subtle flaw in the age-old and long-ago deprecated VML (Vector Markup Language) rendering library. (Which is, nonetheless, still hanging around “just in case.”)

To immediately protect any use of Internet Explorer – yes, even on creaky old WinXP (the XPocalypse has been delayed): you must first open a command prompt window with administrative privileges. This is done by right-clicking on the Command Prompt icon in the Start menu and selecting “Run as administrator.” Commands issued within this window will have the privilege required to make system-level changes.

32-bit systems only require the first command. But since 64-bit systems have both a 32-bit and 64-bit version of the vulnerable file, both commands must be used with them:

regsvr32 -u "%CommonProgramFiles%\Microsoft Shared\VGX\vgx.dll"
regsvr32 -u "%CommonProgramFiles(x86)%\Microsoft Shared\VGX\vgx.dll"

These commands unregister (-u) the VML renderer, making it inaccessible to the exploit attempt.  Your IE browser will no longer be able to render Vector Markup Language content, but VML has been unused on the web for many years.

You can perform a “before and after” test to confirm that VML rendering has been disabled with this simple VML rendering of an office layout: http://www.vmlmaker.com/gallery/visio/office_layout.htm. The proper response is a BLANK PAGE. If you instead receive a notice that “A VML capable browser is required…”, you must add the vmlmaker.com domain to IE’s “Compatibility View” (under the settings menu) for the test to function properly.

Note: An additional test can be performed by searching the Windows registry (search: Data, with “Match whole strings only” disabled) for references to vgx.dll. If it is found as the “Default” data of an “InprocServer32” key, then the library is still registered and available.
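If you would rather not poke through the registry editor by hand, here is a small sketch which performs the same check programmatically. It assumes Python is available on the machine; it simply walks HKEY_CLASSES_ROOT\CLSID looking for any InprocServer32 default value that still points at vgx.dll, and prints nothing if the renderer has been unregistered:

# Walks HKEY_CLASSES_ROOT\CLSID and reports any InprocServer32 (Default) value
# that still references vgx.dll. No output means the VML renderer is unregistered.
import winreg

def find_vgx_registrations():
    hits = []
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID") as clsid_root:
        index = 0
        while True:
            try:
                clsid = winreg.EnumKey(clsid_root, index)
            except OSError:                    # no more subkeys
                break
            index += 1
            try:
                with winreg.OpenKey(clsid_root, clsid + r"\InprocServer32") as key:
                    value, _ = winreg.QueryValueEx(key, "")     # the (Default) data
                    if isinstance(value, str) and "vgx.dll" in value.lower():
                        hits.append((clsid, value))
            except OSError:                    # this CLSID has no InprocServer32 key
                continue
    return hits

if __name__ == "__main__":
    for clsid, path in find_vgx_registrations():
        print(f"Still registered: {clsid} -> {path}")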

You can confidently leave things this way, since you are never going to need VML and, as this circus shows, we’re all a lot better off without it!

(My most recent work: An Evaluation of the Effectiveness of Chrome’s CRLSets.)

/Steve.


The Lesson of Lavabit

An implication of undeliverable security painted a bullseye…

On Thursday, August 8th, Ladar Levison, the owner and operator of the semi-secure Lavabit.com eMail system, shut down his nearly ten-year-old service rather than be forced to continue to comply with United States law enforcement demands for the disclosure of personal and private information belonging to his service’s clients. The Lavabit web site now simply displays this notice:

My Fellow Users,

I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot. I feel you deserve to know what’s going on–the first amendment is supposed to guarantee me the freedom to speak out in situations like this. Unfortunately, Congress has passed laws that say otherwise. As things currently stand, I cannot share my experiences over the last six weeks, even though I have twice made the appropriate requests.

What’s going to happen now? We’ve already started preparing the paperwork needed to continue to fight for the Constitution in the Fourth Circuit Court of Appeals. A favorable decision would allow me resurrect Lavabit as an American company.

This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would _strongly_ recommend against anyone trusting their private data to a company with physical ties to the United States.

Sincerely,
Ladar Levison
Owner and Operator, Lavabit LLC

Defending the constitution is expensive! Help us by donating to the Lavabit Legal Defense Fund here.

What is the lesson of Lavabit?

When news first surfaced about Edward Snowden’s presumed use of Lavabit’s eMail service for his eMail communication, the assumption was that it was somehow “secure.” So I researched the nature of the service that was being offered, and I was not impressed. The trouble was, it was making a lot of noise about security, but as an eMail store-and-forward service it didn’t (and couldn’t) really do anything that was very useful from a security standpoint: Ladar had arranged to encrypt and store incoming eMail to a user’s inbox in such a fashion that his service could not then immediately decrypt the eMail. It would not be until the user logged in that the Lavabit servers would be able to derive the decryption key in order to forward the then-decrypted eMail to the user.
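To make that architecture concrete, here is a simplified sketch of the kind of scheme just described. It is my own reconstruction using the pyca/cryptography library, not Lavabit’s actual code: the server holds the user’s public key and a password-locked private key, so mail encrypted on arrival cannot be read again until the user logs in and supplies the password.

# A simplified reconstruction (not Lavabit's actual code) of the store-and-forward
# scheme described above, using the pyca/cryptography library.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Account creation: generate a keypair and lock the private key with the user's password.
password = b"correct horse battery staple"
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
locked_private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.BestAvailableEncryption(password),   # unusable without the password
)
public_key = private_key.public_key()

# Mail arrival: the server can encrypt *to* the inbox but cannot read what it stores.
# (A real system would hybrid-encrypt: RSA for a session key, a symmetric cipher for the body.)
stored_message = public_key.encrypt(b"message as it arrived over SMTP", OAEP)

# User login: only now can the private key be unlocked, and the stored mail decrypted
# and forwarded -- at which point it travels to the user in the clear.
unlocked = serialization.load_pem_private_key(locked_private_pem, password=password)
print(unlocked.decrypt(stored_message, OAEP))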

As you can see, while this did offer somewhat useful encryption of data-at-rest, it didn’t actually offer his users any real protection because both incoming and outgoing eMail would necessarily be transmitted in the clear.

This architecture would, therefore, inherently expose the Lavabit service, its servers, its owners, and thus its users’ data to law enforcement demands. Which, it seems clear, is exactly what happened. Ladar made his service a target by offering “security” that wasn’t actually secure. (And how very wrong is it that he cannot even share the exact nature of the demands that were made upon him?!)

I am impressed that Ladar chose to shut down his service rather than continue to promise something that he now unequivocally knew was no longer secure in the face of law enforcement’s quasi-legal incursions. It would probably have been better if he had never attempted to offer security that was beyond his ability to provide.

During my weekly Security Now! podcast with Leo Laporte, we use the acronym “TNO” (Trust No One) to refer to any system where readily available cryptographic technology is properly employed in such a fashion that it is not necessary to trust the behavior of any third party. Unfortunately, without going to extraordinary lengths (e.g. S/MIME, PGP, GnuPG, etc.), today’s eMail technology is resistant to the TNO principle.
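Here, purely as an illustration of that TNO principle (and emphatically not a recipe for secure eMail), is a tiny sketch using the pyca/cryptography library: the data is encrypted on the sender’s machine with a key that only the two endpoints hold, so any third-party service in the middle stores and relays ciphertext it cannot read.

# A tiny illustration of TNO: encrypt before any third party ever sees the data,
# with a key held only by the endpoints. Uses the pyca/cryptography library.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()        # exchanged out-of-band between the two endpoints

ciphertext = Fernet(shared_key).encrypt(b"Meet at the usual place at noon.")
# ...the untrusted provider stores and forwards only this ciphertext...

plaintext = Fernet(shared_key).decrypt(ciphertext)   # only a key-holder can recover this
print(plaintext)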

In coming weeks our Security Now! podcast will be delving deeply into the ways and means of producing true TNO eMail security.

/Steve.


IronMan 3 was “Unbelievable”… but not in a good way.

My two-cent take on IronMan 3:

This was a Disney/Marvel collaboration. Perhaps one problem was that it was too much Disney and insufficient Marvel.

The thing I was conscious of at many points throughout the movie was that in ridiculously violent fights between unarmored and unprotected simple flesh-and-blood humans… no one gets hurt. In Road Runner cartoons, when the anvil flattens the Coyote, it’s quite funny due to its ludicrous overstatement. But the real parts of a movie involving humans — which are intended to be believable — really need to remain believable… or it’s asking too much from a mature audience.

As a Science Fiction lover, I am more than willing to suspend my disbelief for the sake of immersion into a new idea. I loved the first IronMan, and have watched it many times. So I will gleefully imbue a robotic suit with any levels of strength and power the story may require. That’s fine. Bring it on. Thrill me. But I know the limitations of an unaided human body. We all have one. And what I saw far too much of, against human flesh, was a level of coyote-flattening violence that was utter nonsense.

I have no doubt that IronMan 3 will break US domestic box office records, as it already has overseas, but I think that “Oblivion” has been the far better movie so far this summer.

/Steve. (@SGgrc and http://www.grc.com)


Reverse Engineering RSA’s “Statement”

Responsible Disclosure?  Ummm, not so much…

On March 17th, 2011, Art Coviello, RSA Security‘s Executive Chairman, posted a disturbingly murky statement on their website disclosing their discovery of an “APT” (Advanced Persistent Threat). In other words, they discovered that bad guys had been rummaging around within their internal network for some time (persistent) and had managed to penetrate one of their most sensitive and secret databases.  Here is the most relevant piece of Art Coviello’s disclosure (you can find the whole piece here):

[...] Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations. [...]

As you can see, it would have been difficult for any bureaucrat to be less clear about what they know. But science is science, and the simple realities of what must be going on don’t accommodate much bureaucratic wiggle-room:

RSA’s SecurID devices are known to be designed around a cipher keyed with a 64-bit secret. The 64-bit secret is used to encrypt a realtime counter which generates an effective 22-bit value. While this is not many bits of time, the clock is incremented slowly, only once every 30 or 60 seconds, so 22 bits (4,194,304 values) are sufficient to outlive the expected life of the device, and the timer would never be expected to wrap around.
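To make that architecture concrete, here is a toy model in Python. It is emphatically not RSA’s actual (proprietary) SecurID algorithm; it simply mirrors the structure just described: a 64-bit per-device secret keys a transform of a slowly incrementing 22-bit realtime counter, and the result drives the six-digit display.

# A purely illustrative model of the token architecture described above --
# NOT RSA's actual cipher. A 64-bit per-device secret keys a transform of a
# 22-bit realtime counter; the result is reduced to a 6-digit display value.
import hashlib
import time

INTERVAL = 60                                    # the counter ticks once per minute

def display_code(secret_64bit: bytes, when: float) -> str:
    counter = int(when // INTERVAL) & 0x3FFFFF               # 22-bit realtime counter
    block = secret_64bit + counter.to_bytes(3, "big")         # keyed input to the transform
    digest = hashlib.sha256(block).digest()                   # stand-in for the real cipher
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

if __name__ == "__main__":
    device_secret = bytes.fromhex("0123456789abcdef")         # the 64-bit per-device key
    print(display_code(device_secret, time.time()))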

[Photo: one of several forms of the RSA SecurID token]

Each SecurID has an external serial number (printed on the back) that is used to identify and register it with an authentication service.  Hopefully, there is no “algorithm” of any sort for deriving the internal secret key from the device’s serial number, since the discovery of such an algorithm would instantly kill the security of the entire system.

In the absence of a mapping algorithm, at the time of manufacture each SecurID device would be assigned a secret internal random or pseudo-random 64-bit key, and a database would be maintained to forever map the device’s externally visible serial number to its internal secret 64-bit key.

This public-serial-number-to-secret-key mapping database then becomes “the keys to the kingdom”. It is RSA’s biggest secret to keep, since a full or partial disclosure of the database would potentially allow attackers to determine a device’s current and future display values and would therefore, of course, break any authentication protection.

To carry out a successful attack, an attacker would need to obtain its target device’s public serial number as well as one or more current output samples, at a known time, to determine the current state of the device’s 22-bit realtime clock. From that point on, an attacker could reliably determine the device’s output at any time in the future.
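To illustrate that deduction, here is a sketch built on the same toy model (again, not RSA’s real cipher, and with entirely made-up values). Given a secret from a leaked serial-number-to-key mapping, one observed code, and the approximate time it was observed, the attacker pins down the 22-bit counter and can then compute any future display value.

# A sketch of the deduction described above, using a compact variant of the toy
# token model from the earlier snippet (NOT RSA's real cipher; all values invented).
import hashlib

INTERVAL = 60                                    # seconds per counter tick

def display_code(secret: bytes, counter: int) -> str:
    block = secret + (counter & 0x3FFFFF).to_bytes(3, "big")
    return f"{int.from_bytes(hashlib.sha256(block).digest()[:4], 'big') % 1_000_000:06d}"

def sync_counter(secret: bytes, observed_code: str, observed_time: float, slack: int = 5):
    """Find the counter value, within +/- slack ticks of the observed time, that
    reproduces the observed code -- pinning down the device's clock phase."""
    base = int(observed_time // INTERVAL)
    for counter in range(base - slack, base + slack + 1):
        if display_code(secret, counter) == observed_code:
            return counter
    return None

# Invented stand-ins for a leaked database entry and a single observed code:
leaked_secret = bytes.fromhex("0123456789abcdef")
observed_time = 1_300_000_000.0
observed_code = display_code(leaked_secret, int(observed_time // INTERVAL))

counter_now = sync_counter(leaked_secret, observed_code, observed_time)
if counter_now is not None:
    print("code one hour later:", display_code(leaked_secret, counter_now + 3600 // INTERVAL))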

What can be deduced from what (little) RSA has disclosed?

  • If “the keys to the kingdom”—the public serial number to secret key mapping database—had NOT been compromised, there would be zero danger to users of RSA’s SecurIDs.  But we know at least that the danger is not zero.  Therefore, the most reasonable conclusion to reach is that RSA believes that at least some of  “the keys to the kingdom” have been compromised. (Because that’s their system’s only real vulnerability.)

  • Users of SecurID, and other multifactor authentication systems, typically do not provide the device’s public serial number when they are using it for authentication … though neither is that number intended to be kept secret (since it is printed on the back of every device).  This means that an attacker would need either to have brief physical access to a device to obtain its serial number (which would also presumably allow them to obtain a few output samples to determine the clock counter’s current realtime position), or to have also compromised RSA’s authentication account registration database, which presumably maps user accounts to their device’s SecurID serial number.  Unless RSA discloses more, we won’t know how much more than the secret key mapping database may have been compromised.  Thus, it’s not possible to assemble a comprehensive threat model.

  • RSA may not want to do the responsible thing, because it would be very expensive for them. But given the only deductions possible from what little RSA has said, in light of the technology, any company using RSA SecurID tokens should consider them completely compromised and should insist upon their immediate replacement.

RSA is understandably embarrassed.  And mistakes do happen.  If employees of a security company are using today’s incredibly insecure desktop toy operating systems, bad guys are going to be able to find a way to penetrate even the most carefully guarded connected networks.

RSA therefore needs to step up to the plate and take responsibility for what has happened. That means recalling every single SecurID device and replacing them all.  No company can consider RSA’s existing deployed SecurID devices to be secure.


/Steve.
