Re: Haystack redux

2010-09-16 Thread Jacob Appelbaum
On 09/15/2010 11:48 AM, Adam Fields wrote:
 On Wed, Sep 15, 2010 at 03:16:34AM -0700, Jacob Appelbaum wrote:
 [...]
 What Steve has written is mostly true - though I was not working alone,
 we did it in an afternoon. It took quite a bit of effort to get Haystack
 to take this seriously. Eventually, there was an internal mutiny because
 of a serious technical disconnect between the author Daniel Colascione
 and the supposed author, Austin Heap. Daniel has been a stand up guy
 about the issues discovered and he really understands the problem space that the
 tool created.

 Sadly, most of the issues discovered do not have easy fixes - this
 includes even discussing some of the very simple but serious design
 flaws discovered. This has to be the worst disclosure issue that I've
 ever had to ponder - generally, I'm worried about being sued by some
 mega corp for speaking some factual information to their users. In this
 case, I guess the failure mode for being open about details is ... much
 worse for those affected. :-(

 An interesting unintended consequence of the original media storm is
 that no one in the media enjoys being played; it seems that now most of
 the original players are lining up to ask hard questions. It may be too
 little and too late, frankly. I suppose it's better than nothing but it
 sure is a great lesson in popular media journalism failures.
 
 I'm wondering if someone could shed a little light on how this service
 acquired any real users in the first place, and whether anyone thinks
 that anyone in danger of death-should-the-service-be-compromised is
 actually (still) using it.

The media hype? The fact that many Iranians were reaching out to people
in the West during the summer of 2009?

 
 I find it hard to believe that even the most uninformed dissidents
 would be using an untested, unaudited, _beta_, __foreign__ new service
 for anything. Is there any reason to believe otherwise? My first guess
 would have been that it was a government-sponsored honeypot, and I bet
 they're far more suspicious than I am.
 

I guess the dissidents that you work with are all savvy, never tricked,
know how to make solid security evaluations, and so on? Generally
speaking... that is not my experience at all.

All the best,
Jacob

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Haystack redux

2010-09-15 Thread Jacob Appelbaum
On 09/14/2010 09:57 AM, Steve Weis wrote:
 There have been significant developments around Haystack since the
 last message on this thread. Jacob Appelbaum obtained a copy and found
 serious vulnerabilities that could put its users at risk. He convinced
 Haystack to immediately suspend operations. The developer of Haystack,
 Daniel Colascione, has subsequently resigned from the project.
 
 Many claims about Haystack's security and usage made by its
 creators now appear to be inaccurate. These claims were repeated
 without verification by the New York Times, Newsweek, the BBC, and the
 Guardian UK. Evgeny Morozov wrote several blog posts covering this.
 His latest post is here:
 http://neteffect.foreignpolicy.com/posts/2010/09/13/on_the_irresponsibility_of_internet_intellectuals
 

Hi,

What Steve has written is mostly true - though I was not working alone,
we did it in an afternoon. It took quite a bit of effort to get Haystack
to take this seriously. Eventually, there was an internal mutiny because
of a serious technical disconnect between the author Daniel Colascione
and the supposed author, Austin Heap. Daniel has been a stand up guy
about the issues discovered and he really understands the problem space that the
tool created.

Sadly, most of the issues discovered do not have easy fixes - this
includes even discussing some of the very simple but serious design
flaws discovered. This has to be the worst disclosure issue that I've
ever had to ponder - generally, I'm worried about being sued by some
mega corp for speaking some factual information to their users. In this
case, I guess the failure mode for being open about details is ... much
worse for those affected. :-(

An interesting unintended consequence of the original media storm is
that no one in the media enjoys being played; it seems that now most of
the original players are lining up to ask hard questions. It may be too
little and too late, frankly. I suppose it's better than nothing but it
sure is a great lesson in popular media journalism failures.

All the best,
Jacob

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Merry Certmas! CN=*\x00thoughtcrime.noisebridge.net

2009-09-30 Thread Jacob Appelbaum
Hello *,

In the spirit of giving and sharing, I felt it would be nice to enable
other Noisebridgers (and friends of Noisebridge) to play around with
bugs in SSL/TLS.

Moxie was just over and we'd discussed releasing this certificate for
some time. He's already released a few certificates and I thought I'd
join him. In celebration of his visit to San Francisco, I wanted to
release fun-times-at-moxie-marlinspike-high. This is a text file that
contains a fully valid, signed certificate (with private key) that can
be used to exploit the NULL certificate prefix bug[0]. The certificate
is valid for * on the internet (when exploiting libnss software). The
certificate is good for two years. It won't work for exploiting the bug
against software written with the Win32 API, which (for good reason)
doesn't accept a bare *! I suggest using Moxie's sslsniff[1] if you're
inclined to try network-related testing. It may also be useful for
testing code signing software.
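
For the curious, here is a minimal sketch (plain Python, not taken from
libnss or any real TLS stack; the hostnames are just examples) of why the
embedded NUL byte matters:

cn = b"*\x00thoughtcrime.noisebridge.net"  # the CN in the certificate below

def c_string_view(name):
    # Vulnerable pattern: handing the CN to strcmp()-style code stops at
    # the first NUL byte, so the CA-visible suffix simply vanishes.
    return name.split(b"\x00", 1)[0]

def matches_vulnerable(cn, hostname):
    # C-string semantics plus naive wildcard handling: a bare "*" matches
    # every hostname.
    seen = c_string_view(cn)
    return seen == b"*" or seen == hostname

def matches_length_aware(cn, hostname):
    # Safer pattern: treat the CN as length-delimited data and reject
    # embedded NULs and bare wildcards outright.
    if b"\x00" in cn or cn == b"*":
        return False
    return cn == hostname

for host in (b"www.example.com", b"mail.google.com"):
    print(host, matches_vulnerable(cn, host), matches_length_aware(cn, host))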

It's been long enough that everyone should be patched for this awesome
class of bugs. This certificate and corresponding private key should
help people test fairly obscure software or software they've written
themselves. I hope this release will help with confirmation of the bug
and with regression testing. Feel free to use this certificate for
anything relating to free software too. Consider it released into the
public domain of interesting integers.

Enjoy!

Best,
Jacob

[0] http://thoughtcrime.org/papers/null-prefix-attacks.pdf
[1] http://thoughtcrime.org/software/sslsniff/
Private-Key: (1024 bit)
modulus:
00:cf:4d:17:42:00:8d:0c:41:95:31:8c:40:30:bc:
5e:42:b6:28:09:75:2f:19:61:d9:ab:4d:ec:f3:44:
c4:1c:01:95:6f:27:eb:70:07:98:4f:1e:05:d0:f3:
6c:49:45:e6:de:48:7a:59:f0:c2:93:6a:37:9c:02:
72:4f:bd:14:36:26:a1:70:97:d4:fe:4b:24:e8:cd:
29:1e:61:1a:85:b0:6f:96:06:83:10:13:d6:89:9f:
bd:07:67:f1:42:de:9b:63:67:8b:96:f9:06:ef:7c:
93:4b:6a:f9:39:31:32:7f:98:59:ef:ce:91:be:05:
ce:f0:82:33:d8:76:06:4c:9f
publicExponent: 65537 (0x10001)
privateExponent:
00:8c:4f:3b:7c:ba:ee:bc:ea:ee:d6:58:7d:61:ff:
3d:35:9e:21:3f:35:87:a9:80:67:59:e1:26:8e:09:
6f:4b:1d:6f:4d:8b:11:7a:04:49:fc:d2:ef:50:dc:
51:e0:ce:65:52:f2:6f:8d:cc:bd:86:15:90:8a:11:
c5:d9:5e:ba:fc:2b:fc:e3:a0:cd:c8:f0:9a:05:76:
06:82:07:a9:bd:14:cc:c7:7e:54:b9:32:5b:40:7a:
35:0a:26:80:d7:30:98:d6:b7:71:d5:9d:f4:0d:f2:
28:b5:a9:0c:2e:6d:78:19:86:a9:31:b0:a1:43:1c:
57:2c:78:a9:42:b2:49:d8:71
prime1:
00:ec:07:79:1d:e2:50:14:77:af:99:18:1b:14:d4:
0c:25:0c:20:26:0d:dd:c7:75:0e:08:d3:77:72:ce:
2d:57:80:9d:18:bb:60:7b:b2:62:4e:21:a1:e6:84:
96:91:31:15:cc:5b:89:5b:5a:83:07:96:51:e4:d4:
e6:3a:40:99:03
prime2:
00:e0:d7:5a:07:0e:cc:a6:17:22:f8:ec:51:b1:7b:
17:af:3a:87:7b:f1:e4:6d:40:48:28:d2:c0:9c:93:
e0:f1:8f:79:07:8f:00:e0:49:1d:0e:8c:65:41:ba:
c8:20:e2:ae:78:54:75:6b:f0:41:e5:d1:9c:2e:23:
49:79:53:35:35
exponent1:
15:17:15:db:75:bd:72:16:bf:ba:0e:4d:5d:2f:15:
66:ba:0e:a5:57:d7:d9:5a:bc:46:4d:9e:fe:c3:2d:
8a:04:14:05:81:b8:bd:54:d3:33:e8:0d:6f:6b:a9:
88:8f:ba:42:e8:6a:fd:9e:b8:d6:94:b7:fc:9a:89:
77:eb:0d:c1
exponent2:
5c:5a:38:61:63:c3:cd:88:fd:55:6f:84:12:b9:73:
be:06:f5:75:84:a3:05:f8:fc:6a:c0:3e:5b:52:26:
78:32:2d:4d:5c:80:c8:9f:5f:6f:05:5d:e6:04:b9:
85:40:76:d7:78:21:8f:07:6d:99:df:62:1e:55:62:
2d:92:6e:ed
coefficient:
00:c5:62:ea:ee:85:5c:eb:e6:07:12:58:a5:63:5a:
8f:e3:b3:df:c5:1e:cc:01:cd:87:d4:12:3f:45:8e:
a9:4c:83:51:31:5a:e5:8d:11:a1:e3:84:b8:b4:e1:
12:33:eb:2d:4c:4e:8c:49:e2:0d:50:aa:ca:38:e3:
e6:c2:29:86:17
Certificate Request:
Data:
Version: 0 (0x0)
Subject: C=US, CN=*\x00thoughtcrime.noisebridge.net, ST=California, 
L=San Francisco, O=Noisebridge, OU=Moxie Marlinspike Fan Club
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:cf:4d:17:42:00:8d:0c:41:95:31:8c:40:30:bc:
5e:42:b6:28:09:75:2f:19:61:d9:ab:4d:ec:f3:44:
c4:1c:01:95:6f:27:eb:70:07:98:4f:1e:05:d0:f3:
6c:49:45:e6:de:48:7a:59:f0:c2:93:6a:37:9c:02:
72:4f:bd:14:36:26:a1:70:97:d4:fe:4b:24:e8:cd:
29:1e:61:1a:85:b0:6f:96:06:83:10:13:d6:89:9f:
bd:07:67:f1:42:de:9b:63:67:8b:96:f9:06:ef:7c:
93:4b:6a:f9:39:31:32:7f:98:59:ef:ce:91:be:05:
ce:f0:82:33:d8:76:06:4c:9f
Exponent: 65537 (0x10001)
Attributes:
a0:00
Signature Algorithm: md5WithRSAEncryption
64:e6:b2:77:45:74:c3:dc:f6:3d:e7:73:7f:0f:fb:dd:d7:30:
c3:0f:30:d5:52:2c:6b:41:ad:40:2b:4b:07:2a:de:80:69:d4:
a7:0b:6f:ed:cc:62:e7:4d:e1:fc:1e:81:0d:94:b9:c8:9b:14:
0a:10:d4:8e:f9:53:76:11:51:1d:c9:80:ca:15:e5:78:02:e1:
d1:89:95:b5:4a:3f:e0:f7:f3:35:ad:1f:7d:85:5b:8c:f5:de:
 

Re: FileVault on other than home directories on MacOS?

2009-09-28 Thread Jacob Appelbaum
Ivan Krstić wrote:
 On Sep 22, 2009, at 5:57 AM, Darren J Moffat wrote:
 There is also a sleep mode issue identified by the NSA
 
 Unlike FileVault whose keys (have to) persist in memory for the duration
 of the login session, individual encrypted disk images are mounted on
 demand and their keys destroyed from memory on unmount.

The devil is in the details. If you use your default keychain to unlock
a disk, I believe the _passphrase_ is still stored by LoginWindow.app in
plain text... So even if they destroyed keying material properly (do
they? Is there source we can review for how FV works?) when the disk
isn't in use, I somehow doubt that it's really safe to use FileVault in
some circumstances against some attackers. Especially if you have a
laptop and especially if you didn't turn on encrypted swap. Also
especially if you happened to use the encrypted swap feature when it
wasn't working. The list of hilarious bugs goes on and on.
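
If you want a quick check for the encrypted swap case, here's a hedged
sketch in Python; it only looks for the "(encrypted)" tag that recent OS X
releases append to the vm.swapusage sysctl, so treat it as a heuristic,
not an audit:

import subprocess

def swap_looks_encrypted():
    # Recent OS X releases append "(encrypted)" to vm.swapusage when
    # secure virtual memory is enabled.
    out = subprocess.run(["sysctl", "vm.swapusage"],
                         capture_output=True, text=True).stdout
    return "(encrypted)" in out

if __name__ == "__main__":
    print("encrypted swap:", swap_looks_encrypted())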

(The LoginWindow.app bug is as old as the hills and I'm one of a dozen
people to have reported it, I bet. Apple still hasn't fixed it because
they rely on a user's password being in memory to escalate privileges
without interacting with the user! I hear they're working on a fix but
that it's difficult because many systems rely on this feature.)

I haven't been working on or thinking about VileFault much but I suppose
that we probably could add support for sparse bundles if someone wanted.
I've been bugging Apple for some specifications and so far, it's been
years without a real response.

Most of what we know is in VileFault:
http://code.google.com/p/vilefault/

It would be really awesome if Apple would open up all of this code or at
least publish a specification for how it works. With either we could
have a FUSE file system module to support these disk images on other
platforms...

Best,
Jacob



signature.asc
Description: OpenPGP digital signature


Re: password safes for mac

2009-06-30 Thread Jacob Appelbaum
Ivan Krstić wrote:
 On Jun 27, 2009, at 6:57 PM, Perry E. Metzger wrote:
 Does anyone have a recommended encrypted password storage program for
 the mac?
 
 
 System applications and non-broken 3rd party applications on OS X store
 credentials in Keychain, which is a system facility for keeping secrets.
 Your user keychain is encrypted with your login password, and items in
 it have application-level ACLs (this credential can only be read by
 these applications). The definition of application for the purpose of
 Keychain ACLs is derived from OS X code signing, so if someone tampers
 with one of your apps on disk, the resulting application won't get
 access to Keychain until you explicitly approve it.
 
 You can inspect and modify your keychain with the Keychain Access
 application, which also allows you to add your own items.
 

This would be great if LoginWindow.app didn't store your unencrypted
login and password in memory for your entire session (including screen
lock, suspend to RAM, and hibernate).

I keep hearing that Apple will close my bug about this and they keep
delaying. I guess they use the credentials in memory for some things
where they don't want to bother the user (!) but they still want to be
able to elevate privileges.

Best,
Jacob

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


MD5 considered harmful today

2008-12-30 Thread Jacob Appelbaum
Hello,

I wanted to chime in more during the previous x509 discussions, but I was
delayed by some research.

That research, on attacking x509, is now released. We gave a talk about
it at the 25c3 about an hour or two ago.

MD5 considered harmful today: Creating a rogue CA certificate

Nearly everything is here:
http://www.win.tue.nl/hashclash/rogue-ca/
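
If you want to look for the weakness in chains you rely on, here is a
hedged sketch; it assumes a reasonably recent pyca/cryptography package,
which is not part of the work above, and simply flags certificates whose
signatures still use MD5:

import sys
from cryptography import x509

def md5_signed_subjects(pem_path):
    # Walk a PEM bundle certificate by certificate and report any whose
    # signature algorithm still uses MD5.
    weak, block, inside = [], [], False
    with open(pem_path, "rb") as f:
        for line in f:
            if b"-----BEGIN CERTIFICATE-----" in line:
                inside, block = True, [line]
            elif b"-----END CERTIFICATE-----" in line and inside:
                block.append(line)
                cert = x509.load_pem_x509_certificate(b"".join(block))
                algo = cert.signature_hash_algorithm
                if algo is not None and algo.name == "md5":
                    weak.append(cert.subject.rfc4514_string())
                inside = False
            elif inside:
                block.append(line)
    return weak

if __name__ == "__main__":
    for subject in md5_signed_subjects(sys.argv[1]):
        print("MD5-signed:", subject)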

Best,
Jacob



signature.asc
Description: OpenPGP digital signature


Re: Designing and implementing malicious hardware

2008-04-24 Thread Jacob Appelbaum
Perry E. Metzger wrote:
 A pretty scary paper from the Usenix LEET conference:
 
 http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
 
 The paper describes how, by adding a very small number of gates to a
 microprocessor design (small enough that it would be hard to notice
 them), you can create a machine that is almost impossible to defend
 against an attacker who possesses a bit of secret knowledge. I suggest
 reading it -- I won't do it justice with a small summary.
 
 It is about the most frightening thing I've seen in years -- I have no
 idea how one might defend against it.
 

Silicon has no secrets.

I spent last weekend in Seattle and Bunnie (of XBox hacking fame/Chumby)
gave a workshop with Karsten Nohl (who recently cracked MiFare).

In a matter of an hour, all of the students were able to take a
section of a chip (from an OK photograph) and walk through the
transistor layout to describe the gate configuration. I was surprised
(not being an EE person by training) at how easy it can be to understand
production hardware: debug pads, automated masking, etc. Karsten has
written a set of MATLAB extensions that he used to automatically
describe the circuits of the MiFare devices. Automation is key, though;
I think doing it by hand is the path to madness.

If we could convince (this is the hard part) companies to publish what
they think their chips should look like, we'd have a starting point.

Perhaps,
Jacob

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: wrt Cold Boot Attacks on Disk Encryption

2008-03-15 Thread Jacob Appelbaum
Ken Buchanan wrote:
 A lot of people seem to agree with what Declan McCullagh writes here:
 
 It's going to make us rethink how we handle laptops in sleep mode and 
 servers that use
 encrypted filesystems (a mail server, for instance).
 
 What I'd like to know is why people weren't already rethinking this
 when people like Maximillian Dornseif
 (http://md.hudora.de/presentations/firewire/2005-firewire-cansecwest.pdf)
 and later Adam Boileau
 (http://www.security-assessment.com/files/presentations/ab_firewire_rux2k6-final.pdf)
 showed you can read arbitrary RAM from a machine just by plugging into
 a FireWire port, due to lack of security considerations in the IEEE
 1394 standard?
 

I think that it's clear that people were shocked when Max released his
work. Many people may discount the work if their machines (say, like many
Thinkpads) do not have an IEEE 1394 port. This is of course not going to
stop someone from inserting a CardBus card. Furthermore, I don't think
Max's demonstration contradicted any commonly held belief.

I'm sure it was no surprise to FreeBSD kernel developers that you could
use Firewire to read kernel memory structures using DMA.

 Adam Boileau demonstrated finding passwords, but of course we already
 know that it's easy to locate cryptographic keys in large volumes of
 data (Shamir, van Someren: http://citeseer.ist.psu.edu/265947.html).
 
 Reading cold DRAM may have some applications on its own -- if only
 because of the large number of devices that it effects -- but as far
 as walking up to a locked machine/hibernated laptop/whatever and
 stealing its RAM contents, the game may have been up some time ago.
 

I think the most important aspect of this work is that by using the
redundant keying information in memory (all hail Nadia Heninger) we can
recover keys and confirm them with pretty good confidence. This means we
don't have to do reverse engineering to find keys, and we can correct
for errors.

Our keyfinder could be used with firewire and I think it stands on its own.
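
To make the redundancy point concrete, here is a minimal sketch of the
idea in Python. It is not the released keyfinder, just an illustration:
for each offset in a memory image, treat 16 bytes as an AES-128 key,
expand them with the FIPS-197 key schedule, and see whether the next 160
bytes match that schedule within a few bit errors (the threshold of 8 is
arbitrary):

def _gf_mul(a, b):
    # Multiplication in GF(2^8) with the AES reduction polynomial 0x11B.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def _build_sbox():
    # S-box = multiplicative inverse in GF(2^8) followed by the affine map.
    sbox = [0] * 256
    for x in range(256):
        inv = next((y for y in range(1, 256) if _gf_mul(x, y) == 1), 0)
        s, r = inv, inv
        for _ in range(4):
            r = ((r << 1) | (r >> 7)) & 0xFF
            s ^= r
        sbox[x] = s ^ 0x63
    return sbox

_SBOX = _build_sbox()
_RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def aes128_key_schedule(key):
    # Expand a 16-byte key into the full 176-byte round-key array.
    words = [list(key[i:i + 4]) for i in range(0, 16, 4)]
    for i in range(4, 44):
        t = list(words[i - 1])
        if i % 4 == 0:
            t = [_SBOX[b] for b in t[1:] + t[:1]]  # RotWord, then SubWord
            t[0] ^= _RCON[i // 4 - 1]              # Rcon
        words.append([a ^ b for a, b in zip(words[i - 4], t)])
    return bytes(b for w in words for b in w)

def find_aes128_keys(image, max_bit_errors=8):
    # Report (offset, key, bit errors) where the 160 bytes following a
    # 16-byte window approximately match that window's key schedule.
    hits = []
    for off in range(len(image) - 176 + 1):
        key = image[off:off + 16]
        expected = aes128_key_schedule(key)
        observed = image[off:off + 176]
        errors = sum(bin(a ^ b).count("1")
                     for a, b in zip(expected[16:], observed[16:]))
        if errors <= max_bit_errors:
            hits.append((off, key.hex(), errors))
    return hits

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], "rb") as f:
        for off, key, errs in find_aes128_keys(f.read()):
            print("offset 0x%x: candidate key %s (%d bit errors)" % (off, key, errs))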

Regards,
Jacob Appelbaum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: cold boot attacks on disk encryption

2008-02-22 Thread Jacob Appelbaum
Jon Callas wrote:
 
 On Feb 21, 2008, at 12:14 PM, Ali, Saqib wrote:
 
 However, the hardware based encryption solutions like (Seagate FDE)
 would easily deter this type of attacks, because in a Seagate FDE
 drive the decryption key never gets to the DRAM. The keys always
 remain in the Trusted ASIC on the drive.
 
 Umm, pardon my bluntness, but what do you think the FDE stores the key
 in, if not DRAM? The encrypting device controller is a computer system
 with a CPU and memory. I can easily imagine what you'd need to build to
 do this to a disk drive. This attack works on anything that has RAM.
 

Actually, I hear that some companies store the keys on the platters of
the disk. Again this is if they're actually using crypto and not just
selling XOR'ed blocks of data.

To protect the keys, they limit standard read commands so they cannot
enter those areas of the disk. Of course, the vendor provides a method
to read those areas of the disk; it's just a matter of finding it.

Regards,
Jacob Appelbaum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: cold boot attacks on disk encryption

2008-02-21 Thread Jacob Appelbaum

Hi,

I'm one of the coauthors of the paper and I'd love to chime in.

Perry E. Metzger wrote:
 Ali, Saqib [EMAIL PROTECTED] writes:
 This method requires the computer to be recently turned on and unlocked.
 
 No, it just requires that the computer was recently turned on. It need
 not have been unlocked -- it just needed to have keying material in RAM.
 

This is correct.

 So the only way it would work is that the victim unlocks the disks
 i.e. enter their preboot password and turn off the computer and
 immediately handover (conveniently) the computer to the attacker so
 that the attacker remove the DRAM chip and store in nitrogen.
 
 LN2 is pretty trivial to get your hands on, and will remain happy and
 liquid in an ordinary thermos for quite some hours or longer. However,
 the authors point out that canned air works fine, too.
 

Yes, this is also correct. Canned air is often found in server rooms. An
attacker might not even need to bring anything with them to leverage
this attack.

 And the attacker has to do all this in less then 2 seconds :)
 
 No, they may even have minutes depending on the RAM you have.
 

This is an important point. Without cooling, it's not merely a matter of
a second or less. This is a common misconception that is difficult to
dispel, even in light of new evidence. I think reading our paper and
understanding our graphs should help with this.

 Or am I missing something?
 
 People readily assume that rebooting or turning off a computer wipes
 RAM. It doesn't. This is just more evidence that it is bad
 to assume that the contents of RAM are gone even if you turn off the
 machine.

Yes. General purpose memory isn't a safe place to store keying material
and software countermeasures are a step behind. Even with obfuscated key
schedules or strange byte ordering, the physical properties of the
memory chips are going to be difficult to overcome.

As our paper states: There is no easy solution to this problem.

I'm happy to field questions if this is the proper forum.

Best,
Jacob Appelbaum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: cold boot attacks on disk encryption

2008-02-21 Thread Jacob Appelbaum
Ali, Saqib wrote:
 After thinking about this a bit, i have changed my views on this
 attack. i think it is quite easy to perform this attack. i myself have
 been in similar situations, where my personal computer could have been
 easily compromised by this attack

Usually when doing a demo of this attack, people change their minds
quickly. I'm pleased to hear that, considering your actual use cases, you
acknowledge a real problem.

 
 However, the hardware based encryption solutions like (Seagate FDE)
 would easily deter this type of attacks, because in a Seagate FDE
 drive the decryption key never gets to the DRAM. The keys always
 remain in the Trusted ASIC on the drive.
 

While riding the BART this morning, I ran into Nate Lawson (from
Cryptography Research). He mentioned that he felt people would reply
with such endorsements. I'd like to take the time to disagree with
endorsing hardware products as a solution. Hardware solutions are going
to be hard to attack in some circumstances. But they may in fact be
worse without fully viewable (or Free) source software (for the
firmware, the FPGA, the layout of the ASIC, etc.).

A great example of some so-called unbreakable hardware disk crypto is
this snake oil:
http://it.slashdot.org/article.pl?sid=08/02/19/0213237
http://www.heise-online.co.uk/security/Enclosed-but-not-encrypted--/features/110136

A new generation of inexpensive disk drive enclosures using hardware
encryption and RFID keys do not fulfil the promises of their publicity.
The adverts claim 128-bit AES hardware encryption, but they don't tell
us how it is used.

Without transparency, I'd rather stick with software. It has issues; we
now know about another one.

Regards,
Jacob Appelbaum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Free Rootkit with Every New Intel Machine

2007-06-27 Thread Jacob Appelbaum
Jon Callas wrote:
 
 On Jun 25, 2007, at 7:23 PM, Matt Johnston wrote:
 
 On Mon, Jun 25, 2007 at 04:42:56PM +1200, David G. Koontz wrote:
   Apple (mis)uses
 TPM to unsuccessfully prevent OS X from running on non-Apple Hardware.
 All Apple on Intel machines have TPM, that's what 6 percent of new PCs?

 To nit pick, the TPM is only present in some Apple Intel
 machines and isn't used in any of them. See
 http://osxbook.com/book/bonus/chapter10/tpm/

 Their OS decryption key is just stored in normal firmware,
 unprotected AIUI.

Are you discussing how they handle their encrypted swap, encrypted disk
(via FileVault) or their encrypted sleep image? I was unaware that Apple
had implemented full root file system encryption.

 
 They've apparently stopped shipping TPMs. There isn't one on my MacBook
 Pro from last November, and it is missing on my wife's new Santa Rosa
 machine.
 
 If you want to see if a machine has one, then the command:
 
 sudo ioreg -w 0 | grep -i tpm
 
 should give something meaningful. Mine reports the existence of
 ApplePCISlotPM, but that's not the same thing.
 

A positive match looks like this:

| +-o ApplePCISlotPM  <class ApplePCISlotPM, !registered, !matched,
active, busy 0, retain count 8>
| +-o TPM  <class IOACPIPlatformDevice, registered, matched, active,
busy 0, retain count 6>
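
If you want to script the check, here is a small hedged sketch that just
automates the same ioreg pipe (nothing more clever than that; it also
skips the ApplePCISlotPM false positive mentioned above):

import subprocess

def has_tpm_node():
    # Equivalent to "ioreg -w 0 | grep -i tpm", minus ApplePCISlotPM.
    out = subprocess.run(["ioreg", "-w", "0"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "tpm" in line.lower() and "ApplePCISlotPM" not in line:
            return True
    return False

if __name__ == "__main__":
    print("TPM node present" if has_tpm_node() else "no TPM node found")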

Regards,
Jacob Appelbaum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]