Re: Crypto dongles to secure online transactions

2009-11-25 Thread Darren J Moffat

Peter Gutmann wrote:

external data from finding its way onto their corporate networks (they are
really, *really* concerned about this).  If you wanted this to work, you'd
need to build a device with a small CMOS video sensor to read data from the
browser via QR codes and return little more than a 4-6 digit code that the
user can type in (a MAC of the transaction details or something).  It's
feasible, but not quite what you were thinking of.
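As a sketch of how such a device could derive its short code, a truncated MAC over the transaction details works, along the lines of the RFC 4226 (HOTP) dynamic truncation; the key and transaction details here are invented for illustration, not any real product's scheme:

```python
import hmac, hashlib

def transaction_code(key, details, digits=6):
    """Short numeric confirmation code from a MAC of the details."""
    mac = hmac.new(key, details.encode("utf-8"), hashlib.sha256).digest()
    offset = mac[-1] & 0x0F                     # RFC 4226-style truncation
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

The device would display the code after scanning the QR code; the server recomputes it over the same details and compares what the user typed in.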


That reminds me of the Lenslok copy protection device on the Elite (and 
others) game from the '80s[1]


[1] http://www.birdsanctuary.co.uk/sanct/s_lenslok.php


--
Darren J Moffat

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: AES-CBC + Elephant diffuser

2009-11-01 Thread Darren J Moffat

Eugen Leitl wrote:

We discuss why no existing cipher satisfies the requirements of this
application. Uh-oh.

http://www.microsoft.com/downloads/details.aspx?FamilyID=131dae03-39ae-48be-a8d6-8b0034c92555&DisplayLang=en

AES-CBC + Elephant diffuser

Brief Description

A Disk Encryption Algorithm for Windows Vista

^^^

That is the key issue here: it is a disk encryption algorithm, 
independent of the filesystem that sits above it.


If instead you put the encryption directly into the filesystem, rather 
than below it, then the sector-size restrictions that prevent you from 
easily using a MAC go away.


This is exactly what we have done for ZFS: we use a MAC (the one from 
CCM or GCM mode) as well as a SHA256 hash of the ciphertext (used for 
resilvering operations in RAID), and they are stored in the block 
pointers (not the data blocks), forming a Merkle tree.  We also have a 
place to store an IV.  So every encrypted ZFS block is self-contained, 
with its own IV and a 16-byte MAC.  This means that the crypto in ZFS 
is all standards-based algorithms and modes.
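A rough sketch of that per-block layout (using the third-party Python `cryptography` package for AES-GCM; the function name and return structure are illustrative, not the actual ZFS on-disk format):

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_block(key, plaintext):
    iv = os.urandom(12)                       # 96-bit per-block IV
    blob = AESGCM(key).encrypt(iv, plaintext, None)
    ciphertext, tag = blob[:-16], blob[-16:]  # GCM appends a 16-byte tag
    # The checksum is over the ciphertext, so scrub/resilver can verify
    # without the key; IV, tag and checksum would live in the block
    # pointer, not the data block.
    checksum = hashlib.sha256(ciphertext).digest()
    return iv, ciphertext, tag, checksum
```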


http://hub.opensolaris.org/bin/view/Project+zfs-crypto/

--
Darren J Moffat



Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-01 Thread Darren J Moffat
For the encryption functionality in the ZFS filesystem we use AES in CCM 
or GCM mode at the block level to provide confidentiality and 
authentication.  There is also a SHA256 checksum per block (of the 
ciphertext) that forms a Merkle tree of all the blocks in the pool. 
Note that I have to store the full IV in the block.  A block here is a 
ZFS block, which is any power of two from 512 bytes to 128k (the default).


The SHA256 checksums are present even for blocks in the pool that aren't 
encrypted, and are used for detecting and repairing (resilvering) block 
corruption.  Each filesystem in the pool has its own wrapping key and 
data encryption keys.


Due to some unchangeable constraints I have only 384 bits of space to 
fit all of the IV, the MAC (CCM or GCM auth tag), and the SHA256 
checksum, which in the best case would need about 480 bits.


Currently I have Option 1 below, but the truncation of SHA256 down to 
128 bits makes me question whether this is safe.  Remember the SHA256 is 
of the ciphertext and is used for resilvering.


Option 1

IV        96 bits  (the max CCM allows given the other params)
MAC       128 bits
Checksum  SHA256 truncated to 128 bits

Other options are:

Option 2

IV        96 bits
MAC       128 bits
Checksum  SHA224 truncated to 128 bits

Basically, if I have to truncate to 128 bits, is it better to
truncate SHA224 or SHA256?

Option 3

IV        96 bits
MAC       128 bits
Checksum  SHA224 or SHA256 truncated to 160 bits

Obviously better than Options 1 and 2, but how much better?
The reason it isn't used just now is that it is slightly
harder to lay out given other constraints on where the data lives.

Option 4

IV        96 bits
MAC       32 bits
Checksum  SHA256 at the full 256 bits

I'm pretty sure the size of the MAC is far too small.

Option 5

IV        96 bits
MAC       64 bits
Checksum  SHA224 at the full 224 bits

This feels like the best compromise, but is it?

Option 6

IV        96 bits
MAC       96 bits
Checksum  SHA224 or SHA256 truncated to 192 bits
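A quick sanity check on the arithmetic above: every option fits the 384-bit budget, while the unconstrained layout (full IV, full tag, full SHA256) needs the roughly 480 bits mentioned earlier.

```python
BUDGET = 384  # bits available in the block pointer, per the constraint above

# (IV, MAC, checksum) sizes in bits for Options 1-6 as listed.
options = {
    1: (96, 128, 128),
    2: (96, 128, 128),
    3: (96, 128, 160),
    4: (96, 32, 256),
    5: (96, 64, 224),
    6: (96, 96, 192),
}

for n, sizes in options.items():
    assert sum(sizes) <= BUDGET, (n, sum(sizes))

full = 96 + 128 + 256  # full IV + full auth tag + full SHA256
assert full == 480     # the "best case would need about 480 bits" figure
```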

--
Darren J Moffat



Re: FileVault on other than home directories on MacOS?

2009-09-28 Thread Darren J Moffat

james hughes wrote:
TrueCrypt on the other hand uses AES in XTS mode so you get 
confidentiality and integrity.


Technically, you do not get integrity. With XTS (P1619, narrow block 
tweaked cipher) you are not notified of data integrity failures, but 
these data integrity failures have a much reduced usability than CBC. 
With XTS:


[snip]


If you change this to ZFS Crypto
http://opensolaris.org/os/project/zfs-crypto/
You get complete integrity detection with the only remaining 
vulnerability that


For those not familiar, this is because Jim and I chose to use CCM/GCM 
with AES.  ZFS already uses a copy-on-write validated Merkle tree. 
The 16-byte tag/MAC from CCM/GCM is stored in the block pointer above, 
forming a Merkle tree.  Each encrypted block in ZFS has its own IV.  ZFS 
disk blocks are variable size, from 512 bytes to (currently) 128k.



1) you can return the entire disk to a previous state.

While I may have put you all asleep, the basic premise holds... XTS is 
better than unauthenticated CBC.


Which is really what I was trying to say; I overstated it by saying that 
XTS provides integrity.  What it really does is, as you said, provide 
better protection against certain classes of ciphertext modification 
than just using CBC.


--
Darren J Moffat



Re: FileVault on other than home directories on MacOS?

2009-09-23 Thread Darren J Moffat

Ivan Krstić wrote:
TrueCrypt is a fine solution and indeed very helpful if you need 
cross-platform encrypted volumes; it lets you trivially make an 
encrypted USB key you can use on Linux, Windows and OS X. If you're 
*just* talking about OS X, I don't believe TrueCrypt offers any 
advantages over encrypted disk images unless you're big on conspiracy 
theories.


Note my information may be out of date.  I believe that MacOS native 
encrypted disk images (and thus FileVault) use AES in CBC mode without 
any integrity protection; the Wikipedia article seems to confirm that is 
(or at least was) the case: http://en.wikipedia.org/wiki/FileVault


There is also a sleep mode issue identified by the NSA:

http://crypto.nsa.org/vilefault/23C3-VileFault.pdf

TrueCrypt on the other hand uses AES in XTS mode so you get 
confidentiality and integrity.


--
Darren J Moffat



Re: AES-GMAC as a hash

2009-09-04 Thread Darren J Moffat

Hal Finney wrote:

Darren J Moffat darren.mof...@sun.com asks:
Ignoring performance for now, what is the consensus on the suitability of 
using AES-GMAC not as a MAC but as a hash?


Would it be safe ?

The key input to AES-GMAC would be something well known to the data 
and/or software.


No, I don't think this would work. In general, giving a MAC a fixed key
cannot be expected to produce a good hash. With AES-GMAC in particular,
it is unusual in that it has a third input (besides key and data to MAC),
an IV, which makes your well-known-key strategy problematic. And even as a
MAC, it is very important that a given key/IV pair never be reused. Fixing
a value for the key and perhaps IV would defeat this provision.

But even ignoring all that, GMAC amounts to a linear combination of
the text blocks - they are the coefficients of a polynomial. The reason
you can get away with it in GMAC is because the polynomial variable is
secret, it is based on the key. So you don't know how things are being
combined. But with a known key and IV, there would be no security at all.
It would be linear like a CRC.
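To make that linearity concrete, here is a toy GHASH over GF(2^128) in pure Python (bit convention per NIST SP 800-38D; a demonstration of the algebra, not production code). With the hash key H known, the hash is GF(2)-linear in the message blocks, so XORing two messages XORs their hashes - exactly the CRC-like behaviour described above.

```python
R = 0xE1000000000000000000000000000000  # GCM reduction polynomial

def gf_mult(x, y):
    """Multiply two elements of GF(2^128), GCM bit ordering."""
    z, v = 0, x
    for i in range(128):
        if (y >> (127 - i)) & 1:   # bit 0 is the most significant bit
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash(h, blocks):
    """GHASH of a list of 128-bit blocks under hash key h."""
    y = 0
    for b in blocks:
        y = gf_mult(y ^ b, h)
    return y

h = 0x66E94BD4EF8A2C3B884CFA59CA342B2E   # any fixed, publicly known key
a = [0x0123456789ABCDEF0123456789ABCDEF, 0x00000000000000000000000000000001]
b = [0xFEDCBA9876543210FEDCBA9876543210, 0x80000000000000000000000000000000]

# Linearity: hash of the XORed messages == XOR of the hashes.
assert ghash(h, [x ^ y for x, y in zip(a, b)]) == ghash(h, a) ^ ghash(h, b)
```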


Thanks, that is pretty much what I suspected would be the answer but you 
have more detail than I could muster in my head at a first pass on this.


Thanks.

--
Darren J Moffat



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Darren J Moffat

Ben Laurie wrote:

Perry E. Metzger wrote:

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.


In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?


Versioning catches a large part of it, but that alone isn't always 
enough.  Sometimes for on-disk formats you need to reserve padding space 
so you can add larger or differently formatted things later.


Also, support for a new crypto algorithm can actually be added without 
changes to the software code if it is truly pluggable.


An example from Solaris is how our IPsec implementation works.  If 
a new algorithm is available via the Solaris crypto framework, in many 
cases we don't need any code changes to support it; the end-system admin 
just runs the ipsecalgs(1M) command to update the mappings from IPsec 
protocol numbers to crypto framework algorithm names (we use 
PKCS#11-style mechanism names that combine algorithm and mode).  The 
Solaris IPsec implementation has no crypto algorithm names in the code 
base at all (we do currently assume CBC mode, though, but are in the 
process of adding generic CCM, GCM and GMAC support).
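A sketch of that table-driven approach in Python (the transform numbers follow the IANA IKEv2 registry, but treat the table contents as illustrative rather than a dump of the real Solaris mapping):

```python
# IPsec ESP transform number -> PKCS#11-style mechanism name.
# The code below never mentions a cipher by name outside this table,
# so adding an algorithm is a data update, not a code change.
ESP_ENCR = {
    3:  "CKM_DES3_CBC",   # ENCR_3DES
    12: "CKM_AES_CBC",    # ENCR_AES_CBC
}

def mechanism_for(transform_id):
    try:
        return ESP_ENCR[transform_id]
    except KeyError:
        raise ValueError("no mechanism registered for transform %d"
                         % transform_id)

# What ipsecalgs(1M) effectively does: extend the mapping at run time.
ESP_ENCR[20] = "CKM_AES_GCM"   # ENCR_AES_GCM_16 (illustrative)
```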


Now, having said all that, the PF_KEY protocol (RFC 2367) between user 
and kernel does know about crypto algorithms.



It seems to me protocol designers get all excited about this because


Not just on-the-wire protocols but persistent on-disk formats; on disk 
is a much bigger deal.  Consider the case where you have terabytes of 
data written in the old format and you need to migrate to the new format 
- you have to support both at the same time.  So not just versioning 
but space padding can be helpful.
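To illustrate the padding point, a versioned on-disk header might reserve space up front (the layout below is invented for illustration, not any real ZFS or IPsec format):

```python
import struct

# version (4 bytes) + algorithm name (16 bytes) + 32 reserved bytes.
# A later version can claim bytes from the reserved area without
# changing the header size, so old readers still parse it.
HEADER = struct.Struct("<I16s32x")

def pack_header(version, alg):
    return HEADER.pack(version, alg.encode())

def unpack_header(raw):
    version, alg = HEADER.unpack(raw)
    return version, alg.rstrip(b"\0").decode()
```

Old code reading a newer header simply ignores the reserved bytes; newer code can give them meaning.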


--
Darren J Moffat



Re: Unattended reboots (was Re: The clouds are not random enough)

2009-08-03 Thread Darren J Moffat

Arshad Noor wrote:

Almost every e-commerce site (that needs to be PCI-DSS compliant) I've
worked with in the last few years, insists on having unattended reboots.


Not only that, but many will be multi-node High Availability cluster 
systems as well, or will be horizontally scaled.  This means that there 
are multiple machines needing access to the same key material.  Or it 
means putting a crypto protocol terminator on the front - the downside 
of that is losing end-to-end security.



Even when the server is configured with a FIPS-certified HSM and the
cryptographic keys are in secure storage with M of N controls for access
to the keys, in order for the application to have access to the keys in
the crypto hardware upon an unattended reboot,


This is because availability of the service is actually more important 
to the business than real security.


 the PINs to the hardware

must be accessible to the application.  If the application has automatic


Or at least a broker for the application.


access to the PINs, then so does an attacker who manages to gain entry
to the machine.


The way we have traditionally done that in Solaris for IKE is to write 
the passphrase/PIN in the clear to disk but rely on UNIX permissions to 
protect it, i.e. readable only by root or the user account the service 
runs as.
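A minimal sketch of that practice (the path and PIN below are made up, and this is the general UNIX idiom rather than what the Solaris IKE daemon literally does):

```python
import os

def write_pin(path, pin):
    # Create the file with mode 0600 so only the owning account
    # (e.g. root, or the service user) can read the PIN back.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, pin.encode())
    finally:
        os.close(fd)
```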



P.S. As an aside, the solution we have settled on is to have the key-
custodians enter their PINs to the crypto-hardware (even from remote
locations, if needed, through secure channels).  While it does not
provide for unattended reboots, it does minimize the latency between
reboots while ensuring that there is nothing persistent on the machine
with PINs to the crypto-hardware.


We recently added this ability for the IKE daemon on Solaris/OpenSolaris 
for the case where the private keys IKE uses are stored in a PKCS#11 
keystore (HSM or TPM).  However, we don't expect this to be used in 
cases where unattended reboots or cluster failover are required.


This is really no different from storing a root/host/service keytab on 
disk for Kerberos - yet that seems to be accepted practice even in 
organisations that by policy don't want a passphrase/PIN on disk.


--
Darren J Moffat



Re: Weakness in Social Security Numbers Is Found

2009-07-12 Thread Darren J Moffat

d...@geer.org wrote:

I don't honestly think that this is new, but even
if it is, a 9-digit random number has a 44% chance
of being a valid SSN (442 million issued to date).


I wonder if the UK NI numbers suffer from a similar problem.

They look a little like this:  AB 12 34 56 C

Information on how they are structured is here:

http://en.wikipedia.org/wiki/National_Insurance#National_Insurance_number

However, given that we don't use the NI number in the UK the way the SSN 
is abused in the US, there isn't the same security risk in guessing them. 
Although the Wikipedia article claims they are sometimes used for 
identification, I know I have never been asked for mine other than by an 
employer or a suitably authorised government body who has a real need to 
know.


--
Darren J Moffat



Re: consulting question.... (DRM)

2009-05-27 Thread Darren J Moffat

John Gilmore wrote:

It's only the DRM fanatics whose installed bases of customers
are mentally locked-in despite the crappy user experience (like
the brainwashed hordes of Apple users, or the Microsoft victims)
who are troublesome.  In such cases, the community should


I assume the Apple reference here is aimed at iTunes.  You do know that 
the iTunes Music Store no longer uses any DRM, right?


--
Darren J Moffat



Re: Warning! New cryptographic modes!

2009-05-21 Thread Darren J Moffat

Jerry Leichter wrote:
To support insertions or deletions of full blocks, you can't make the 
block encryption depend on the block position in the file, since that's 
subject to change.  For a disk encryptor that can't add data to the 
file, that's a killer; for an rsync pre-processor, it's no big deal - 
just store the necessary key-generation or tweak data with each block.  
This has no effect on security - the position data was public anyway.


That is basically what I'm doing in adding encryption to ZFS[1].  Each 
ZFS block in an encrypted dataset is encrypted with a separate IV and 
has its own AES-CCM MAC, both of which are stored in the block pointer 
(the whole encrypted block is then checksummed with an unkeyed SHA256, 
which forms a Merkle tree).
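A toy version of that arrangement in Python, with each checksum stored one level above the data it covers (grossly simplified relative to real ZFS block pointers):

```python
import hashlib

def checksum(data):
    return hashlib.sha256(data).digest()

# Leaf data blocks (ciphertext in the encrypted case).
blocks = [b"ciphertext-block-0", b"ciphertext-block-1"]

# The parent "indirect block" stores a checksum per child...
pointers = [checksum(b) for b in blocks]
parent = b"".join(pointers)

# ...and is itself checksummed by its own parent, and so on up.
root = checksum(parent)

# Verification walks top-down: any modified block changes the root.
assert [checksum(b) for b in blocks] == pointers
assert checksum(b"".join(pointers)) == root
```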


To handle smaller inserts or deletes, you need to ensure that the 
underlying blocks get back into sync.  The gzip technique I mentioned 
earlier works.  Keep a running cryptographically secure checksum over 
the last blocksize bytes.  


ZFS already supports gzip compression, but only on ZFS blocks, not on 
files, so it doesn't need this trick.  The downside is we don't get as 
good a compression ratio as when you can look at the whole file.


ZFS has its own replication system in its send/recv commands (which take 
a ZFS dataset and produce either a full stream or a delta between 
snapshots, as an object change list).  My plan for this is to be able to 
send the per-block changes as ciphertext so that we don't have to 
decrypt and re-encrypt the data.  Note this doesn't help rsync, though, 
since the stream format is specific to ZFS.


[1] http://opensolaris.org/os/project/zfs-crypto/

--
Darren J Moffat



Re: full-disk subversion standards released

2009-05-01 Thread Darren J Moffat

Thor Lancelot Simon wrote:

No, no there's not.  In fact, I solicited information here about crypto
accellerators with onboard persistent key memory (secure key storage)
about two years ago and got basically no responses except pointers to
the same old, discontinued or obsolete products I was trying to replace.


I wouldn't normally play marketeer, but since you asked: did you look at 
this product?  Either way I'd be interested in your view on it.


http://www.sun.com/products/networking/sslaccel/suncryptoaccel6000/index.xml

Please ignore the sslaccel in the URL; this card doesn't know anything 
about SSL, it is a pure crypto accelerator and keystore with a FIPS 140-2 
@ Level certification.  Supported on Solaris, OpenSolaris, RHEL 5 and 
SuSE 10.


It supports centralised key management and shared keystores (within and 
across machines).


It even has Elliptic Curve support available.

--
Darren J Moffat



Re: full-disk subversion standards released

2009-05-01 Thread Darren J Moffat

Thor Lancelot Simon wrote:

To the extent of my knowledge there are currently _no_ generally
available, general-purpose crypto accellerator chip-level products with
onboard key storage or key wrapping support, with the exception of parts
first sold more than 5 years ago and being shipped now from old stock.


The CA-6000 supports onboard key storage and key wrapping.  It even 
supports the NIST AES Key Wrap algorithm.


This card is certainly newer than 5 years old; in fact, when we first 
released it we had some deployment issues because we had created a 
PCIe-only card and several customers wanted to put it in machines that 
didn't have PCIe capability.


--
Darren J Moffat




Re: full-disk subversion standards released

2009-05-01 Thread Darren J Moffat

Peter Gutmann wrote:

(Does anyone know of any studies that have been done to find out how prevalent
this is for servers?  I can see why you'd need to do it for software-only
implementations in order to survive restarts, but what about hardware-assisted
TLS?  Is there anything like a study showing that for a random sampling of x
web servers, y stored the keys unprotected?  Are you counting things like
Windows' DPAPI, which any IIS setup should use, as protected or
unprotected?)


We recently had some discussion about this inside Sun.  Not just for TLS 
but for IKE as well.


Until very recently our IKE daemon required the PKCS#11 PIN to be on 
disk (readable only by root) even if you were using sensitive and 
non-extractable keys in a hardware keystore.  We changed that to provide 
an admin command to interactively load the key.  However, we know that 
this won't actually be used on the server side in many cases, and not in 
a cluster (the Solaris/OpenSolaris IKE and IPsec are cluster capable).


For Web servers the situation was similar: either the naked private key 
was on disk or the PKCS#11 PIN that allowed access to it was.



I solicited information here about crypto accellerators with onboard
persistent key memory (secure key storage) about two years ago and got
basically no responses except pointers to the same old, discontinued or
obsolete products I was trying to replace.


I was hoping someone else would leap in about now and question this, but I
guess I'll have to do it... maybe we have a different definition of what's
required here, but AFAIK there's an awful lot of this kind of hardware
floating around out there, admittedly it's all built around older crypto
devices like Broadcom 582x's and Cavium's Nitrox (because there hasn't been
any real need to come up with replacements) but I didn't think there'd be much
problem with finding the necessary hardware, unless you've got some particular
requirement that rules a lot of it out.


The Sun CA-6000 card I just pointed to in my other email is such a card; 
it uses the Broadcom 582x.


--
Darren J Moffat



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Darren J Moffat

Paul Hoffman wrote:

At 12:24 PM +0100 1/12/09, Weger, B.M.M. de wrote:

When in 2012 the winner of the
NIST SHA-3 competition will be known, and everybody will start
using it (so that according to Peter's estimates, by 2018 half
of the implementations actually uses it), do we then have enough
redundancy?


No offense, Benne, but are you serious? Why would everybody even consider it? 
Given what we know about the design of SHA-2 (too little), how would we know 
whether SHA-3 is any better than SHA-2 for applications such as digital 
certificates?

In specific, if most systems have implemented the whole SHA-2 family by the 
time SHA-3 is settled, and then there is a problem found in SHA-2/256, I would 
argue that it is probably much more prudent to change to SHA-2/384 than to 
SHA-3/256. SHA-2/384 will most likely be much slower than SHA-3/256, but it 
will have had significantly more study.


Can you state the assumptions for why you think that moving to SHA384 
would be safe if SHA256 were considered vulnerable in some way, please?


SHA256, SHA384 and SHA512 are a suite all built on the same basic 
algorithm construction.  Depending on how SHA256 fell, the whole suite 
could be vulnerable irrespective of the digest length - or maybe it 
wouldn't be.
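As a concrete illustration of the family relationship: SHA384 uses the SHA512 construction with different initial hash values, so its output is not simply SHA512 truncated (and likewise SHA224 vs SHA256). This says nothing about security either way, only that the family members are distinct functions:

```python
import hashlib

msg = b"abc"

# Different initial hash values make the outputs unrelated in practice,
# even though the compression functions are identical.
assert hashlib.sha384(msg).digest() != hashlib.sha512(msg).digest()[:48]
assert hashlib.sha224(msg).digest() != hashlib.sha256(msg).digest()[:28]
```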


Until we know how the SHA3 digest is actually constructed, the same 
could even be true of that.


I don't think it depends at all on who you trust, but on what algorithms 
are available in the protocols you need to use to run your business, or 
in the apps important to you for some other reason.  It also very much 
depends on why the app uses the crypto algorithm in question and, in the 
case of digest/hash algorithms, whether they are keyed (HMAC) or not.


--
Darren J Moffat



Re: once more, with feeling.

2008-09-18 Thread Darren J Moffat

Dirk-Willem van Gulik wrote:

  ... discussion on CA/cert acceptance hurdles in the UI 

I am just wondering if we need a dose of PGP-style reality here.

We're really seeing 3 or 4 levels of SSL/TLS happening here - and whilst
they all appear use the same technology - the assurances, UI, operational
regimen, 'investment' and user expectations are way different:

^^^
I seriously doubt that even a single-digit percentage of end users out 
on the internet know anything about the different types of certificates 
used in SSL/TLS and what they mean.  I know none of my family (other 
than my wife - but then she worked for a large CA doing authentication 
and verification) knows what SSL really means, never mind what the 
different types of cert are supposed to indicate and what to do about 
them, yet they buy stuff on the internet.  It doesn't mean they are 
ignorant; it is just the normal case.



So my take is that it is pretty much impossible to get the UI to do
the right thing - until it has this information* - and even then
you have a fair chunk of education left to do :). 


Even if you got the UI to do the right thing, it still doesn't mean 
anything real about trust; all it really means is how much money was 
invested in getting the cert and setting up the correct information 
about the company identity behind it.



--
Darren J Moffat



Re: once more, with feeling.

2008-09-08 Thread Darren J Moffat

Perry E. Metzger wrote:

I was shocked that several people posted in response to Peter
Gutmann's note about Wachovia, asking (I paraphrase):

What is the problem here? Wachovia's front page is only http
protected, but the login information is posted with https! Surely this
is just fine, isn't it?


[snip]


(I won't be forwarding followups to this unless they are unusually
interesting.)


Hopefully this is interesting enough to get forwarded on...

Sadly this practice is all too common, and it often goes hand in hand 
with the other cardinal sin of https: mixed http/https pages.


I believe the only way both of these highly dubious deployment practices 
will be stamped out is when the browsers stop allowing users to see such 
web pages, so that there is a directly attributable financial impact on 
the sites that deploy in that way.


As much as I like Firefox and Safari [the only two browsers I use now], 
this has to be led by Microsoft with Internet Explorer, since that will 
have the biggest impact.  Given IE 8 is in beta, this seems like a 
perfect opportunity to get this in as a change for the next version.


Warnings aren't enough in this context [they already exist]; the only 
thing that will work is stopping the page from being seen - replacing it 
with a clearly worded explanation with *no* way to pass through and 
render the page (okay, maybe with a debug build of the browser, but not 
in the shipped product).



--
Darren J Moffat
