the early history of NSA

2005-12-02 Thread Steven M. Bellovin
The Quest For Cryptologic Centralization and the Establishment of NSA:
1940-1952

http://www.fas.org/irp/nsa/quest.pdf

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Encryption using password-derived keys

2005-12-02 Thread Alexander Klimov
On Tue, 29 Nov 2005, Jack Lloyd wrote:

 The basic scenario I'm looking at is encrypting some data using a
 password-derived key (using PBKDF2 with sane salt sizes and
 iteration counts). [...] My inclination is to use the PBKDF2 output
 as a key encryption key, rather than using it to directly key the
 cipher (with the key used for the cipher itself being created by a
 good PRNG).

IMO this is overly complicated: just generate a random salt with your
PRNG and use PBKDF2(password, salt) as the session key.  Since PBKDF2
output is a (xor of) PRF outputs, it is (pseudo-)random.
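For concreteness, a minimal sketch of that approach using Python's hashlib (the salt size and iteration count here are illustrative, not recommendations):

```python
import hashlib
import os

def derive_session_key(password, salt=None):
    """Derive a session key directly from the password with PBKDF2."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per session
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
    return salt, key  # store/send the salt alongside the ciphertext
```

Rederiving with the stored salt yields the same key, so nothing beyond the password and salt needs to be kept.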

-- 
Regards,
ASK



Re: Session Key Negotiation

2005-12-02 Thread Will Morton

Eric Rescorla wrote:


> May I ask why you don't just use TLS?



I would if I could, believe me. :o)

The negotiated key will be used for both reliable (TCP-like) and 
non-reliable (UDP-like) connections, all tunnelled over a single UDP 
port for NAT-busting purposes.  For the TCP-like component, I want to 
follow TLS as much as possible for obvious reasons.




> Well, in TLS in RSA mode, the client picks the secret value (technical
> term: PreMaster Secret) but both sides contribute randomness to ensure
> that the Master Secret is unique. This is a clean way to
> ensure key uniqueness and prevent replay attacks.
>
> In DH mode, of course, both sides contribute shares, but that's
> just how DH works.
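The structure of that derivation can be sketched as follows (this is a simplification, not the actual TLS PRF; the label string is illustrative):

```python
import hashlib
import hmac
import os

def master_secret(pre_master, client_random, server_random):
    """Mix both sides' randomness into the connection secret (simplified)."""
    seed = b"master secret" + client_random + server_random
    return hmac.new(pre_master, seed, hashlib.sha256).digest()

# Even if a PreMaster Secret were replayed, a fresh server_random
# forces a different Master Secret for the new connection.
pm = os.urandom(48)
ms = master_secret(pm, os.urandom(32), os.urandom(32))
```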



That's what I figured.  Thanks Eric.

W



Re: Session Key Negotiation

2005-12-02 Thread Richard Salz
 I am designing a transport-layer encryption protocol, and obviously wish
 to use as much existing knowledge as possible, in particular TLS, which
 AFAICT seems to be the state of the art.

In general, it's probably a good idea to look at existing mechanisms and 
analyze why they're not appropriate, rather than start with a clean slate 
and import things that seem useful, especially if you don't understand 
the rationale.

/r$

-- 
SOA Appliance Group
IBM Application Integration Middleware





Re: Encryption using password-derived keys

2005-12-02 Thread John Kelsey
From: Jack Lloyd [EMAIL PROTECTED]
Sent: Nov 29, 2005 11:08 AM
To: cryptography@metzdowd.com
Subject: Encryption using password-derived keys

> The basic scenario I'm looking at is encrypting some data using a
> password-derived key (using PBKDF2 with sane salt sizes and iteration
> counts). I am not sure if what I'm doing is sound practice or just
> pointless overengineering and wanted to get a sanity check.
>
> My inclination is to use the PBKDF2 output as a key encryption key,
> rather than using it to directly key the cipher (with the key used
> for the cipher itself being created by a good PRNG). For some reason
> the idea of using it directly makes me nervous, but not in a way I
> can articulate, leading me to suspect I'm worried over nothing.

I think this is sensible for convenience reasons:

a.  You can now change passwords without decrypting and re-encrypting
all your data.  And similarly, if you ever had a reason to change
keys, you could do that without changing passwords.  

b.  You can now check the correctness of the entered password when you
decrypt the data encryption key (using authenticated encryption!),
rather than needing to process the whole data.  (You could also just
generate a few extra check bits from PBKDF2.)  

> So, assuming using it as a KEK makes sense: At first I thought to use
> XOR to combine the two keys, but realized that could lead to related
> key attacks (by just flipping bits in the field containing the
> encrypted key). That is probably not a problem with good algorithms,
> but, then again, why take the chance; so I was thinking instead using
> NIST's AES-wrap (or perhaps a less weirdly designed variant of it
> that uses HMAC for integrity checking and AES in CBC mode for
> confidentiality).

You almost certainly need to do encryption and authentication both on
your bulk data and your encrypted key.  So why not do some mode that
does both (CCM being the obvious choice) for both the key encryption
and the bulk data encryption?

Like:

salt = PRNG_output(128)
iteration_count = 100
nonce = current nonce
DEK = PRNG_output(128)
KEK = PBKDF2(password,salt,iteration_count,128)
KeyBlob = CCM(KEK,0,DEK)
BulkData = CCM(DEK,nonce,plaintext)

> Am I thinking about this far harder than I should?

I'll toss two other random ideas out there to see if they're useful to
you:

a.  You may be worried about having a properly seeded PRNG available
to do your data encryption key generation.  I think a sensible way
around this is to use both a PRNG output and some extra bits from
PBKDF2 to derive the first data encryption key.  Like you could do:

X = PBKDF2(password,salt,iteration_count,256)
KEK = left 128 bits of X
S = right 128 bits of X
DEK = S xor PRNG_output(128)  
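A sketch of idea (a) in Python (hashlib's PBKDF2; key sizes and iteration count are illustrative):

```python
import hashlib
import os

def kek_and_first_dek(password, salt, iterations=100_000):
    """Hedge a possibly weak PRNG: fold extra PBKDF2 bits into the first DEK."""
    x = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    kek, s = x[:16], x[16:]                      # left/right 128 bits of X
    prng = os.urandom(16)                        # PRNG_output(128)
    dek = bytes(a ^ b for a, b in zip(s, prng))  # DEK = S xor PRNG output
    return kek, dek
```

Even if the PRNG were poorly seeded, recovering the DEK still requires the password-derived S.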

b.  You can use a clever trick by Abadi, Lomas and Needham to save
yourself most of the work you do on iterating the password hash during
the creation of the KEK, but not when rederiving it.  Basically, what
you do is instead of setting an iteration count of 2^{21}, you
generate a big random salt, and omit 20 bits of it from the salt
that's stored with the encrypted file.  This forces anyone trying to
re-derive the KEK to do about 2^{20} work on average, but it makes
generating the original encrypted file almost free.  I'm always
surprised that this isn't used more often, because it's such a clever
trick.
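A runnable illustration of the trick, scaled down so the search finishes instantly (12 omitted bits and a single PBKDF2 iteration; a real deployment would omit about 20 bits and the check-value scheme here is one possible choice):

```python
import hashlib
import os

ITER = 1           # tiny per-guess cost, demo only
OMITTED_BITS = 12  # the text's version omits ~20 bits

def derive(password, prefix, suffix):
    salt = prefix + suffix.to_bytes(2, "big")
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITER, dklen=16)

def create(password):
    prefix = os.urandom(14)
    suffix = int.from_bytes(os.urandom(2), "big") >> (16 - OMITTED_BITS)
    kek = derive(password, prefix, suffix)
    check = hashlib.sha256(kek).digest()[:4]  # lets the searcher recognize the KEK
    return kek, (prefix, check)               # the suffix is deliberately NOT stored

def rederive(password, stored):
    prefix, check = stored
    for guess in range(1 << OMITTED_BITS):    # ~2^(OMITTED_BITS - 1) tries on average
        kek = derive(password, prefix, guess)
        if hashlib.sha256(kek).digest()[:4] == check:
            return kek
    raise ValueError("wrong password")
```

Creation costs one derivation; rederivation (or a guessing attack) pays for the search.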

> -Jack

--John Kelsey



Re: Fermat's primality test vs. Miller-Rabin

2005-12-02 Thread Nicolas Rachinsky
* Joseph Ashwood [EMAIL PROTECTED] [2005-11-22 02:50 -0800]:
 - Original Message - 
 From: Anton Stiglic [EMAIL PROTECTED]
 Subject: RE: Fermat's primality test vs. Miller-Rabin
 
 
 -Original Message-
 From: [Joseph Ashwood]
 Subject: Re: Fermat's primality test vs. Miller-Rabin
 I think much of the problem is the way the number is being applied. Given
 a stream of random numbers that have passed a single round of MR, you will
 find that very close to 50% of them are not prime; this does not mean that
 it passes 50% of the numbers (the 2^-80 probability given above is of this
 type).
 
 Do you do an initial sieving to get rid of the more obvious composites?
 
 No I did not, since this was specifically to test the effectiveness of MR I 
 determined that it would be better to test purely based on MR, and not use 
 any sieving. The actual algorithm was:
 
 
 16384 times
 {
question = random 512-bit number
//this is not the most efficient, but it should remove bias making this 
 just MR

If I remember the proof of MR correctly it assumes an odd number. Were
all your questions odd?

If not, please try again with odd numbers only.
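For reference, a standard Miller-Rabin test that handles the even case explicitly before the odd-n argument applies (a sketch; the round count is arbitrary):

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test; the core argument assumes odd n."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False           # evens must be screened out up front
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a is a witness: n is composite
    return True
```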

Nicolas



security modifications to current PCs

2005-12-02 Thread Travis H.
Hey,

I've been reading through the TCPA documents and thinking a bit about
changes that might give higher assurance to an ordinary PC, or at
least a PC with only minor changes.

Specifically, one of the things I've always been mulling over is a
secure boot sequence.  Basically, like the TCPA, I want a sequence
where each stage decrypts and validates the next one so that a user
doesn't have to worry about modifications to the bootup state.
To that end, I've been thinking about rewriting the BIOS (perhaps with
large portions in FORTH a la openfirmware*) such that instead of
prompting the user for a password which is compared to a stored copy
(that can be erased by removing the battery), it instead prompts the
user for a passphrase that is used to decrypt and authenticate the MBR
(boot block) and possibly the first-stage boot loader.  The boot
loader in turn decrypts and authenticates the kernel and any
associated crud it needs (perhaps supporting the multiboot spec), and
the kernel and crud are smart enough to decrypt and authenticate the
root partition, and away we go.

[*] http://www.openfirmware.org/
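The validate-the-next-stage chain can be sketched abstractly (hashes only, omitting the decryption step; all names here are hypothetical):

```python
import hashlib

def digest(code, next_hash=b""):
    return hashlib.sha256(code + next_hash).digest()

def build_chain(stage_code):
    """Link stages back-to-front: each record carries the hash of the next."""
    chain, next_hash = [], b""
    for code in reversed(stage_code):
        chain.insert(0, (code, next_hash))
        next_hash = digest(code, next_hash)
    return chain, next_hash  # next_hash is now the root hash the BIOS holds

def boot(chain, root_hash):
    expected = root_hash
    for code, next_hash in chain:
        if digest(code, next_hash) != expected:
            raise RuntimeError("boot stage failed validation")
        expected = next_hash  # trust flows down to the next stage
    return True
```

Any modification to a stage (or to a stored next-stage hash) breaks the chain at that point.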

Similarly, I wouldn't mind seeing a PCI card or something that is
designed for securely storing crypto keys (from DMA among other
things) and performing crypto operations.  These parts of the TCPA are
okay.  I don't see the need to curtain memory, as I'm comfortable with
the "ring 0 can do anything" property.

Additionally, it would be nice to have a trusted path to the OS,
whereby a certain key sequence triggers a direct input path to a
program, or the user is assured of what program he/she is talking to.
http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/trusted-path.html

Is it possible to implement most block ciphers in FPGAs?  It'd be nice
to have a bus-mastering crypto co-processor device to do, say, disk
encryption without requiring CPU help, but I want to be able to update
it to new algorithms as new attacks against the cipher appear.  I use
some disk encryption stuff on a dual processor machine and it's still
slow.  The load climbs to 10 or 12 all too easily, then stuff becomes
unresponsive (perhaps because swap is one of the things I'm
encrypting).
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: Haskell crypto

2005-12-02 Thread Travis H.
 IMO it is pointless to
 write SHA in a language that ``can have properties of programs
 proved,'' because test vectors are good enough, and there is no real
 assurance that when you write the specification in a machine-readable
 form you do not make the same mistake as in your code.

I think you can prove things about programs in many languages; it just may
not be easy for an arbitrary program in that language.  If you write code
with proof in mind, it can probably work in any language.  If you
don't, and the language is modestly powerful, then you may run into
the halting problem.

For example, see:
http://raw.cs.berkeley.edu/pcc.html
http://www.cs.princeton.edu/sip/projects/pcc/

It looks like the Java bytecode verifier is an example of proving
something about a non-functional language, and they have examples of
checking the safety of hand-coded assembly language.

Even if the proof or specification can be wrong, writing it a
different way may catch some implementation errors.  If the
specification is more terse than the code, then there may be fewer
places to get it wrong, in the same way that handling strings as first
class objects avoids many buffer overflow situations.

None of this will help if the programmer misunderstands the algorithm,
of course.  Test vectors would probably help on that front.

Once I was implementing some crypto, and the AES module was failing
some test vectors, but it worked anyway.  I was told to not worry
about it, but I did.  Later after perusing the code I found that the
author was copying an array of characters to an array of integers,
reducing the keyspace from 128 bits to 32 bits, with 3/4 of the key
being zeroes.

In general, testing isn't really a replacement for proof.  Proof seems
like it would be useful for finding problems in code branches that
aren't taken frequently and thus might be missed by test vectors.  I'm
not sure how many ciphers have this characteristic; I think Schneier
mentioned that IDEA does, among others.
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: Fermat's primality test vs. Miller-Rabin

2005-12-02 Thread Joseph Ashwood
- Original Message - 
From: Nicolas Rachinsky [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



* Joseph Ashwood [EMAIL PROTECTED] [2005-11-22 02:50 -0800]:

16384 times
..



> If I remember the proof of MR correctly it assumes an odd number. Were
> all your questions odd?


The random numbers tested were almost certainly not all odd, no filtering 
was done on random.



> If not, please try again with odd numbers only.


I'm running an abbreviated test right now, and it's looking less impressive, 
I have to assume I'm hitting a bad streak somehow. Real bad, 30 numbers 
tested, no primes at all so far, I see one that has passed 79 tests. I have 
to assume I'm missing something really stupid at this point in my new number 
chooser that I don't have the time to find right now. So I'm asking for 
anyone's help in pointing out why, after I let it go the full 128 runs 
(that is, 128 numbers that passed a single round of MR), I didn't get a 
single number to pass more than 79 rounds. Did I just hit a really, really 
bad streak?


The exact code for the function and the support variables :

static int lenNum = 512;
static SecureRandom rand = new SecureRandom();
static BigInteger two = BigInteger.valueOf(2);

static BigInteger chooseValue()
{
 //pick a random integer
 BigInteger curNum = null;
 byte [] rawBytes = new byte[lenNum/8];
 rand.nextBytes(rawBytes);
 curNum = new BigInteger(rawBytes);

 //make sure it's odd
 if(curNum.mod(BigInteger.valueOf(2)).compareTo(BigInteger.ZERO) == 0)
 {
  curNum = curNum.add(BigInteger.ONE);
 }

 //if it's 0 or negative, try again
 if(curNum.compareTo(BigInteger.ZERO) <= 0)
 {
  return chooseValue();
 }
 return curNum;

}

This should choose a 512-bit random odd positive number, unless I'm missing 
something horribly, horribly braindead.


Anyway, back to trying to design a cool user interface (anyone who knows 
me knows that's the cue to begin laughing; I can't design a UI for sh*t).
   Joe 






Re: Broken SSL domain name trust model

2005-12-02 Thread leichter_jerrold
| ...basically, there was suppose to be a binding between the URL the user
| typed in, the domain name in the URL, the domain name in the digital
| certificate, the public key in the digital certificate and something
| that certification authorities do. this has gotten terribly obfuscated
| and loses much of its security value because users rarely deal directly
| in actual URLs anymore (so the whole rest of the trust chain becomes
| significantly depreciated)
One can look at this in more general terms.  For validation to mean anything,
what's validated has to be the semantically meaningful data - not some
incidental aspect of the transaction.  The SSL model was based on the
assumption that the URL was semantically meaningful, and further that any
other semantically meaningful data was irreversibly bound to it, so that if
the URL were valid, anything you read using that URL could also be assumed
to be equally valid.

This fails today in (at least) two different ways.  First, as you point out,
URL's are simply not semantically meaningful any more.  They are way too
complex, and they're used in ways nothing like what was envisioned when SSL
was designed.  In another dimension, things like cache poisoning attacks
lead to a situation in which, even if the URL is valid, the information
you actually get when you try to use it may not be the information that was
thought to be irreversibly bound to it.

Perhaps the right thing to do is to go back to basics.  First off, there's
your observation that for payment systems, certificates have become a
solution in search of a problem:  If you can assume you have on-line access
- and today you can - then a certificate adds nothing but overhead.

The SSL certificate model is, I contend, getting to pretty much the same
state.  Who cares if you can validate a signature using entirely off-line
data?  You have to be on-line to have any need to do such a validation, and
you form so many connections to so many sites that another one to do a
validation would be lost in the noise anyway.

Imagine an entirely different model.  First off, we separate encryption
from authentication.  Many pages have absolutely no need for encryption
anyway.  Deliver them in the clear.  To validate them, do a secure hash,
and look up the secure hash in an on-line registry which returns to you
the registered owner of that page.  Consider the page valid if the
registered owner is who it ought to be.  What's a registered owner?  It
could be the URL (which you never have to see - the software will take
care of that).  It could be a company name, which you *do* see:  Use a
Trustbar-like mechanism in which the company name appears as metadata
which can be (a) checked against the registry; (b) displayed in some non-
alterable form.
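A toy of the registry lookup (the registry is just a dict here; in the proposal it would be an on-line service, and all names are illustrative):

```python
import hashlib

registry = {}  # secure hash of page -> registered owner

def register(owner, page):
    registry[hashlib.sha256(page).hexdigest()] = owner

def lookup_owner(page):
    """Return the registered owner of this exact page content, if any."""
    return registry.get(hashlib.sha256(page).hexdigest())

register("Example Bank", b"<html>account login</html>")
```

Note that a byte-for-byte copy hashes identically, so it maps to the legitimate owner, while any altered page fails the lookup.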

The registry can also provide the public key of the registered owner, for use
if you need to establish an encrypted session.  Also, for dynamically created
pages - which can't be checked in the registry - you can use the public key to
send a signed hash value along with a page.

Notice that a phisher can exactly duplicate a page on his own site, and it may
well end up being considered valid - but he can't change the links, and he
can't change the public key.  So all he's done is provide another way to get
to the legitimate site.

The hash registries now obviously play a central role.  However, there are a
relatively small number of them and this is all they do.  So the SSL model
should work well for them:  They can be *designed* to match the original
model.
-- Jerry







Re: Broken SSL domain name trust model

2005-12-02 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] wrote:
 One can look at this in more general terms.  For validation to mean
 anything, what's validated has to be the semantically meaningful data - not
 some incidental aspect of the transaction. [...]
 
 The hash registries now obviously play a central role.  However, there are a
 relatively small number of them and this is all they do.  So the SSL model
 should work well for them:  They can be *designed* to match the original
 model.

this can basically be considered a form of extended DNS providing
additional authentication ... secure DNS is one such proposal for a
repository of public keys ... but the DNS model of an online information
repository can be used for a variety of information.

this is also my oft repeated scenario of the ssl domain name
certification authorities needing secure DNS ... because when processing
an SSL domain name certificate request ... they have to check with the
domain name infrastructure as to the true owner of the domain name. this
currently is an expensive, time-consuming and error-prone process of
matching identification supplied with the request against identification
information on file with the domain name infrastructure. on the other
hand, if public keys of domain name owners were on file with the domain
name infrastructure ... the domain name infrastructure uses digitally
signed communication (validating with the onfile public keys) to
eliminate some of their existing integrity problems (which, then in
turn, improves the integrity of any ssl domain name certificate based on
information at the domain name infrastructure registry). the registered
public keys also allow the certification authorities to turn the
expensive, time-consuming and error-prone identification process into a
much less expensive, simpler, and more reliable authentication process
... by requiring ssl domain name certificate requests to be
digitally signed by the domain name owner (and validated with the onfile
public keys).



[Clips] Banks Seek Better Online-Security Tools

2005-12-02 Thread R. A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Thu, 1 Dec 2005 16:54:00 -0500
 To: Philodox Clips List [EMAIL PROTECTED]
 From: R. A. Hettinga [EMAIL PROTECTED]
 Subject: [Clips] Banks Seek Better Online-Security Tools
 Reply-To: [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]

 http://online.wsj.com/article_print/SB113339543967610740.html

 The Wall Street Journal

 December 1, 2005

 Banks Seek Better Online-Security Tools
 New Software Adds Layers
  To Verify Users' Identities;
  Ease of Use Remains Worry
 By RIVA RICHMOND
 DOW JONES NEWSWIRES
 December 1, 2005; Page B4

 More banks, driven by rising online identity theft and regulators'
 concerns, are shopping for security technology to help ensure those logging
 into accounts are the customers they claim to be.

 But while banks want security that is stronger than standard user names and
 passwords, they also don't want the technology to turn off customers by
 diminishing the convenience of online banking.

 Software makers are aiming to help banks strike a tricky balance between
 security and convenience, with several, including Corillian Corp. and
 Entrust Inc., recently introducing systems that raise the bar for risky or
 suspect transactions. The software works behind the scenes to apply extra
 security measures when there is unusual or questionable activity -- say,
 account access from a cybercafe in Prague or a large money transfer that
 isn't a normal bill-payment routine.

 The emergence of these products reflects the industry's concerns that email
 identity-theft scams, called "phishing," and hacker programs that steal
 consumers' account information could hurt online banking, which is valued
 by banks as a low-cost way of doing business.

 In the U.S., the Federal Financial Institutions Examination Council, a
 group that sets standards for banks, credit unions and thrifts, in October
 urged that online-banking security move beyond simple passwords by the end
 of next year. Its recommendation carries the force of regulation because
 banks' failure to comply would earn them black marks from bank examiners.

 Many of the new products would help banks respond to the FFIEC, which
 didn't endorse specific security technologies but encouraged banks to
 choose measures appropriate to the risk. Other suppliers of software for
 tightening security include closely held firms Cyota Inc., New York, and
 PassMark Security Inc., Menlo Park, Calif.

 "The banks are being pushed to bring in stronger authentication, but match
 it to the risk of the transaction and to the user experience and their
 desires," said Chris Voice, a vice president at Entrust, of Addison, Texas.
 Authentication is a security measure for verifying a customer or
 transaction.

 Industry analysts think banks will employ several techniques to weigh risk
 and verify identities. One way is to halt any transactions from certain
 computers or countries with a high fraud risk. In addition to a user name
 and password, some of these new security systems add a fairly obscure
 personal question, such as "What was your high-school mascot?" Some also
 allow banks facing a suspicious transaction to send an extra four-digit
 security code for use online to a customer's cellphone.

 The idea is similar to credit-card-fraud systems that trigger phone calls
 to cardholders when they detect unusual activity, while letting the vast
 majority of transactions through without incident.

 Corillian, of Hillsboro, Ore., already provides the technology behind the
 online-banking operations of many banks and credit unions. Woodforest
 National Bank, which has 190 branches in Texas and North Carolina, is
 rolling out Corillian's security technology during the first half of 2006.
 Corillian also has sold the technology to three credit unions and says it
 is in talks with three of the top-10 U.S. banks.

 "The key to keeping this channel open is keeping it secure," said Charles
 Manning, president and chief information officer of Woodforest, which
 operates most of its branches inside Wal-Mart stores.

 Corillian's Intelligent Authentication package, launched Oct. 25, tracks
 the behavior of online-banking customers and builds histories of their
 habits to create access signatures. Its files don't include personal
 information. But they do track the characteristics of the computers and
 Internet-service providers that a customer typically uses. It also records
 the normal geographic locations and the times of day a customer prefers to
 bank online, flagging exceptions for scrutiny.

 Meanwhile, security-software maker Entrust unveiled a major new version of
 its IdentityGuard product on Nov. 8 that offers a menu of user-verification
 methods banks can choose from to beef up security on transactions they deem
 risky. It has sold IdentityGuard to Miami-based Commercebank NA, a unit of
 Mercantil Servicios Financieros of Venezuela, and a number of European
 banks. European customers of Entrust's software 

RE: Encryption using password-derived keys

2005-12-02 Thread Anton Stiglic
It can be useful to derive a key encryption key from the password, and not
use the key derived from the password to directly encrypt the data you want
to protect, when the resulting ciphertext can be found in different places
where your encrypted key won't necessarily also be found.  For example, when
encrypting files, the encrypted files may end up on a backup disk while the
key is stored somewhere else (encrypted with a password-based key).

This can prevent someone who has access to the ciphertext from executing a
brute force attack.

If, however, your ciphertext always travels with your encrypted key, you don't
gain much of an advantage (the weak point is the password-based key, which
can be brute-forced or dictionary-attacked).

I don't recommend just XORing for the protection of the key.  If your
key derivation function ever fails to act like a good pseudo-random
function, or if you use the same password and salt to derive the same key to
protect two different keys, you will be screwed.  I recommend instead
encrypting with something like AES, and I also recommend computing a MAC
over the ciphertext to turn it into strong (authenticated) encryption, and
avoid attacks such as those found against HSMs and the way they stored keys
outside the HSM.  For further details on that point, see for example section
4.3 of the following paper (and follow the references given there):
http://crypto.cs.mcgill.ca/~stiglic/Papers/tripleDES.pdf
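To make the encrypt-then-MAC shape concrete, here is a toy key-wrap sketch using only Python's stdlib. The stdlib has no AES, so a SHA-256 counter keystream stands in for the cipher (do not use such a construction in production); the point is the structure: independent encryption and MAC keys derived from the password, and the MAC verified before decryption.

```python
import hashlib
import hmac
import os

def _keystream(key, n):
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def wrap(password, dek):
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
    enc_key, mac_key = kek[:16], kek[16:]  # separate keys for enc and MAC
    ct = bytes(a ^ b for a, b in zip(dek, _keystream(enc_key, len(dek))))
    tag = hmac.new(mac_key, salt + ct, hashlib.sha256).digest()
    return salt, ct, tag

def unwrap(password, salt, ct, tag):
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
    enc_key, mac_key = kek[:16], kek[16:]
    expected = hmac.new(mac_key, salt + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")  # verify before decrypting
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))
```

Because the MAC covers the ciphertext (and salt), bit-flipping the encrypted key is detected rather than silently producing a related key.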

--Anton


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jack Lloyd
Sent: November 29, 2005 11:09 AM
To: cryptography@metzdowd.com
Subject: Encryption using password-derived keys


The basic scenario I'm looking at is encrypting some data using a
password-derived key (using PBKDF2 with sane salt sizes and iteration
counts). I am not sure if what I'm doing is sound practice or just pointless
overengineering and wanted to get a sanity check.

My inclination is to use the PBKDF2 output as a key encryption key, rather
than using it to directly key the cipher (with the key used for the cipher
itself being created by a good PRNG). For some reason the idea of using it
directly makes me nervous, but not in a way I can articulate, leading me to
suspect I'm worried over nothing.

So, assuming using it as a KEK makes sense: At first I thought to use XOR to
combine the two keys, but realized that could lead to related key attacks (by
just flipping bits in the field containing the encrypted key). That is
probably not a problem with good algorithms, but, then again, why take the
chance; so I was thinking instead using NIST's AES-wrap (or perhaps a less
weirdly designed variant of it that uses HMAC for integrity checking and AES
in CBC mode for confidentiality).

Am I thinking about this far harder than I should?

-Jack





Re: Broken SSL domain name trust model

2005-12-02 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] wrote:
 One can look at this in more general terms.  For validation to mean
 anything,

 what's validated has to be the semantically meaningful data - not some
 incidental aspect of the transaction.  The SSL model was based on the
 assumption that the URL was semantically meaningful, and further that any
 other semantically meaningful data was irreversibly bound to it, so that if
 the URL were valid, anything you read using that URL could also be assumed
 to be equally valid.

note that the other possible semantic confusion is referring to them as
certificate authorities ... rather than certification authorities.

they happen to distribute certificates, which are a representation of the
certification. however, there are some number of the certification
authorities that

1) aren't the actual authoritative agency for the information being
certified, i.e. the certification authority is just checking with the
real authoritative agency as to the validity of the information

2) many appear to actually prefer to just do certificate manufacturing
... a term we coined when we were doing audits of these new
organizations called certification authorities ... back when we were
consulting with the new client/server startup on something that has come
to be called electronic commerce.

of course the issue has always been that a stale, static,
offline-oriented certificate has lower value than real-time, online
certification. the business model tends to be further aggravated by the
fact that most of the certification authorities aren't actually the
authoritative agency for the information being certified. it is highly
likely that as online connectivity becomes more and more pervasive,
people will start to realize the much higher value of having real-time,
online certification. since the majority of the certification
authorities aren't actually the authoritative agency for the actual
information, any transition to high-value, real-time, online
certification will tend to be done directly with the authoritative
agency responsible for the actual information. at that point, most of
the certification authorities become obsolete.

an obfuscation is to concentrate on the certificates as having magical
properties, distinct from their representation of an information
certifying business process. referring to them as certificate
authorities helps create semantic confusion as to where the business
process value actually exists. similarly, there have been articles in
the popular press referring to attached digital certificates as what
provides the value to any digitally signed message/document ... further
obfuscating that the value of authentication can be achieved with
digital signatures and online registered public keys (where any digital
certificates become totally redundant and superfluous).

the other problem/issue with requiring x.509 identity certificates on
every digitally signed message/document is that it turns what should be
a straightforward, simple authentication operation into a heavy-duty
identification operation.

this has also tended to cause semantic confusion, as well as something
of a schizo personality in some societies, especially those professing
extremely stringent privacy principles while at the same time trying to
mandate x.509 identity certificates attached to every electronic
communication (making every electronic message an identification operation).

misc. past posts referring to semantic confusion:
http://www.garlic.com/~lynn/aadsm3.htm#kiss5 Common misconceptions, was
Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION
:draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates -
Security Issues
http://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
http://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
http://www.garlic.com/~lynn/aadsm19.htm#7 JIE - Contracts in Cyberspace
http://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private
information to improve security
http://www.garlic.com/~lynn/aadsm19.htm#25 Digital signatures have a big
problem with meaning
http://www.garlic.com/~lynn/aadsm20.htm#8 UK EU presidency aims for
Europe-wide biometric ID card
http://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet
security hall of shame
http://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the
security challenges
http://www.garlic.com/~lynn/2003k.html#6 Security models
http://www.garlic.com/~lynn/2004i.html#27 New Method for Authenticated
Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2005f.html#20 Some questions on smart cards
(Software licensing using smart cards)
http://www.garlic.com/~lynn/2005m.html#11 Question about authentication
protocols
http://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine
authentication

Proving the randomness of a random number generator?

2005-12-02 Thread Lee Parkes
Hi,
Apologies if this has been asked before.

The company I work for has been asked to prove the randomness of a random
number generator. I assume they mean a PRNG, but knowing my employer it
could be anything. I've turned the work down on the basis of having another
gig that week. However, it raised the issue of just how this could be
achieved. As far as I'm aware there are no strong mathematicians on the team,
so it will get thrown out to the first available person (cool idea, eh?).
There will most likely be very little time allocated to do it.

So, the question is: how can the randomness of a PRNG be demonstrated within
reasonable limits of time, processing availability and skill?
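[Worth noting that no finite test can prove randomness — a test battery can
only fail to reject it. As a flavour of what such statistical tests look
like, here is a minimal sketch of the frequency (monobit) test from NIST
SP 800-22, in stdlib Python; real evaluations run many such tests over
large samples.]

```python
import math

def monobit_pvalue(data: bytes) -> float:
    # NIST SP 800-22 frequency (monobit) test: in a uniformly random
    # stream, the count of one-bits should be close to half the total.
    n = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    # Normalized excess of ones over zeros; ~N(0,1) under the null.
    s = abs(2 * ones - n) / math.sqrt(n)
    # Two-sided p-value; small values (e.g. < 0.01) reject randomness
    # on this one statistic.
    return math.erfc(s / math.sqrt(2))
```

For example, an all-zero buffer fails decisively (p-value near 0), while
output from a good generator should land well above the 0.01 rejection
threshold almost all of the time.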

Thanks,
Lee

-- 
--
[EMAIL PROTECTED] DOC #25 GLASS #136 www.mud-dog.org
I Need A Reason To Stand Up And Fight
Need To Believe What I See - The Silver Drop - Mnemic

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]