### Re: [Cryptography] Key stretching

```On 10/11/2013 11:22 AM, Jerry Leichter wrote:

1.  Brute force.  No public key-stretching algorithm can help, since the
attacker will brute-force the k's, computing the corresponding K's as he goes.

There is a completely impractical solution for this which is applicable
in a very few ridiculously constrained situations.  Brute force can
be countered, in very limited circumstances, by brute bandwidth.

You have to use random salt sufficient to ensure that all possible
decryptions of messages transmitted using the insufficient key or
insecure cipher are equally valid.

Unfortunately, this requirement is cumulative for *ALL* messages that
you encrypt using the key, and becomes flatly impossible if the total
amount of ciphertext you're trying to protect with that key is greater
than a very few bits.

So, if you have a codebook that allows you to transmit one of 128
preselected messages (7 bits each), you could use a very short key or an
insecure cipher about five times, attaching (2^35)/5 bits of salt to
each message, to achieve security against brute-force attacks.  At
that point your opponent sees all possible decryptions as equally
likely, with at least one candidate key producing each possible
combination of decryptions (approximately: about 1/(2^k) of the
total number of possible decryptions will be left out, where k is the
size of your actual too-short key).

The bandwidth required is utterly ridiculous, but you can get
security on a few very short messages, assuming there's no identifiable
structure that would let an attacker rule out candidate decryptions.

Unfortunately, you cannot use this to bootstrap secure transmission of
keys: whatever longer key you transmit using this scheme, once your
opponent has ciphertext encrypted under that longer key, the brute-force
attack on the possibilities for your initial short key becomes
applicable to that ciphertext.

Bear

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

```
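A quick sanity check of the arithmetic in the codebook example above. This is only a toy restatement of the figures in the message (128-entry codebook, five messages), not part of the original post:

```python
# Back-of-the-envelope check of the codebook example: 128 possible
# messages of 7 bits each, sent five times under one short key.
message_bits = 7                                 # log2(128) codebook entries
messages = 5
total_plaintext_bits = message_bits * messages   # 35 bits overall
combinations = 2 ** total_plaintext_bits         # 2^35 candidate decryptions
salt_bits_per_message = combinations // messages # the (2^35)/5 figure
```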

### [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.

```Saw this on Ars Technica today and thought I'd pass along the link.

http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

More detailed version of the story available at:

https://factorable.net/paper.html

Short version:  the Taiwanese government issued smartcards to citizens.
Each has a 1024-bit RSA key.  The keys were created using a borked
RNG.  It turns out many of the keys are broken: easily factored,
or sharing factors in common, and up to 0.4% of these cards in fact
provide no encryption whatsoever (the RSA keys are flat-out invalid,
and there is a fallback to unencrypted operation).

This is despite meeting (for some inscrutable definition of meeting)
FIPS 140-2 Level 2 and Common Criteria standards.  These standards
require steps that were clearly not done here.  Yet, validation
certificates were issued.

Taiwan is now in the process of issuing a new generation of
smartcards; I hope they send the clowns who were supposed to test
the first generation a bill for that.

Bear


```
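The shared-factor weakness described above can be illustrated in a few lines. This is a toy reconstruction with made-up small primes, not the paper's actual data; factorable.net does the same thing at scale with a batch-GCD algorithm:

```python
import math

# If a broken RNG hands the same prime to two different RSA keys,
# a plain GCD of the two public moduli recovers that prime at once,
# and both keys are then fully factored.
n1 = 101 * 103      # first card's modulus (toy primes)
n2 = 103 * 107      # second card's modulus, sharing the prime 103
p = math.gcd(n1, n2)         # the shared prime
q1, q2 = n1 // p, n2 // p    # the remaining factors of each modulus
```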

### Re: [Cryptography] prism-proof email in the degenerate case

```On 10/10/2013 12:54 PM, John Kelsey wrote:
Having a public bulletin board of posted emails, plus a protocol
for anonymously finding the ones your key can decrypt, seems
like a pretty decent architecture for prism-proof email.  The
tricky bit of crypto is in making access to the bulletin board
both efficient and private.

Wrong on both counts, I think.  If you make access private, you
generate metadata because nobody can get at mail other than their
own.  If you make access efficient, you generate metadata because
you're avoiding the wasted bandwidth that would otherwise prevent
the generation of metadata. Encryption is sufficient privacy, and
efficiency actively works against the purpose of privacy.

The only bow I'd make to efficiency is to split the message stream
into channels when it gets to be more than, say, 2GB per day. At
that point you would need to know both what channel your recipient
listens to *and* the appropriate encryption key before you could
send mail.

Bear


```
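The "download everything, keep what's yours" model Bear is defending can be sketched as follows. This is a hypothetical illustration (HMAC tags stand in for real encryption; `post` and `fetch_mine` are invented names):

```python
import hmac, hashlib

# Every client fetches the entire board and keeps only the messages its
# key verifies, so the server never learns who reads what.  Inefficient
# by design: the "wasted" bandwidth is what suppresses the metadata.
def post(board, recipient_key, body):
    tag = hmac.new(recipient_key, body, hashlib.sha256).digest()
    board.append((tag, body))

def fetch_mine(board, my_key):
    return [body for tag, body in board
            if hmac.compare_digest(
                tag, hmac.new(my_key, body, hashlib.sha256).digest())]

board = []
post(board, b"alice-key", b"hello alice")
post(board, b"bob-key", b"hello bob")
mine = fetch_mine(board, b"alice-key")
```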

### Re: [Cryptography] P=NP on TV

```On 10/07/2013 05:28 PM, David Johnston wrote:

We are led to believe that if it is shown that P = NP, we suddenly have a
break for all sorts of algorithms.
So if P really does = NP, we can just assume P = NP and the breaks will make
themselves evident. They do not. Hence P != NP.

As I see it, it's still possible.  Proving that a solution exists does
not necessarily show you what the solution is or how to find it.  And
just because a solution is polynomial is no reason a priori to
suspect that it's cheaper than some known exponential solution over
any useful range of values.

So, to me, this is an example of TV getting it wrong.  If someone
ever proves P=NP, I expect that there will be thunderous excitement
in the math community, leaping hopes in the hearts of investors and
technologists, and then very careful explanations by the few people
who really understand the proof that it doesn't mean we can actually
do anything we couldn't do before.

Bear

```

### Re: [Cryptography] Sha3

```On 10/04/2013 07:38 AM, Jerry Leichter wrote:
On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
What I don't understand here is why the process of selecting a standard
algorithm for cryptographic primitives is so highly focused on speed.

If you're going to choose a single standard cryptographic algorithm, you have
to consider all the places it will be used.  ...

It is worth noting that NSA seems to produce suites of algorithms optimized
for particular uses and targeted for different levels of security.  Maybe
it's time for a similar approach in public standards.

People were trying to find *ONE* cryptographic primitive for use across a
very wide range of clients, many of which it is inappropriate for (too light
for first-class or long-term protection of data, too heavy for transient
realtime signals on embedded low-power chips).

I probably care less than most people about the low-power devices dealing with
transient realtime signals, and more about long-term data protection than most
people.  So, yeah, I'm annoyed that the standard algorithm is insufficient to
just *STOMP* the problem and instead requires occasional replacement, when
*STOMP* is well within my CPU capabilities, power budget, and timing
requirements.  But
somebody else is probably annoyed that people want them to support AES when they
were barely able to do WEP on their tiny power budget fast enough to be
non-laggy.

These are problems that were never going to have a common solution.

Bear

```

### Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

```Is it just me, or does the government really have absolutely no one
with any sense of irony?  Nor, increasingly, anyone with a sense of
shame?

I have to ask, because after directly suborning the cyber security
of most of the world including the USA, and destroying the credibility
of just about every agency who could otherwise help maintain it, the
NSA kicked off National Cyber Security Awareness Month on the first
of October this year.

http://blog.sfgate.com/hottopics/2013/10/01/as-government-shuts-down-nsa-excitedly-announces-national-cyber-security-awareness-month/

[Slow Clap]  Ten out of ten for audacity, wouldn't you say?

Bear

```

### [Cryptography] Politics - probably off topic here.

```

Original message
From: Phillip Hallam-Baker hal...@gmail.com
Date: 10/06/2013  08:18  (GMT-08:00)
To: James A. Donald jam...@echeque.com
Cc: cryptography@metzdowd.com
Subject: Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was:

Phillip Hallam-Baker wrote:

But when I see politicians passing laws to stop people voting, judges deciding
that the votes in a Presidential election cannot be counted and all the other
right wing antics taking place in the US at the moment, the risk of a right
wing fascist coup has to be taken seriously.

Well, yes.  That is my main concern as well.  The recent tactics of the
Republican party are more an attack on the process of constitutional government
than ordinary tactics of the sort that make sense within the process.  This
sort of issue used to get solved with simple and relatively harmless horse
trading and pork barrel deals where the minority members would sell their
votes for something to bring back to their constituents. Shutting down the
whole system is mad - and horribly damaging - compared to just taking the
chance to bring home some pork.

And this concerns me more than it otherwise might because of the recent
economic trouble we've been having.  When economies go bad is when fascist
parties tend to come to power. Or, more to the point in our case, when existing
parties veer further in a fascist direction.  I'm seeing the Golden Dawn party
in Greece gaining popularity that it could never have gotten when its economy
was in better shape.  I see the recent elections in a few eurozone nations
giving seats to far-right parties, and I just can't help starting to worry.

Most of the history I'm aware of regarding genocides and the emergence of
dictatorships is that everywhere from Rwanda to Germany to Haiti,  they have
always followed close on the heels of a particularly severe and long sustained
economic crisis.

I don't want the country I live in to become another example. And I don't want
the world I live on to suffer through more examples.

```

### Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

```

Original message
From: Jerry Leichter leich...@lrw.com
Date: 10/06/2013  15:35  (GMT-08:00)
To: John Kelsey crypto@gmail.com
Cc: cryptography@metzdowd.com List cryptography@metzdowd.com, Christoph
Anton Mitterer cales...@scientia.net, james hughes hugh...@mac.com,
Dirk-Willem van Gulik di...@webweaving.org
Subject: Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was:

On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
Really, you are talking more about the ability to *remove* algorithms.  We
still have stuff using MD5 and RC4 (and we'll probably have stuff using dual ec
drbg years from now) because while our standards have lots of options and it's
usually easy to add new ones, it's very hard to take any away.

Can we do anything about that? If the protocol allows correction (particularly
remote or automated correction) of an entity using a weak crypto primitive,
that opens up a whole new set of attacks on strong primitives.

We'd like the answer to be that people will decline to communicate with you if
you use a weak system, but honestly, when was the last time you had that degree
of choice over from whom you get exactly the content and services you need?

Can we even make renegotiating the cipher suite inconveniently long or heavy so
defaulting weak becomes progressively more costly as more people default
strong? That opens up denial of service attacks, and besides it makes it
painful to be the first to default strong.

Can a check for a revoked signature for the cipher's security help? That makes
the CA into a point of control.

Anybody got a practical idea?

```
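No answer to the negotiation problem above, but one lever an endpoint does control today is simply refusing weak suites locally. A minimal sketch using Python's stock ssl module (standard OpenSSL cipher-string syntax):

```python
import ssl

# A client-side policy that removes legacy options outright rather than
# relying on the peer to negotiate well.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse SSLv3/TLS 1.0/1.1
ctx.set_ciphers("HIGH:!aNULL:!MD5:!RC4:!3DES")  # strip known-weak ciphers
```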

### Re: [Cryptography] AES-256- More NIST-y? paranoia

```
On 10/03/2013 06:59 PM, Watson Ladd wrote:

On Thu, Oct 3, 2013 at 3:25 PM, leich...@lrw.com wrote:

On Oct 3, 2013, at 12:21 PM, Jerry Leichter leich...@lrw.com wrote:

As *practical attacks today*, these are of no interest - related-key
attacks only apply in rather unrealistic scenarios, even a 2^119 strength
is way beyond any realistic attack, and no one would use a reduced-round
version of AES-256.

Expanding a bit on what I said:  Ideally, you'd like a cryptographic
algorithm let you build a pair of black boxes.  I put my data and a key
into my black box, send you the output; you put the received data and the
same key (or a paired key) into your black box; and out comes the data I
sent you, fully secure and authenticated.  Unfortunately, we have no clue
how to build such black boxes.  Even if the black boxes implement just the
secrecy transformation for a stream of blocks (i.e., they are symmetric
block ciphers), if there's a related key attack, I'm in danger if I haven't
chosen my keys carefully enough.

So, it seems that instead of AES256(key) the cipher in practice should be
AES256(SHA256(key)).

Is it not the case that (assuming SHA256 is not broken) this defines a cipher
effectively immune to the related-key attack?

Bear


```
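The AES256(SHA256(key)) suggestion amounts to a one-line key-derivation step. A sketch of just that step with the standard library (the AES call itself is omitted; any AES-256 implementation would take `derived` as its key):

```python
import hashlib

# Hash the caller-supplied key material once; AES-256 then sees a
# 32-byte key with no exploitable relationship to any other derived key.
def derive_aes256_key(user_key: bytes) -> bytes:
    return hashlib.sha256(user_key).digest()    # exactly 32 bytes

derived = derive_aes256_key(b"whatever the application supplied")
```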

### Re: [Cryptography] encoding formats should not be committee'ised

```
On 10/04/2013 01:23 AM, James A. Donald wrote:

On 2013-10-04 09:33, Phillip Hallam-Baker wrote:

The design of WSDL and SOAP is entirely due to the need to impedance match COM
to HTTP.

That is fairly horrifying, as COM was designed for a single-threaded
environment, and becomes an incomprehensible and extraordinarily inefficient
security hole.

Well, yes, as a matter of fact DCOM was always incomprehensible
and extraordinarily inefficient.  However, it wasn't so much of
a security hole in the remotely crashable bug sense.  It made
session management into something of a difficult problem though.

Bear

```

### Re: [Cryptography] AES-256- More NIST-y? paranoia

```
On 10/02/2013 02:13 PM, Brian Gladman wrote:

The NIST specification only eliminated Rijndael options - none of the
Rijndael options included in AES were changed in any way by NIST.

Leaving aside the question of whether anyone weakened it, is it
true that AES-256 provides comparable security to AES-128?

Bear


```

### Re: [Cryptography] Sha3

```What I don't understand here is why the process of selecting a standard
algorithm for cryptographic primitives is so highly focused on speed.

We have machines that are fast enough now that, while speed isn't a non-issue,
it is no longer nearly as important as the selection process makes it.

Our biggest problem now is security, not speed.  I believe that it's a bit
silly to aim for the minimum acceptable security achievable within a speed
budget, when experience shows that each new class of attacks is usually first
seen against some limited form of the cipher, or found to be effective only
when the cipher is not carried out to its full number of rounds.

Original message
From: John Kelsey crypto@gmail.com
Date: 09/30/2013  17:24  (GMT-08:00)
To: cryptography@metzdowd.com List cryptography@metzdowd.com
Subject: [Cryptography] Sha3

If you want to understand what's going on wrt SHA3, you might want to look at
the nist website, where we have all the slide presentations we have been giving
over the last six months detailing our plans.  There is a lively discussion
going on at the hash forum on the topic.

This doesn't make as good a story as the new sha3 being some hell spawn cooked
up in a basement at Fort Meade, but it does have the advantage that it has some
connection to reality.

You might also want to look at what the Keccak designers said about what the
capacities should be, to us (they put their slides up) and later to various
crypto conferences.

Or not.

--John

```

### Re: [Cryptography] Sha3

```Okay, I didn't express myself very well the first time I tried to say this.
But as I see it, we're still basing the design of crypto algorithms on the
cost assumptions of twelve years ago.

To make an analogy, it's like making tires when you have to offer a ten-
thousand-mile warranty.  When rubber is terribly expensive and the cars are
fairly slow, you make a tire that probably won't be good for much more than
twelve thousand miles. But
it's now years later.  Rubber has gotten cheap and the cars are moving a lot
faster and the cost of repairing or replacing crashed vehicles is now
dominating the cost of rubber. Even if tire failure accounts for only a small
fraction of that cost,  why shouldn't we be a lot more conservative in the
design of our tires? A little more rubber is cheap and it would be nice to know
that the tires will be okay even if the road turns out to be gravel.

This is where I see crypto designers.  Compute power is cheaper than it's ever
been but we're still treating it as though its importance hasn't changed. More
is riding on the cost of failures and we've seen how failures tend to happen.
Most of the attacks we've seen wouldn't have worked on the same ciphers if the
ciphers had been implemented in a more conservative way.   A few more rounds of
a block cipher or a wider hidden state for a PRNG, or longer RSA keys,  even
though we didn't know at the time what we were protecting from, would have kept
most of these things safe for years after the attacks or improved factoring
methods were discovered.

Engineering is about achieving the desired results using a minimal amount of
resources.  When compute power was precious that meant minimizing compute
power. But the cost now is mostly in redeploying and upgrading extant
infrastructure.  And in a lot of cases we're having to do that because the
crypto is now seen to be too weak.  When we try to minimize our use of
resources,  we need to value them accurately.

To me that means making systems that won't need to be replaced as often.  And
just committing more of the increasingly cheap resource of compute power would
have achieved that given most of the breaks we've seen in the past few years.

And are we still confident in that ten-thousand-mile warranty, now that we've
discovered that the company that puts up road signs has also been contaminating
our rubber formula, sneakily cutting brake lines, and scattering nails on the
road?  Damn, it's enough to make you wish you'd overdesigned, isn't it?


```

### Re: [Cryptography] RSA recommends against use of its own products.

```
*1 Anyone who attempts to generate random numbers by
deterministic means is, of course, living in a
state of sin. -- John Von Neumann

That said, it seems that most of these attacks on pseudorandom
generators, some of which are deliberately flawed, can be ameliorated
somewhat by using a known-good (if slow) pseudorandom generator.

If we were to take the compromised products, rip out the PRNGs,
and replace them with Blum-Blum-Shub generators, we would have
products that work more slowly -- spending something like an
order of magnitude more time on the generation of pseudorandom
bits -- but the security of those bits would be subject to an
actual mathematical proof that predicting the next bit really is
at least as difficult as a known-size factoring problem.
Factoring problems apparently aren't as hard as we used to think,
but they *are* still pretty darn hard.

Slow or not, I think we do need to have at least one option
available in most PRNG-using systems which comes with a
mathematical proof that prediction is GUARANTEED to be hard.
Otherwise it's too easy for people and businesses to be caught
absolutely flatfooted and have no recourse when a flawed PRNG
is discovered or a trust issue requires them to do something
heroic in order to convince customers that the customers' data
can actually be safe.

We've been basing our notion of security on the idea that others
don't know something we don't know -- which is sort of nebulous
on its face and of course can never be provable. We can't really
change that until/unless we can say something definite about
P=NP, but we're a lot more sure about the hardness of factoring
than we are about most of our other primitives.

Do we know of anything faster than BBS that comes with a real
mathematical proof that prediction is at least as hard as
\$SOME_KNOWN_HARD_PROBLEM ?

Bear

```
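For reference, Blum-Blum-Shub itself is only a few lines. The parameters below are toy-sized for illustration only; a real deployment needs large random primes, both congruent to 3 mod 4, with a seed coprime to their product:

```python
# Blum-Blum-Shub: x_{i+1} = x_i^2 mod n, emitting the least-significant
# bit of the state at each step.
def bbs_bits(seed, n, count):
    x = (seed * seed) % n        # square once so x is a quadratic residue
    out = []
    for _ in range(count):
        x = (x * x) % n
        out.append(x & 1)
    return out

p, q = 499, 547                  # toy Blum primes: both are 3 mod 4
n = p * q
bits = bbs_bits(seed=159201, n=n, count=16)
```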

### Re: [Cryptography] A lot to learn from Business Records FISA NSA Review

```
On 09/16/2013 07:58 AM, Perry E. Metzger wrote:

Well, we do know they created things like the (not very usable)
SELinux MAC (Mandatory Access Control) system, so clearly they do
some hacking on security infrastructure.

SELinux seems to be targeted mostly at organizational security,
whereas the primary need these days is not organizational, but
uniform.

That is to say, we don't in practice see many situations where
different levels and departments of an organization have complex
and different rules for how and whether they can access each
other's information and complex requirements for audit trails.

What we see is simpler; we see systems used by people who have
more or less uniform requirements and don't much need routine
auditing, except for one or two administrators.

More useful than the complexity of SELinux would be a relatively
simple system in which ordinary Unix file permissions were
cryptographically enforced.  If, for example, read permission on
a file is exclusive to some user or group, then that file should
be encrypted so that no one else, even if the bytes are accessible
to them by some means, can make sense of it.  The configuration
options should include not storing the key anywhere in the system --
let the user plug in a USB stick to supply the key for his session,
and remove it to take that key away whenever he's not using it,
rather than leaving it on the hard drive somewhere, potentially
to be accessed by someone else at some other time.

We have spent years learning to protect the operating system from
damage by casual mistakes and even from most actual attacks,
because for years control of the computer itself was the only
notable asset that needed to be protected.  It is still true that
control of the computer is always at least as valuable as
everything else that it could be used to compromise, but with
unencrypted files it can compromise far too much.  And the value
of what is stored in individual accounts has gotten far too high
to *NOT* give protecting them at least as much thought as
protecting root's access rights. Photographs, banking records,
schedules, archived mail going back for years, browser histories,
wallets that contain many other keys, etc, etc.  This is far
different from old days when what was on a user's account
was basically a few programs the user used and some text or
code that the user had written.  We need to catch up.

Bear


```
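One piece of the scheme above, per-file keys that vanish with the USB stick, can be sketched with nothing but the standard library. Hypothetical names throughout; HMAC-SHA256 serves as the key-derivation PRF, and the actual file encryption is left out:

```python
import hmac, hashlib

# The master key lives only on removable media; each file's key is
# derived from it on demand, so nothing useful remains on disk once
# the stick is unplugged.
def file_key(master_key: bytes, file_path: str) -> bytes:
    return hmac.new(master_key, file_path.encode(), hashlib.sha256).digest()

master = b"loaded-from-usb-stick"    # never written to the hard drive
k1 = file_key(master, "/home/bear/mail/archive")
k2 = file_key(master, "/home/bear/wallet")
```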

### Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

```
On 09/08/2013 11:49 AM, Perry E. Metzger wrote:

That said, your hypothetical seems much like imagine that you can
float by the power of your mind alone. The construction of such a
cipher with a single master key that operates just like any other key
seems nearly impossible, and that should be obvious.

True.  A universal key that uses the same decryption operation as
a normal key is clearly stupid.

I guess the thing I was thinking of is that the attacker knows
a method that allows him to decrypt anything if he knows the IV,
but cannot recover the key used to encrypt it.

Which is of course a public-key system, where the decryption
method is the private key and the IV is the public key.
The thing I was thinking of as a key functions as a nonce
or subkey which allows people unrelated to the private key
holder to communicate semi-privately by shared secret, but
the private key is a backdoor on their communication.

Duh. Sorry, just wasn't thinking of the right parallel mapping
of what I described. For the cipher itself to function as a key
sort of escaped my attention.

Sorry to waste time.

Ray.


```

### Re: [Cryptography] Suite B after today's news

```
On 09/05/2013 07:00 PM, Jon Callas wrote:

I don't think they're actively bad, though. For the purpose they were created
for -- parallelizable authenticated encryption -- it serves its purpose. You
can have a decent implementor implement them right in hardware and walk away.

Given some of the things in the Snowden files, I think it has become the case
that one ought not trust any mass-produced crypto hardware.  It is clearly on
the agenda of the NSA to weaken the communications infrastructure of American
and other businesses, specifically at the level of chip manufacturers.  And
chips are too much of a black-box for anyone to easily inspect and too much
subject to IP/Copyright issues for anyone who does to talk much about what
they find.  Seriously; microplaning, micrography, analysis, and then you get
sued if you talk about what you find?  It's a losing game.

Given good open-source software, an FPGA implementation would provide greater
assurance of security. An FPGA burn-in rig can be built by hand if necessary,
or at the very least manufactured in a way that is subject to visual inspection
(ie, on a one-layer circuit board with dead-simple 7400-series logic chips).
It would be a bit of a throwback these days, but we're deep into whom-can-you-
trust territory at this point and going for lower tech is worth it if it means
tech that you can still inspect and verify.

Bear


```

### Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

```
On 09/06/2013 05:58 PM, Jon Callas wrote:

We know as a mathematical theorem that a block cipher with a back
door *is* a public-key system. It is a very, very, very valuable
thing, and suggests other mathematical secrets about hitherto
unknown ways to make fast, secure public key systems.

I've seen this assertion several times in this thread, but I cannot
help thinking that it depends on what *kind* of backdoor you're
talking about, because there are some cases in which as a crypto
amateur I simply cannot see how the construction of an asymmetric
cipher could be accomplished.

As an example of a backdoor that doesn't obviously permit an
asymmetric-cipher construction, consider a broken cipher that
has 128-bit symmetric keys; but one of these keys (which one
depends on an IV in some non-obvious way that's known to the
attacker) can be used to decrypt any message regardless of the
key used to encrypt it.  However, it is not a valid encryption
key; no matter what you encrypt with it you get the same
ciphertext.

There's a second key (also known to the attacker, given the IV)
which is also an invalid key; it has the property that no
matter what you encrypt or decrypt, you get the same result
(a sort of hash on the IV).

How would someone construct an asymmetric cipher from this?
Or is there some mathematical reason why such a beast as the
hypothetical broken cipher I describe, could not exist?

Bear


```

### Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

```
On 09/07/2013 07:51 PM, John Kelsey wrote:

Pairwise shared secrets are just about the only thing that scales
worse than public key distribution by way of PGP key fingerprints on
business cards.  If we want secure crypto that can be used by everyone,
with minimal trust, public key is the only way to do it.
One pretty sensible thing to do is to remember keys established in
previous sessions, and use those combined with the next session.

Of course the idea of remembering keys established in previous
sessions and using them combined with keys negotiated in the next
session is a scalable way of establishing and updating pairwise
shared secrets.

In fact I'd say it's a very good idea.  One can use a distributed
public key (infrastructure fraught with peril and mismanagement)
for introductions, and thereafter communicate using a pairwise
shared secret key (locally managed) which is updated every time
you interact, providing increasing security against anyone who
hasn't monitored and retained *ALL* previous communications. In
order to get at your stash of shared secret keys Eve and Mallory
have to mount an attack on your particular individual machine,
which sort of defeats the trawl everything by sabotaging vital
infrastructure at crucial points model that they're trying to
accomplish.

One thing that weakens the threat model (so far) is that storage
is not yet so cheap that Eve can store *EVERYTHING*. If Eve has
to break all previous sessions before she can hand your current
key to Mallory, first her work factor is drastically increased,
second she has to have all those previous sessions stored, and
third, if Alice and Bob have ever managed even one secure exchange,
or one exchange that's off the network she controls (say, by local
Bluetooth link), she fails. Fourth, even if she *can* store everything
and the trawl *has* picked up every session, she still has to guess
*which* of her squintillion stored encrypted sessions were part
of which stream of communications before she knows which ones
she has to break.

Bear


```
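The remember-and-combine idea above can be sketched as a simple hash ratchet. This is an illustrative construction, not any standard protocol:

```python
import hashlib

# Fold each session's freshly negotiated secret into a running pairwise
# key.  An eavesdropper must have captured *every* prior session (and
# broken it) to know the current key; missing even one breaks the chain.
def update_pairwise_key(stored_key: bytes, fresh_secret: bytes) -> bytes:
    return hashlib.sha256(stored_key + fresh_secret).digest()

key = b"\x00" * 32               # initial introduction, e.g. via public key
for session_secret in [b"s1", b"s2", b"s3"]:
    key = update_pairwise_key(key, session_secret)
```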

### Re: [Cryptography] MITM source patching [was Schneier got spooked]

```
On 09/08/2013 05:28 AM, Phillip Hallam-Baker wrote:

every code update to the repository should be signed and
recorded in an append only log and the log should be public and enable any
party to audit the set of updates at any time.

This would be 'Code Transparency'.

Problem is we would need to modify GIT to implement.

Why is that a problem?  GIT is open-source.  I think even *I* might be
good enough to patch that.

Ray


```

### Re: [Cryptography] Suite B after today's news

```
On 09/08/2013 10:13 AM, Thor Lancelot Simon wrote:

On Sat, Sep 07, 2013 at 07:19:09PM -0700, Ray Dillinger wrote:

Given good open-source software, an FPGA implementation would provide greater
assurance of security.

How sure are you that an FPGA would actually be faster than you can already
achieve in software?

Thor

Depends on the operation.  If it's linear, somewhat certain.  If it's
parallelizable or streamable, then very certain indeed.

But that's not even the main point.  It's the 'assurance of security' part
that's important, not the speed.  After you've burned something into an
FPGA (by toggle board if necessary) you can trust that FPGA to run the same
algorithm unmodified unless someone has swapped out the physical device.

Given the insecurity of most net-attached operating systems, the same is
simply not true of most software.  Given the insecurity of chip fabs and
their management, the same is not true of special-purpose ASICs.

Ray


```

### Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

```
On 09/08/2013 04:27 AM, Eugen Leitl wrote:

On 2013-09-08 3:48 AM, David Johnston wrote:

Claiming the NSA colluded with intel to backdoor RdRand is also to
accuse me personally of having colluded with the NSA in producing a
subverted design. I did not.

Well, since you personally did this, would you care to explain the
very strange design decision to whiten the numbers on chip, and not

Y'know what?  Nobody has to accuse anyone of anything.  The result,
no matter how it came about, is that we have a chip whose output
cannot be checked.  That isn't as good as a chip whose output can
be checked.

A well-described physical process does in fact usually have some
off-white characteristics (bias, normal distribution, etc). Being
able to see those characteristics means being able to verify that
the process is as described.  Being able to see the whitened
output as well means being able to verify that the whitening is
working correctly.

OTOH, it's going to be more expensive due to the additional pins of
output required, or not as good because whitening will have to be
provided in separate hardware.

Ray

```
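The point about off-white raw output can be made concrete: given access to the unwhitened bits you can measure the source's bias and confirm a whitener is doing its job. A minimal software sketch, with the biased source simulated (the 0.6 bias and sample count are arbitrary illustrative choices):

```python
import random

def bias(bits):
    """Fraction of ones; 0.5 for an ideal source."""
    return sum(bits) / len(bits)

def von_neumann(bits):
    """Simple software whitener: map the pair 01 -> 0, 10 -> 1,
    and discard 00/11 pairs."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

random.seed(1)
# Model a raw physical source with a visible bias toward ones.
raw = [1 if random.random() < 0.6 else 0 for _ in range(100_000)]

print(f"raw bias:      {bias(raw):.3f}")             # visibly off-white
print(f"whitened bias: {bias(von_neumann(raw)):.3f}")  # close to 0.5
```

With only the whitened stream available, the first measurement is impossible, which is exactly the complaint about on-chip whitening.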

### Re: [Cryptography] Bruce Schneier has gotten seriously spooked

```
On 09/06/2013 01:25 PM, Jerry Leichter wrote:

A response he wrote as part of a discussion at
http://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html:

Q: Could the NSA be intercepting downloads of open-source encryption software and
silently replacing these with their own versions?

A: (Schneier) Yes, I believe so.
-- Jerry

Here is another interesting comment, on the same discussion.

https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1675929

Schneier states of discrete logs over ECC: "I no longer trust the constants.
I believe the NSA has manipulated them through their relationships with
industry."

Is he referring to the standard set of ECC curves in use?  Is it possible
to select ECC curves specifically so that there's a backdoor in cryptography
based on those curves?

I know that hardly anybody using ECC bothers to find their own curve; they
tend to use the standard ones because finding their own involves counting all
the integral points and would be sort of compute-expensive, in addition to
being involved and possibly error-prone if there's a flaw in the implementation.

But are the standard ECC curves really secure? Schneier sounds like he's got
some innovative math in his next paper if he thinks he can show that they
aren't.

Bear


```

### [Cryptography] Three kinds of hash: Two are still under ITAR.

```
On 09/03/2013 09:54 AM, radi...@gmail.com wrote:

--Alexander Kilmov wrote:

--David Mercer wrote:

2) Is anyone aware of ITAR changes for SHA hashes in recent years
that require more than the requisite notification email to NSA?

I used to believe that hashing (unlike encryption) was not considered
arms.

If I recall the most recent revision, the above requirement is true
for keyed hashes (whether they are signatures with public-key crypto
or secret hashes with private-key crypto) but not for fingerprint
or unkeyed hashes like the FIPS-standardized SHA-xxx family.

The distinction among the three types:

Signature hashes:  Alice produces a signature hash using her
private key.  Because her public key is common knowledge, everybody
can tell that Alice (or at least someone with her private key)
really did sign it.

Secret hashes:  MIB or some similar group share knowledge of a
secret key.  A, a member of the group, produces a secret hash
using that key, and when they check, every member from Bea to Zed
knows that some member of the organization (or at least
someone who has the secret key) did sign it.  But even if the
message and hash are public or in an insecure channel like email,
nobody who doesn't have the key can prove a thing about the
signer.  Or at least, not from the signature itself.  Server logs
and security video surveillance of public terminals, etc., are
an entirely different thing.  A would be worried about those
if she had an official identity for someone to find.

Fingerprint hashes:  Anybody can apply a fingerprint hash to
something, and it proves nothing about who signed it because
the hash is completely public knowledge and has no particular
key. Anyone who applies a fingerprint hash to something will get
exactly the same hash code for the same thing. The point of a
fingerprint hash is that it is a fixed-length probably-unique
identifier that can be checked in constant time.  If the
fingerprints of two documents are not equal, the documents are
guaranteed to be dissimilar.  If the documents are dissimilar,
the hashes are *almost* guaranteed to be dissimilar.  This
is very useful for looking up documents in a hash table or
tree, for example, using the fingerprint hash as a key.
Usually, when cryptographers use the word hash unqualified, they are
talking about fingerprint hashes.

Bear


```
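The keyed/unkeyed distinction above maps directly onto standard-library primitives; here an HMAC stands in for a "secret hash" and SHA-256 for a "fingerprint hash" (signature hashes are only noted in a comment, since the Python standard library has no public-key signing primitive):

```python
import hashlib
import hmac

message = b"attack at dawn"

# Fingerprint hash: no key; anyone computes exactly the same value,
# so it identifies the document but proves nothing about who hashed it.
fp = hashlib.sha256(message).hexdigest()

# Secret hash (a MAC): only holders of the shared key can compute or
# verify it; outsiders can't prove anything about the signer from it.
group_key = b"shared by every member of the group"
tag = hmac.new(group_key, message, hashlib.sha256).hexdigest()

# Verification should use a constant-time compare to avoid timing leaks.
assert hmac.compare_digest(tag, hmac.new(group_key, message, hashlib.sha256).hexdigest())

# (A signature hash would bind the digest to a key only one party holds,
# using a public-key algorithm such as Ed25519.)
print(fp)
print(tag)
```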

### Re: [Cryptography] Functional specification for email client?

```
On 08/31/2013 02:53 PM, John Kelsey wrote:

I think it makes sense to separate out the user-level view of what happens.

True.  I shouldn't have muddied up user-side view with notes about
packet forwarding, mixing, cover traffic, and domain lookup, etc.
Some users (I think) will want to know that much in general terms,
in order to have some basis to evaluate/understand the security
promises, but it's not part of the interface. Only the serious crypto
wonks will want to know in more detail.

{note: how strange! my spell checker thinks crypto is a typo, but
has no problem with wonks!}

If something arrives in my inbox with a from address of nob...@nowhere.com,
then I need to know that this means that's who it came from.  If I mail
something to nob...@nowhere.com, then I need to know that only the owner

As I consider it, I'm thinking even that promise needs to be amended
to include the possibility of leaking from the recipient.  For example
email forwarding, unencrypted mail archives found by hackers, etc.

My intuition is that binding the security promises to email addresses
instead of identities is the right way to proceed.  I think this is
something most people can understand, and more importantly it's
something we can do with existing technology and no One True Name
Authority In The Sky handing out certs.

Eggs Ackley.  I believe every user in the world is familiar at this point
with the idea of an email alias, and that the concept maps reasonably
well to "holder of a key" for crypto purposes.  To promise any more
than that about identity requires centralized infrastructure that
cannot really exist in a pure P2P system.

One side issue here is that this system's email address space needs
to somehow coexist with the big wide internet's address space.  It
will really suck if someone else can get my gmail address in the
secure system, but it will also be confusing if my inbox has a
random assortment of secure and insecure emails, and I have to do
some extra step to know which is which.

If you want to gateway secure mail into the same bucket with
insecure mail, I guess you can do that; I would far rather have
separate instances of mail clients that do not mix types.  E.g.,
this is Icedove/P2P, and this is Icedove/SMTP, and they are not
expected to be able to interchange messages without some gateway.

That said, everything you need to gateway secure mail into an SMTP system
is easy to construct.  Consider if the peer mail system has an address
format like name**domain.

You have a machine with a DNS/SMTP address like secure.peermail.com
to reserve the name and provide bounce messages that prompt
people to get a peer mail client and send a message in that
client to name**domain for whatever address someone tried to
reply to.  Mail imported from the peer mail client, with its
name**domain mail format, could show in an SMTP client as
name**dom...@secure.peermail.com.

Alternatively, or additionally, you could have a machine with
an address like insecure.peermail.com that actually does
protocol translation and forwards SMTP mail onto the secure
network and vice versa, and allow peer mail users to choose
which machine handles their SMTP-translated address. But
this has the same problems as Lavabit and Silent Circle,
which recently shut down under duress.

Dual-protocol mail clients could use name**domain on the
peer network directly.  Mail imported from the SMTP network
on a dual-protocol client or on a peer mail client could
appear as n...@address.com**INSECURE-SMTP or similar, and on
the dual-protocol client a direct reply would prompt use of
the insecure protocol after a warning prompt.  On a secure-
protocol client it would simply prompt the user to use an
insecure mail client, same as the bounce message on the
other side.

I see the Big Wide Internet's address space as a simple tool to
implement it, not as a conflicting thing that needs to be reconciled.

The domain lookup as I envision it would associate mail peer email
addresses with a tuple of IPv6 address and public key.  The public
keys are stable; the IPv6 address may appear and disappear (and may
be different each time) as the user connects and disconnects from the
system.  The presumption is that the mail peer daemon on the local
machine sends a routing update message when starting up, and
possibly another (deleting routing information) in an orderly
shutdown.

As stated earlier, the system makes no effort to actively hide the
machine where an email address is located.  It could be a machine
designated to receive and keep mail for that address until it gets
a private address update that tells it where to send the messages
but which is not propagated; even in that case, the designated
maildrop machine if not controlled by the holder of the address
cannot be considered to hold any real secrets.

Routing update messages propagate across the network of relevant domain
servers, which check the sig on the update against the
```
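A toy sketch of the per-domain lookup described above: each address maps to a tuple of (IPv6 address, stable public key), and a location update is accepted only if it verifies against the key already on record. The signature check below is a deliberate placeholder, and every name and value is invented for illustration:

```python
import time

# Toy routing table for one domain: address -> (ipv6, pubkey, last_update).
routing = {}

def verify_sig(update, pubkey):
    # Placeholder: a real peer would check a public-key signature here.
    return update.get("sig") == pubkey[:8]

def apply_update(update):
    addr = update["address"]
    known = routing.get(addr)
    # The public key is stable; only the key already on record (or the
    # key in a first sighting) may authorize a change of location.
    pubkey = known[1] if known else update["pubkey"]
    if not verify_sig(update, pubkey):
        return False
    routing[addr] = (update["ipv6"], pubkey, time.time())
    return True

ok = apply_update({"address": "bear**sonic", "pubkey": "a1b2c3d4e5f6",
                   "ipv6": "2001:db8::7", "sig": "a1b2c3d4"})
print(ok, routing["bear**sonic"][0])
```

Binding location updates to the stable key is what lets the IPv6 address come and go while the address itself stays trustworthy.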

### Re: [Cryptography] NSA and cryptanalysis

```
On 08/30/2013 08:10 PM, Aaron Zauner wrote:

I read that WP report too. IMHO this can only be related to RSA (factorization,
side-channel attacks).

I have been hearing rumors lately that factoring may not in fact be as hard
as we have heretofore supposed.  Algorithmic advances keep eating into RSA
keys, as fast as hardware advances do.  A breakthrough allowing most RSA keys
to be factored could be just one or two more jumps of algorithmic leverage
away (from academics; possibly not from the NSA).  It could also be the case
that special-purpose ASICs that accelerate the process substantially may
have been designed and built.

We know about Shor's algorithm, which factors in time polynomial in the bit
length of the modulus.  It requires a quantum computer to run, though.  We
have heard rumors of quantum computers
being built, and I recall a group of academics who actually built one nearly
eight years ago.

That seems to be the sort of thing that would attract attention from a lot
of three-letter agencies, and efforts to scale it up would be intensely
supported with all the resources and brainpower that such an organization
could bring to bear.  How far have they come in eight years?  It is both
interesting and peculiar that so little news of quantum computing has been
published since.

Bear


```
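For a sense of how algorithmic advances eat into RSA keys, the general number field sieve's heuristic asymptotic cost can be evaluated directly. This ignores constant factors and real-world engineering, so the numbers are only indicative:

```python
import math

def gnfs_cost(bits):
    """Heuristic L-notation cost of the general number field sieve:
    exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)) for n ~ 2^bits."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

# Rough relative effort for common RSA modulus sizes.
for bits in (512, 1024, 2048):
    print(f"RSA-{bits}: ~2^{math.log2(gnfs_cost(bits)):.0f} operations")
```

By this crude estimate RSA-1024 sits somewhere near 2^87 operations and RSA-2048 near 2^117; any further algorithmic leverage shifts those figures downward for every key already in use.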

### [Cryptography] Functional specification for email client?

```

Okay...

User-side spec:

1.  An email address is a short string freely chosen by the email user.
It is subject to the constraint that it must not match anyone else's
email address, but may (and should) be pronounceable in ordinary language
and writable with the same character set that the user uses for writing.
They require extension with a domain as current email addresses do,
but not a domain name in the IETF sense; just a chosen disambiguator
(from a finite set of a million or so) to make name collisions less of
a problem.

2.  An email user may have more than one email address.  In fact s/he can
make up more email addresses at any time.  He or she may choose to associate
a tagline -- name, handle, slogan or whatever -- with the address.

3.  When an email user gets an email, s/he is absolutely sure that it comes
from the person who holds the email address listed in its from line.
S/he may or may not have any clue who that person is.  S/he is also
sure that no one else has seen the contents of the email.  The tagline
and email address are listed in the from: line.

4.  A user has an address book. The address book can be viewed as a whole or
as seen by just one of the user's email addresses.  IOW, if you have an
email address that you use for your secret society and a different email
address that you use for your job, you can choose to be one or the other
and your address book will reflect only the contacts that you have seen
while using that identity.

5.  A mail client observes all email addresses that go through it.  When a
user receives mail from someone who has not directly sent them mail before,
the client opens a visible entry in the address book and makes available
a record of previous less-direct contacts with that address, for example
from posts to mailing lists, from CC: lists on emails, etc.  The client
also makes visible a list of possible contact sources; places where the
correspondent may have seen the address s/he's writing to.  However, often
enough, especially in cases where it's a scribbled-on-a-napkin address,
the client just won't know.

6.  When a user sends mail, s/he knows that no one other than the holder of
the address/es s/he's sending it to will see the body of the mail, and also
that the recipient will be able to verify absolutely that the mail did in
fact come from the holder of the user's address.

7.  Routing information once obtained for a given domain is maintained
locally.  This means routing information for each email address is public
knowledge, but also means that no one can tell from your address queries
who specifically your correspondents are, more precisely than knowing
which domains they are in.  This also means that other users may obtain
routing information for that domain from you.  You can update your routing
information (i.e., set the system to route messages for your address to
the network location where you actually are) at any time, via a message
propagated across all peers serving the domain.  This happens at intervals
you set (a few months to ten years) when you create the email address.  If
that interval goes by without a keep-alive or a routing information update,
the servers will drop the address.

8.  Emails are mixed on your machine locally, then sent out onto the
network.  The mixing means creating packets of a uniform size, planning a
route for each, encrypting them once for each 'hop' on the route, and
sending them.  Routing is constrained to average less than ten 'hops'.
The packet size should be selected so most text emails are one packet or
less.  Larger messages will be sent as a set of packets and reassembled at
the destination.  Packets will be released at a rate of one every few
seconds; very large file attachments may take days to send and are
discouraged.

9.  Your machine, while connected, is collecting your email.  It is also in the
business of packet forwarding:  ie, it gets a packet, decrypts it, reads the
next hop, waits some random number of seconds, and sends it to the next hop.

10. Finally, your mail client will occasionally create one or more packets and
send them via some randomly selected route to another point on the network,
where they will be received and ignored.  It will do this just about as
often as it sends original content-bearing packets, and about five percent
as often as it forwards packets.  This generates 'cover traffic' equal to
about three quarters of the total network volume. Generation and receipt
of cover traffic is completely invisible to the user.

```
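Item 8's split-into-uniform-packets step can be sketched with simple length-prefixed framing; the onion encryption, routing, and timed release are omitted, and the packet size here is an arbitrary illustrative choice:

```python
import struct

PACKET_SIZE = 1024  # bytes per packet, chosen so most text emails fit in one

def to_packets(message: bytes) -> list:
    """Frame a message with its length, then split it into uniform
    fixed-size packets, zero-padding the last one."""
    framed = struct.pack(">I", len(message)) + message
    packets = []
    for i in range(0, len(framed), PACKET_SIZE):
        chunk = framed[i:i + PACKET_SIZE]
        packets.append(chunk + b"\x00" * (PACKET_SIZE - len(chunk)))
    return packets

def reassemble(packets: list) -> bytes:
    """Join packets and strip the padding using the length prefix."""
    blob = b"".join(packets)
    (length,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + length]

msg = b"hello, this is a short text email " * 100
packets = to_packets(msg)
print(len(packets))
```

Uniform packet sizes matter because a mix network leaks information if observers can match packets by length; the layered per-hop encryption would wrap each of these packets in turn.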

### Re: [Cryptography] Functional specification for email client?

```
On 08/30/2013 01:52 PM, Jonathan Thornburg wrote:

On Fri, 30 Aug 2013, Ray Dillinger wrote:

3.  When an email user gets an email, s/he is absolutely sure that it comes
from the person who holds the email address listed in its from line.
S/he may or may not have any clue who that person is.  S/he is also
sure that no one else has seen the contents of the email.

This probably needs amending to deal with messages addressed to multiple
recipients (either cc:, bcc:, or simply multiple to: addresses).

Right.

More generally, I was hoping for feedback as to whether this is a design
useful to and usable by ordinary people.

It's fairly straightforwardly implementable as a fully distributed system,
given the notion that routing information includes public key.  The only
slightly tricky issue is maintaining the coherence of the per-domain
routing databases among mutually suspicious clients, and there are existing
techniques for that.

It's also compatible with current standards for email payloads, so existing
infrastructure can easily be adapted; in the protocol stack, it looks like
any other MTA.

It can also fairly easily gateway to SMTP, but is not dependent on it.

Bear


```

### Re: [Cryptography] Good private email

```
On 08/26/2013 04:12 AM, Richard Salz wrote:

You need the client to be

able to generate a keypair, upload the public half, and pull down
(seamlessly) recipient public keys.  You need a server to store and
return those keys. You need an installed base to kickstart the network
effect.

Who has that?

I know who has that - in spades!

The bitcoin network is a public transaction record of bitcoin transfers.
The individual accounts are not quite fully anonymous to a determined
observer, but nothing we've discussed here would be more anonymous.

Anyway, a bitcoin client already generates key pairs, and every transaction
stores them in the database.  The database is distributed to all full node
clients, and kept (reasonably) secure using Nakamoto's proof-of-work protocol
for the byzantine-generals problem.  The maintainers of the database have a
vested (monetary) interest in keeping the database secure.

Anyway, each address is a relatively short high-entropy string (ECC
crypto) -- and each client already has an address book of public
addresses (public keys where people can be sent bitcoin payments --
or private messages) and accounts (private keys which represent
bitcoin that can be sent).  In addition, you can ask the client to
generate a new address (keypair) for you at any moment.  The private
key goes into your accounts as an account with zero balance (and no
message history) and a new public key for you goes into your addresses
as a place where you can receive payments (and messages).

There are smartphone clients that don't maintain the full database, but
rely on full nodes to track it for you.  There are already solutions for
transferring public keys
directly between smartphones via bluetooth, which is a convenient channel
outside the sphere of Internet eavesdropping.  And there is already
software that can preprint N business cards (with or without your name/etc
on them) that all have different addresses on them, so you can hand them
out to anyone whom you think may have a reason to send you money (or

In practice, people need to key in an address for someone once if they
are handed a card.  Keying it is about the same difficulty as a VIN
number on an auto insurance form.  Subsequent new addresses for the same
person can be sent in a message encrypted, along with any bitcoin
associated with that account for your next payment (or message).  If
Alice doesn't have preprinted cards, she has her smartphone and it can
generate an address for her on demand -- She will have to read it off
her smartphone screen if she wants to scribble it on a napkin.

If we build further email infrastructure on top of this, a side effect
is that every user has a choice about whether or not s/he will accept
messages without payments.  You can require someone to make a bitcoin
payment to send you an email.  Even a tiny one-percent-of-a-penny payment
that is negligible between established correspondents or even on most email
lists would break a spammer.  Also, you can set your client to automatically
return the payment (when you read a message and don't mark it as spam) or
just leave it as a balance that you'll return when you reply.

In short, a private email client can be built directly on top of the
bitcoin network.  In practice, I think it would be useful mainly for
maintaining the distribution and updating of keys, rather than for
messages per se, because the amount of extra data you can send along
with a bitcoin transaction is quite small (3k?  I think?).  Anyway, it
couldn't handle file attachments etc.

Bear


```
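The pay-to-send idea sketched above can be modeled as client-side bookkeeping. This is a toy illustration of the policy only, not bitcoin API code; the bond amount, names, and escrow behavior are all invented:

```python
# Unknown senders must attach a tiny bond, refunded when the recipient
# reads the mail without flagging it as spam.  Amounts are in satoshi.
BOND = 100  # roughly the "one percent of a penny" scale from the post

class Inbox:
    def __init__(self):
        self.known = set()   # established correspondents send for free
        self.held = {}       # sender -> bond held in escrow

    def receive(self, sender, bond=0):
        if sender in self.known:
            return True          # established correspondent: accepted free
        if bond < BOND:
            return False         # unknown and unpaid: rejected
        self.held[sender] = bond
        return True

    def mark_read(self, sender, spam=False):
        bond = self.held.pop(sender, 0)
        if not spam:
            self.known.add(sender)
            return bond          # refunded to a legitimate sender
        return 0                 # a spammer forfeits the bond

inbox = Inbox()
assert not inbox.receive("stranger")            # no bond: bounced
assert inbox.receive("stranger", bond=BOND)     # bonded: delivered
assert inbox.mark_read("stranger") == BOND      # legitimate: refunded
assert inbox.receive("stranger")                # now a known correspondent
```

The bond is negligible between real correspondents because it round-trips, but a spammer paying it on millions of messages, with no refunds, eats the full cost.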

### Re: [Cryptography] Email and IM are ideal candidates for mix networks

```
On 08/25/2013 03:28 PM, Perry E. Metzger wrote:

So, imagine that we have the situation described by part 1 (some
universal system for mapping name@domain type identifiers into keys
with reasonable trust) and part 2 (most users having some sort of
long lived \$40 device attached to their home network to act as a
home server.)

My main issue with this proposal is that somebody identifiable is going
to manufacture these boxes.  Maybe several somebodies, but IMO, that's
an identifiable central point of control/failure.  If this is deployed,
what could an attacker gain by compromising the manufacturers, via sabotage,
component modification/substitution at a supplier's chip fab, or via
secret court order from a secret court operating according to a secret
interpretation of the law?

Bear


```

### Re: [Cryptography] Email and IM are ideal candidates for mix networks

```
On 08/25/2013 08:32 PM, Jerry Leichter wrote:

Where
mail servers have gotten into trouble is when they've tried to provide
additional services - e.g., virus scanners, which then try to look
inside of complex formats like zip files.  This is exactly the kind
of thing you want to avoid - another part of the mission creep that
we tend to see in anything that runs on a general-purpose computer.

Absolutely agreed; the most reliable things are the least complex.

That's 20th century thinking:  The computer is expensive, keep
it busy.  Twenty-first century thinking should be:  The computer
is cheap - leave it alone to do its job securely.

My thinking is more like: The computer has a multitasking OS.  Whatever
else it needs to be doing will be in another process.  So you lose nothing
if you keep each process simple.  Or, if it's a single-purpose box intended
to provide security, don't dilute its purpose.  Keep it simple enough that
even installations of it in the wild, after unknown handling and in all
possible configurations, can be unambiguously, easily, and exhaustively
tested so you know they're doing exactly what they should be and no more.

Realistically, it will be impossible to get little appliances like
this patched on a regular basis - how many people patch their WiFi
routers today? - so better to design on the assumption there won't
be any patches.

Also agreed; online patches are the number one distribution vector of
malware that such a device would need to worry about, primarily
because whoever can issue such a patch is a central point of control/
failure and can be coerced.  So send it out with an absolutely sealed
kernel.

Bear


```

### Re: [Cryptography] Good private email

```
On 08/26/2013 10:39 AM, Jerry Leichter wrote:

On Aug 26, 2013, at 1:16 PM, Ray Dillinger b...@sonic.net wrote:

Even a tiny one-percent-of-a-penny payment
that is negligible between established correspondents or even on most email
lists would break a spammer.

This (and variants, like a direct proof-of-work requirement) has been proposed
time and again in the past.  It's never worked, and it can't work, because the
spammers don't use their own identities or infrastructure - they use botnets.
They don't care what it costs (in work or dollars or Bitcoins) to send their
message, because they aren't going to pay it - the machine they've taken over
is going to pay.

Possible, but doubtful.  The bitcoin wallet is extraordinarily secure
as software goes.  Once you've chosen a keyphrase, it NEVER gets saved in
decrypted form to the disk, and even in the client software it cannot be
decrypted except by explicit command, and will not remain in memory for more
than a few seconds in decrypted form.  Furthermore, the client software
does not invoke other programs (like Word or other scriptable attack
vectors) under any circumstances.  Furthermore any extensions like
clickable URLs in messages or javascript execution etc or other methods
by which external possibly non-secure applications could start up with
information from inside the client would be soundly rejected as
untrustworthy extensions.  People design for and demand an altogether
different level of security when you're talking about their own money,
and handle the complexities of key management with no difficulty.

In short, no possibly naive user could convince the developers to do
the stupid things that email clients do for coolness or convenience in
the context of a financial client.

If there were a vulnerability or exploit discovered that allowed a spammer
to take control of a bitcoin account, it would be regarded as a MAJOR
DISASTER by the community and prompt a fix within minutes, not hours,
days, or months as is the case with mere email clients.

Consider that *every* *last* *developer* stands to lose at least
thousands or tens of thousands of dollars of real, personally owned
money if confidence in the network falters.  In some cases literally
millions.  This is not some hypothetical loss to the company that
they can be ordered to do by some boss even though they think it's
a bad idea, nor some hobby that they can allow to fall by the wayside;
these people are deeply and very literally invested in the security
of the code, and flatly will refuse to do anything that might
compromise it.

If some company did issue a client with security holes, the usual
shrink-wrap "not liable" crap would be completely unacceptable, the
lawsuit exposure would be somewhere in the trillions of dollars,
and the legal costs to even try to defend a mealymouthed claim of
"not liable because of our shrink wrap license" from the resulting
firestorm would probably break the company.  There are *dozens*
of serious, litigious investors who hold millions of dollars in
bitcoin these days, including, among others, the Winklevoss
brothers, who spent ten years or more pursuing their infamous
Facebook lawsuit.  Even if you win that legal fight you're going
to lose.

The fact that the client is also highly usable is an excellent example
of interface design.

Bear


```

### Re: [Cryptography] PRISM PROOF Email

```
On 08/22/2013 02:36 AM, Phillip Hallam-Baker wrote:

Thanks to Snowden we now have a new term of art 'Prism-Proof', i.e. a
security scheme that is proof against state interception. Having had

an attack by the Iranians, I am not just worried about US interception.
Chinese and Russian intercepts should also be a concern.

We have two end to end security solutions yet Snowden used neither. If
PGP and S/MIME are too hard to use for the likes of Snowden, they are
too hard to use. The problem Snowden faced was that even if he could
grok PGP, the people sending him emails probably couldn't.

Observation:  Silent Circle and Lavabit both ran encrypted email services.
Lavabit shut down a few days ago rather than become complicit in crimes
against the American People.  I would say that's about as close as you can
skate to "We're facing a court order that we're not allowed to tell you
about."  Maybe even closer; we'll be forbidden to know whether anyone
prosecutes them for violating the presumed gag order.  Silent Circle shut
down soon after, saying, "We always knew the USG would come after us."
Which perhaps a little less clearly indicates a court order they can't talk
about, but that's certainly one interpretation.

Egypt, Oman, and India refused to allow Blackberry to operate with their
end-to-end encrypted devices.  In cases where Blackberry is now allowed to
operate in those jurisdictions, it is not at all clear that they are not
doing so using compromised devices whose keys are shared with those
governments.

Chinese military teams spent so much effort hacking at gmail and facebook
accounts, in order to ferret out dissidents, that Google was eventually
forced to cease doing business in China, and now gmail and facebook both
have some end-to-end encrypted clients.

My point I guess is that we have some evidence that Governments across the
world are directly hostile to email privacy.  Therefore any centralized server,
CA, or company providing same may expect persecution, prosecution or subversion
depending on the jurisdiction.

And it can never, ever, not in a billion years, be clear to users which if
any of those centralized servers or companies are trustworthy.  Google now
implements some end-to-end encryption for gmail, but we also know that
Google has cooperated with the US government.  The exact details of
Blackberry's keys in Oman, UAE, and India are now subject to largely
unknown deals and settlements.

Therefore, IMO, any possible solution to email privacy, if it is to be trusted
at all, must be pure P2P with no centralized points of failure/control and no
specialized routers etc.  And it can have no built-in gateways to SMTP.  Sure,
someone will set one up, but there simply cannot be any dependence on SMTP or
the whole thing is borked before it begins.  It is time to simply walk away
from that flaming wreckage and consider how to do email properly.  S/MIME and
PGP email-body encryption both fail to protect from traffic analysis because
of their underlying dependence on SMTP.  Onion routing fails to protect due
to timing attacks.

So I say you must design your easy-to-use client to completely replace the
protocol layer.  There is no additional installation effort because this is
the only protocol it handles.

The traditional approach to making a system intercept-proof is to eliminate
the intermediaries.  PGP attempts to eliminate the CA, but this has an
unfortunate effect on scalability.  Due to the Moore bound on a
minimum-diameter graph, it is only possible to have large graphs with a
small diameter if you have nodes of high degree.  If every PGP key signer
signs ten other people's keys, then we have to trust key chains of six
steps to support even a million users, and nine to support a global
solution of a billion users.

My solution is to combine my 'Omnibroker' proposal, currently an Internet
Draft, with Ben Laurie's Certificate Transparency concept.

I would start from a design in which mail is a global distributed database,
with globs that can be decrypted by use of one or more of each user's set
of keys, and all globs have expiry dates after which they cease to exist.
Routing becomes a non-issue because routing, like old USENET, is global.
Except instead of timestamp/message IDs, we just use dates (because
timestamps are too precise) and message hashes (because message IDs contain
too much originating information).

No certificate, no broker, no routing information unless the node that first
hears about the new glob has been compromised.  Each message (decrypted glob)
optionally

If we need more 'scalability' we could set up channels discriminated by some
nine-bit or so substring of the message hash, and require senders to solve
hashes until they get a hash with the right nine bits to put it in the
desired channel.  Still no routing information as such.  Now Eve can tell
what channel/s a user is listening to, but the user has ```
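The channel scheme described in this post amounts to grinding a nonce until the hash lands in the desired nine-bit channel. A minimal sketch of that idea (function names, the use of SHA-256, and the nonce format are my own choices for illustration), along with a check of the post's Moore-bound chain-length arithmetic:

```python
import hashlib
import math

CHANNEL_BITS = 9  # the post's "nine bit or so substring"

def channel_of(digest: bytes) -> int:
    """Top nine bits of the hash select the channel (0..511)."""
    return int.from_bytes(digest[:2], "big") >> (16 - CHANNEL_BITS)

def grind(message: bytes, want: int) -> tuple[int, bytes]:
    """Append nonces until the hash lands in the wanted channel.
    Expected work is 2**CHANNEL_BITS = 512 hashes."""
    nonce = 0
    while True:
        d = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if channel_of(d) == want:
            return nonce, d
        nonce += 1

nonce, digest = grind(b"hello", want=3)
assert channel_of(digest) == 3

# The chain-length arithmetic from the same post: with signing degree 10,
# reaching N users needs key chains of about ceil(log10(N)) steps.
assert math.ceil(math.log10(10**6)) == 6   # a million users
assert math.ceil(math.log10(10**9)) == 9   # a billion users
```

Since Eve learns only which channel a glob landed in, the grinding cost is paid by the sender while listeners just filter on nine bits of the hash.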

### Re: [Cryptography] Snowden fabricated digital keys to get access to NSA servers?

```
On 06/28/2013 09:36 PM, Udhay Shankar N wrote:

On Sat, Jun 29, 2013 at 4:30 AM, John Gilmore <g...@toad.com> wrote:

[John here.  Let's try some speculation about what this phrase,
"fabricating digital keys", might mean.]

Perhaps something conceptually similar to PGP's Additional Decryption
Key [1]? If the infrastructure is in place for this, perhaps one might
be able to generate a key on demand, with the appropriate access
permissions.

I read it to mean that the NSA is using some sort of defeatable
cryptography in its own communications with contractors, presumably
to enable internal snooping for purposes of monitoring contractors.
If a contractor then discovers this system, and manages to cryptanalyze
it (or somehow obtain a copy of the snooping software, though that's
not strictly necessary to cryptanalysis) to figure out the corresponding
method of how the snoopers from the NSA generate keys out of thin
air for it, then he might use that method himself to get access to
all the material that other contractors on that system are working
with.

It would be a ridiculously stupid methodology for the NSA to manage
its security affairs this way, but if "fabricated keys" isn't a flat-out
lie, then it's the only thing I can think of that makes sense.
And if it is a flat-out lie, then lying to Congress is fairly serious.
'Tho it wouldn't be the first time that's happened, either.

Bear
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

```

### Computer health certificate plan indistinguishable from Denial Of Service attack.

```Microsoft is sending up a test balloon on a plan to 'quarantine'
computers from accessing the Internet unless they produce a 'health
certificate'  to ensure that software patches are applied, a firewall
is installed and configured correctly, an antivirus program with current
signatures is running, and the machine is not currently infected with
known malware.

Apparently in a nod to the fact that on technical grounds this is
effectively impossible, the representative goes on to say

"Relevant legal frameworks would also be needed."

as though that would make lawbreakers stop spoofing it.  Existing
malware already spoofs antivirus software to display current patches,
in order to prevent itself from being uninstalled.

It is hard to count the number of untestable and/or flat out wrong
assumptions built into this idea, and harder still to enumerate all the
ways it could go wrong.

The article is available at:

http://www.bbc.co.uk/news/technology-11483008

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com

```

### English 19-year-old jailed for refusal to disclose decryption key

A 19-year-old just got a 16-month jail sentence for his refusal to
disclose the password that would have allowed investigators to see
what was on his hard drive.

I suppose that, if the authorities could not read his stuff
without the key, it may mean that the software he was using
had no links weaker than the encryption itself -- and that
is extraordinarily unusual, an encouraging sign of progress in
the field, if of mixed value in the current case.

Really serious data recovery tools can get data that's been
erased and overwritten several times (secure deletion being quite
unexpectedly difficult), so if it's ever been in your filesystem
unencrypted, it's usually available to well-funded investigators
without recourse to the key.  I find it astonishing that they
would actually need his key to get it.

Rampant speculation: do you suppose he was using a solid-state
drive instead of a magnetic-media hard disk?

http://www.bbc.co.uk/news/uk-england-11479831

Bear


```

### RE: Has there been a change in US banking regulations recently?

```On Fri, 2010-08-13 at 14:55 -0500, eric.lengve...@wellsfargo.com wrote:

Moore's law helped immensely here. In the last 5 years systems have gotten
about 8 times faster, reducing the processing cost of crypto a lot.

The big drawback is that those who want to follow NIST's recommendations
to migrate to 2048-bit keys will be returning to the 2005-era overhead.
Either way, that's back in line with the above-stated 90-95% overhead.
Meaning, in Dan's words, "2048 ain't happening."

I'm under the impression that 1024-bit keys are now insecure mostly due
to advances in factoring algorithms that make the attack effort and the
encryption effort closer to (but by no means identical to) scaling
with the same function of key length.  This makes the asymmetric
cipher have a lower ratio of attack cost to encryption cost at any given
key length, but larger key lengths still yield *much* higher ratios of
attack cost to encryption cost.

At 2048 bits, I think that with Moore's law over the next decade or two
dropping attack costs and encryption costs by the same factor, attack
costs should remain comfortably out of reach while encryption costs
stay tolerable.

Of course, this reckons without the potential for unforeseen advances
in factoring or quantum computing.

There are some possibilities my co-workers and I have discussed.  For
purely internal systems, TLS-PSK (RFC 4279) provides symmetric
encryption through pre-shared keys, which provides us with whitelisting
as well as removing asymmetric crypto.

That's probably a good idea. We've placed a lot of stock in public-
key systems because of some neat mathematical properties that seemed to
conform to someone's needs for an online business model involving the
introduction of strangers who want to do business with each other.  But
if you can handle key distribution internally by walking down the hall
or mailing a CD-ROM preloaded with keys instead of by trusting the
network the keys are supposed to secure, you really don't need
public-key crypto's neat mathematical properties.

Bear


```
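The claim that larger RSA keys buy a much higher attack/encryption cost ratio can be sketched with the standard L-notation heuristic for the general number field sieve. This is a back-of-the-envelope illustration (no o(1) term, no memory costs), not a precise security claim:

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Rough log2 of GNFS factoring cost, using the textbook
    L_n[1/3, (64/9)^(1/3)] heuristic with the o(1) term dropped."""
    ln_n = modulus_bits * math.log(2)
    cost = math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
    return math.log2(cost)

b1024 = gnfs_bits(1024)   # roughly 87 "bits" of attack work
b2048 = gnfs_bits(2048)   # roughly 117 "bits"

# Doubling the key length multiplies encryption cost by a small
# constant factor, but raises attack cost by roughly 2^30 --
# the "much higher ratio" argued above.
assert b2048 - b1024 > 25
```

These rough figures track the conventional equivalences (1024-bit RSA near 80-bit security, 2048-bit near 112-bit), which is why the attack side falls out of reach much faster than the encryption side grows.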

### About that Mighty Fortress... What's it look like?

```
Assume, contra facto, that in some future iteration of PKI, it
works, and works very well.

What the heck does it look like?

At a guess:  Anybody can create a key pair.  They
get one key clearly marked private, which they're supposed to keep,
and one clearly marked public, which they can give out to anybody
they want to correspond with.

Guarantors and certifying authorities can endorse the public key
for specific purposes relating to their particular application.
Your landlord can endorse your keycard to allow you to get into
the apartment you rent, the state government can endorse your
key when you get a contractor's license or private investigator's
license or register a business to sell to consumers and pay taxes,
etc.

There are no certifying agencies other than interested parties
and people who issue licenses/guarantees for specific reasons.

You can use your private key to endorse somebody else's key
to allow them to do some particular thing (you have to write a
short note that says what) that involves you, or check someone
else's key to see if it's one that you've endorsed.  If you've
endorsed it, you get back the short note that you wrote, telling
you what purpose you've endorsed it for.

Anybody who's endorsed a key can prove that they've endorsed it
by publishing their endorsement.  You can read and verify public
endorsements using the public keys of the involved parties.

And you can revoke your endorsement of any particular key, at any
time, for any reason.  The action won't affect other endorsements
of the same key, nor other endorsements you've made.

Finally, you can use your private key to prepare a revocation,
which can be held indefinitely in some backup storage, insurance
database, or safe-deposit box. If you ever lose your private key,
you send the revocation and everybody who has endorsed your
public key gets notified that it's no good anymore.

I think this model is simple enough to be understood by
ordinary people.  It's also clear enough in its semantics to
be implemented in a straightforward way.  Is it applicable
to the things we want to use a PKI for?

Bear


```

### Re: Crypto dongles to secure online transactions

```On Fri, 2009-11-20 at 20:13 +1300, Peter Gutmann wrote:

Because (apart from the reasons given above) with business use specifically
you run into insurmountable PC - device communications problems.  Many
companies who handle large financial transactions are also ones who, due to
concern over legal liability, block all access to USB ports to prevent
external data from finding its way onto their corporate networks.  [...]  You'd
need to build a device with a small CMOS video sensor to read data from the
browser via QR codes and return little more than a 4-6 digit code that the
user can type in (a MAC of the transaction details or something).  It's
feasible, but not quite what you were thinking of.

So the model of interaction is:
  1. Software displays the transaction on the screen (in some
     predetermined form).
  2. The device reads the screen, MACs the transaction, and
     displays the MAC to the user.
  3. The user enters the MAC to confirm the transaction.
  4. The transaction, with MAC, is submitted to the user's bank.
  5. The bank checks the MAC to make sure it matches the transaction
     and performs the transaction if there's a match.

Malware that finds the user account details lying around
on the hard drive cannot form valid MACs for the transactions
it wants to use those details for, so the user and the bank
are protected from credit card harvesting by botnets.
Malware that attempts to get a user authorization by displaying
a different transaction on the screen is foiled by not being
able to MAC the transaction it's really trying to do.  etc.

But a four or six digit MAC isn't nearly enough.

You see, there's still the problem of how you handle fraudulent
transactions.  If the black hats start submitting transactions
with random MACs in the certain knowledge that one out of ten
thousand four-digit MACs will be right, all that happens is
that they have to invest some bandwidth when they want to drain
your account.  They will do it, because it's more profitable
than sending spam.

If there is some reasonable control like freezing an account
after a thousand attempts to make fraudulent transactions or
not accepting transaction requests within twenty seconds after
an attempt to make a fraudulent transaction on the same account,
then you have created an easy denial-of-service attack that can
be used to deny a particular person access to his or her bank
account at a particular time of the attacker's choosing.

Imagine someone locked out of their account at an unexpected
time - they can't get a hotel room, they can't get
a cab, they can't get a plane home, they can't buy gas for their
car and get stranded somewhere, and they become easy pickings
for physical crimes like assault, rape, theft of vehicle, etc.
That's not acceptable.

In order to be effective the MAC has to make success so unlikely
that submitting a fraudulent transaction has a higher cost than
its amortized benefit.  Since the botnets are stealing their
electricity and bandwidth anyway, the absolute cost to black
hats of submitting a fraudulent transaction is very very close
to zero.  What we have to look at then is their opportunity cost.

Consider the things a botted machine could do with a couple
kilobytes of bandwidth and a couple milliseconds of compute time.
It could send a spam or it could send a fraudulent transaction
to a bank with a random MAC.  It will do whichever is considered
most profitable by the operator of the botnet.

Note: with spamming there's almost no chance of arrest.  Receiving
money via a fraudulent transaction submitted to a bank *MIGHT* be
made more risky, so if that actually happens then there's an
additional risk or cost associated with successful fraud attempts,
which I don't account for here. But ignoring that because I don't
know how to quantify it:

In late 2008, ITwire estimated that 7.8 billion spam emails were
generated per hour. (http://www.itwire.com/content/view/19992/53/)
Consumer reports estimates consumer losses due to phishing at a
quarter-billion dollars per year.
(http://www.consumerreports.org/cro/magazine-archive/june-2009/
electronics-computers/state-of-the-net/phishing-costs-millions/
state-of-the-net-phishing-costs-millions.htm)

Check my math, but if we believe those sources, then that puts
the return on sending one spam email at about 1/273,000 of a dollar,
or about three point seven ten-thousandths of a penny. If we can
make submitting a fraudulent transaction return less than that,
then the botnets go on sending spams instead of submitting
fraudulent transactions and our banking infrastructure is
relatively safe. For now.  (Just don't think about our email
infrastructure - it's depressing).

If a fraud attack seeks to drain the account, it'll go
for about the maximum amount it expects the bank to honor,
which means, maybe, a couple thousand dollars (most checking
accounts have overdraft ```
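The opportunity-cost argument in this post can be checked mechanically. A rough sketch using the post's own figures (7.8 billion spams/hour, $250M/year in phishing losses) plus one assumption of mine, a $2,000 drainable balance per compromised account:

```python
import math

# Check the spam-vs-fraud opportunity-cost arithmetic from the post.
spams_per_year = 7.8e9 * 24 * 365          # ITwire's estimate, annualized
phishing_take = 250e6                      # Consumer Reports, dollars/year
spam_return = phishing_take / spams_per_year
print(f"return per spam: ${spam_return:.2e}")

# Expected return of one fraudulent submission guarded by a k-digit MAC,
# assuming (my figure, for illustration) a $2,000 drainable balance.
def fraud_return(mac_digits: int, balance: float = 2000.0) -> float:
    return balance * 10.0 ** -mac_digits

assert fraud_return(4) > spam_return       # 4 digits: fraud beats spam

# To push fraud below spam's return you need a success probability
# below spam_return / balance -- on these figures, about 9 decimal
# digits (roughly a 30-bit MAC).
needed_digits = math.ceil(-math.log10(spam_return / 2000.0))
assert fraud_return(needed_digits) <= spam_return
print("MAC digits needed:", needed_digits)
```

So on these numbers a four-digit MAC gives a random guess an expected return around twenty cents, several orders of magnitude better than spam, which is exactly why the post argues four to six digits isn't nearly enough.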

### Re: Client Certificate UI for Chrome? [OT anonymous-transaction bull***t]

```[Moderator's note: this is getting a bit off topic, and I'd prefer to
limit followups. --Perry]

On Wed, 2009-08-19 at 06:23 +1000, James A. Donald wrote:
Ray Dillinger wrote:

If there is not an existing relationship (first time someone
uses an e-tailer) then there has to be a key depository that
both can authenticate to, with a token authorizing their
authentication to authenticate them to the other, which then
vouches to each for the identity of the other.

Actually not.

What the seller wants to know is that the buyer's money is good, not
what the true name of the buyer is - a service provided by Visa, or
Web-money, or some such.

No.  This juvenile fantasy is complete and utter nonsense, and
I've heard people repeating it to each other far too often.  If
you repeat it to each other too often you run the risk of starting
to believe it, and it will only get you in trouble.  This is a
world that has not just cryptographic protocols but also laws
and rules and a society into which those protocols must fit.  That
stuff doesn't all go away just because some fantasy-world
conception of the future of commerce as unlinkable anonymous
transactions says it should.

In any transaction involving physical goods, the seller also wants
to know to whom to ship the product.  Since the laws in most nations
do not require the recipient of an erroneous shipment to return
the goods and *do* require the seller to give back the buyer's money
if the shipment doesn't go where the buyer wants it, sellers really
care that the correct recipient will receive the package and really
need some way to contact the buyer in case there's a mistake about
the recipient address or identity.  Otherwise you'd get people
playing silly buggers with the shipping address to get out of paying
for million-dollar equipment.

The law usually requires that the recipient of defective goods
or services has the ability to return those goods for a refund
or obtain a refund in the event of seller nonperformance of
services or nonshipment of goods.  Since such returns can be
used to launder money from illegal enterprises, laws usually
restrict anonymous returns. Therefore the seller needs the
buyer's (or client's) identity in order to comply with the law.

In information-based transactions involving IP that's subject
to license terms, the seller wants to know who is the licensee that's
bound by the terms of the license and who now poses a risk of copyright
infringement.  In both cases this is a liability taken on by the buyer,
and not something that his money being good for just the
transaction price can ameliorate.

In financial transactions the seller also wants to know that s/he
can comply with, e.g., "know your customer" laws and avoid liability
for gross negligence in, e.g., money-laundering cases.

In many transactions the seller wants the buyer's identity and a
liability waiver signed by the buyer so as to keep track of or
avoid liability for what the customer is going to do with his/her
products.

Most sellers want the ability to offer the buyer credit terms,
especially when large sums are involved.  And even where money
is good (sitting in buyers' accounts), it is subject to catastrophic
vanishment in extraordinary circumstances.  The seller needs to know
whom to sue, or at least whose name to put on the forms for their
insurance claim, if contrary to expectations the buyer's money turns
out not to be good.

If the cert authority does not provide the identity of the buyer
but asserts that the buyer's money is good, and this turns out not
to be true (as in the case of Madoff's clients), then in most
legal systems the cert authority is either liable, or can expect
to be sued in a very expensive empirical test of liability.  So
the cert authority doesn't want to be in the business of vouching
for the ability of anonymous people to pay.

The only way for the money to be truly firm for these purposes
is that the cert authority has it in escrow.  This makes the
cert authority a financial institution and therefore subject to
"know your customer" mandatory reporting, data-retention laws,
subpoenas, and so on.  Also, it introduces a needless delay
and complication to the transaction that legitimate buyers and
sellers would mostly rather not have.

Also, in any large transaction the seller or cert authority or both
must retain buyer identity information in order to be able to
comply with subpoenas, inquests, or equivalent writs, for
periods ranging from zero in a few undeveloped African nations to
five years in much of the rest of the world.

In most of the nations on earth, there is such a thing as sales
tax or use tax on goods or services, and any transaction involving
more than a tiny sum must be reported (with the names of buyer and
seller```

### Re: MD6 withdrawn from SHA-3 competition

```On Sat, 2009-07-04 at 10:39 -0700, Hal Finney wrote:
Rivest:
Thus, while MD6 appears to be a robust and secure cryptographic
hash algorithm, and has much merit for multi-core processors,
our inability to provide a proof of security for a
reduced-round (and possibly tweaked) version of MD6 against
differential attacks suggests that MD6 is not ready for
consideration for the next SHA-3 round.

But how many other hash function candidates would also be excluded if
such a stringent criterion were applied? Or turning it around, if NIST
demanded a proof of immunity to differential attacks as Rivest proposed,
how many candidates have offered such a proof, in variants fast enough
to beat SHA-2?

I think "resistance to attacks" (note the absence of any restrictive
adjective such as "differential") is a very important property
(indeed, one of the basic defining criteria) to demonstrate
in a hash algorithm.  If someone can demonstrate an attack,
differential or otherwise, or show reason to believe that such
an attack may exist, then that should be sufficient grounds
to eliminate a vulnerable candidate from any standardization
competition.

In other words, the fact that MD6 can demonstrate resistance to
a class of attacks, if other candidates cannot, should stand in
its favor regardless of whether the competition administrators
say anything about proving resistance to any particular *kind*
of attacks.  If that does not stand in its favor then the
competition is exposed as no more than a misguided effort to
standardize on one of the many Wrong Solutions.

Bear


```

### Re: consulting question.... (DRM)

```On Tue, 2009-05-26 at 18:49 -0700, John Gilmore wrote:
It's a little hard to help without knowing more about the situation.
I.e. is this a software company?  Hardware?  Music?  Movies?
Documents?  E-Books?

It's a software company.

the copying of something?  What's the something?  What's the threat
model?  Why is the company trying to do that?  Trying to restrain
customers?

Its customers would be other software companies that want to produce
monitored applications.  Their product inserts program code into
existing applications to make those applications monitor and report
their own usage and enforce the terms of their own licenses, for
example disabling themselves if the central database indicates that
their licensee's subscription has expired or if they've been used
for more hours/keystrokes/clicks/users/machines/whatever than the
license allows.

The idea is that software developers could use their product instead
of spending time and programming effort developing their own license-
enforcement mechanisms, using it to directly transform the
executables as the last stage of the build process.

The threat model is that the users and sysadmins of the machines
where the monitored applications are running have a financial
motive to prevent those applications from reporting their usage.

What country or countries does the company
operate in?  What jurisdictions hold its main customer bases?

They are in the US.  Their potential customers are international.
And their customers' potential clients (the end users of the
monitored applications) are of course everywhere.

Why should we bother?  Isn't it a great idea for DRM fanatics to
throw away their money?  More, more, please!  Bankrupt yourselves

You're taking a very polarized view.  These aren't DRM fanatics;
they're business people doing due diligence on a new project, and
likely never to produce any DRM stuff at all if I can successfully
convince them that they are unlikely to profit from it.

Bear


```

### Re: consulting question....

```On Wed, 2009-05-27 at 10:31 -0400, Roland Dowdeswell wrote:

I have noticed in my years as a security practitioner, that in my
experience non-security people seem to assume that a system is
perfectly secure until it is demonstrated that it is not with an
example of an exploit.  Until an exploit is generated, any discussion
of insecurity is filed in their minds as ``academic'', ``theoretical''
or ``not real world''.

This matches my experience as well.  "Have any exploits of this
particular scheme been found in the wild?" is always one of the
first three questions, and the answer is one of the best predictors
of whether the questioner actually does anything.  For best results
one must be able to say something like, "Yes, six times in the
last year," and start naming companies, products, dates, and
independent sources that can be used to verify the incidents.  To
really make the point one should also be able to cite financial
costs and losses incurred.

Because companies don't like talking about cracks and exploits
involving their own products, nor support third parties who attempt
systematic documentation of same, it is frequently very hard to
produce sufficient evidence to convince and deter new reinventors
of the same technology. This failure to track and document exploits
and cracks is a cultural failure that, IMO, is currently one of the
biggest nontechnical obstacles to software security.

Bear


```

### consulting question....

```
At a dinner party recently, I found myself discussing the difficulties
of DRM (and software that is intended to implement it) with a rather
intense and inquisitive woman who was very knowledgeable about what
such software is supposed to do, but simultaneously very innocent of
how it fares in practice.  She was eager to learn, and asked me to
summarize what I said to her in an email.  So I did.

And it turns out that she is an executive in a small company which is
now considering the development of a DRM product.  I just got email
from her boss (the CEO) offering to hire me, for a day or two
anyway, as a consultant.  If I understand correctly, my job as
consultant will be to make a case to their board about what hurdles
of technology and credibility that small company will find in its
path if it pursues this course.

So now I need to go from "dinner party conversation" mode to
"consultant" mode, and that means I need to be able to cite specific
examples and if possible, research for the generalities I explained
over dinner.  I'll be combing Schneier's blog and using Google to
fill in details of examples I've already cited to get ready for
this, but any help that folks could throw me to help illustrate
and demonstrate my points (the paragraphs below) will be much
appreciated.

I explained to her that the typical experience of monitored or
protected software (software modified for DRM enforcement) is that
some guy in a randomly selected nation far outside the jurisdiction
of your laws, using widely available tools like debuggers and hex
editors, makes a cracked copy and distributes it widely, and
that current efforts in the field seem more focused on legislation
and international prosecutions than on software technology.  Software-
only solutions, aside from those involving a Trusted Platform Module
(which their proposed project does not; she seemed unaware of both
the Trusted Computing Platform and the controversy over it) are no
longer considered credible.  I cited the example of DeCSS, whose
crack of players for DRM'd movies used techniques generally
applicable to any form of DRM'd software.

I explained that in the worst case, such software works by making
unacceptable compromises of security or autonomy on the machines where
it is installed, citing the infamous and widespread Sony rootkit (and
IMO also the TPM system, but I didn't go into that mess of worms at
dinner), and that these compromises usually become public and do
serious damage to both the credibility of DRM systems generally and
the cash flow of the companies that perpetrate them (ISTR Sony wound
up losing something over $6 million in the US judgment alone on that
one, and spent considerably more than that on legal fees in the US
and several other nations).

Finally, I explained the cheap attacks available to a sysadmin who
does not want his DRM'd software reporting its usage statistics; for
example having a firewall that filters outgoing packets.

Does anyone feel that I have said anything untrue?

Can anyone point me at good information uses I can use to help prove
the case to a bunch of skeptics who are considering throwing away
their hard-earned money on a scheme that, in light of security
experience, seems foolish?

Ray Dillinger


```

### Re: [tahoe-dev] SHA-1 broken! (was: Request for hash-dependency in Tahoe security.)

```On Thu, 2009-04-30 at 13:56 +0200, Eugen Leitl wrote:

http://eurocrypt2009rump.cr.yp.to/837a0a8086fa6ca714249409ddfae43d.pdf

Wow!  These slides say that they discovered a way to find collisions
in SHA-1 at a cost of only 2^52 computations.  If this turns out to
be right (and the authors are respected cryptographers -- the kind of
people who really hate to be wrong about something like this) then it
is very exciting!

I cannot derive a realistic threat model from the very general
statements in the slides.

In the case of, for example, the Debian organization, which uses SHA-1
keys to check in code so that it's always clear with a distributed
network of developers who made what changes, What threats must they
now guard against and what corrective measures ought they take?

Can a third-party attacker now forge someone's signature and check in
code containing a backdoor under someone else's key?  Such code could
then be installed on large numbers of target machines with devastating
effect and no way to catch the attacker.

Can a rogue developer now construct a valid code vector B, having
the same signature as some of his own (other) code A, thus bypassing
the signature check and inserting a backdoor?  The scenario is the
same with a poisoned server but, once detected, the attacker would
be identifiable.

Is it the case that a constructed hash collision between A and B
can be done by a third party but would be highly unlikely to contain
any executable or sensible code at all?  In this case the threat
is serious, but mainly limited to vandalism rather than exploits.

Is it the case that a constructed hash collision between A and B
can only be done by the developer of both A and B, but would be
highly unlikely to contain any executable or sensible code at all?
In this case the threat is very minor, because the identity of the
vandal would be instantly apparent.

Bear


```
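The four threat scenarios above turn on the difference between a collision attack (the attacker constructs both A and B) and a second-preimage attack (the attacker must match an existing, already-signed A). A quick sketch of the generic cost arithmetic alongside the 2^52 figure from the slides:

```python
import math

# Generic attack costs for an n-bit hash, versus the reported SHA-1 result.
n = 160                                  # SHA-1 output bits
generic_collision = 2 ** (n // 2)        # birthday bound: 2^80
generic_second_preimage = 2 ** n         # 2^160
reported_collision = 2 ** 52             # the 2009 slides' claim

# The announced attack improves only the *collision* bound; a second
# preimage (forging code to match someone else's existing signed hash)
# is still generically 2^160 work.
assert reported_collision < generic_collision
speedup_bits = math.log2(generic_collision / reported_collision)
print(f"collision speedup: 2^{speedup_bits:.0f}")   # 2^28 cheaper
```

That distinction answers part of the question: the first scenario (a third party forging against an existing key and checkin) needs a second preimage and is not directly enabled by this result, whereas the rogue-developer scenarios, where one party controls both A and B, are the ones the 2^52 collision cost threatens.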

### Re: Judge orders defendant to decrypt PGP-protected laptop

On Tue, 2009-03-03 at 21:33 -0500, Ivan Krstić wrote:

If you give me the benefit of the doubt for having a reasonable
general grasp of the legal system and not thinking the judge is an
automaton or an idiot, can you explain to me how you think the judge
can meet the burden of proof for contempt in this instance? Surely you
don't wish to say that anyone using encryption can be held in contempt
on the _chance_ they're not divulging all the information; what, then,
is the other explanation?

The law is not administered by idiots.

In particular, the law is not administered by people who are more
idiotic than you.  You may disagree with them, or with the law,
but that does not make them stupid.

On the one hand there are (inevitable) differences in profile
between a partition that sees daily use and a partition that
doesn't.  If a forensics squad had a good look at my laptop,
they'd see that my (unencrypted) Windows partition has not been
booted or used in three years, whereas file dates, times, and
contents indicate that one of the other partitions is used daily.

If he decrypts a partition that clearly does not get used
frequently, and more to the point shows no signs of having been
used on a day when it is known that the laptop was booted up,
then he is clearly in violation of the order.

More to the point, you're arguing about a case where they
have testimony from multiple officers who have *SEEN* that
the images are on the computer, where both defense and
prosecution agree that they do not enjoy fifth-amendment
privileges, and where the testimony of multiple officers
gives the partition name (Z drive) in which the images
were found.  If the decrypted partition does not match in
these particulars, and especially if it does not show any
evidence of usage while the laptop is known to have been
powered up during the initial search, then the defendant
is clearly in violation of the order.

Now, I think there is a legitimate argument to be made about
whether the defendant can be compelled to *use* a key which
he has not got written down or otherwise stored anywhere
outside his own head.  It's generally agreed that people can't
be compelled to produce or disclose the existence of memorized
keys, but can be compelled to produce or disclose the existence
of any paper or device on which a key is recorded.  But
regardless, if the order to use the key is considered legit,
then failure to comply with the order (by using a different or
wrong key, unlocking a different volume) is direct violation
of a court order.  People go to jail for that.

Keep in mind that the right to be secure from search and seizure
of one's documents has always been subject to due process and
court orders in the form of search warrants.  The right to privacy
is not an absolute right and never has been, and obstructing the
execution of a lawfully served warrant is not a viable strategy
for staying out of jail.

Bear
(neither a lawyer, nor, usually, an idiot)


```

### Re: Security through kittens, was Solving password problems

```On Wed, 2009-02-25 at 14:53 +, John Levine wrote:

You're right, but it's not obvious to me how a site can tell an evil
MITM proxy from a benign shared web cache.  The sequence of page
accesses would be pretty similar.

There is no such thing as a benign web cache for secure pages.
If you detect something doing caching of secure pages, you need
to shut them off just as much as you need to shut off any other
MITM.

Bear


```

### UCE - a simpler approach using just digital signing?

```I have a disgustingly simple proposal.  It seems to me that one
of the primary reasons why UCE-limiting systems fail is the
astonishing complexity of having a trust infrastructure
maintained by trusted third parties or shared by more than
one user.  Indeed, trusted third party and trust shared by
multiple users may be oxymorons here.  Trust, by nature, is
not really knowable by any third party and not the same for
any set of more than one user; and experience shows that the
people most willing to pay for it, at least where UCE is
concerned, are usually the _least_ trustworthy parties.

So why hasn't anybody tried direct implementation of user-
managed digital signatures yet?

A key list maintained by individual recipients for themselves
alone could be astonishingly simpler in practice, probably
to the point of actually being practical.

In fact, it is _necessary_ to eliminate third parties and
shared infrastructure almost entirely in order to allow mail
recipients to have the kind of fine-grained control that
they actually need to address the problem by creating
social and business feedback loops that promote good security.

As matters stand today, there is no protection from UCE.
If I know there is a user account named 'fred' on the host
'example.com', then I have an email address and I can send
all the UCE I want.  And poor fred has the same email address
he gives everybody, so he gets spam from people who've gotten
his address and he has no idea where they got it.  All his
legitimate correspondents are using the same email address,
so he can't abandon it without abandoning *all* of them,
and he doesn't know which of them gave his address to the
spammers.  What if email accounts weren't that simple?

Consider the implications of a third field, or trust token,
in the address.  A mailer's copy of fred's email address would
look like fred#to...@example.com, where token is a field that
fred issued to that correspondent.  The system would still
send mail to f...@example.com, but it would include a Trust:
header based on the token.

The simplest solution I can think of would be a direct
application of digital signatures;  the trust token would
be (used as) a cryptographic key, and the headers of any
message would have to include a Trust field containing a
digital signature (a keyed cryptographic hash of the message,
generated by that key).  Messages to multiple recipients
would need to contain one Trust field per recipient.

Its use would follow simple rules:

Each time Fred gives out his email address to a new sender,
he creates a trust token for that sender.  They must use it
when they send him mail.  So fred gives his bank a key when
he gives them his email address.  If fred were willing to
receive mail from strangers, he could publish a trust token
on his webpage or on usenet or whatever - it would be painless
to revoke it later, so why not?  If fred trusted someone to
give out his email address, he could give that person multiple
trust tokens to pass along to others.  Again, an error in
judgement would be painless to revoke later.

Fred can revoke any trust token from his system at any time,
and does so whenever he gets spam with a trust token he issued.
In UI terms there'd be a button in his mail reader that works
as, this message is spam, so revoke this trust token because
now a spammer has it.  Other messages sent with the same
trust token would disappear from his mailbox instantly. Fred
might not push this button every time, but at least he'd know
what spam he was getting due to (say) his published trust token
on his webpage or usenet, and what spam he was getting due to
his relationship with a bank, and he'd have the option of
turning any source of spam off instantly.

In the short run the .aliases file on the mail host would need
a line so it would know to deliver mail to fred#anyth...@example.com
to fred.  This is not because a legitimate email would ever include
the literal key, but for purposes of alerting fred's MUA to protocol
breaches, so it could do key management.  Fred's MUA could then
be upgraded to use tokens without affecting other users on the
system.  In later MDAs that handle trust tokens directly,
this forwarding would be automatic.

Whenever Fred gets email sent by someone using a trust token,
his system tells him which token - ie, what sender he gave
that trust token to.  So email sent to fred using the trust
token he gave his bank will show up in his mailbox under a
heading that says this was sent by someone using the trust
token he gave his bank.

Whenever fred gets email for fred#to...@example.com and that's
still a legitimate token, his system revokes the token, sends him
an automatic note that says which trust token was revoked, and
bounces the email with a message that says,
Your mailer is not using trust tokens.  Your mail has not been
delivered.
```
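The proposal above amounts to a per-recipient keyed-MAC scheme.  A minimal sketch, assuming HMAC-SHA256 as the "keyed cryptographic hash" (the message doesn't name one) and with all names here (`TrustTokenStore`, the token strings) purely illustrative:

```python
import hmac
import hashlib

class TrustTokenStore:
    """Per-recipient record of issued trust tokens: who got each one."""

    def __init__(self):
        self.tokens = {}  # token bytes -> label of the sender it went to

    def issue(self, token: bytes, sender_label: str):
        self.tokens[token] = sender_label

    def revoke(self, token: bytes):
        """The 'this message is spam' button: drop the token."""
        self.tokens.pop(token, None)

    def check(self, token: bytes, message: bytes, trust_field: str):
        """Return the sender label if the Trust: field verifies against a
        still-valid token, else None."""
        if token not in self.tokens:
            return None
        expected = hmac.new(token, message, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, trust_field):
            return self.tokens[token]
        return None

# fred issues a token to his bank along with his address
store = TrustTokenStore()
bank_token = b"token-fred-gave-his-bank"
store.issue(bank_token, "my bank")

# the bank's mailer signs each message body with the token
body = b"Your statement is ready."
trust_field = hmac.new(bank_token, body, hashlib.sha256).hexdigest()

assert store.check(bank_token, body, trust_field) == "my bank"
store.revoke(bank_token)  # spam arrived bearing this token
assert store.check(bank_token, body, trust_field) is None
```

Revocation here is just deletion from the recipient's own table, which is what makes the "painless to revoke later" property fall out for free.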

### Re: Bitcoin P2P e-cash paper

```Okay I'm going to summarize this protocol as I understand it.

I'm filling in some operational details that aren't in the paper
by supplementing what you wrote with what my own design sense
tells me are critical missing bits or obvious methodologies for
use.

First, people spend computer power creating a pool of coins to use
as money.  Each coin is a proof-of-work meeting whatever criteria
were in effect for money at the time it was created.  The time of
creation (and therefore the criteria) is checkable later because
people can see the emergence of this particular coin in the
transaction chain and track it through all its subsequent
transactions in the consensus view.

When a coin is spent, the buyer and seller digitally sign a (blinded)
transaction record, and broadcast it to a bunch of nodes whose purpose
is keeping track of consensus regarding coin ownership.  If someone
double spends, then the transaction record can be unblinded revealing
the identity of the cheater.  This is done via a fairly standard cut-
and-choose algorithm where the buyer responds to several challenges
with secret shares, and the seller then asks him to unblind and
checks all but one, verifying that they do contain secret shares any
two of which are sufficient to identify the buyer.  In this case the
seller accepts the unblinded spend record as probably containing
a valid secret share.

The nodes keeping track of consensus regarding coin ownership are in
a loop where they are all trying to add a link to the longest chain
they've so far received.  They have a pool of reported transactions
which they've not yet seen in a consensus signed chain.  I'm going
to call this pool A.  They attempt to add a link to the chain by
moving everything from pool A into a pool L and using a CPU-
intensive digital signature algorithm to sign the chain including
the new block L.  This results in a chain extended by a block
containing all the transaction records they had in pool L, plus
the node's digital signature.  While they do this, new
transaction records continue to arrive and go into pool A again
for the next cycle of work.

They may also receive chains as long as the one they're trying to
extend while they work, in which the last few links are links
that are *not* in common with the chain on which they're working.
These they ignore.  (?  Do they ignore them?  Under what
circumstances would these become necessary to ever look at again,
bearing in mind that any longer chain based on them will include
them?)

But if they receive a _longer_ chain while working, they
immediately check all the transactions in the new links to make
sure it contains no double spends and that the work factors of
all new links are appropriate.  If it contains a double spend,
then they create a transaction which is a proof of double
spending, add it to their pool A, broadcast it, and continue work.
If one of the new links has an inappropriate work factor (ie,
someone didn't put enough CPU into it for it to be licit
according to the rules) a new transaction which is a proof
of the protocol violation by the link-creating node is created,
broadcast, and added to pool A, and the chain is rejected.  In
the case of no double spends and appropriate work factors for
all links not yet seen, they accept the new chain as consensus.

If the new chain is accepted, then they give up on adding their
current link, dump all the transactions from pool L back into pool
A (along with transactions they've received or created since
starting work), eliminate from pool A those transaction records
which are already part of a link in the new chain, and start work
again trying to extend the new chain.

If they complete work on a chain extended with their new link,
they broadcast it and begin work on the next link, using all
the transactions that have accumulated in pool A since they
began work.

Do I understand it correctly?

Biggest Technical Problem:

Is there a mechanism to make sure that the chain does not consist
solely of links added by just the 3 or 4 fastest nodes?  'Cause a
broadcast transaction record could easily miss those 3 or 4 nodes
and if it does, and those nodes continue to dominate the chain, the
transaction might never get into the chain at all.

To remedy this, you need to either ensure provable propagation of
transactions, or vary the work factor for a node depending on how
recently that node has added a link to the chain.

Unfortunately, both measures can be defeated by sock puppets.
This is probably the worst problem with your protocol as it
stands right now; you need some central point to control the
identities (keys) of the nodes and prevent people from making
new sock puppets.

Provable propagation would mean that when Bob accepts a new chain
from Alice, he needs to make sure that Alice has (or gets) all
transactions in his A and L pools.  He ```
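The pool-A / pool-L cycle and longer-chain acceptance rule restated above can be sketched as follows (the CPU-intensive step is reduced to a toy leading-zero-bits hash puzzle, and all names are illustrative, not from the paper):

```python
import hashlib

def proof_of_work(prev: str, txs: tuple, difficulty: int = 12) -> int:
    """Toy CPU-intensive step: find a nonce giving `difficulty`
    leading zero bits in the block hash."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev}|{txs}|{nonce}".encode()).hexdigest()
        if int(h, 16) >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def extend(chain, pool_a):
    """One work cycle: move pool A into a block L and work it onto
    the chain.  Returns (new chain, emptied pool A)."""
    pool_l = tuple(sorted(pool_a))
    prev = str(chain[-1]) if chain else "genesis"
    nonce = proof_of_work(prev, pool_l)
    return chain + [(pool_l, nonce)], set()

def accept_longer_chain(new_chain, pool_a, pool_l, pending):
    """On accepting a longer chain: abandon the current link, recombine
    the pools, and drop anything the new chain already contains.
    Returns the pool A to start the next cycle from."""
    in_chain = {tx for txs, _ in new_chain for tx in txs}
    return (pool_a | pool_l | pending) - in_chain

chain, pool_a = extend([], {"tx1", "tx2"})
# a longer chain arrives containing tx2 and tx3; tx4/tx5 still pending
longer = [(("tx1",), 0), (("tx2", "tx3"), 0)]
pool_a = accept_longer_chain(longer, pool_a, pool_l={"tx4"},
                             pending={"tx3", "tx5"})
assert pool_a == {"tx4", "tx5"}
```

The set arithmetic in `accept_longer_chain` is the whole reorg story at this level of abstraction: nothing is lost, only deduplicated against the winning chain.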

### Re: Bitcoin P2P e-cash paper

```On Sat, 2008-11-15 at 12:43 +0800, Satoshi Nakamoto wrote:

I'll try and hurry up and release the sourcecode as soon as possible
to serve as a reference to help clear up all these implementation
questions.

Ray Dillinger (Bear) wrote:
When a coin is spent, the buyer and seller digitally sign a (blinded)
transaction record.

Only the buyer signs, and there's no blinding.

If someone double spends, then the transaction record
can be unblinded revealing the identity of the cheater.

Identities are not used, and there's no reliance on recourse.  It's all
prevention.

Okay, that's surprising.  If you're not using buyer/seller
identities, then you are not checking that a spend is being made
by someone who actually is the owner of (on record as having
received) the coin being spent.

There are three categories of identity that are useful to
think about.  Category one: public.  Real-world identities
are a matter of record and attached to every transaction.
Category two: Pseudonymous.  There are persistent identities
within the system and people can see if something was done by
the same nym that did something else, but there's not necessarily
any way of linking the nyms with real-world identities.  Category
three: unlinkably anonymous.  There is no concept of identity,
persistent or otherwise.  No one can say or prove whether the
agents involved in any transaction are the same agents as involved
in any other transaction.

Are you claiming category 3 as you seem to be, or category 2?
Lots of people don't distinguish between anonymous and
pseudonymous protocols, so it's worth asking exactly what
you mean here.

Anyway:  I'll proceed on the assumption that you meant very
nearly (as nearly as I can imagine, anyway) what you said,
a spender has to demonstrate knowledge of a secret known only
to the real owner of the coin.  One way to do this would be
to have the person receiving the coin generate an asymmetric
key pair, and then have half of it published with the
transaction.  In order to spend the coin later, s/he must
demonstrate possession of the other half of the asymmetric
key pair, probably by using it to sign the key provided by
the new seller.  So we cannot prove anything about identity,
but we can prove that the spender of the coin is someone who
knows a secret that the person who received the coin knows.

And what you say next seems to confirm this:

No challenges or secret shares.  A basic transaction is just
what you see in the figure in section 2.  A signature (of the
buyer) satisfying the public key of the previous transaction,
and a new public key (of the seller) that must be satisfied to
spend it the next time.

Note, even though this doesn't involve identity per se, it still
makes the agent doing the spend linkable to the agent who
earlier received the coin, so these transactions are linkable.
In order to counteract this, the owner of the coin needs to
make a transaction, indistinguishable to others from any
normal transaction, in which he creates a new key pair and
transfers the coin to its possessor (ie, has one sock puppet
spend it to another).  This makes no change in the real-world
identity of the owner, but it breaks the link to the agent who
spent the coin earlier.  It has to be done a random number of
times - maybe one to six times?

Incidentally, your lines are scrolling stupidly off to the
right and I have to scroll to see what the heck you're saying,
then edit to add carriage returns before I respond.

If it contains a double spend, then they create a transaction
which is a proof of double spending, add it to their pool A,

There's no need for reporting of proof of double spending like
that.  If the same chain contains both spends, then the block is
invalid and rejected.

Same if a block didn't have enough proof-of-work.  That block is
invalid and rejected.  There's no need to circulate a report
about it.  Every node could see that and reject it before relaying it.

Mmmm.  I don't know if I'm comfortable with that.  You're saying
there's no effort to identify and exclude nodes that don't
cooperate?  I suspect this will lead to trouble and possible DOS
attacks.

If there are two competing chains, each containing a different
version of the same transaction, with one trying to give money
to one person and the other trying to give the same money to
someone else, resolving which of the spends is valid is what
the whole proof-of-work chain is about.

Okay, when you say same transaction, and you're talking about
transactions that are obviously different, you mean a double
spend, right?  Two transactions signed with the same key?

We're not on the lookout for double spends to sound the alarm
and catch the cheater.```
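The spend mechanism as restated above (a coin is a chain of "sign the next owner's public key" steps) can be sketched with a dependency-free stand-in signature.  Real Bitcoin uses ECDSA; the Lamport one-time scheme below, built only from a hash function, is an assumption made here purely to keep the sketch self-contained:

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    """Lamport one-time key: 256 pairs of secrets; the public key
    is the pairs of their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    """Reveal one secret per bit of the message digest."""
    return [sk[i][b] for i, b in enumerate(bits_of(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(msg)))

# Transfer: the current owner spends the coin by signing the new
# recipient's public key, proving knowledge of the secret half
# without any notion of real-world identity.
owner_sk, owner_pk = keygen()   # recorded with the coin at last transfer
new_sk, new_pk = keygen()       # generated by the party receiving the coin
spend_record = sign(owner_sk, repr(new_pk).encode())
assert verify(owner_pk, repr(new_pk).encode(), spend_record)
```

Note the linkability discussed above falls straight out of this: anyone can see that `owner_pk` (recorded at the previous transfer) authorized the spend to `new_pk`.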

### Re: Bitcoin P2P e-cash paper

```On Tue, 2008-11-04 at 06:20 +1000, James A. Donald wrote:

If I understand Simplified Payment Verification
correctly:

New coin issuers need to store all coins and all recent
coin transfers.

There are many new coin issuers, as many as want to be
issuers, but far more coin users.

Ordinary entities merely transfer coins.  To see if a
coin transfer is OK, they report it to one or more new
coin issuers and see if the new coin issuer accepts it.
New coin issuers check transfers of old coins so that
their new coins have valid form, and they report the
outcome of this check so that people will report their
transfers to the new coin issuer.

I think the real issue with this system is the market
for bitcoins.

Computed proofs-of-work have no intrinsic value.  We
can have a limited supply curve (although the currency
is inflationary at about 35% as that's how much faster
computers get annually) but there is no demand curve
that intersects it at a positive price point.

I know the same (lack of intrinsic value) can be said of
fiat currencies, but an artificial demand for fiat
currencies is created by (among other things) taxation
and legal-tender laws.  Also, even a fiat currency can
be an inflation hedge against another fiat currency's
higher rate of inflation.   But in the case of bitcoins
the inflation rate of 35% is almost guaranteed by the
technology, there are no supporting mechanisms for
taxation, and no legal-tender laws.  People will not
hold assets in this highly-inflationary currency if
they can help it.

Bear


```

### Re: Who cares about side-channel attacks?

```On Thu, 2008-10-30 at 16:32 +1300, Peter Gutmann wrote:
Look at the XBox
attacks for example, there's everything from security 101 lack of
checking/validation and 1980s MSDOS-era A20# issues through to Bunnie Huang's
FPGA-based homebrew logic analyser and use of timing attacks to recover device
keys (oh, and there's an example of a real-world side-channel attack for you),
there's no rhyme or reason to them, it's just hammer away at everything with
anything you've got and exploit the first bit that fails.

But isn't that the attacker's job?  We will never arrive at anything
secure - or even *learn* anything about how to build real security -
if attackers leave any part of it untested or consistently fail to
try particular approaches.  As far as I can see the acid tests of
the real world, hammering away with anything they've got, are exactly
the kind of environment that security pros have to design for in the
long run.

We should be trying to identify products and implementations that
hold up under this kind of assault, and then publishing books about
the design processes and best practices that produced them.  Knowing
full well that Kerckhoffs's Principle is alive and well, and that
the people doing the attacks will be first in line to buy the
books.  The point is that if the material in the books is any
good, then having the books shouldn't help them.

Cipher suites and protocols and proofs and advanced mathematics
are well and good, but we have to recognize that they are only a
small part of actually building a secure implementation.  Holding up
under diverse assault *is* the desired property that we are all
supposed to be working toward, and this kind of diverse assault
is exactly the sort of test we need to validate security design
processes.

Bear


```

### Re: User interface, security, and simplicity

```On Sat, 2008-05-03 at 23:35 +, Steven M. Bellovin wrote:

There's a technical/philosophical issue lurking here.  We tried to
solve it in IPsec; not only do I think we didn't succeed, I'm not at
all clear we could or should have succeeded.

IPsec operates at layer 3, where there are (generally) no user
contexts.  This makes it difficult to bind IPsec credentials to a user,
which means that it inherently can't be as simple to configure as ssh.

Let me restate things just to make sure I understand the problem.
You're talking about binding IPsec credentials to a user, but
I want to look at it from the point of view of exactly what
problems this causes, so is the following an accurate position?

The problem is that we're trying to have entities with different
security needs share a common set of authentications.  When user 'pat'
and user 'sandy' have different security needs (different
authorized correspondents, say), a shared machine-level
authentication fails them.  IPSEC fails them because IPSEC
operates on channels between machines
rather than on channels between trusting/trusted entities.   Even
if 'pat' and 'sandy' both have a trusted/trusting entity on a given
remote machine from theirs, IPSEC fails them because it cannot
differentiate between the various entities (users, agents, services)
using that remote machine, when 'pat' and 'sandy' need it to.
Similarly, it fails the entities on that remote machine because it
cannot differentiate between 'pat', 'sandy' and any other entities
using the local machine, when trust relationships might exist only
for some subset of those entities.

Bear


```

### Re: Death of antivirus software imminent

```On Fri, 2008-01-18 at 02:31 -0800, Alex Alten wrote:
At 07:35 PM 1/18/2008 +1000, James A. Donald wrote:

And all the criminals will of course obey the law.

Why not just require them to set an evil flag on all
their packets?

These are trite responses.  Of course not.  My point is
that if the criminals are lazy enough to use a standard
security protocol then they can't expect us not to put
something in place to decrypt that traffic at will if necessary.

I see your point, but I can't help feeling that it's a
lot like requiring all houses to be designed and built with
a backdoor that the police have a key to, in order to
guarantee that the police can come in to investigate crimes.

The problem is that the existence of that extra door, and
the inability of people to control their own keys to lock
it, makes crimes drastically easier to commit.  You think
police don't use DMV records to harass ex-girlfriends or
make life hard for people they don't like?  You think
private investigators and other randoms who somehow finesse
access to that data all have the best interests of the public
at heart?  You think the contractor who builds the house
will somehow forget where the door is, or will turn over
*all* copies of the keys?

And stepping away from quasi-legit access used for illegitimate
purposes, you think there're no locksmiths whose services the
outright criminals can't buy?  You think the existence of a
backdoor won't inspire criminal efforts to get the key (by
reading a binary dump if need be) and go through it?

I guarantee I can make any payload look like any other
payload.  If the only permitted communications are
prayers to Allah, I can encode key exchange in prayers
to Allah.

Look, the criminals have to design their security system with
severe disadvantages; they don't own the machines they
attack/take over, so they can't control their software/hardware
contents easily, they can't screw around too much with the IP
protocol headers or they lose communications with them, and

That is a very petty class of criminal.  While the aggregate
thefts (of computer power, bandwidth, etc) are impressive,
they're stealing nothing that isn't a cheap commodity anyway
and the threat to lives and real property that would justify
the kind of backdoors we're talking about just isn't there.
Being subject to botnets and their ilk is more like being
subject to the weather than like being the victim of a planned
and premeditated crime with a particular high-value target.

Moreover, we know how to weatherproof our systems.
Seriously.  We know where the vulnerabilities are and we
know how to create systems that don't have them.  And we
don't need to install backdoors or allocate law enforcement
budget to do it.  More than half the servers on the Internet -
the very most desirable machines for botnet operators,
because they have huge storage and huge bandwidth - run
some form of Unix, and yet, since 1988 and the Morris Worm,
you've never heard of a botnet composed of Unix machines!
Those machines are subject to the same weather as everyone
else, but it costs them very little, because they have ROOFS!

I submit that the sole reason botnet operation even exists
is because so many people are continuing to use an operating
system and software whose security is known to be inferior.
A(nother) backdoor in that system won't help.

The criminals whose activities do justify the sort of backdoors
you're talking about - the bombers, the kidnappers, the
extortionists, even the kiddie porn producers and that ilk -
won't be much affected by them, because they *do* take the
effort to get hard crypto working in addition to standard
protocols, they *do* own their own machines and get to pick
and choose what software goes on them, and if they're
technically bent they can roll their own protocols.

Bear


```
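The claim above that key exchange can be encoded in any permitted cover traffic can be illustrated with a toy text-steganography scheme (entirely hypothetical: one bit per position, chosen between interchangeable phrasings):

```python
# Toy illustration: each position in the cover text has two
# interchangeable phrasings; choosing between them encodes one bit.
PAIRS = [("O", "Oh"), ("merciful", "compassionate"), ("Lord", "God"),
         ("hear", "heed"), ("my", "this"), ("prayer", "plea"),
         ("today", "now"), ("always", "forever")]

def encode(bits):
    """Embed one bit per word position in the cover text."""
    return " ".join(PAIRS[i][b] for i, b in enumerate(bits))

def decode(text):
    """Recover the hidden bits from the word choices."""
    return [PAIRS[i].index(w) for i, w in enumerate(text.split())]

secret = [1, 0, 1, 1, 0, 0, 1, 0]   # a fragment of key material
cover = encode(secret)              # reads as an innocuous sentence
assert decode(cover) == secret
```

A real scheme would need far more cover text per bit and a keyed mapping, but the point stands: to a censor, both phrasings are equally valid prayers.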