Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread ianG

On 14/10/13 17:51 PM, Adam Back wrote:

On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:

The actual technical question is whether an across the board 128 bit
security level is sufficient for a hash function with a 256 bit
output. This weakens the proposed SHA3-256 relative to SHA256 in preimage
resistance, where SHA256 is expected to provide 256 bits of preimage
resistance.  If you think that 256 bit hash functions (which are normally
used to achieve a 128 bit security level) should guarantee 256 bits of
preimage resistance, then you should oppose the plan to reduce the
capacity to 256 bits.


I think hash functions clearly should try to offer full (256-bit) preimage
security, not dumb it down to match 128-bit birthday collision resistance.

All other common hash functions have tried to offer full preimage security,
so varying an otherwise standard assumption will lead to design confusion.
It will probably interact badly with many existing KDF, MAC, merkle-tree
designs, combined cipher+integrity modes, and hashcash (partial preimage as
used in bitcoin as a proof of work) -- generic designs that use a hash as a
building block and assume it has full-length preimage protection.  Maybe
some of those generic designs survive because they compose multiple
iterations, eg HMAC, but why create the work and risk of analysing them all,
removing them from implementations, or marking them as safe for all hashes
except SHA3.



I tend to look at it differently.  There are ephemeral uses and there 
are long term uses.  For ephemeral uses (like HMACs), 128 bit 
protection is fine.


For long term uses, one should not sign (hash) exactly what the other side 
presents (put in a nonce), and one should always keep what is signed 
around (or otherwise neuter a hash failure).  Etc.  Either way, one 
wants somewhat longer protection for the long term hash.


That 'time' axis is how I look at it.  Simplistic or simple?

Alternatively, there is the hash cryptographer's outlook, which tends to 
differentiate collisions, preimages, 2nd preimages and lookbacks.


From my perspective, the simpler statement of SHA3-256 having 128 bit 
protection across the board is interesting; perhaps it is OK?
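
(For concreteness, a back-of-envelope sketch in Python of the generic sponge
security levels under discussion -- capacity c is 512 for SHA3-256 as
submitted, 256 as proposed; a sketch of the generic bounds, not a claim
about Keccak specifically:)

    # generic levels for a sponge with n-bit output and c-bit capacity
    def sponge_levels(n, c):
        return {"collision": min(n // 2, c // 2),
                "preimage":  min(n, c // 2)}

    print(sponge_levels(256, 512))  # as submitted: collision 128, preimage 256
    print(sponge_levels(256, 256))  # as proposed:  collision 128, preimage 128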




If MD5 had 64-bit preimage, we'd be looking at preimages right now being
expensive but computable.  Bitcoin is pushing a 60-bit hashcash-sha256
preimage every 10 mins (1.7 petahash/sec network hashrate).
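
(Checking the quoted figure with a one-liner: 1.7 petahash/sec sustained
for 600 seconds is about 2^60 hash evaluations, i.e. a ~60-bit partial
preimage:)

    import math
    print(math.log2(1.7e15 * 600))  # ~59.8 bits of work every 10 minutes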



I might be able to differentiate the preimage / collision / 2nd preimage 
stuff here if I thought about it for a long time ... but even if I could, I 
would have no confidence that I'd got it right.  Or, more importantly, 
that my design gets it right in the future.


And as we're dealing with money, I'd *want to get it right*.  I'd 
actually be somewhat happier if the hash had a clear number of 128.




Now obviously 128-bits is another scale, but MD5 is old, broken, and there
may be partial weakenings along the way.  eg say the design aim of 128 slips
towards 80 (in another couple of decades of computing progress).  Why design
in a problem for the future when we KNOW -- and just spent a huge thread on
this list discussing -- that it's very hard to remove or upgrade algorithms
from deployment.  Even MD5 is still in the field.


Um.  Seems like this argument only works if people drop in SHA3 without 
being aware of the subtle switch in preimage protection, *and* they 
designed for it earlier on.  For my money, let 'em hang.



Is there a clear work-around proposed for when you do need 256?  (Some
composition mode or parameter tweak part of the spec?)



Use SHA3-512 or SHA3-384?

What is the preimage protection of SHA3-512 when truncated to 256?  It 
seems that SHA3-384 still gets 256.
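
(Plugging the truncation question into the same generic sponge arithmetic,
where preimage work for a truncated n-bit output is min(n, c/2):)

    print(min(256, 1024 // 2))  # SHA3-512 (c=1024) truncated to 256: 256 bits
    print(min(256,  768 // 2))  # SHA3-384 (c=768)  truncated to 256: 256 bits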


And generally, where does one go to add one's vote to the protest against
weakening the 2nd-preimage property?



For now, refer to Congress of the USA, it's in Washington DC. 
Hopefully, it'll be closed soon too...




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 19:06 PM, John Kelsey wrote:

Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protcol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.

Thoughts?
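
(For concreteness, a minimal Python sketch of steps a-d, assuming the
pyca/cryptography library; SHAKE256 stands in for the quoted SHAKE512,
which is not one of the standard XOFs:)

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    # (a)/(b) each side generates an ephemeral key pair on P-256
    a_priv = ec.generate_private_key(ec.SECP256R1())
    b_priv = ec.generate_private_key(ec.SECP256R1())

    # (c) both derive the shared secret abG, stretched into two AES keys
    shared = a_priv.exchange(ec.ECDH(), b_priv.public_key())
    okm = hashlib.shake_256(shared).digest(32)
    key_ab, key_ba = okm[:16], okm[16:]

    # (d) the per-direction sequence number doubles as the CCM nonce
    seq = 1
    nonce = seq.to_bytes(13, "big")      # AES-CCM takes 7..13 byte nonces
    ct = AESCCM(key_ab).encrypt(nonce, b"hello", None)
    assert AESCCM(key_ab).decrypt(nonce, ct, None) == b"hello"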



What's your goal?  I would say you could do this if the goal was 
ultimate security.  But for most purposes this is overkill (and I'd 
include online banking, etc, in that).


Right now we've got a TCP startup, and a TLS startup.  It's pretty 
messy.  Adding another startup inside isn't likely to gain popularity.


(Which is one thing that suggests a redesign of TLS -- to integrate it 
back into the IP layer and replace/augment TCP directly.  Back in those days 
we -- they -- didn't know enough to do an integrated security protocol. 
But these days we do, I'd suggest, or at least we know enough to give it a try.)


iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 08:41 AM, Bill Frantz wrote:


We should try to characterize what a very long time is in years. :-)



Look at the product life cycle for known crypto products.  We have some 
experience of this now.  Skype, SSL v2/v3 -> TLS 1.0/1.1/1.2, SSH 1 -> 2, 
PGP 2 -> 5+.


As a starting point, I would suggest 10 years.

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 17:58 PM, Salz, Rich wrote:

TLS was designed to support multiple ciphersuites. Unfortunately this opened 
the door
to downgrade attacks, and transitioning to protocol versions that wouldn't do 
this was nontrivial.
The ciphersuites included all shared certain misfeatures, leading to the 
current situation.


On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.



That same regulator that asked for that capability is somewhat prominent 
in the current debacle.


Feature or bug?



Sometimes half a loaf is better than nothing.



A shortage of bread has been the inspiration for a few revolutions :)

iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-05 Thread ianG

On 4/10/13 11:17 AM, Peter Gutmann wrote:


Trying to get back on track, I think any attempt at TLS 2 is doomed.  We've
already gone through, what, about a million messages bikeshedding over the
encoding format and have barely started on the crypto.  Can you imagine any
two people on this list agreeing on what crypto mechanism to use?  Or whether
identity-hiding (at the expense of complexity/security) should trump
simplicity/security (at the expense of exposing identity information)?



Au contraire!  I think what we have shown is that the elements in 
dispute must be found in the competition.  Not specified beforehand.


Every proposal must include its own encoding, its own crypto suite(s), 
its own identity-hiding, and dollops and dollops of simplicity.


Let the games begin!

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-05 Thread ianG

On 2/10/13 00:16 AM, James A. Donald wrote:

On 2013-10-02 05:18, Jerry Leichter wrote:

To be blunt, you have no idea what you're talking about. I worked at
Google until a short time ago; Ben Laurie still does. Both of us have
written, submitted, and reviewed substantial amounts of code in the
Google code base. Do you really want to continue to argue with us
about how the Google Style Guide is actually understood within Google?


The Google style guide, among other things, prohibits multiple direct
inheritance and operator overloading, except where the STL makes you do
operator overloading.



I do similar.  I prohibit reflection and serialization in java.  In C I 
used to prohibit malloc().



Thus it certainly prohibits too-clever code.  The only debatable
question is whether protobufs, and much of the rest of the old codebase,
is too-clever code -- and it is certainly a lot more clever than operator
overloading.


protobufs I would see as just like any external dependency -- trouble, 
and not good for security.  Like say an external logger or IPC or crypto 
library.  It would be really nice to eliminate these things but often 
enough one can't.


On the other hand, if you are not so fussed about security, then it is 
probably far better to use protobufs to stop the relearning cycle and 
reduce the incompatibility bugs across a large group of developers.




Such prohibitions also would prohibit the standard template library,
except that that is also grandfathered in, and they prohibit ATL and WTL.

The style guide is designed for an average and typical programmer who is
not as smart as the early Google programmers.  If you prohibit anything
like WTL, you prohibit the best.


Right.  Real world is that an org has to call on the talents of a 
variety of programmers, high-end *and* aspirational, both.  So one tends 
to prohibit things that complicate the code for the bulk, and one tends 
to encourage tools that assist the majority.


I'd probably encourage things like protobufs for google.  They have a 
lot of programmers, and that tends to drive the equation more than other 
considerations.




Prohibiting programmers from using multiple inheritance is like the BBC
prohibiting the word literally instead of mandating that it be used
correctly.  It implies that the BBC does not trust its speakers to
understand the correct use of literally, and Google does not trust its
programmers to understand the correct use of multiple direct inheritance.



I often wish I had some form of static multiple inheritance in Java...



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-03 Thread ianG

On 2/10/13 17:46 PM, John Kelsey wrote:

Has anyone tried to systematically look at what has led to previous crypto 
failures?


This has been a favourite topic of mine, ever since I discovered that 
the entire foundation of SSL was built on theory, never confirmed in 
practice.  But my views are informal, never published nor systematic. 
Here's a history I started for risk management of CAs, informally:


http://wiki.cacert.org/Risk/History

But I don't know of any general history of internet protocol breaches.




That would inform us about where we need to be adding armor plate.  My 
impression (this may be the availability heuristic at work) is that:



a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.



Most attacks go around the protocol -- cryptography is bypassed, not 
penetrated, as Adi so eloquently put it.  Then, of the rest, most go 
against the outer software engineering layers.  Attacks become less and 
less frequent as we peel the onion to get to the crypto core.  However, it 
would be good to see an empirical survey of these failures, in order to 
know if my picture is accurate.




b.  Overemphasis on performance (because it's measurable and security usually 
isn't) plays really badly with having stuff be impossible to get out of the 
field when it's in use.  Think of RC4 and DES and MD5 as examples.



Yes.  Software engineers are especially biased by this issue.  Although 
it rarely causes a breach, it more often distracts attention from what 
really matters.



c.  The ways I can see to avoid problems with crypto primitives are:

(1)  Overdesign against cryptanalysis (have lots of rounds)



Frankly, I see this as a waste.  The problem with rounds, and analysis of 
same, is that it isn't just one algorithm, it's many.  Which means you 
are overdesigning for many algorithms, which means ... what?


It is far better to select a target such as 128 bit security, and then 
design each component to meet this target.  If you want overdesign, 
then up the target to 160 bits, etc.  And make all the components 
achieve this.


The papers and numbers shown on keylength.com provide the basis for 
this.  It's also been frequently commented that the NSA's design of 
Skipjack was balanced this way, and that's how they like it.
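
(An indicative balanced suite at the 128-bit target, of the kind
keylength.com summarises; the asymmetric figures vary by report, so treat
them as rough:)

    # rough component sizes for a balanced 128-bit security target
    BALANCED_128 = {
        "block cipher":      "AES-128",
        "hash (collisions)": "SHA-256",
        "EC group":          "256-bit curve",
        "RSA/DH modulus":    "~3072 bits",
    }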


Also note that the black-box effect in crypto protocols is very 
important.  Once we have the black box (achieved par excellence by block 
ciphers and MDs) we can then concentrate on the protocol using those 
boxes.  Which is to say that, because crypto can be black-boxed, then 
security protocols are far more of a software engineering problem than 
they are a crypto problem.  (As you know, the race is now on to develop 
an AE stream black box.)


Typically then we model the failure of an entire black box as if it becomes 
totally transparent, rather than merely weak.  For example, in my 
payments work, I ask: what happens if my AES128 fails?  Well, because 
all payments are signed by RSA2048, the attacker can simply read the 
payments, but cannot make or inject payments.  And the converse.


This software engineering approach dominates questions such as whether AES 
is at the 128 level or the 96 level, as it covers more attack surface area 
than the bit strength question.




(2)  Overdesign in security parameters (support only high security levels, use 
bigger than required RSA keys, etc.)



As above.  Perhaps the reason why I like a balanced approach is that, by 
the time that some of the components have started to show their age (and 
overdesign is starting to look attractive in hindsight) we have moved on 
*for everything*.


Which is to say, it's time to replace the whole darn lot, and no 
overdesign would have saved us.  E.g., look at SSL's failures.  All 
(most?) of them were design flaws arising from complexity; none of them 
could have been saved by overdesign in terms of rounds or params.


So, overdesign can be seen as a sort of end-of-lifecycle bias of hindsight.



(3)  Don't accept anything without a proof reducing the security of the whole 
thing down to something overdesigned in the sense of (1) or (2).



Proofs are ... good for cryptographers :)  As I'm not, I can't comment 
further (nor do I design to them).




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-03 Thread ianG
I know others have already knocked this one down, but we are now in an 
area where conspiracy theories are real, so for avoidance of doubt...



On 2/10/13 00:58 AM, Peter Fairbrother wrote:

AES, the latest-and-greatest block cipher, comes in two main forms -
AES-128 and AES-256.

AES-256 is supposed to have a brute force work factor of 2^256  - but we
find that in fact it actually has a very similar work factor to that of
AES-128, due to bad subkey scheduling.


This might relate to the related-key discoveries in 2009.  Here's an 
explanation from Dani Nagy that might reach the non-cryptographer:


http://financialcryptography.com/mt/archives/001180.html



Thing is, that bad subkey scheduling was introduced by NIST ... after
Rijndael, which won the open block cipher competition with what seems to
be all-the-way good scheduling, was transformed into AES by NIST.


So, why did NIST change the subkey scheduling?



I don't think they did.  Our Java code was submitted as part of the 
competition, and it only got renamed after the competition.  No crypto 
changes that I recall.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ised

2013-10-03 Thread ianG

On 3/10/13 00:37 AM, Dave Horsfall wrote:

On Wed, 2 Oct 2013, Jerry Leichter wrote:


Always keep in mind - when you argue for easy readability - that one
of COBOL's design goals was for programs to be readable and
understandable by non-programmers.


Managers, in particular.



SQL, too, had that goal.  4GLs (remember them?).  XML.  Has it ever worked?



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-10-02 Thread ianG

On 2/10/13 00:43 AM, James A. Donald wrote:

On 2013-10-01 14:36, Bill Stewart wrote:

It's the data representations that map them into binary strings that
are a
wretched hive of scum and villainy, particularly because you can't
depend on a
bit string being able to map back into any well-defined ASN.1 object
or even any limited size of ASN.1 object that won't smash your stack
or heap.
The industry's been bitten before by a widely available open source
library
that turned out to be vulnerable to maliciously crafted binary strings
that could be passed around as SNMP traps or other ASN.1-using messages.

Similarly, PGP's most serious security bugs were related to
variable-length binary representations that were trying to steal bits
to maximize data compression at the risk of ambiguity.
Scrounging a few bits here and there just isn't worth it.



This is an inherent problem, not with ASN.1, but with any data
representation that can represent arbitrary data.



Right.  I see the encoding choice as both integral to any proposal, and 
a very strong design decision.


I would fail any proposal that used some form of external library like 
ASN.1, XML, JSON, YAML, pb, Thrift, etc, that was clearly not suited for 
the purpose of security.  I would give a thumbs-up to any proposal that 
created its own tight custom definition.




The decoder should only be able to decode the data structure it expects,
that its caller knows how to interpret, and intends to interpret.
Anything else should fail immediately.  Thus our decoder should have
been compiled from, a data description, rather than being a general
purpose decoder.



This is why I like not using a decoder.  My requirement is that I read 
exactly what I expect, check it for both syntax & semantics, and move 
on.  There should be no intervening lazy compilation steps to stop the 
coder seeing the entire picture.


Another problem with decoders is that you need a language.  So that 
makes two languages -- the primary one and the layout.  Oops.  Have you 
noticed how these languages start off simple and get more and more 
complicated, as they try to do what the primary could already do?


The end result is no savings in coding, split sanity & semantics 
checking, added complexity and less security.  For every element you 
need to read, you need a line of code either way you do it, so it may as 
well be in the primary language, and then you get the security and the 
full checking capability for free.




Thus sender and receiver should have to agree on the data structure for
any communication to take place, which almost automatically gives us a
highly compressed format.

Conversely, any highly compressed format will tend to require and assume
a known data structure.

The problem is that we do not want, and should not have, the capacity to
send a program an arbitrary data structure, for no one can write a
program that can respond appropriately to an arbitrary data structure.



Right.  To solve this, we would generally know what is to come, and we 
would signal that the exact expected thing is coming.


Following-data-identification is the one problem I've not seen an 
elegant solution to.  Tagging is something that lends itself to some 
form of hierarchical or centralised solution.  I use a centralised file 
with numbers and classes, but there are many possibilities.


If I was to do it, for TLS2, I'd have a single table containing the 
mapping of all things.  It would be like (off the top of my head):



1  compactInt
2  byteArray
3  bigInt
4  booleans

20 secret key packet
21 hash
22 private key packet
23 public key packet
24 hmac

40 Hello
41 Hiya
42 Go4It
43 DataPacket
44 Teardown


I don't like it, but I've never come across a better solution.
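
(A minimal sketch of how a single tag table plays out on the wire; the
tag/length/payload framing here is hypothetical, purely for illustration:)

    import struct

    TAGS = {1: "compactInt", 2: "byteArray", 21: "hash", 40: "Hello"}

    def encode(tag, payload: bytes) -> bytes:
        # one byte of tag, two bytes of length, then the payload
        return struct.pack(">BH", tag, len(payload)) + payload

    def decode(expected_tag, buf: bytes) -> bytes:
        tag, length = struct.unpack(">BH", buf[:3])
        # the reader states what it expects; anything else fails hard
        if tag != expected_tag or len(buf) != 3 + length:
            raise ValueError("unexpected tag or length")
        return buf[3:]

    wire = encode(21, b"\x00" * 32)        # a 'hash' element
    assert decode(21, wire) == b"\x00" * 32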




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-10-02 Thread ianG

On 1/10/13 23:13 PM, Peter Fairbrother wrote:
...

Sounds like you want CurveCP?

http://curvecp.org/




Yes, EXACTLY that.  Proposals like CurveCP.



I have said this first part before:

Dan Boneh was talking at this years RSA cryptographers track about
putting some sort of quantum-computer-resistant PK into browsers - maybe
something like that should go into TLS2 as well?



I would see that as optional.  If a designer thinks it can be done, go 
for it.  Let's see what the marketplace decides.




We need to get the browser makers - Apple, Google, Microsoft, Mozilla -
and the webservers - Apache, Microsoft, nginx - together and get them to
agree we must all implement this before writing the RFC.



Believe me, that way is a disaster.

The first thing that happens is someone says, "let's get together and 
we'll fix this.  Guys, we can do this!"


The second thing that happens is they form a committee.  Then the 
companies insist that only their agenda be respected.


End of (good) story, start of rort.



Also, the banks and the CA's should have an input. But not a say.



I'm sorry, this is totally embarrassed by history.  The CAs have *all* 
the say, the vendors are told what to say by the CAs.  The banks have 
*none* of the say.  We can see this from the history of CABForum, which 
started out as I suggested above.


(The users were totally excluded from CABForum.  Then about 2 years 
back, after they had laid out the foundation and screwed the users 
totally, they invented some sort of faux figurehead user representation. 
 I never followed it after they announced their intent to do a facade.)




More rules:

IP-free, open source code,



Patent free or free licences provided, yes.


no libraries (*all* functions internal to each suite)


Fewest dependencies.


a compiler which gives repeatable binary hashes so you can verify binary
against source.


Note to Microsoft - open source does not always mean free. But in this
case it must be free.



Maximum of four crypto suites.



3 too many!


Each suite has fixed algorithms, protocols, key and group sizes etc.



I agree with that.  You'll find a lot of people don't agree with the key 
size being fixed, and people like NIST love yanking the chain by 
insisting on upping the numbers to some schedule.


But that resistance is somewhat of an RSA hangover; if the one 
cryptosuite is based on EC then there is more chance of it being fixed 
to one size.




Give them girls' names, not silly and incomplete crypto names - This
connection is protected by Alice.



:)


Ability to add new suites as secure browser upgrade from browser
supplier. ?New suites must be signed by working group?. Signed new
suites must then be available immediately on all platforms, both browser
and webserver.



And that opens Pandora's box.  It requires a WG.  I have a vanity need. 
Trouble begins...




Separate authentication and sessionkeysetup keys mandatory.



I like it, but DJB raises a good point:  if EC is fast enough, there may 
be scope to eliminate some of the phases.



Maybe use existing X.509? but always for authentication only, never
sessionkeysetup.



I see this as difficult.  A lot of the problems in the last lot happened 
because the institutions imposed X.509 over everything.  I see the same 
problem with the anti-solution, which is passwords.


How the past is rectified and future auth needs are handled will be part 
of what makes a winning solution the winner.




No client authentication. None. Zero.



That won't get very far.  We need client auth for just about everything.

The business about privacy is totally dead;  sophisticated websites are 
slopping up the id info regardless of the auth.  Privacy isn't a good 
reason to drop client-side auth.


(Which isn't to say privacy isn't a requirement.)



That's too hard for an individual to manage - remembering passwords or
whatever, yes, global authentication, no. That does not belong in TLS.

I specifically include this because the banks want it, now, in order to
shift liability to their customers.


Well, they want a complete solution.  Not the crapola they have to deal 
with now, where they have to figure out where CIA (confidentiality, 
integrity, availability) stops and where their problems start.



And as to passwords being near end-of-life? Rubbish. Keep the password
database secure, give the user a username and only three password
attempts, and all your GPUs and ASIC farms are worth nothing.
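
(The arithmetic behind that claim, purely illustrative: with online-only
guessing and a lockout, the farms never get to run offline:)

    # 3 online tries against a password drawn from a space of N
    for N in (10**4, 10**6):
        print(N, 3 / N)   # per-account success probability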



So, it seems that there is no consensus on the nature of client auth. 
Therefore I'd suggest we throw the whole question open:  How much auth 
and which auth will be a key telling point.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread ianG

Hi Peter,

On 30/09/13 23:31 PM, Peter Fairbrother wrote:

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.


Given that mostly security works (or it should), what's really important
is where that security fails - and good enough security can drive out
excellent security.



Indeed it can.  So how do we differentiate?  Here are two oft-forgotten 
problems.


Firstly, when systems fail, typically it is the system around the crypto 
that fails, not the crypto itself.  This tells us that (a) the job of 
the crypto is to help the rest of the system to not fail, and (b) near 
enough is often good enough, because the metric of importance is to push 
all likely attacks elsewhere (into the rest of the system).


An alternative treatment is Adi Shamir's 3 laws of security:

http://financialcryptography.com/mt/archives/000147.html

Secondly, when talking about security options, we have to show where the 
security fails.  With history, with evidence -- so we can inform our 
speculations with facts.  If we don't do that, then our speculations 
become received wisdom, and we end up fielding systems that not only are 
making things worse, but are also blocking superior systems from emerging.




We can easily have excellent security in TLS (mk 2?) - the crypto part
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE
isn't say unbreakable for 10 years, far less for a lifetime.



OK, so TLS.  Let's see the failures in TLS?  SSL was running export 
grade for lots and lots of years, and those numbers were chosen to be 
crackable.  Let's see a list of damages, breaches, losses?


Guess what?  Practically none!  There is no recorded history of breaches 
in TLS crypto (and I've been asking for a decade, others longer).


So, either there are NO FAILURES from export grade or other weaker 
systems, *or* everyone is covering them up.  Because of some logic (like 
how much traffic and use), I'm going to plump for NO FAILURES as a 
reasonable best guess, and hope that someone can prove me wrong.


Therefore, I conclude that perfect security is a crock, and there is 
plenty of slack to open up and ease up.  If we can find a valid reason in 
the whole system (beyond TLS) to open up or ease up, then we should do it.




We are only talking about security against an NSA-level opponent here.
Is that significant?



It is a significant question.  Who are we protecting?  If we are talking 
about online banking, and credit cards, and the like, we are *not* 
protecting against the NSA.


(Coz they already breached all the banks, ages ago, and they get it all 
in real time.)


On the other hand, if we are talking about CAs or privacy system 
operators or jihadist websites, then we are concerned about NSA-level 
opponents.


Either way, we need to make a decision.  Otherwise all the other 
pronouncements are futile.




Eg, Tor isn't robust against NSA-level opponents. Is OTR?



All good questions.  What you have to do is decide your threat model, 
and protect against that.  And not flip across to some hypothetical 
received wisdom like "MITM is the devil" without a clear knowledge of 
why you care about that particular devil.




We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Sum (over i) of P_i * I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)
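
(A toy comparison of the two metrics, with numbers invented purely for
illustration -- one high-value user amid a thousand low-value ones:)

    # sum_i P_i * I_i  versus  Users * Protection
    users = [(0.9, 10.0)] + [(0.9, 0.1)] * 1000   # (protection, importance)
    print(sum(p * i for p, i in users))           # Fairbrother's sum: 99.0
    print(len(users) * 0.9)                       # iang's product: 900.9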



No, and you don't know how important your opponent thinks the
information is either, and therefore what resources he might be willing
or able to spend to get access to it



Indeed, so many unknowables.  Which is why a risk management approach is 
to decide what you are protecting against and, more importantly, what you 
are not protecting against.


That results in sharing the responsibility with another layer, another 
person.  E.g., if you're not in the sharing business, you're not in the 
security business.




- but we can make some crypto which
(we think) is unbreakable.



In that lies the trap.  Because we can make a block cipher that is 
unbreakable, we *think* we can make a system that is unbreakable.  No 
such thing follows.  Because we think we can make a system that is 
unbreakable, we talk like we can protect the user unbreakably.  A joke

Re: [Cryptography] TLS2

2013-10-01 Thread ianG

On 1/10/13 02:01 AM, Tony Arcieri wrote:

On Mon, Sep 30, 2013 at 1:02 AM, Adam Back a...@cypherspace.org
mailto:a...@cypherspace.org wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just BNF
format
like the base SSL protocol; encrypt and then MAC only, no
non-forward secret
ciphersuites, no baked in key length limits.  I think I'd also vote
for a
lot less modes and ciphers.  And probably non-NIST curves while
we're at it.


Sounds like you want CurveCP?

http://curvecp.org/




Yes, EXACTLY that.  Proposals like CurveCP.


iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NIST about to weaken SHA3?

2013-10-01 Thread ianG

On 1/10/13 00:21 AM, James A. Donald wrote:

On 2013-10-01 00:44, Viktor Dukhovni wrote:

Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
might one admit the possibility that winning designs in contests
are at times quite conservative and that one can reasonably
standardize less conservative parameters that are more competitive
in software?


less conservative means weaker.

Weaker in ways that the NSA has examined, and the people that chose the
winning design have not.

Why then hold a contest and invite outside scrutiny in the first place?

This is simply a brand new unexplained secret design emerging from the
bowels of the NSA, which already gave us a variety of backdoored crypto.

The design process, the contest, the public examination, was a lie.

Therefore, the design is a lie.




This could be the uninformed opinion over unexpected changes.  It could 
also be the truth.  How then to differentiate?


Do we need to adjust the competition process for a tweak phase?

Let's whiteboard.  Once The One is chosen, have a single round + 
conference where each of the final contestants propose their optimised 
version.  They then vote on the choice.


(OK, we can imagine many ways to do this ... the point being that if NIST 
is going to tweak SHA3 then we need to create a way for them to do 
this, and have that tweaking be under the control of the submitters, not 
NIST itself.  In order to maintain faith in the result.)




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread ianG

On 28/09/13 22:06 PM, ianG wrote:

On 27/09/13 18:23 PM, Phillip Hallam-Baker wrote:


Problem with the NSA is that its Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which
side did the push for EC come from?



What's in Suite A?  Will probably illuminate that question...



Just to clarify my original poser -- which *public key methods* are 
suggested in Suite A?


RSA?  EC?  diversified keys?  Something new?

The answer will probably illuminate what the NSA really thinks about EC.

(As well as get us all put in jail for thought-crime.)



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread ianG

On 29/09/13 16:01 PM, Jerry Leichter wrote:

...e.g., according to Wikipedia, BATON is a block cipher with a key length of 
320 bits (160 of them checksum bits - I'd guess that this is an overt way for 
NSA to control who can use stolen equipment, as it will presumably refuse to 
operate at all with an invalid key). ...



I'm not really understanding the need for checksums on keys.  I can sort 
of see the battlefield requirement that comms equipment that is stolen 
can't then be utilized in either a direct sense (listening in) or 
re-sold to some other theater.


But it still doesn't quite work.  It seems antithetical to NSA's 
obsession with security at Suite A levels; if they are worried about the 
gear being snatched, they shouldn't have secret algorithms in it at all.


Using checksums also doesn't make sense, as once the checksum algorithm 
is recovered, the protection is dead.  I would have thought an HMAC 
approach would be better, but this then brings in the need for a 
centralised key distro approach.  OK, so that is typically how 
battlefield codes work -- one set for everyone -- but I would have 
thought they'd have moved on from the delivery SPOF by now.
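
(A minimal sketch of that HMAC-style check, mirroring BATON's 320-bit key
as 160 key bits plus 160 check bits; the names and framing are
hypothetical:)

    import hmac, hashlib, os

    CHECK_KEY = os.urandom(32)          # burned into the device

    def issue_key() -> bytes:
        k = os.urandom(20)              # 160-bit operating key
        return k + hmac.new(CHECK_KEY, k, hashlib.sha256).digest()[:20]

    def accept(blob: bytes) -> bytes:
        k, tag = blob[:20], blob[20:]
        good = hmac.new(CHECK_KEY, k, hashlib.sha256).digest()[:20]
        if not hmac.compare_digest(tag, good):
            raise ValueError("invalid key, refusing to operate")
        return k

    accept(issue_key())                 # a valid 320-bit blob is accepted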





Cryptographic challenge:  If you have a sealed, tamper-proof box that implements, say, 
BATON, you can easily have it refuse to work if the key presented doesn't checksum 
correctly.  In fact, you'd likely have it destroy itself if presented with too many 
invalid keys.  NSA has always been really big about using such sealed modules for their 
own algorithms.  (The FIPS specs were clearly drafted by people who think in these terms. 
 If you're looking at them while trying to get software certified, many of the provisions 
look very peculiar.  OK, no one expects your software to be potted in epoxy (opaque 
in the ultraviolet - or was it infrared?); but they do expect various kinds of 
isolation that just affect the blocks on a picture of your software's 
implementation; they have no meaningful effect on security, since software, 
unlike hardware, can't enforce any boundaries between the blocks.)

Anyway, this approach obviously depends on the ability of the hardware to 
resist attacks.  Can one design an algorithm which is inherently secure against 
such attacks?  For example, can one design an algorithm that's strong when used 
with valid keys but either outright fails (e.g., produces indexes into 
something like S-boxes that are out of range) or is easily invertible if used 
with invalid keys (e.g., has a key schedule that with invalid keys produces all 
0's after a certain small number of rounds)?  You'd need something akin to 
asymmetric cryptography to prevent anyone from reverse-engineering the checksum 
algorithm from the encryption algorithm, but I know of no fundamental reason 
why that couldn't be done.



It also seems a little overdone to do that in the algorithm.  Why not 
implement a kill switch with a separate parallel system?  If one is 
designing the hardware, then one has control over these things.


I guess then I really don't understand the threat they are trying to 
address here.


Any comments from the wider audience?

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread ianG

On 30/09/13 11:02 AM, Adam Back wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just BNF format
like the base SSL protocol; encrypt and then MAC only, no non-forward
secret
ciphersuites, no baked in key length limits.  I think I'd also vote for a
lot less modes and ciphers.  And probably non-NIST curves while we're at
it. And support soft-hosting by sending the server domain in the
client-hello. Add TOFU for self-signed keys.  Maybe base it on PGP so you
get web of trust, though it started to get moderately complicated to even
handle PGP certificates.



Exactly.  By setting the *high-level* requirements, we can show how real 
software engineering is done.  In small teams.


Personally, I'd do it over UDP (and swing for an IP allocation).  So it 
incorporates the modes of TLS and UDP, both.  Network packets orderable 
but not ordered, responses have to identify their requests.
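
(A minimal sketch of "responses identify their requests": an 8-byte random
id prefixes each packet, so replies can arrive in any order; the framing is
hypothetical:)

    import os

    def make_request(payload: bytes):
        rid = os.urandom(8)              # random request id
        return rid, rid + payload

    def match_response(pending: set, packet: bytes):
        rid, body = packet[:8], packet[8:]
        if rid in pending:               # orderable, not ordered
            pending.discard(rid)
            return body
        return None                      # unknown id: drop it

    rid, wire = make_request(b"hello")
    print(match_response({rid}, wire))   # b'hello'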


One cipher/mode == one AE.  One curve, if the users are polite they 
might get another in v2.1.


Both client and server must have a PP key pair.  Both are used every time 
to start the session, both sides authenticating each other at the key 
level.  Any question of certificates is kicked out to a higher 
application layer, with key-based identities established.





Adam

On Sun, Sep 29, 2013 at 10:51:26AM +0300, ianG wrote:

On 28/09/13 20:07 PM, Stephen Farrell wrote:


b) is TLS1.3 (hopefully) and maybe some extensions for earlier
   versions of TLS as well



SSL/TLS is a history of fiddling around at the edges.  If there is to
be any hope, start again.  Remember, we know so much more now.  Call
it TLS2 if you want.

Start with a completely radical set of requirements.  Then make it so.
There are a dozen people here who could do it.

Why not do the requirements, then ask for competing proposals? Choose
1.  It worked for NIST, and committees didn't work for anyone.

A competition for TLS2 would bring out the best and leave the
bureaurats fuming and powerless.



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] encoding formats should not be committee'ized

2013-09-30 Thread ianG

On 29/09/13 16:13 PM, Jerry Leichter wrote:

On Sep 26, 2013, at 7:54 PM, Phillip Hallam-Baker wrote:

...[W]ho on earth thought DER encoding was necessary or anything other than 
incredible stupidity?...

It's standard.  :-)

We've been through two rounds of standard data interchange representations:

1.  Network connections are slow, memory is limited and expensive, we can't 
afford any extra overhead.  Hence DER.
2.  Network connections are fast, memory is cheap, we don't have to worry about 
them - toss in every last feature anyone could possibly want.  Hence XML.

Starting from opposite extremes, committees of standards experts managed to 
produce results that are too complex and too difficult for anyone to get right 
- and which in cryptographic contexts manage to share the same problem of 
multiple representations that make signing such a joy.

BTW, the *idea* behind DER isn't inherently bad - but the way it ended up is 
another story.  For a comparison, look at the encodings Knuth came up with in 
the TeX world.  Both dvi and pk files are extremely compact binary 
representations - but correct encoders and decoders for them are plentiful.  
(And it's not as if the Internet world hasn't come up with complex, difficult 
encodings when the need arose - see IDNA.)



Experience suggests that asking a standards committee to do the encoding 
format is a disaster.


I just looked at my code, which does something we call Wire, and it's 
700 loc.  Testing code is about a kloc I suppose.  Writing reference 
implementations is a piece of cake.


Why can't we just designate some big player to do it, and follow suit? 
Why argue in committee?




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] TLS2

2013-09-29 Thread ianG

On 28/09/13 20:07 PM, Stephen Farrell wrote:


b) is TLS1.3 (hopefully) and maybe some extensions for earlier
versions of TLS as well



SSL/TLS is a history of fiddling around at the edges.  If there is to be 
any hope, start again.  Remember, we know so much more now.  Call it 
TLS2 if you want.


Start with a completely radical set of requirements.  Then make it so. 
There are a dozen people here who could do it.


Why not do the requirements, then ask for competing proposals?  Choose 
1.  It worked for NIST, and committees didn't work for anyone.


A competition for TLS2 would bring out the best and leave the bureaurats 
fuming and powerless.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread ianG

On 27/09/13 18:23 PM, Phillip Hallam-Baker wrote:


Problem with the NSA is that its Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which
side did the push for EC come from?



What's in Suite A?  Will probably illuminate that question...



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA recommends against use of its own products.

2013-09-26 Thread ianG

On 25/09/13 21:12 PM, Jerry Leichter wrote:

On Sep 25, 2013, at 12:31 PM, ianG i...@iang.org wrote:

...

My conclusion is:  avoid all USA, Inc, providers of cryptographic products.

In favor off ... who?



Ah well, that is the sticky question.  If we accept the conclusion, I 
see these options:


1.  shift to something more open.
2.  use foreign providers.
3.  start writing.
4.  get out of the security game.


We already know that GCHQ is at least as heavily into this monitoring business 
as NSA, so British providers are out.  The French ...


Right, scratch the Brits and the French.  Maybe AU, NZ?  I don't know. 
Maybe the Germans / Dutch / Austrians.



It's a really, really difficult problem.  For deterministic algorithms, in 
principle, you can sandbox ...


If you are referring to testing a provider's product for leaks, I think 
that's darn near impossible.


(If referring to the platform and things like leakage, that is an 
additional/new scope.)



For probabilistic algorithms - choosing a random number is, of course, the 
simplest example - it's much, much harder.  You're pretty much forced to rely 
on some mathematics and other analysis - testing can't help you much.



As I have said, if you care, you write your own collector/mix/DRBG.  If 
not, then you're happy reading /dev/random.
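
(A hedged sketch of the roll-your-own collector/mix idea -- hypothetical
sources, a hash to mix, an XOF to expand; a real collector would pool far
more inputs than these two:)

    import hashlib, os, time

    def collect() -> bytes:
        # hypothetical entropy sources; add many more in practice
        return os.urandom(32) + time.time_ns().to_bytes(8, "big")

    def drbg(n: int) -> bytes:
        seed = hashlib.sha256(collect()).digest()   # mix
        return hashlib.shake_256(seed).digest(n)    # expand

    print(drbg(16).hex())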


(for the rest, all agreed.)



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-26 Thread ianG

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.

We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Sum (over i) of P_i * I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of 
some security product have only the faintest idea what our users are up 
to.  (Some consider this a good thing, it's a privacy quirk.)


With that assumption, the various i's you list become some sort of 
average.  This is why the security model that is provided is typically 
one-size-fits-all, and the most successful products are typically the 
ones with zero configuration and the best fit for the widest market.




Actually it's more complex than that, as the importance isn't a linear
variable, and information isn't either - but there's a start.

Increasing i by increasing users may have little effect on the overall
security, if protecting the information they transmit isn't
particularly valuable.



Right, and you know that, how?

(how valuable each person's info is, I mean.)



And saying that something is secure - which is what people who are not
cryptographers think you are doing when you recommend that something -
tends to increase I_i, the importance of the information to be protected.



2nd order effects from the claim of security, granted.  Which effects 
they are, is again subject to the law of averages.




And if the new system isn't secure against expensive attacks, then
overall security may be lessened by it's introduction. Even if Users are
increased.



Ah, and therein lies the rub.  Maybe.  This doesn't mean it will.

Typically, the fallacy of false sense of security relies on an extremely 
unusual or difficult attack (aka acceptable risk).  And then ramps up 
that rarity to bogeyman status.  So that everyone is scared of it. 
And we must, we simply must, protect people against it!


Get back to science.  How risky are these things?



I have about 30 internet passwords, only three of which are in any way
important to me - those are the banking ones. I use a simple password
for all the rest, because I don't much care if they are compromised.

But I use the same TLS for all these sites.

Now if that TLS is broken as far as likely attacks against the banks go,
I care. I don't much care if it's secure against attacks against the
other sites like my electricity and gas bills.



(You'll see this play out in phishing.  Banks are the number one target 
for attacks on secure browsing.)



I might use TLS a lot more for non-banking sites, but I don't really
require it to be secure for those. I do require it to be secure for
banking.



You are resting on taught wisdom about TLS, which is oriented to a 
different purpose than security.


In practice, a direct attack against TLS is very rare, a direct attack 
against your browser connection to your bank is very rare, and a direct 
attack against your person is also very rare.


This is why, for example, we walk the streets without body armour, even in 
Nairobi (this week) or the Beltway (11 years ago).  This is why there 
are few if any (open question?) reported breaches of banks due to 
BEAST and the other menagerie of attacks against TLS.


We can look at this many ways, but one way is this:  the margin of fat 
in TLS is obscene.  If it were sentient, it would be beyond obese, it 
would be a circus act.  We can do some dieting.




And I'm sure that some people would like TLS to be secure against the
NSA for, oh, let's say 10 years. Which 1024-bit DHE will not provide.



Well, right.  So, as TLS is supposed to be primarily (these days) 
focussed on protecting your bank account access, and as its auth model 
fails dismally when it comes to phishing, why do we care about something 
so exotic as the NSA?


Get back to basics.  Let's fix the TLS so it actually does the client - 
webserver auth problem first.


1024 is good enough for that, for now, but in the meantime prepare for 
something longer.  (We now have evidence of some espionage spear 
phishing that bothered to crunch 512.  Oh happy day, some real evidence!)


As for the NSA, actually, 1024 works fine for that too, for now.  As 
long as we move them from easy decryption to actually having to use a 
lot of big fat expensive machines, we win.  They then have to focus, 
rather than harvest.  Presumably they have not forgotten how to do that.




If you

Re: [Cryptography] RSA recommends against use of its own products.

2013-09-26 Thread ianG

On 26/09/13 02:32 AM, Peter Gutmann wrote:

ianG i...@iang.org writes:


Well, defaults being defaults, we can assume most people have left it in
default mode.  I suppose we could ask for research on this question, but I'm
going to guess:  most.


"Software Defaults as De Facto Regulation: The Case of Wireless APs", Rajiv
Shah and Christian Sandvig, Proceedings of the 33rd Research Conference on
Communication, Information and Internet Policy (TPRC'07), September 2005,
reprinted in Information, Communication, and Society, Vol.11, No.1 (February
2008), p.25.

Peter.




Nice.  Or, as I heard somewhere, "there is only one mode, and it is secure."

http://www-personal.umich.edu/~csandvig/research/Shah-Sandvig--Defaults_as_de_facto_regulation.pdf



Today's internet presumes that individuals are capable of configuring 
software to address issues such as spam, security, indecent content, and 
privacy. This assumption is worrying - common sense and empirical 
evidence state that not everyone is so interested or so skilled. When 
regulatory decisions are left to individuals, for the unskilled the 
default settings are the law. This article relies on evidence from the 
deployment of wireless routers and finds that defaults act as de facto 
regulation for the poor and poorly educated. This paper presents a 
large sample behavioral study of how people modify their 802.11 
('Wi-Fi') wireless access points from two distinct sources. The first is 
a secondary analysis of WifiMaps.com, one of the largest online 
databases of wireless router information. The second is an original 
wireless survey of portions of three census tracts in Chicago, selected 
as a diversity sample for contrast in education and income. By 
constructing lists of known default settings for specific brands and 
models, we were then able to identify how people changed their default 
settings. Our results show that the default settings for wireless access 
points are powerful. Media reports and instruction manuals have 
increasingly urged users to change defaults - especially passwords, 
network names, and encryption settings. Despite this, only half of all 
users change any defaults at all on the most popular brand of router. 
Moreover, we find that when a manufacturer sets a default 96-99 percent 
of users follow the suggested behavior, while only 28-57 percent of 
users acted to change these same default settings when exhorted to do so 
by expert sources. Finally, there is also a suggestion that those living 
in areas with lower incomes and levels of education are less likely to 
change defaults, although these data are not conclusive. These results 
show how the authority of software trumps that of advice. Consequently, 
policy-makers must acknowledge and address the power of software to act 
as de facto regulation.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread ianG

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit 
of a good enough security delivered to more people.


We're talking multiple orders of magnitude here.  The math that counts is:

   Security = Users * Protection.



iang


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA recommends against use of its own products.

2013-09-25 Thread ianG

Hi Jerry,

I appreciate the devil's advocate approach here, it has helped to get my 
thoughts in order!  Thanks!


My conclusion is:  avoid all USA, Inc, providers of cryptographic 
products.  Argumentation follows...



On 24/09/13 19:01 PM, Jerry Leichter wrote:

On Sep 23, 2013, at 4:20 AM, ianG i...@iang.org wrote:

RSA today declared its own BSAFE toolkit and all versions of its
Data Protection Manager insecure...


Etc.  Yes, we expect the company to declare itself near white, and the press to 
declare it blacker than the ace of spades.

Meanwhile, this list is about those who know how to analyse this sort of stuff, 
independently.  So...

Indeed.


...  But they made Dual EC DRBG the default ...


I don't see a lot of distance between choosing Dual_EC as default, and the 
conclusion that BSAFE  user-systems are insecure.

The conclusion it leads to is that *if used in the default mode*, it's (well, 
it *may be*) unsafe.



Well, defaults being defaults, we can assume most people have left it in 
default mode.  I suppose we could ask for research on this question, but 
I'm going to guess:  most.  Therefore we could say that BSAFE is 
mostly unsafe, but as we don't know who is using it in default mode, 
I'm sure most cryptography people would agree that means unsafe, period.




We know no more today about the quality of the implementation than we did 
yesterday.  (In fact, while I consider it a weak argument ... if NSA had 
managed to sneak something into the code making it insecure, they wouldn't have 
needed to make a *visible* change - changing the default.  So perhaps we have 
better reason to believe the rest of the code is OK today than we did 
yesterday.)



Firstly, this is to suggest that quality of implementation is the issue. 
It isn't; the issue is whether the overall result is safe -- to 
end-users.  In this case, it could be fantastic code, but if the RNG is 
spiked, then the fantastic code is approx. worthless.


Reminds me of what the IRA said after nearly knocking off Maggie Thatcher:

Today we were unlucky, but remember we only have to be lucky once.
You will have to be lucky always.

Secondly, or more widely, if the NSA has targeted RSA, then what can we 
conclude about quality of the rest of the implementation?  We can only 
make arguments about the rest of the system if we assume this was a 
one-off.  That would be a surprising thing to assume, given what else we 
know.




The question that remains is, was it an innocent mistake, or were they 
influenced by NSA?

a)  How would knowing this change the actions you take today?



* knowing it was an innocent mistake:  well, everyone makes them, even 
Debian.  So perhaps these products aren't so bad?


* knowing it was an influenced result:   USA corporations are to be 
avoided as cryptographic suppliers.  E.g., JCE, CAPI, etc.


Supporting assumptions:

1. assume the NSA is your threat model.  Once upon a time those 
threatened were a small group of ne'er-do-wells in far-flung wild 
countries with exotic names.  Unfortunately, this now applies to most 
people -- inside the USA, anyone who's facing a potential criminal 
investigation by any of the USA agencies, due to the DEA trick.  So most 
of Wall Street, etc, and anyone who's got assets attachable for ML 
(money laundering) in a post-WoD (War on Drugs) world, etc.  Outside the 
USA, anyone who's 2 handshakes from any ne'er-do-wells.


2. We don't as yet have any such evidence from non-USA corps, do we? 
(But I ain't putting my money down on that...)


3. Where RSA goes, Java's JCE (recall Android) and CAPI follow. 
How far behind are the rest?


http://www.theregister.co.uk/2013/09/19/linux_backdoor_intrigue

4. Actually, we locals on this list already knew this to a reasonable 
suspicion.  But now we have a chain of events that allows a reasonable 
person outside the paranoiac security world to conclude that the NSA has 
corrupted the cryptography delivery from a USA corp.


http://financialcryptography.com/mt/archives/001446.html



b)  You've posed two alternatives as if they were the only ones.  At the time this default was 
chosen (2005 or thereabouts), it was *not* a mistake.  Dual EC DRBG was in a 
just-published NIST standard.  ECC was hot as the best of the new stuff - with 
endorsements not just from NSA but from academic researchers.  Dual EC DRBG came with a self-test 
suite, so could guard itself against a variety of attacks and other problems.  Really, the only 
mark against it *at the time* was that it was slower than the other methods - but we've learned 
that trading speed for security is not a good way to go, so that was not dispositive.



True, 2005 or thereabouts, such a story could be and was told, and we 
can accept for the sake of argument it might not have been a mistake 
given what they knew.


That ended 2007.  RSA was no doubt informed of the results as they 
happened, because they are professionals, now conveniently listed out by 
Matthew Green:


http

Re: [Cryptography] RSA recommends against use of its own products.

2013-09-24 Thread ianG

On 22/09/13 16:43 PM, Jerry Leichter wrote:

On Sep 20, 2013, at 2:08 PM, Ray Dillinger wrote:

More fuel for the fire...

http://rt.com/usa/nsa-weak-cryptography-rsa-110/

RSA today declared its own BSAFE toolkit and all versions of its
Data Protection Manager insecure, recommending that all customers
immediately discontinue use of these products

Wow.  You took as holy writ on a technical matter a pronouncement of the 
general press.



Etc.  Yes, we expect the company to declare itself near white, and the 
press to declare it blacker than the ace of spades.


Meanwhile, this list is about those who know how to analyse this sort of 
stuff, independently.  So...




...  But they made Dual EC DRBG the default ...


I don't see a lot of distance between choosing Dual_EC as default, and 
the conclusion that BSAFE & user-systems are insecure.


The question that remains is, was it an innocent mistake, or were they 
influenced by NSA?


We don't have much solid evidence on that.  But we can join the dots, 
and a reasonable judgement can fill in the missing pieces.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-24 Thread ianG


I think, if we are about redesigning and avoiding the failures of the 
past, we have to unravel the false assumptions of the past...



On 20/09/13 01:21 AM, Phillip Hallam-Baker wrote:
...

Bear in mind that securing financial transactions is exactly what we
designed the WebPKI to do and it works very well at that.



Reasonable people may disagree with that claim.

PKI for the web was designed to secure *one small part* of the financial 
process -- sending credit card numbers over the net.  To secure 
financial transactions without limit, we'd need an end-to-end solution. 
E.g., online banking (which comes much later) requires an 
authentication solution, whose WebPKI offering (the client cert) is 
infamously not used;  and, as a counterpoint, the biggest hacks occur at 
the server, being the large part of financial transactions that 
WebPKI explicitly ignored.


Further, "very well" is a gross exaggeration of marketing proportions. 
In order to say it works "very well" at even its small part of 
protecting access to servers, we'd have to solve the browser 
authentication problem that is the root cause of phishing.  I grant 
that the phishing bug was addressed at a level of PKI-me-harder, but we 
still lack a solution...




Criminals circumvent the WebPKI rather than trying to defeat it. If they
did start breaking the WebPKI then we can change it and do something
different.



Oh, they broke it.  Criminals send an unauthenticated URL and the user 
goes to that URL.  The browser doesn't notice, the user doesn't notice, 
and the implementors conspire not to notice.  WebPKI is totally broken. 
 The fact that the criminals didn't follow the cutesy rules laid out in 
the WebPKI security model is not a circumvention but a breach and an 
excuse -- the rules weren't applicable to the real world.


And, regardless of whether we decide that it is circumvention or breach, 
nothing positive was ever done about it.  So we're left arguing about 
the point of something that is too easy to circumvent and doesn't get 
fixed.  WebPKI is either an historical oddity or an economic drag on 
real security.


(Quite where reasonable people might have a reasonable disagreement is 
where the breach/circumvention is;  that's an argument that will (and 
did) roll on for a decade, which is perhaps why it never gets fixed... 
insert long thread.)




But financial transactions are easier than protecting the privacy of
political speech because it is only money that is at stake. The
criminals are not interested in spending $X to steal $0.5X. We can do
other stuff to raise the cost of attack if it turns out we need to do that.

So I think what we are going to want is more than one trust model
depending on the context and an email security scheme has to support
several.



Yes.  The challenge is to get that into the supply chain.



If we want this to be a global infrastructure we have 2.4 billion users
to support. If we spend $0.01 per user on support, that is $24 million.
It is likely to be a lot more than that per user.

Enabling commercial applications of the security infrastructure is
essential if we are to achieve deployment. If the commercial users of
email can make a profit from it then we have at least a chance to co-opt
them to encourage their customers to get securely connected.



It's either that, or bypass completely.  I agree email looks difficult, 
and the economics suggest bypass not rebuild.




One of the reasons the Web took off like it did in 1995 was that
Microsoft and AOL were both spending hundreds of millions of dollars
advertising the benefits to potential users. Bank America, PayPal etc
are potential allies here.



Curiously (digression), eBay -- PayPal's parent at the time -- bought 
Skype for a secure end-to-end solution to many of these problems.  They 
never capitalised on it.  Did they ever say why?




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread ianG

On 22/09/13 03:07 AM, Patrick Pelletier wrote:

On 9/14/13 11:38 AM, Adam Back wrote:


Tin foil or not: maybe its time for 3072 RSA/DH and 384/512 ECC?


I'm inclined to agree with you, but you might be interested/horrified in
the 1024 bits is enough for anyone debate currently unfolding on the
TLS list:

http://www.ietf.org/mail-archive/web/tls/current/msg10009.html



1024 bits is pretty good, and there's some science that says it's about 
right.  E.g., risk management says there is little point in putting a 
steel door in a wicker frame.


The problem is more to do with distraction than anything else.  It is a 
problem that people will argue about the numbers, because they can 
compare numbers, far more than they will argue about the essentials. 
There is a psychological bias to beat one's chest about how tough one is 
on the numbers, and thus prove one is better at this game than the enemy.


Unfortunately, in cryptography, almost always, other factors matter more.

So, while you're all arguing about 1024 versus 4096, what you're not 
doing is delivering a good system.  That delay feeds in to the customer 
equation, and the result is less security.  Even when you finally 
compromise on 1964.13 bits, the result is still less security, because 
of other issues like delays.




and there was a similar discussion on the OpenSSL list recently, with
GnuTLS getting blamed for using the ECRYPT recommendations rather than
1024:

http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html



Yeah, they are getting confused (compatibility failures) from too much 
choice.  Never a good idea.  Take out the choice.  One number.  Get back 
to work.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-19 Thread ianG

Hi John,

(I think we are in agreement here, there was just one point below where 
I didn't make myself clear.)



On 18/09/13 23:45 PM, John Kemp wrote:

On Sep 18, 2013, at 4:05 AM, ianG i...@iang.org wrote:


On 17/09/13 23:52 PM, John Kemp wrote:

On Sep 17, 2013, at 2:43 PM, Phillip Hallam-Baker hal...@gmail.com



I am sure there are other ways to increase the work factor.


I think that increasing the work factor would often result in
switching the kind of work performed to that which is easier than
breaking secrets directly.



Yes, that's the logical consequence & approach to managing risks. Mitigate the 
attack, to push attention to easier and less costly attacks, and then start working 
on those.

There is a mindset in cryptography circles that we eliminate entirely the 
attacks we can, and ignore the rest.  This is unfortunately not how the real 
world works.  Most of risk management outside cryptography is about reducing 
risks, not eliminating them, and managing the interplay between those reduced 
risks.  Most unfortunate, because it leads cryptographers to strange 
recommendations.


The technical work always needs doing. It's not that we shouldn't do our best 
to improve cryptographic protection. It's more that one can always bypass 
cryptographic protection by getting to the cleartext before it is encrypted.



Right.  So the amount of effort we should put in should not be dictated 
(solely) by received wisdom about perfect security, but (also) by how 
quickly we can push the bulk of the attackers elsewhere.  Thus releasing 
our costly resources for 'elsewhere'.


I wrote about this tradeoff many moons ago.  I called the preferred 
target Pareto-secure as a counterpoint to the expected 100% secure, 
which I defined as a point where there is no Pareto-improvement that can 
be made, because the attacker is already pushed elsewhere.
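
(Illustrative numbers, mine:  if breaking the crypto costs an attacker 
$100,000 and phishing the user costs $100, pushing the crypto attack to 
$1,000,000 is no Pareto-improvement -- the attacker was never going to 
pay either price.  Raising the cost of the $100 attack is the only move 
that changes anything.)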


The other side of the coin is to have a gentler attitude to breaches.

When a breach is announced, we also need to consider whether anyone has 
actually lost anything, and whether the ones that weren't attacked have 
got good service.  A protocol is rarely broken for the user, even if the 
cryptographic world uses the word 'broken' for a few bits.  E.g., if one 
looks at the TLS changes of the last 5 years due to a series of attacks, 
there isn't much of a record of actual hacks to users.




That may be good. Or it may not.



If other attacks are more costly to the defender and easyish for the attacker, then 
perhaps it is bad.  But it isn't really a common approach in our security world 
to leave open the easiest attack, as the best alternative.  Granted, this 
approach is used elsewhere (in warfare for example, minefields and wire will be 
laid to channel the attack).

If we can push an attacker from mass passive surveillance to targeted direct 
attacks, that is a huge win.  The former scales, the latter does not.


My point was that mass passive surveillance is possible with or without 
breaking SSL/TLS (for example, but also other technical attacks), and that it is often 
simpler to pay someone to create a backdoor in an otherwise well-secured system. Or to 
simply pay someone to acquire the data in cleartext form prior to the employment of any 
technical protections to those data. Other kinds of technical protections (not really 
discussed here so far) might be employed to protect data from such attacks, but they 
would still depend on the possibility for an attacker to acquire the cleartext before 
such protections were applied.



To some extent, mass passive surveillance is entirely possible because 
SSL/TLS is so poorly employed.  I haven't looked for a while, but it was 
always about 1% of web traffic.


This is the motive behind HTTPS Everywhere - All The Time.  Let's make 
SSL the norm not the exception.  Then we've got some security against 
passive surveillance, then we force the attacker to other attacks, which 
are typically much more expensive.



I would point out that it was historically the case that the best espionage was achieved 
by paying (or blackmailing) people close to the source of the information to retrieve the 
necessary information. The idea of the mole. That would seem to still be 
possible.





PRISM-Hardening seems like a blunt instrument, or at least one which
may only be considered worthwhile in a particular context (technical
protection) and which ignores the wider context (in which such technical
protections alone are insufficient against this particular adversary).



If I understand it correctly, PRISM is or has become the byword for the NSA's 
vacuuming of all traffic for mass passive surveillance.  In which case, this is 
the first attack of all, and the most damaging, because it is undetectable, 
connects you to all your contacts, and stores all your open documents.

 From the position of a systems provider, mass surveillance is possibly the 
most important attack to mitigate.


If you yourself the systems provider

Re: [Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities

2013-09-18 Thread ianG

On 18/09/13 00:56 AM, John Gilmore wrote:

Forwarded-By: David Farber d...@farber.net
Forwarded-By: Annie I. Anton Ph.D. aian...@mindspring.com

http://www.zdnet.com/nsa-cryptanalyst-we-too-are-americans-720689/

NSA cryptanalyst: We, too, are Americans



Speaking as a non-American, you guys have big problems concerning the 
nexus of cryptography and politics.


...

The rest of this article contains Roger's words only, edited simply for 
formatting.


I really, really doubt that.  I don't really wish to attack the author, 
but the style and phraseology are pure PR.  Ordinary people do not write 
PR.  Nor do they lay out political strategies and refer to their 
commander-in-chief as "the supreme leader".  Nor indeed are employees of 
military and intelligence agencies *permitted to talk to the press* 
unless sanctioned at a high level.



...  Do I, as an American, have any concerns about whether the NSA is 
illegally or surreptitiously targeting or tracking the communications of 
other Americans?


The answer is emphatically, No.


Of course, Americans talking to Americans might be one debate.  But then 
there are Americans talking to the world, and people talking to people.


It should be remembered that espionage is illegal, and the activities of 
the NSA are more or less illegal *outside their borders*.  I give them 
no permission to monitor me or mine, and nor do any of the laws of my 
land(s).


The fact that we cannot stop them doesn't make it any more legal.  The 
fact that there is a gentleman's agreement between countries to look the 
other way doesn't make it any more palatable to us non-gentlepersons 
excluded from the corridors of power.


And all that doesn't make NSA mathematicians any less a partner to the 
activity.  Any intelligence agent is typically controlled and often 
banned from overseas travel, because of the ramifications of this activity.



...


A myth that truly bewilders me is the notion that the NSA could or would spend 
time looking into the communications of ordinary Americans

There's no doubt about it: We all live in a new world of Big Data.



In two paras above, and the next two paras below, this 'mathematician' 
lays the political trap for Americans.  The collection by the federal 
government of data is almost certainly unconstitutional.  Yet, everyone 
acts as if that's ok because ... we live in the new world of Big Data?




Much of the focus of the public debate thus far has been on the amount of data 
that NSA has access to, which I feel misses the critical point.


Unless one subscribes to the plain wording of your (American) 
constitution...




In today's digital society, the Big Data genie is out of the bottle. Every day, 
more personal data become available to individuals, corporations, and the 
government. What matters are the rules that govern how NSA uses this data, and 
the multiple oversight and compliance efforts that keep us consistent with 
those rules. I have not only seen but also experienced firsthand, on a daily 
basis, that these rules and the oversight and compliance practices are 
stringent. And they work to protect the privacy rights of all Americans.


ditto, repeat.

Although, to be honest, we-the-world don't care about it;  the USG's 
temptation to rewrite the constitution in the minds of its subjects is 
strictly a domestic political affair.  For most other countries, the Big 
Data genie is truly out of the bottle, and there's precious little we 
can do about it.


...

As this national dialogue continues, I look to the American people to reach a 
consensus on the desired scope of U.S. intelligence activities


Good luck!


 The views and opinions expressed herein are those of the author and do not 
necessarily reflect those of the National Security Agency/Central Security 
Service.



I seriously doubt that.



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-18 Thread ianG

On 17/09/13 23:52 PM, John Kemp wrote:

On Sep 17, 2013, at 2:43 PM, Phillip Hallam-Baker hal...@gmail.com



I am sure there are other ways to increase the work factor.


I think that increasing the work factor would often result in
switching the kind of work performed to that which is easier than
breaking secrets directly.



Yes, that's the logical consequence & approach to managing risks. 
Mitigate the attack, to push attention to easier and less costly 
attacks, and then start working on those.


There is a mindset in cryptography circles that we eliminate entirely 
the attacks we can, and ignore the rest.  This is unfortunately not how 
the real world works.  Most of risk management outside cryptography is 
about reducing risks, not eliminating them, and managing the interplay 
between those reduced risks.  Most unfortunate, because it leads 
cryptographers to strange recommendations.




That may be good. Or it may not.



If other attacks are more costly to the defender and easyish for the 
attacker, then perhaps it is bad.  But it isn't really a common approach 
in our security world to leave open the easiest attack, as the best 
alternative.  Granted, this approach is used elsewhere (in warfare for 
example, minefields and wire will be laid to channel the attack).


If we can push an attacker from mass passive surveillance to targeted 
direct attacks, that is a huge win.  The former scales, the latter does not.




PRISM-Hardening seems like a blunt instrument, or at least one which
may only be considered worthwhile in a particular context (technical
protection) and which ignores the wider context (in which such technical
protections alone are insufficient against this particular adversary).



If I understand it correctly, PRISM is or has become the byword for the 
NSA's vacuuming of all traffic for mass passive surveillance.  In which 
case, this is the first attack of all, and the most damaging, because it 
is undetectable, connects you to all your contacts, and stores all your 
open documents.


From the position of a systems provider, mass surveillance is possibly 
the most important attack to mitigate.  This is because:  we know it is 
done to everyone, and therefore it is done to our users, and it informs 
every other attack.  For all the other targeted and active attacks, we 
have far less certainty about the targeting (user) and the 
vulnerability (platform, etc).  And they are very costly, by several 
orders of magnitude more than mass surveillance.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread ianG

Hi Bill,



On 17/09/13 01:20 AM, Bill Frantz wrote:


The idea is that when serious problems are discovered with one
algorithm, you don't have to scramble to replace the entire crypto
suite. The other algorithm will cover your tail while you make an
orderly upgrade to your system.

Obviously you want to chose algorithms which are likely to have
different failure modes -- which I why I suggest that RC4 (or an
extension thereof) might still be useful. The added safety also allows
you to experiment with less examined algorithms.




The problem with adding multiple algorithms is that you are also adding 
complexity.  While you are perhaps ensuring against the failure of one 
algorithm, you are also adding a cost of failure in the complexity of 
melding.


E.g., look at the current SSL search for a secure 
ciphersuite (and try explaining it to the sysadmins).  As soon as you add 
an extra algorithm, others are tempted to add their vanity suites, and the 
result is not better but worse.


And, as we know, the algorithms rarely fail.  The NSA specifically 
targets the cryptosystem, not the algorithms.  It also doesn't like 
well-constructed and well-implemented systems.  (So before getting too 
exotic with the internals, perhaps we should get the basics right.)


In contrast to the component duplication approach, I personally prefer 
the layering duplication approach (so does the NSA apparently).  That 
is, have a low-level cryptosystem that provides the base encryption and 
authentication properties, and over that, layer an authorisation layer 
that adds any additional properties if desired (such as superencryption).


One could then choose complementary algorithms at each layer.  Having 
said all that, any duplication is expensive.  Do you really have the 
evidence that such extra effort is required?  Remember, while you're 
building this extra capability, customers aren't being protected at all, 
and are less likely to be so in the future.
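
(A toy sketch of the cascade structure, to make the trade concrete.  The 
keystream generator here is xorshift64 -- my illustrative stand-in, not 
a real cipher -- and note that cascading two XOR-stream layers 
degenerates to XORing the two keystreams;  a real cascade would compose 
distinct constructions.  The point is only the shape:  two independent 
keys, so a total break of one layer still leaves the other standing.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy keystream byte (xorshift64) -- illustrative only, NOT a cipher. */
    static uint8_t ks_byte(uint64_t *s) {
        *s ^= *s << 13;  *s ^= *s >> 7;  *s ^= *s << 17;
        return (uint8_t)(*s >> 56);
    }

    /* XOR a keystream derived from key into buf; the same call decrypts. */
    static void layer(unsigned char *buf, size_t n, uint64_t key) {
        uint64_t s = key;                     /* key schedule: just the seed */
        for (size_t i = 0; i < n; i++)
            buf[i] ^= ks_byte(&s);
    }

    int main(void) {
        unsigned char msg[] = "attack at dawn";
        size_t n = strlen((char *)msg);

        layer(msg, n, 0x1111111111111111ULL); /* layer 1: E1(p)     */
        layer(msg, n, 0x2222222222222222ULL); /* layer 2: E2(E1(p)) */

        /* Decrypt: apply both layers again (XOR is its own inverse). */
        layer(msg, n, 0x2222222222222222ULL);
        layer(msg, n, 0x1111111111111111ULL);
        printf("%s\n", msg);                  /* prints: attack at dawn */
        return 0;
    }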




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread ianG

On 17/09/13 01:40 AM, Tony Arcieri wrote:

On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz fra...@pwpconsult.com
mailto:fra...@pwpconsult.com wrote:

After Rijndael was selected as AES, someone suggested the really
paranoid should super encrypt with all 5 finalists in the
competition. Five level super encryption is probably overkill, but
two or three levels can offer some real advantages.


I wish there was a term for this sort of design in encryption systems
beyond just defense in depth. AFAICT there is not such a term.

How about the Failsafe Principle? ;)




A good question.  In my work, I've generally modelled it such that the 
entire system still works if one algorithm fails totally.  But I don't 
have a name for that approach.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] djb's McBits (with Tung Chou and Peter Schwabe)

2013-09-16 Thread ianG

On 15/09/13 07:17 AM, Tony Arcieri wrote:

... djb is
working on McBits.


McBits: fast constant-time code-based cryptography

Abstract.
This paper presents extremely fast algorithms for code-based
public-key cryptography, including full protection against timing 
attacks. For example, at a 2^128 security level, this paper achieves a 
reciprocal decryption throughput of just 60493 cycles (plus cipher cost 
etc.) on a single Ivy Bridge core. These algorithms rely on an additive 
FFT for fast root computation, a transposed additive FFT for fast 
syndrome computation, and a sorting network to avoid cache-timing attacks.




CHES 2013 was late August, already.  Was anyone there?  Any comments on 
McBits?


(skimming the paper reveals one gotcha -- huge keys, 64k for 2^80 security. 
 I'm guessing that FFT == fast Fourier transform.)


iang




Slides:
http://cr.yp.to/talks/2013.06.12/slides-djb-20130612-a4.pdf
Paper:
http://binary.cr.yp.to/mcbits-20130616.pdf
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-15 Thread ianG

On 15/09/13 00:38 AM, Kent Borg wrote:

On 09/14/2013 03:29 PM, John Denker wrote:



And once we have built such vaguely secure systems, why reject entropy
sources within those systems, merely because you think they look
like squish?  If there is a random component, why toss it out?



He's not tossing it out, he's saying that it is no basis for measurement.

Think of the cryptography worldview -- suppliers of black boxes (MDs, 
encryptions, etc) to the software world are obsessed with the 
properties of the black box, and suppliers want them to be reliable and 
damn near perfect.  No comeback, no liability.


Meanwhile, in the software world, we think very differently.  We want 
stuff that is good enough not perfect.  That's because we know that 
systems are so darn complex that the problems are going to occur 
elsewhere -- either other systems that don't have the cryptographic 
obsession, our own mistakes or user issues.


E.g., SHA1 is close to perfect for almost all software needs, but for 
the cryptographers, it isn't good enough any more!  We must have SHA2, 
SHA3, etc.  The difference for most real software is pretty much like 
how many bit angels can dance on a pinhead.


As John is on the supplier side, he needs a measurement that is totally 
reliable and totally accurate.  Squish must therefore be dropped from 
that measurement.


...

You dismiss things like clock skew, but when I start to imagine ways
to defeat interrupt timing as an entropy source, your Johnson noise
source also fails: by the time the adversary has enough information
about what is going on inside the GHz-plus box to infer precise clock
phase, precise interrupt timing, and how fast the CPU responds...they
have also tapped into the code that is counting your Johnson.



Once the adversary has done that, all bets are off.  The adversary can 
now probably count the key bits in use, and is probably at the point 
where they can interfere at the bit level.


Typically, we don't build designs to that threat model, that way lies 
TPMs and other madness.  In risk terms, we accept that risk, the user 
loses, and we move on.




There are a lot of installed machines that can get useful entropy from
existing sources, and it seems you would have the man who is dying of
thirst die, because the water isn't pure enough.



It is a problem.  Those on the supplier side of the divide cannot 
deliver the water unless it is pure enough.  Those on the builder side 
don't need pure water when everything else is so much sewage.  But oh 
well, life goes on.




Certainly, if hardware manufacturers want to put dedicated entropy
sources in machines, I approve, and I am even going to use rdrand as
*part* of my random numbers, but in the mean time, give the poor servers
a sip of entropy.  (And bravo to Linux distributions that overruled the
purist Linux maintainer who thought no entropy was better than poorly
audited entropy, we are a lot more secure because of them.)



Right.  The more the merrier.



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread ianG

On 14/09/13 18:53 PM, Peter Fairbrother wrote:


But, I wonder, where do these longer equivalent figures come from?



http://keylength.com/ is a better repository to answer your question.
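
For reference, the oft-quoted NIST SP 800-57 equivalences (one of the 
sources keylength.com aggregates) run roughly:

    symmetric bits    RSA/DH modulus    ECC field
          80               1024            160
         112               2048            224
         128               3072            256
         192               7680            384
         256              15360            512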



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-10 Thread ianG

On 10/09/13 06:29 AM, John Kelsey wrote:

  But I am not sure how much it helps against tampered chips.  If I can tamper 
with the noise source in hardware to make it predictable, it seems like I 
should also be able to make it simulate the expected behavior.  I expect this 
is more complicated than, say, breaking the noise source and the internal 
testing mechanisms so that the RNG outputs a predictable output stream, but I 
am not sure it is all that much more complicated.  How expensive is a 
lightweight stream cipher keyed off the time and the CPU serial number or some 
such thing to generate pseudorandom bits?  How much more to go from that to a 
simulation of the expected behavior, perhaps based on the same circuitry used in 
the unhacked version to test the noise source outputs?



The question of whether one could simulate a raw physical source is 
tantalising.  I see diverse opinions as to whether it is plausible, and 
thinking about it, I'm on the fence.


I'd say it might be an unstudied problem -- for us.  It's sounding like 
an interesting EE/CS project, masters or PhD level?


If anyone has studied it, I'd bet fair money that the NSA has.
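
(To put a number on the cost half of Kelsey's question above:  a sketch 
of the "lightweight stream cipher keyed off the time and the CPU serial 
number".  The mixer is splitmix64 and the serial is invented -- both my 
assumptions -- but it is a handful of multiplies and shifts per output, 
i.e. nearly free, which is rather the point.)

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define CPU_SERIAL 0x0123456789abcdefULL   /* hypothetical serial */

    /* splitmix64: a stand-in for the lightweight cipher in the question. */
    static uint64_t next(uint64_t *state) {
        uint64_t z = (*state += 0x9e3779b97f4a7c15ULL);
        z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
        z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
        return z ^ (z >> 31);
    }

    int main(void) {
        /* Keyed off boot time and the serial:  anyone who knows both can
         * replay every "random" bit this device ever emits. */
        uint64_t state = (uint64_t)time(NULL) ^ CPU_SERIAL;
        for (int i = 0; i < 4; i++)
            printf("%016llx\n", (unsigned long long)next(&state));
        return 0;
    }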

iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-10 Thread ianG

On 11/09/13 01:36 AM, Jerry Leichter wrote:

(Generating a different one for this purpose is pointless - it would have to be 
random, in which case you might as well generate the IV randomly.)



In a protocol I wrote with Zooko's help, we generate a random IV0 which 
is shared in the key exchange.


http://www.webfunds.org/guide/sdp/sdp1.html

Then, we also move the padding from the end to the beginning, fill it 
with a non-repeating length-determined value, and expand it to a size of 
16-31 bytes.  This creates what is in effect an IV1 or second 
transmitted IV.


http://www.webfunds.org/guide/sdp/pad.html
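
(A guess at the shape of that, as a sketch:  the rules here -- pad 
length 16 + (msglen mod 16), a leading length byte, a counter as filler 
-- are my reading of "non-repeating length-determined value", not the 
actual SDP1 rules;  pad.html above is authoritative.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Prepend a 16..31 byte pad:  the first byte encodes the pad length
     * so the receiver can skip it;  the rest is a non-repeating counter. */
    static size_t prepad(uint8_t *out, const uint8_t *msg, size_t n) {
        size_t pad = 16 + (n % 16);          /* length-determined: 16..31 */
        out[0] = (uint8_t)pad;
        for (size_t i = 1; i < pad; i++)
            out[i] = (uint8_t)(pad + i);     /* non-repeating filler */
        memcpy(out + pad, msg, n);
        return pad + n;
    }

    int main(void) {
        const uint8_t msg[] = "hello";
        uint8_t buf[64];
        size_t total = prepad(buf, msg, 5);
        printf("padded length %zu, pad %u\n", total, buf[0]);
        return 0;
    }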

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-09 Thread ianG

On 9/09/13 06:42 AM, James A. Donald wrote:

On 2013-09-09 11:15 AM, Perry E. Metzger wrote:

Lenstra, Heninger and others have both shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.


Real world experience is that there is nothing to worry about /if you do
it right/.  And that it is frequently not done right.

When you screw up AES or such, your test vectors fail, your unit test
fails, so you fix it, whereas if you screw up entropy, everything
appears to work fine.



Precisely.


It is hard, perhaps impossible, to have a test suite that makes sure that
your entropy collection works.

One can, however, have a test suite that ascertains that on any two runs
of the program, most items collected for entropy are different except
for those that are expected to be the same, and that on any run, any
item collected for entropy does make a difference.

Does your unit test check your entropy collection?



When I audited the process for root key ceremony for CAcert, I worried a 
fair bit about randomness.  I decided the entropy was untestable 
(therefore unauditable).


So I wrote a process such that several people would bring their own 
entropy source.  E.g., in the one event, 3 sources were used, by 
independent people on independent machines:


  * I used a SHA-stream of a laptop camera on dark paper [0]
  * Teus used a sound card driver [1]
  * OpenSSL's RNG.

The logic was that as long as one person was honest and had a good 
source, and as long as our mixing was verifiable, the result would be good.


Then, I wrote a small C program to mix it [2];  as small as possible so 
a room full of techies could spend no more than 10 minutes checking it 
on the day [3].
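
(Not the program from the event, but a minimal sketch of the same idea: 
XOR the sources byte-by-byte, so the output is at least as strong as the 
best single input, provided the inputs are independent.  Note ^ not |, 
per footnote [2].)

    #include <stdio.h>

    /* Mix N entropy files by XOR; usage: mix f1 f2 [f3 ...] > mixed */
    int main(int argc, char **argv) {
        if (argc < 3) { fprintf(stderr, "usage: mix f1 f2 [f3...]\n"); return 1; }
        int n = argc - 1;
        FILE *f[n];
        for (int i = 0; i < n; i++)
            if (!(f[i] = fopen(argv[i + 1], "rb"))) return 1;
        for (;;) {
            int acc = 0;
            for (int i = 0; i < n; i++) {
                int c = fgetc(f[i]);
                if (c == EOF) return 0;      /* stop at the shortest source */
                acc ^= c;                    /* ^ not | -- see footnote [2] */
            }
            putchar(acc);
        }
    }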


The output of this was then fed into the OpenSSL script to do the root 
key.  (I'm interested if anyone can spot a flaw in this concept.)




iang



[0] This idea is from Jon Callas, from memory:  the lack of light and 
lack of discrimination between pixels drives the photocells into a 
quantum uncertainty state.

[1] John Denker's sound card driver.
[2] As an amusing sidenote, I accidentally used | to mix the bytes, not 
^.  My eyeball tests passed at 2 sources, but at 3 sources it was 
starting to look decidedly wonky.
[3] It was discussed in the group at the time;  it was advised that the 
output of the mix should be SHA'd, which I eventually agreed with, but I 
don't think I did in the event.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-09 Thread ianG
 to understand the internal combustion engine before they
  can drive their car. The real world doesn’t work that way.



Right.  And the reasons for that failure are well understood, in 
multiple parts:  a. economics, b. architecture, and c. committees & 
standards [4].


Meanwhile, there have been several *successful* deliveries of secure 
person-to-person communications where they have challenged those 
assumptions.




No government conspiracy required.



Absolutely!  Required, no.  But if there is interest in this direction, 
we made it too easy:


http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html



We have seen the enemy and it is...


us,

and in committee, MORE OF US.  In caps, pun not intended :)

The real question, for me, is whether we are less our own enemy 
apart, and more our own enemy when we get together?




Which all is not to say that the IETF people are bad, or easier to trick 
than other engineers, or dishonest or not hard working.  These 
complaints are strawmen.


It is to say that the IETF's long-chosen model of committees does have 
unforeseen consequences.


These consequences have been historically shown to correlate against 
security.  Perhaps only security, perhaps mildly, but the point is that 
there is precious little evidence that they have improved security.


So maybe all we want to say is that it is time for the IETF engineers to 
look at the numbers, and maybe be skeptical about whether the approach 
is generating security for the end users?




iang







[0] I was there in one of the committees for a decade or so (my company 
could only afford one, the OpenPGP one).  It was hard work, and this was 
an easy committee, with no real competition...  I never saw anyone being 
dishonest.  People worked hard.


[1] In the PGP case, I think it would, in the end, have been far better 
if Jon had just written the whole thing himself and published it as an 
informational draft.  We would have saved 9 of 10 years;  time that 
could have been spent on better UI integration.


[2] perhaps because their personal interests take them elsewhere on a 
learning path, they hop in to learn, then hop off.


[3] consider the disastrous counterpoint of CABForum, the committee for 
the security of the PKI revenue stream.


[4] a. the economics trap of "free" and "open to access".  If, e.g., 
either of these things didn't exist, spam wouldn't exist.
b. Email architecture is impractical to secure.  It's in the "too hard" 
basket, IMHO.  Too much metadata, too broad a standards approach over 
too many systems.
c. S/MIME was a product of a standards committee, and the result is 
perhaps the best example of how not to do things.  The major email 
vendors all purchased the standards committee approach, again a 
reflection of established and mandated barriers to entry.  (Meanwhile, 
no major vendors signed up for OpenPGP, which at least was free to enter.)


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] The One True Cipher Suite

2013-09-09 Thread ianG

On 9/09/13 02:16 AM, james hughes wrote:


I am honestly curious about the motivation not to choose more secure modes that 
are already in the suites?


Something I wrote a bunch of years ago seems apropos, perhaps minimally 
as a thought experiment:




Hypothesis #1 -- The One True Cipher Suite


In cryptoplumbing, the gravest choices are apparently on the nature of 
the cipher suite. To include latest fad algo or not? Instead, I offer 
you a simple solution. Don't.


There is one cipher suite, and it is numbered Number 1.

Ciphersuite #1 is always negotiated as Number 1 in the very first 
message. It is your choice, your ultimate choice, and your destiny. Pick 
well.


If your users are nice to you, promise them Number 2 in two years. If 
they are not, don't. Either way, do not deliver any more cipher suites 
for at least 7 years, one for each hypothesis.


   And then it all went to pot...

We see this with PGP. Version 2 was quite simple and therefore stable -- 
there was RSA, IDEA, MD5, and some weird padding scheme. That was it. 
Compatibility arguments were few and far between. Grumbles were limited 
to the padding scheme and a few other quirks.


Then came Versions 3-8, and it could be said that the explosion of 
options and features and variants caused more incompatibility than any 
standards committee could have done on its own.


   Avoid the Champagne Hangover

Do your homework up front.

Pick a good suite of ciphers, ones that are Pareto-Secure, and do your 
best to make the combination strong [1]. Document the shortfalls and do 
not worry about them after that. Cut off any idle fingers that can't 
keep from tweaking. Do not permit people to sell you on the marginal 
merits of some crazy public key variant or some experimental MAC thing 
that a cryptographer knocked up over a weekend or some minor foible that 
allows an attacker to learn your aunty's birth date after asking a 
million times.


Resist the temptation. Stick with The One.





http://iang.org/ssl/h1_the_one_true_cipher_suite.html
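
(What hypothesis #1 costs in code, sketched over an invented message 
framing:  the whole negotiation collapses to a constant and a rejection 
branch.)

    #include <stdint.h>
    #include <stdio.h>

    enum { SUITE_ONE = 1 };                  /* the one true cipher suite */

    struct hello { uint8_t suite; /* ... keys, nonces ... */ };

    static void on_hello(const struct hello *h) {
        /* No negotiation table, no downgrade path, nothing to tweak. */
        if (h->suite != SUITE_ONE) { fprintf(stderr, "reject\n"); return; }
        printf("suite #1, proceed\n");
    }

    int main(void) {
        struct hello h = { SUITE_ONE };
        on_hello(&h);
        return 0;
    }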
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why are some protocols hard to deploy? (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread ianG

On 8/09/13 21:24 PM, Perry E. Metzger wrote:

On Sat, 07 Sep 2013 18:50:06 -0700 John Gilmore g...@toad.com wrote:

It was never clear to me why DNSSEC took so long to deploy,

[...]

PS:...


I believe you have answered your own question there, John. Even if we
assume subversion, deployment requires cooperation from too many
people to be fast.

One reason I think it would be good to have future key management
protocols based on very lightweight mechanisms that do not require
assistance from site administrators to deploy is that it makes it
ever so much easier for things to get off the ground. SSH deployed
fast because one didn't need anyone's cooperation to use it -- if you
had root on a server and wanted to log in to it securely, you could
be up and running in minutes.



It's also worth remembering that one reason the Internet succeeded was 
that it did not need the permission of the local telcos and the purchase 
of expensive ISO/OSI stuff from the IT companies in order to get up and 
going.


This lesson is repeated over and over again.  Eliminate permission, and 
win.  Insert multiple permission steps and lose.




We need to make more of our systems like that. The problem with
DNSSEC is it is so obviously architecturally correct but so
difficult to do deploy without many parties cooperating that it has
acted as an enormous tar baby.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread ianG

On 8/09/13 16:42 PM, Phillip Hallam-Baker wrote:

Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm, it can be
flawed in ways that would make it unsuited for use as a public key
algorithm. For instance being able to compute the private key from the
public or deduce the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
the search space for brute force search from 128 bits to 64 or only
worked on some messages would be enough leverage for intercept purposes
but make it useless as a public key system.



Thanks.  This far better explains the conundrum.  There is a big 
difference between a conceptual public key algorithm, and one that is 
actually good enough to compete with the ones we typically use.



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 01:51 AM, Peter Gutmann wrote:

ianG i...@iang.org writes:


And, controlling processes is just what the NSA does.

https://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html


How does '(a) Organizations and Conferences' differ from SOP for these sorts
of things?



In principle, it doesn't -- which is why SOPs are the saboteur's tools of 
preference.  They are used against you, as the less experienced people 
can't see the acts behind them [1].


The point is one of degree.  SOPs are there to resolve real disputes. 
They can also be used to cause disputes, and to turn any innocent thing 
into a fight.  So do that, and keep doing that!  Pretty soon the org 
becomes a farce.


In contrast, strong leadership (the chair) knows when to put the lid on 
such trivialities and move on.  So, part of the overall strategy is to 
neutralise the strong chair [2].  As John just reported:


  *  NSA employees participated throughout, and occupied leadership roles
 in the committee and among the editors of the documents

Slam dunk.  If the NSA had wanted it, they would have designed it 
themselves.  The only rational conclusion for their presence is 
sabotage [3].




iang




[0]   SOPs is standard operating procedures.
[1]   This is the flaw in "don't attribute to malice what can be 
explained by incompetence".  Explaining by incompetence does not 
eliminate malice-inspired incompetence.  Remember, we are all 
inoculated against malice, so we prefer to see benign causes.
[2]  this is not to say that committees are ill-intentioned or people 
are bad, but that it only takes a few with malicious intent and 
expertise to bring the whole game to a halt.  Cartels such as IETF WGs 
are fundamentally and inescapably fragile.
[3]  as a sort of summer-flu-shot, I present that document to each new 
board as their SOPs.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 03:58 AM, Jon Callas wrote:


Could an encryption algorithm be explicitly designed to have properties like this?  I 
don't know of any, but it seems possible.  I've long suspected that NSA might want this 
kind of property for some of its own systems:  In some cases, it completely controls key 
generation and distribution, so can make sure the system as fielded only uses 
good keys.  If the algorithm leaks without the key generation tricks leaking, 
it's not just useless to whoever grabs onto it - it's positively hazardous.  The gun that 
always blows up when the bad guy tries to shoot it


We know as a mathematical theorem that a block cipher with a back door *is* a 
public-key system. It is a very, very, very valuable thing, and suggests other 
mathematical secrets about hitherto unknown ways to make fast, secure public 
key systems.



I'm not as yet seeing that a block cipher with a backdoor is a public 
key system, but I really like the mental picture this is trying to create.


In order to encrypt to that system, one needs the key (either one).  If 
everyone has it (either one), the system is ruined.


A public key system is an artifice where one can distribute the public 
key, and not have to worry about the system being ruined;  it's still 
perfectly usable.  Whereas with a symmetric system with two keys, either 
key being distributed ruins the system.


One could argue that the adversary would prefer the cleaner, more 
complete semantics of the public key system -- maybe that is what the 
theorem assumes?  But if I were the NSA I'd be happy with the compromise. 
 I'm good at keeping *my key secret* at least.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] People should turn on PFS in TLS

2013-09-07 Thread ianG

On 6/09/13 21:11 PM, Perry E. Metzger wrote:

On Fri, 6 Sep 2013 18:56:51 +0100 Ben Laurie b...@links.org wrote:

The problem is that there's nothing good [in the way of ciphers]
left for TLS  1.2.


So, lets say in public that the browser vendors have no excuse left
for not going to 1.2.

I hate to be a conspiracy nutter, but it is that kind of week. Anyone
at a browser vendor resisting the move to 1.2 should be viewed with
deep suspicion.

(Heck, if they're not on the government's payroll, then shame on them
for retarding progress for free. They should at least be charging. And
yes, I'm aware many of the people resisting are probably doing so
without realizing they're harming internet security, but we can no
longer presume that is the motive.)

Chrome handles 1.2, there is no longer any real excuse for the others
not to do the same.



The sentiment I agree with.  But the record of such transitions is not good.

E.g., back in September 2009, Ray & Dispensa discovered a serious bug 
with renegotiation in SSL.  According to SSL Pulse, it took until around 
April of this year [0] before 80% of the SSL hosts were upgraded to 
cover the bug.


Which gives us an OODA response loop of around 3-4 years.

And, that was the best it got -- the SSL community actually cared about 
that bug.  It gets far worse in stuff that they consider not to be a 
bug, such as HTTPS Everywhere, TLS/SNI, MD5, browser security fixes for 
phishing, HTTP-better-than-self-signed, HTTPS starting up with its own 
self-signed cert, etc, etc.




iang


[0] it depends on how you measure the 80% mark, though.
PS: More here on OODA loops
http://financialcryptography.com/mt/archives/001210.html
http://financialcryptography.com/mt/archives/001444.html





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread ianG

On 7/09/13 09:05 AM, Jaap-Henk Hoepman wrote:


Public-key cryptography is less well-understood than symmetric-key 
cryptography. It is also tetchier than symmetric-key crypto, and if you pay 
attention to us talking about issues with nonces, counters, IVs, chaining 
modes, and all that, you see that saying that it's tetchier than that is a 
warning indeed.


You have the same issues with nonces, counters, etc. with symmetric crypto so I 
don't see how that makes it preferable over public key crypto.




It's a big picture thing.  At the end of the day, symmetric crypto is 
something that good software engineers can master, and relatively well, 
in a black box sense.  Public key crypto, not so easily;  that requires 
real learning.  I for one am terrified of it.


Therefore, what Bruce is saying is that the architecture should 
recognise this disparity, and try and reduce the part played by public 
key crypto.  Wherever & whenever you can get part of the design over to 
symmetric crypto, do it.  Wherever & whenever you can use the natural 
business relationships to reduce the need for public key crypto, do that 
too!




iang

ps; http://iang.org/ssl/h2_divide_and_conquer.html#h2.4
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 10:15 AM, Gregory Perry wrote:


Correct me if I am wrong, but in my humble opinion the original intent
of the DNSSEC framework was to provide for cryptographic authenticity
of the Domain Name Service, not for confidentiality (although that
would have been a bonus).



If so, then the domain owner can deliver a public key with authenticity 
using the DNS.  This strikes a deathblow to the CA industry.  This 
threat is enough for CAs to spend a significant amount of money slowing 
down its development [0].
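
(That is roughly the DANE/TLSA shape, RFC 6698, published mid-2012.  A 
sketch of such a record, hash value invented:

    _443._tcp.example.com. IN TLSA 3 1 1 6f0ad3...e2c1

i.e. "the key whose SHA-256 digest is thus is the key for this 
service", signed down the DNSSEC chain, with no CA in the loop.)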


How much more obvious does it get [1]?

iang



[0] If one is a finance geek, one can even calculate how much money the 
opponents are willing to spend.
[1] As an aside, NSA/DoD have invested significant capital in the PKI as 
well.  Sufficient that they will be well aligned with the CA mission, 
and sufficient that they will approve of any effort to keep the CAs in 
business.  But this part is far less obvious.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread ianG

On 6/09/13 04:50 AM, Peter Gutmann wrote:

Perry E. Metzger pe...@piermont.com writes:


At the very least, anyone whining at a standards meeting from now on that
they don't want to implement a security fix because it isn't important to
the user experience or adds minuscule delays to an initial connection or
whatever should be viewed with enormous suspicion.



It isn't the whiners that are the NSA plants, but the people behind 
them, egging them on, while also mounting attacks on the competent 
honest ones to confuse and bewilder them.




I think you're ascribing way too much of the usual standards committee
crapification effect to enemy action.



The general process is first to push the group into crap, and then to 
influence it with competence.  In order to influence, the group's own 
competence must be neutralised first.




For example I've had an RFC draft for a
trivial (half a dozen lines of code) fix for a decade of oracle attacks and
whatnot on TLS sitting there for ages now and can't get the TLS WG chairs to
move on it (it's already present in several implementations because it's so
simple, but without a published RFC no-one wants to come out and commit to
it).  Does that make them NSA plants?  There's drafts for one or two more
fairly basic fixes to significant problems from other people that get stalled
forever, while the draft for adding sound effects to the TLS key exchange gets
fast-tracked.  It's just what standards committees do.



And, controlling processes is just what the NSA does.

https://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html

The process of an inside takeover is well known in *certain* circles. 
It only takes one or two very smart competent people to take down an 
entire organisation.  The mechanisms might well be described as 
crapification then exploitation.


This is not to say that the IETF WG chairs are NSA plants, nor that all 
or any particular IETF committee is sunk.  Rather, it is to say that it 
is very difficult to stop a committee being hopeless, and it's rather 
easy to tip a good committee into it.




(If anyone knows of a way of breaking the logjam with TLS, let me know).



In contrast, it is not well known how to repair the damage once done. 
The normal method is to abandon ship, swim away, build another ship with 
1 or 2 others.




Peter.





iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread ianG

On 6/09/13 11:32 AM, ianG wrote:


And, controlling processes is just what the NSA does.

https://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html



Oops, for those unfamiliar with CAcert's peculiar use of secure 
browsing, drop the 's' in the above URL.  Then it will securely load.


(thanks Joe!)


iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-06 Thread ianG

On 6/09/13 20:15 PM, Daniel Veditz wrote:

On 9/6/2013 9:52 AM, Raphaël Jacquot wrote:

To meet today’s PCI DSS crypto standards DHE is not required.


PCI is about credit card fraud.



So was SSL ;-)  Sorry, couldn't resist...



Mastercard/Visa aren't worried that
criminals are storing all your internet purchase transactions with the
hope they can crack it later; if the FBI/NSA want your CC number they
can get it by asking.



That's what the crims do too:  they ask for all the numbers, they don't 
bother much with SSL.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread ianG

On 6/09/13 08:04 AM, John Kelsey wrote:


It is possible Dual EC DRBG had its P and Q values generated to insert a 
trapdoor, though I don't think anyone really knows that (except the people who 
generated it, but they probably can't prove anything to us at this point).  
It's also immensely slower than the other DRBGs, and I have a hard time seeing 
why anyone would use it.  (But if you do, you should generate your own P and Q.)



Think bigger picture, think about the intervention possibilities.

E.g., when the NSA goes to a major commercial supplier who is about to 
ship some product that is SP 800-90, they can agree to indeed do that, 
but switch around to the Dual EC DRBG.  And still maintain their 
standards compliance.  As it is likely a closed source, hush-hush area, 
it can even be done without the adversary (who was once called the 
customer) knowing.




...

Where do the world's crypto random numbers come from?  My guess is
some version of the Windows crypto api and /dev/random
or /dev/urandom account for most of them.


I'm starting to think that I'd probably rather type in the results of
a few dozen die rolls every month in to my critical servers and let
AES or something similar in counter mode do the rest.

A d20 has a bit more than 4 bits of entropy. I can get 256 bits with
64 die rolls, or, if I have eight dice, 16 rolls of the group. If I
mistype when entering the info, no harm is caused. The generator can
be easily tested for correct behavior if it is simply a block cipher.


If you're trying to solve the problem of not trusting your entropy source, this 
is reasonable, but it doesn't exactly scale to normal users.  Entropy 
collection in software is a pain in the ass, and my guess is that the 
overwhelming majority of developers are happy to punt and just use the OS' 
random numbers.



Right.  If you don't care, just use what the OS provides.  /dev/urandom 
or CAPI or whatever.  If you do care, you should implement a 
collector-mixer-DRBG design yourself.
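
(Purely to illustrate the shape -- this sketch is mine, not from the 
thread.  The collector takes manually entered d20 rolls, the mixer 
hashes them into a 256-bit seed, and the DRBG is SHA-256 in counter 
mode, standing in for the 'AES or something similar in counter mode' 
idea above.  Python, standard library only.)

    import hashlib, os

    def mix_seed(die_rolls, stir_in_os_pool=True):
        # Mixer: hash the rolls into a 256-bit seed.  64 d20 rolls carry
        # about 64 * log2(20) ~= 276 bits of entropy, comfortably > 256.
        h = hashlib.sha256()
        h.update(",".join(str(r) for r in die_rolls).encode())
        if stir_in_os_pool:
            h.update(os.urandom(32))   # stirring in the OS pool can't hurt
        return h.digest()

    class CounterDRBG:
        # DRBG: output block i is SHA-256(seed || i), a hash in counter
        # mode.  Easily tested for correct behaviour, as noted above.
        def __init__(self, seed):
            self.seed, self.counter = seed, 0
        def read(self, n):
            out = b""
            while len(out) < n:
                out += hashlib.sha256(self.seed +
                                      self.counter.to_bytes(8, "big")).digest()
                self.counter += 1
            return out[:n]

    rolls = [17, 3, 20, 8] * 16        # stand-in for 64 real die rolls
    drbg = CounterDRBG(mix_seed(rolls))
    print(drbg.read(32).hex())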





iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-06 Thread ianG

On 6/09/13 04:44 AM, Peter Gutmann wrote:

John Kelsey crypto@gmail.com writes:


If I had to bet, I'd bet on bad rngs as the most likely source of a
breakthrough in decrypting lots of encrypted traffic from different sources.


If I had to bet, I'd bet on anything but the crypto.  Why attack when you can
bypass [1].

Peter.

[1] From Shamir's Law [2], crypto is bypassed, not penetrated.
[2] Well I'm going to call it a law, because it deserves to be.
[3] This is a recursive footnote [3].



It looks like it is all of the above.  These are the specific 
interventions I have seen mentioned so far:


* weakened algorithms/protocols for big players (e.g., GSM, Cisco)
* weakening of RNGs
* inside access by 'covert agents' to hand over secrets (e.g., big 4)
* corruption of the standards process (NIST 2006?)
* corruption of certification process (CSC)
* crunching of poor passwords
* black ops to steal keys
* black ops to pervert systems

Which makes sense.  Why would the biggest player do just one thing? 
No, they are going to do everything within their power.  They'll try all 
the tricks.  Why not?  They've got the money...


What is perhaps more interesting is how these tricks interplay with each 
other.  That's something that we'll have trouble seeing and imagining.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Popular curves (was: NSA and cryptanalysis)

2013-09-04 Thread ianG

On 3/09/13 18:13 PM, Phillip Hallam-Baker wrote:


The real issue is that the P-521 curve has IP against it, so if you
want to use freely usable curves, you're stuck with P-256 and P-384
until some more patents expire. That's more of it than 192 bit
security. We can hold our noses and use P-384 and AES-256 for a while.

 Jon


What is the state of prior art for the P-384? When was it first published?

Given that RIM is trying to sell itself right now and the patents are
the only asset worth having, I don't have good feelings on this. Well
apart from the business opportunities for expert witnesses specializing
in crypto.

The problem is that to make the market move we need everyone to decide
to go in the same direction. So even though my employer can afford a
license, there is no commercial value to that license unless everyone
else has access.


Do we have an ECC curve that is (1) secure and (2) has a written
description prior to 1 Sept 1993?



(Not answering your direct question.)  Personally, I was happy to plan 
on using DJB's Curve25519.  He's done the research and says it is good. 
Comments?




Due to submarine patent potential, even that is not necessarily enough
but it would be a start.




iang


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Functional specification for email client?

2013-08-31 Thread ianG

Some comments, only.

On 30/08/13 11:11 AM, Ray Dillinger wrote:



Okay...

User-side spec:

1.  An email address is a short string freely chosen by the email user.
 It is subject to the constraint that it must not match anyone else's
 email address, but may (and should) be pronounceable in ordinary
 language and writable with the same character set that the user uses
 for writing.  They require extension with a domain as current email
 addresses do, but not a domain name in the IETF sense; just a chosen
 disambiguator (from a finite set of a million or so) to make name
 collisions less of a problem.

2.  An email user may have more than one email address.  In fact s/he can
 make up more email addresses at any time.  He or she may choose to
 associate a tagline -- name, handle, slogan or whatever -- with the
 address.



An email user may have one or more identities... (there is confusion 
here between email addresses, keys, chat handles, etc.).




3.  When an email user gets an email, s/he is absolutely sure that it comes
 from the person who holds the email address listed in its from line.
 S/he may or may not have any clue who that person is.  S/he is also
 sure that no one else has seen the contents of the email.  The tagline
 and email address are listed in the from: line.



This requirement is troubling, and it has bedevilled many systems 
because it has artificially locked them into perfect traffic analysis, 
low key agility, poor economics, and messy identity semantics.


It is typically an assumption of the email providers that an email 
address must have a certificate, which allows the certificate to be 
'checked' against the email address.  But this is neither necessary 
nor particularly effective.


A better requirement might be worded:

When a user receives an email, she is sure that it comes from the stated 
identity as found in the address book.  The stated identity may not be 
related to the from line.



4.  A user has an address book.  The address book can be viewed as a
 whole or as seen by just one of the user's email addresses.  IOW, if
 you have an email address that you use for your secret society and a
 different email address that you use for your job, you can choose to
 be one or the other and your address book will reflect only the
 contacts that you have seen from that address or have been visible to
 under that address.

5.  A mail client observes all email addresses that go through it.



all identities (email addresses and/or keys)...



When a user receives mail from someone who has not directly sent them
 mail before, the client opens a visible entry in the address book and
 makes available a record of previous less-direct contacts with that
 address, for example from posts to mailing lists, from CC: lists on
 emails, etc.  The client also makes visible a list of possible contact
 sources; places where the correspondent may have seen the address
 s/he's writing to.  However, often enough, especially with cases where
 it's a scribbled-on-a-napkin address, the client just won't know.

6.  When a user sends mail, s/he knows that no one other than the holder of
 the address/es s/he's sending it to will see the body of the mail, and
 also that the recipient will be able to verify absolutely that the
 mail did in fact come from the holder of the user's address.



This needs to be nailed down, otherwise the system falls into the abyss 
of digital signatures.  What this means (at a lower level) is that every 
mail is digitally signed by the sender.  It needs to be stated that the 
signature of the sender's key means only that the message came from the 
sender's key, and not anything else.  Especially, it is not a signed 
contract, not a non-repudiable bla bla, and is not even a proof that the 
person sent the message (without significant other support).


That is, the digsig is a low-level protocol tool, not a legal digital 
signature.
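
(To pin that down with a sketch -- mine, not part of the spec, and 
assuming the third-party Python 'cryptography' package -- the digsig as 
a low-level protocol tool looks like this:)

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)

    sender_key = Ed25519PrivateKey.generate()
    message = b"From: alice\nTo: bob\n\nlunch at noon?"
    signature = sender_key.sign(message)

    # The verifier learns exactly one thing: this message came from this
    # key.  Nothing here says contract, non-repudiation, or even that a
    # particular person pressed send.
    try:
        sender_key.public_key().verify(signature, message)
        print("came from the sender's key")
    except InvalidSignature:
        print("did not come from this key")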


Also, to bed in a complete understanding, a separate requirement 6.b 
should be added for a second signing process using separate signing 
keys, following the notions expressed in (e.g.) the EU digital signature 
directive or (e.g.) OpenPGP cleartext signing.  However, this should be 
clearly stated as optional; such digital signatures are fraught, and 
if not optional, the system will fail to be implemented and accepted.






iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-08-31 Thread ianG

On 31/08/13 06:10 AM, Aaron Zauner wrote:


On Aug 30, 2013, at 1:17 PM, Jerry Leichter leich...@lrw.com wrote:


So the latest Snowden data contains hints that the NSA (a) spends a great deal of money 
on cracking encrypted Internet traffic; (b) recently made some kind of a cryptanalytic 
breakthrough.  What are we to make of this?  (Obviously, this will all be 
wild speculation unless Snowden leaks more specific information - which wouldn't fit his 
style, at least as demonstrated so far.)


I read that WP report too. IMHO this can only be related to RSA (factorization, 
side-channel attacks).



It's all speculation of course, but that is what it feels like to me. 
An interesting clue from the earlier report is that they aren't there 
yet; they're building towards a capability.  They've figured out some 
way to crack it in theory, and with a big investment they'll get there.


Which suggests a combination of massive crunch power, keys on the margin 
*and* cribs from side-channel attacks.  The bright shiny new 3rd 
division of the NSA is responsible for the side-channel attack.  And it 
was very expensive...  Coincidence?


Or, it could all be fluff, designed to suck money from the cash cow in 
Washington DC.  Many a conman has got rich by claiming some secret 
invention; the investors are the mugs for putting their money in 
without doing the due diligence.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Separating concerns

2013-08-29 Thread ianG

Hi Phill,

On 28/08/13 21:31 PM, Phill wrote:

And for a company it is almost certain that 'secure against intercept by any 
government other than the US' is an acceptable solution.



I think that was acceptable in general up until recently.  But, I 
believe the threat scenario has changed, and for the worse.


The firewall between national intelligence and all-of-government has 
been breached.  It is way beyond leaks, it is now a documented firehose 
with pipelines so well laid that the downstream departments have 
promulgated their deception plans.


And, they told us so.  In the comments made by the NSA, they have very 
clearly stated that if there is evidence of a crime, they will keep the 
data.  The statement they made is a seismic shift;  the NSA is now a 
domestic & criminal intelligence agency.  I suspect the penny has not 
dropped on this shift as yet, but they have said it is so.


In threat & risk terms, it is now reasonable to consider that the USA 
government will provide national intelligence to back up a criminal 
investigation against a large company.  And, it is not unreasonable to 
assume that they will launch a criminal investigation in order to force 
some other result, nor is it unreasonable for a competitor to USA 
commercial interests to be facing a USA supplier backed by leaks.


E.g., Airbus or Huawei or Samsung ...  Or any company that is engaged in 
a lawsuit against the US government.  Or any wall street bank being 
investigated by the DoJ for mortgage fraud, or any international bank 
with ops in the USA.  Or any company in Iran, Iraq, Syria, Afghanistan, 
Pakistan, India, Palestine, ... or gambling companies in the 
Caribbean, Gibraltar, Australia, Britain.  Or any arms deal or energy deal.




(Yes, that makes the task harder.)


iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Petnames Zooko's triangle -- theory v. practice (was Email and IM are...)

2013-08-28 Thread ianG

On 28/08/13 02:44 AM, radi...@gmail.com wrote:

Zooko's triangle, pet names...we have cracked the THEORY of secure naming, just 
not the big obstacle of key exchange.



Perhaps in that sense, I can confirm that we may have an elegant 
theory but practice still eludes us.  I'm working with a design that was 
based on pure petnames & ZT, and it does not deliver as yet.


One part of the problem is that there are too many things demanding 
names, which leads to addressbook explosion.  I have many payment 
vehicles, many instruments, and in a fuller system, many identities, 
each demanding at least one petname.


And so do my many counterparties.  A second part of the problem is that 
petnames are names that I give, myself, to some thing; in some 
definitional sense, I never export my petnames (which is where their 
security comes from).  Meanwhile, the owner of another thing also has a 
name for it which she prefers to communicate about, so it transpires 
that there is a clash between her name and my petname.  To resolve this 
I am exploring the use of nicknames, which are owner-distributed names, 
in contrast to petnames, which are private names.


Which of course challenges the user even more as she now has two 
namespaces of subtle distinction to manage.  Kerckhoffs rolls six times 
in his grave.


Then rises the notion of secured nicknames, as, if Alice can label her 
favourite payment receptacle 'Alice's shop' then so can Mallory.  Doh! 
Introduction can resolve that in theory, but in practice we're right 
back to the world of identity trickery and phishing.  So we need a way 
to securely accept nicknames, deal with clashes, and then preserve that 
security context for the time when someone wishes to pay the real Alice. 
 Otherwise we're back to that pre-security world known as secure browsing.
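
(A toy sketch of the two namespaces, mine and purely illustrative: 
entries keyed by the counterparty's key fingerprint, petnames kept 
private, incoming nicknames accepted but checked for clashes so the 
client can force an introduction.  Python:)

    class AddressBook:
        def __init__(self):
            self.entries = {}   # key fingerprint -> {petname, nickname}

        def accept_nickname(self, fingerprint, nickname):
            # Owner-distributed name: accept it, never assume uniqueness.
            clashes = [f for f, e in self.entries.items()
                       if e["nickname"] == nickname and f != fingerprint]
            self.entries[fingerprint] = {"petname": None,
                                         "nickname": nickname}
            return clashes      # non-empty => surface it, introduce

        def set_petname(self, fingerprint, petname):
            # Private name: never exported, which is where its security
            # comes from.
            self.entries[fingerprint]["petname"] = petname

    book = AddressBook()
    book.accept_nickname("fp:alice", "Alice's shop")
    print(book.accept_nickname("fp:mallory", "Alice's shop"))
    # -> ['fp:alice'], Mallory's clash is caught rather than trusted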


Then, map the privacy question over the above mesh, and we're in a 
traffic analyst's wet dream.  One minor advantage here is that, 
press-wise, we only need to do a little better than Bitcoin, which is no 
high barrier ;)


In sum, I think ZT has inspired us.  It asks wonderfully elegant 
questions, and provides a model to think about the issues.  Petnames and 
related things like capabilities answer a portion of those questions, 
but many remain.  Implementation challenges!




... And I don't think the wider public was concerned/scared enough to care 
before Snowden. Let's hope they care long enough to adopt any viable solutions 
to the problem that might pop up in the wake of all this. The traffic on this 
list the past week is a very welcome thing.



Yes.  I was never scared of the NSA.  But the NSA and the FBI and the 
DEA and every local police force ... that's terrifying.  That's a purer 
essence of terror, far worse than terrorism.  We need a new word.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-27 Thread ianG

On 26/08/13 08:47 AM, Richard Clayton wrote:


Even without the recent uproar over email privacy, at some point, someone was
going to come up with a product along the following lines:  Buy a cheap,
preconfigured box with an absurd amount of space (relative to the huge amounts
of space, like 10GB, the current services give you); then sign up for a service
that provides your MX record and on-line, encrypted backup space for a small
monthly fee.  (Presumably free services to do the same would also appear,
perhaps from some of the dynamic DNS providers.)


Just what the world needs, more free email sending provision!  sigh



Right.  One of the problems with email (as pointed out in OP's original 
post) is that it is free to send *and* it can be sent to everyone.  The 
combination of these two assumptions/requirements is essential for spam.


Chat systems have pretty much killed spam by making it impossible to 
send to everyone.  You need an introduction/invite/process/barrier, first.


This has worked pretty well.  Maybe the writing is on the wall?

Maybe we just need to let email die?

We can move email over to the 'IM technology' layer.  We can retain the 
email metaphor by simply adding it to chat clients, and by adding IM 
technology to existing email clients.  Both clients can allow us to 
write emails and send them, over their known IM channels to known contacts.


Why do we need the 1980s assumption of being able to send freely to 
everyone, anyway?




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: combining entropy

2008-10-25 Thread IanG
Jonathan Katz wrote:
 I think it depends on what you mean by N pools of entropy.


I can see that my description was a bit weak, yes.  Here's a better
view, incorporating the feedback:

   If I have N people, each with a single pool of entropy,
   and I pool each of their contributions together with XOR,
   is that as good as it gets?

My assumptions are:

 * I trust no single person and their source of entropy.

 * I trust at least one person + pool.

 * Entropy by its definition is independent and is private
   (but it is worth stating these, as any leaks will kill us!)

 * Efficiency is not a concern, we just expand the pool size
   (each pool is size X, and the result is size X).

 * The people have ordinary skill.



now to respond to the questions:


1.  I am assuming that at least one pool is good entropy.  This is
partly an assumption of desperation or simplicity.

In practice, no individual (source or person) is trusted at an
isolated level.  But this leads to a sort of circular argument that
says, nobody is trusted.  We can solve this two ways:

I join the circle.  I trust myself, *but* I don't trust
my source of entropy.  So this is still hopeful.

We ensure that there are at least two cartels in the
circle that don't trust each other!  Then, add a dash
of game theory, and the two cartel pools should at
least be independent of each other, and therefore the
result should be good entropy.

I suspect others could more logically arrive at a better assumption,
but for now, the assumption of one trusted person/pool seems to
cover it.

2.  Having thought about Stephan's comment a bit more (because it
arrived first), and a bit more about John D's entropy comments
(because they were precise), it is clear that I need to stress the
privacy / independence criteria, even if they are strictly covered by
the definition of entropy.  Too much of the practical side depends on
ensuring independence of the pools to just lean blithely on the
definitions.  I had missed that dependency.

3.  The proposals on concatenation and cleanup are tempting.  In
Jon's words, they can solve obvious problems.  However, they introduce
the complexity of understanding the cleanup function, and the potential
for failures.  Jack's tradeoffs.  This has made me realise the last
assumption, now added:

   The people have ordinary skill.

Which means they are unable to determine whether a cryptographically
complex cleanup function is indeed cleaning, or not.

Here, then, we reach an obvious limit, in that the people have to be
able to determine that the XOR is doing its job, and they need to be
able to do a bit of research to decide what is their best guess at
their private entropy source.
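
(For concreteness -- a sketch of mine, not from the original exchange 
-- the XOR combiner itself, in Python:)

    def combine(pools):
        # N pools in, one pool out, all of size X.  If at least one pool
        # is full-entropy and independent of the rest, so is the XOR.
        size = len(pools[0])
        assert all(len(p) == size for p in pools)
        out = bytearray(size)
        for pool in pools:
            for i, byte in enumerate(pool):
                out[i] ^= byte
        return bytes(out)

    print(combine([bytes([0x55] * 16),
                   bytes([0xAA] * 16),
                   b"sixteen byte str"]).hex())

Which also bears on the ordinary-skill point: anyone can check that 
this loop does what it claims, which is not true of a cryptographically 
complex cleanup function.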



Thanks to all.

iang


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


combining entropy

2008-10-24 Thread IanG
If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

 * I trust no single source of Random Numbers.
 * I trust at least one source out of all the sources.
 * no particular difficulty with lossy combination.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Lava lamp random number generator made useful?

2008-09-20 Thread IanG
Jerry Leichter wrote:

 At ThinkGeek, you can now, for only $6.99, buy yourself a USB-powered
 mini lava lamp (see http://www.thinkgeek.com/gadgets/lights/7825/). 
 All you need is some way to watch the thing - perhaps a USB camera -
 and some software to extract random bits.  (This isn't *really* a lava
 lamp - the lamp is filled with a fluid containing many small reflective
 plastic chips, lit from below by a small incandescent bulb which also
 generates the heat that keeps the fluid circulating.  From any given
 vantage point, you get flashes as one of the plastic chips gets into
 just the right position to give you a reflected view of the bulb.  These
 should be pretty easy to extract, and should be quite  random.  Based on
 observation, the bit rate won't be very high - a bit every couple of
 seconds - though perhaps you can use cameras at a couple of vantage
 points.  Still, worth it for the bragging rights.)


Does anyone know of a cheap USB random number source?

As a meandering comment, it would be extremely good for us if we had
cheap pocket random number sources of arguable quality [1].

I've often thought that if we had an open source hardware design of
a USB random number generator ... that cost a few pennies to add
onto any other USB toy ... then we could ask the manufacturers to
throw it in for laughs.  Something like a small mountable disk that
returns randoms on every block read, so the interface is trivial.

Then, when it comes time to generate those special keys, we could
simply plug it in, run it, clean up the output in software and use
it.  Hey presto, all those nasty software and theoretical
difficulties evaporate.
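
(A sketch of that clean-up step, with an invented device path -- 
assume the toy mounts as something readable.  Hashing many raw bits 
down to few is the software clean-up; the 16:1 ratio here is arbitrary. 
Python:)

    import hashlib

    DEVICE = "/dev/usb-rng"     # hypothetical; whatever the toy mounts as

    def cleaned_randoms(n_bytes, raw_per_output=512):
        # Whitening: compress each 512 raw bytes of arguable quality
        # into 32 output bytes via SHA-256.
        out = b""
        with open(DEVICE, "rb") as dev:
            while len(out) < n_bytes:
                out += hashlib.sha256(dev.read(raw_per_output)).digest()
        return out[:n_bytes]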

iang

[1] the competitive process and a software clean-up would sort out
any quality issues.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Quiet in the list...

2008-09-06 Thread IanG

Allen wrote:

So I'll ask a question. I saw the following on another list:


   I stopped using WinPT after it crashed too many times.
   I am now using Thunderbird with the Enigmail plugin
   for GPG interface. It works rather flawlessly and I've
   never looked back.
http://pgp.mit.edu:11371/pks/lookup?search=0xBB678C30op=index



Yes, I regard the combination of Thunderbird + Enigmail + GPG as the
best existing solution for secure email.


What does anyone think of the combo?



There are approximately these combos for achieving private 
email for the masses:


1.  GPG command line.  Will never achieve adoption.

2.  GPG + Engimail + Thunderbird.  Will never be totally 
robust because there is too much dependency.  Same as any 
similar combo.  So no widespread adoption.


3.  S/MIME.  Is so badly architected that it can only be 
got going by neutering a lot of the assumptions, and that 
will take years of work.  Assuming everyone agrees, which 
they don't.  No widespread adoption possible.


4.  Skype.  Doesn't do email, but aside from that minor 
character flaw, it cracked everything else.  It's the best 
example of what it should look like.


5.  Browser solutions are cumbersome, but:  Hushmail versus 
gmail.  If google were to adopt hushmail, then this might 
work.  But, they won't.  They've passed the magic point 
where nose-thumbing at the state is part of the b-plan, and 
anyway, they *like* your data.  They are nothing without 
your data.


6.  Start from scratch.  Will take a long time, and only the 
larger projects have the resources to do this.  But, the 
larger projects are typically those that copy others' 
architectures without thought.  See above.




All, strictly IMHO.  YMMV.  Private email for the masses 
isn't really in the foreseeable future.  The future is 
really moving towards newer architectures; email is old and 
tired.  Think mobile, chat and p2p directions.  But those 
guys are quite incremental in their improvements, so it will 
take a while.


(Yes, I know that's not the question you asked :)

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Quiet in the list...

2008-09-06 Thread IanG

Ben Laurie wrote:

IanG wrote:

2.  GPG + Engimail + Thunderbird.  Will never be totally robust because
there is too much dependency.


What does this mean? GPG + Enigmail, whilst not the best architecture I
ever heard of, is a tiny increment to the complexity of Thunderbird.

Are you saying anything other than big software has bugs?



No, interaction between different software packages has 
costs.  When you spend time to load up Thunderbird, then 
load up enigmail, then load up gpg ... this is more work 
than just loading up Tbird and sticking with it.


Then, when a new Thunderbird comes out, you load that up and 
the other packages cease to work.  What do you do?  Wait a 
few months until the others come back into line?  Or stop 
using encrypted email.  The masses do the latter, the geeks 
might do the former.


Most people download one thing and stick to it.  They follow 
the automated upgrades, or don't upgrade at all.  Most 
people have a life other than package management.  These are 
the masses.  For them, the software has to work first time, 
every time, all the time.  And upgrade itself.


These are the target.  Aiming to do security for geeks alone 
is pointless; it just marks us out for special treatment. 
Using gpg is evidence of your guilt.  Using skype is normal; 
it's just the easiest way to chat and phone.





iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Can we copy trust?

2008-06-03 Thread IanG

Ed Gerck wrote:

Bill Frantz wrote:

[EMAIL PROTECTED] (Ed Gerck) on Monday, June 2, 2008 wrote:

To trust something, you need to receive information from sources 
OTHER than the source you want to trust, and from as many other 
sources as necessary according to the extent of the trust you want. 
With more trust extent, you are more likely to need more independent 
sources of verification.


In my real-world experience, this way of gaining trust is only
really used for strangers. For people we know, recognition and
memory are more compelling ways of trusting.


Recognition = a channel of information
memory = a channel of information

When you look at trust in various contexts, you will still find the need 
to receive information from sources OTHER than the source you want to 
trust. You may use these channels under different names, such as memory 
which is a special type of output that serves as input at a later point 
in time.



It is useful and efficient to get trust from third parties, 
but not essential, imho.  If you find yourself meeting 
someone for the first time in random circumstances, you can 
get to know them over time, and trust them, fully 2nd 
party-wise.


Trust comes from events of risk and reward, not from 
channels.  It just so happens that the best expressions of 
risk and reward are over independent, therefore 3rd party, 
channels.



The distinguishing aspect between information and trust is this: trust 
is that which is essential to a communication channel but cannot be 
transferred from a source to a destination using that channel.



Trust is an expression of something you may rely on.  It has 
risks, liabilities, obligations, etc.  Information does not 
(yet).



In other 
words, self-assertions cannot transfer trust. Trust me is, actually, a 
good indication not to trust.



Well.  Actions speak louder than words.  The *act* of a 
third party is to put their own reputation at risk when they 
say trust this 2nd person.  This works if the two people 
are independent, but not if the two people are dependent (or 
the same).  If they are independent, the costs accrue to one 
party and the benefits accrue to another party.


So the independent cost of placing the reputation at risk is 
a significant event.  You can rely on someone who will incur 
cost on your behalf.  Saying trust me carries no risks 
because the benefits cancel out the risks.




We can use this recognition and memory in the online world as well.
SSH automatically recognizes previously used hosts. Programs such
as the Pet Names Tool http://www.waterken.com/user/PetnameTool/
recognize public keys used by web sites, and provide us with a
human-recognizable name so we can remember our previous
interactions with that web site. Once we can securely recognize a
site, we can form our own trust decisions, without the necessity of
involving third parties.


Yes, where recognition is the OTHER channel that tells you that the 
value (given in the original channel) is correct. Just the value by 
itself is not useful for communicating trust -- you also need something 
else (eg, a digital sig) to provide the OTHER channel of information.



Attempting to cast trust as an aspect of channels is a 
technological approach, and will lead one astray, just as 
PKI did;  trust is built on acts, of humans, and involves 
parties and events, risks and rewards.  The channels are 
incidental.


You can see this better in the study of negotiation.  It is 
possible using this theory & practice to build trust, or to 
prove that no trust can be achieved.  Negotiation is 
primarily a paradigm of two parties.


(Economists will recognise it as game theory, prisoner's 
dilemma, perhaps principal-agent theory, etc.)


Your comment that someone who says trust me is in fact 
signalling that they cannot be trusted ... is more clearly 
explained in negotiation.  Often, someone will state up 
front that they want to find the win-win; which is a signal 
that they are in the win-lose, because real win-win is about 
actions not words, and words in this case would lead to a 
false sense of security.




iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The perils of security tools

2008-05-26 Thread IanG

Steven M. Bellovin wrote:

On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

Of course, we have now persuaded even the most stubborn OS that 
randomness matters, and most of them make it available, so perhaps 
this concern is moot.

Though I would be interested to know how well they do it! I did have 
some input into the design for FreeBSD's, so I know it isn't 
completely awful, but how do other OSes stack up?


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.



Yes, but with different semantics:

 /dev/urandom is a compatibility nod
 to Linux. On Linux, /dev/urandom will
 produce lower quality output if the
 entropy pool drains, while
 /dev/random will prefer to block and
 wait for additional entropy to be
 collected.  With Yarrow, this choice
 and distinction is not necessary,
 and the two devices behave
 identically. You may use either.

(random(4) from Mac OSX.)

Depending on where you are in the security paranoia 
equation, the differences matter little or a lot.  If doing 
medium-level security, it's fine to outsource the critical 
components to the OS, and accept any failings.  If doing 
paranoid-level stuff, then it's best to implement one's own mix 
and just stir in the OS-level offering.  That way we reduce 
the surface area for lower-layer config attacks like the 
Debian adventure.
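
(A sketch of that mix-and-stir, mine and illustrative only; 
own_entropy stands for whatever one's own collector produced:)

    import hashlib, os, time

    def stirred_bytes(own_entropy, n=32):
        # The output is no weaker than the stronger input, so a sunk OS
        # RNG (a la the Debian adventure) doesn't hurt on its own.
        h = hashlib.sha256()
        h.update(own_entropy)                        # one's own mix
        h.update(os.urandom(64))                     # the OS-level offering
        h.update(time.time_ns().to_bytes(8, "big"))  # a dash of freshness
        return h.digest()[:n]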


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: How to WASTE and want not

2004-05-08 Thread iang
This page seems to describe the security:

http://waste.sourceforge.net/security.html

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]