[Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Dirk-Willem van Gulik

Op 30 sep. 2013, om 05:12 heeft Christoph Anton Mitterer 
cales...@scientia.net het volgende geschreven:
 
 Not sure whether this has been pointed out / discussed here already (but
 I guess Perry will reject my mail in case it has):
 
 https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3
 This makes NIST seem somehow like liars,... on the one hand they claim

Do keep in mind that in this case the crux is not around SHA-3 as a 
specification/algorithm - but about the number of bits one should use.

One aspect in all this is the engineering culture into which standards (such as 
those created by NIST) finally land.

Is it one which is a bit insecure and just does the absolute minimum; or one 
where practitioners have certain gut feelings - and treat the standards as 
absolute minimums?

I do note that in crypto (possibly driven by the perceived expense of too many 
bits) we tend to very carefully observe the various bit lengths found in 
800-78-3, 800-131A , etc etc. And rarely go much beyond it*.

While in a lot of other fields it is very common, for 'run of the mill' 
constructions such as a floor, a wooden support beam or a joist, to take the 
various standards and liberally apply safety factors. A factor of 10 or 20x too 
strong is quite common *especially* in 'consumer' constructions.

It is only when one does large/complex engineering works that one takes the time 
to really calculate strength; and even then a factor of 2 or 3 is still very 
common - and barely raises an eyebrow with a cost-conscious customer.

So perhaps we need to look at those NIST et al. standards in crypto and do the 
same - take them as an absolute minimum; but by default, and routinely, not feel 
guilty when we add 10x or more.

And at the same time evoke a certain 'feeling' of strength with our users. A 
supporting column can just 'look' right or too thin; a BMW car door can just 
make that right sound on closing**.

And :) :) people like (paying for/owning) tools that look fit for purpose :) :) 
:).

Dw

*) and yes; compute power may have been an issue - but rarely is these days; I 
have a hard time measuring symmetric AES on outbound packet flows relative to 
all other stuff.
**) and yes; compute and interaction/UI/UX joules may be a worry - but at the 
same time CPUs have gotten faster, and clever UIs can background things, or good 
engineers can devise async/queues and whatnot.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Dirk-Willem van Gulik

Op 1 okt. 2013, om 17:59 heeft Jerry Leichter leich...@lrw.com het volgende 
geschreven:

 On Oct 1, 2013, at 3:29 AM, Dirk-Willem van Gulik di...@webweaving.org 
 wrote:
 ...I do note that in crypto (possibly driven by the perceived expense of too 
 many bits) we tend to very carefully observe the various bit lengths found 
 in 800-78-3, 800-131A , etc etc. And rarely go much beyond it*.
 
 While in a lot of other fields - it is very common for 'run of the mill' 
 constructions; such as when calculating a floor, wooden support beam, a 
 joist, to take the various standards and liberally apply safety factors. A 
 factor 10 or 20x too strong is quite common *especially* in 'consumer' 
 constructions….  

 It's clear what 10x stronger than needed means for a support beam:  We're 
 pretty good at modeling the forces on a beam and we know how strong beams of 
 given sizes are.  

Actually - do we? I picked this example as it is one of those where this 'we 
know' falls apart on closer examination. Wood varies a lot; and our ratings are 
very rough. We drill holes through it; use hugely varying ways to 
glue/weld/etc. And we liberally apply safety factors everywhere; with a lot of 
'otherwise it does not feel right' throughout. And in all fairness - while you 
can get a bunch of engineers to agree that 'it is strong enough' - they'd argue 
endlessly and give 'it depends' sorts of answers when you ask them how strong 
it 'really' is.

 Oh, if you're talking brute force, sure, 129 bits takes twice as long as 128 
 bits.  
...
 If, on the other hand, you're talking analytic attacks, there's no way to 
 know ahead of time what matters.  

So I think you are hitting the crux of the matter - the material we work with, 
like most, is not that easy to gauge. But then when we consider your example of 
DES:

 The ultimate example of this occurred back when brute force attacks against 
 DES, at 56 bits, were clearly on the horizon - so people proposed throwing 
 away the key schedule and making the key the full expanded schedule of 448 
 bits, or whatever it came to.  Many times more secure - except then 
 differential cryptanalysis was (re-)discovered and it turned out that 448-bit 
 DES was no stronger than 56-bit DES.

with hindsight we can conclude that, despite all this, the various institutions 
and interests conspiring, fighting and collaborating roughly yielded us a fair 
level of safety for a fair number of years - and that is roughly what we got.

Sure - that relied on 'odd' things; like the s-boxes getting strengthened 
behind the scenes, or the EFF stressing that a hardware brute-force device was 
'now' cheap enough. But by and large these were more or less done 'on time'.

So I think we roughly got the minimum about right with DES. 

The thing which fascinates/strikes me as odd is that this is then exactly what 
we all implemented. Not more. Not less. No safety margin; no nothing. Just a bit 
of hand waving at how complex it all is and how hard it is to predict; so we 
listen to NIST* et al. and that is it.

*Despite* the fact that, as you so eloquently argue, the material we work with 
is notoriously unpredictable, finicky and has many an uncontrolled unknown.

And any failures or issues come back to haunt us, not NIST et.al.

 There are three places I can think of where the notion of adding a safety 
 factor makes sense today; perhaps someone can add to the list, but I doubt 
 it will grow significantly longer:
 
 1.  Adding a bit to the key size when that key size is small enough;
 2.  Using multiple encryption with different mechanisms and independent keys;
 3.  Adding rounds to a round-based symmetric encryptor of the design we 
 currently use pretty universally (multiple S and P transforms with some 
 keying information mixed in per round, repeated for multiple rounds).  In a 
 good cipher designed according to our best practices today, the best attacks 
 we know of extend to some number of rounds and then just die - i.e., after 
 some number of rounds they do no better than brute force.  Adding a few more 
 beyond that makes sense.  But ... if you think adding many more beyond that 
 makes sense, you're into tin-foil hat territory.  We understand what certain 
 attacks look like and we understand how they (fail to) extend beyond some 
 number of rounds - but the next attack down the pike, about which we have no 
 theory, might not be sensitive to the number of rounds at all.

Agreed - and perhaps develop some routine practices around how you layer; 
i.e. what is best wrapped inside which; where you (avoid) padding; and how to 
get the most out of IVs.
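As a toy illustration of point 2 above (multiple encryption with independent keys, here rendered as two independently keyed keystream layers): this is a teaching sketch under assumed names, NOT a production cipher.

```python
import hashlib

# Toy layering sketch: two keystreams, each keyed independently and
# domain-separated by a label, one wrapped inside the other.
# Illustration only - NOT a production construction.

def _stream(key: bytes, label: bytes, n: int) -> bytes:
    """Derive n keystream bytes from key, domain-separated by label."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(label + key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def layer_encrypt(msg: bytes, inner_key: bytes, outer_key: bytes) -> bytes:
    inner = bytes(a ^ b for a, b in zip(msg, _stream(inner_key, b"in", len(msg))))
    return bytes(a ^ b for a, b in zip(inner, _stream(outer_key, b"out", len(msg))))

# XOR layers commute, so applying both streams again decrypts:
layer_decrypt = layer_encrypt
```

The point of the layering is that a break of either construction alone still leaves the other layer intact.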
 
 These arguments apply to some other primitives as well, particularly hash 
 functions.  They *don't* apply to asymmetric cryptography, except perhaps for 
 case 2 above - though it may not be so easy to apply.  For asymmetric crypto, 
 the attacks are all algorithmic and mathematical in nature, and the game is 
 different.

Very good point (I did

Re: [Cryptography] Cryptographic mailto: URI

2013-09-24 Thread Dirk-Willem van Gulik

Op 20 sep. 2013, om 14:55 heeft Phillip Hallam-Baker hal...@gmail.com het 
volgende geschreven:

 On Fri, Sep 20, 2013 at 4:36 AM, Dirk-Willem van Gulik di...@webweaving.org 
 wrote:
 
 Op 19 sep. 2013, om 19:15 heeft Phillip Hallam-Baker hal...@gmail.com het 
 volgende geschreven:
 
  Let us say I want to send an email to al...@example.com securely.
 ...
  ppid:al...@example.com:example.net:Syd6BMXje5DLqHhYSpQswhPcvDXj+8rK9LaonAfcNWM
 …
...
 id.ns.namespace.fqdn-in-some-tld.
 
 which is in fact a first-come, first-served secure dynamic dns updatable zone 
 containing the public key.
 
 Which once created allows only updating to those (still) having the private 
 key of the public key that signed the initial claim of that id.
 
 Interesting, though I suspect this is attempting to meet different trust 
 requirements than I am.

Most likely. The aim was not so much to secure an entry - but to provide a 
sufficiently solid breadcrumb trail to the information which could be used to 
do so; to be able to use both 'trust on first contact' -or- a trust chain; and 
to provide 'low cost' yet very robust pillars that can be managed by 
'untrusted' parties.

Or in other words - the design focused more on a workable trust infrastructure 
with the governance pushed as close to the (end) user as possible; at the 
expense of some 'absolute default' trust (absolute as in the sort of trust 
you'd get by default from 'some deity/government/big-mega-corp says I am 
good/you are interacting with a legal entity').

Dw.


Re: [Cryptography] Cryptographic mailto: URI

2013-09-21 Thread Dirk-Willem van Gulik

Op 19 sep. 2013, om 19:15 heeft Phillip Hallam-Baker hal...@gmail.com het 
volgende geschreven:

 Let us say I want to send an email to al...@example.com securely. 
...
 ppid:al...@example.com:example.net:Syd6BMXje5DLqHhYSpQswhPcvDXj+8rK9LaonAfcNWM
...
 example.net is a server which will resolve the reference by means of a simple 
 HTTP query using the pattern http://host/.well-known/ppid/hash
 Syd...NWM is the Base64 hash of OID-SHA256 + SHA256(X)
..
 So to use this as a mechanism for ghetto key distribution receivers would add 
 the URI into their account. Or let their PKI discovery agent do it for them.
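The lookup pattern quoted above can be sketched roughly as follows. This is a toy rendering, not the actual ppid specification: the quoted text also prepends an OID-SHA256 identifier to the digest, whose exact encoding is not given, so this sketch hashes the key material alone.

```python
import base64, hashlib

# Toy sketch of the quoted ppid lookup pattern (helper names assumed).

def ppid_hash(key_material: bytes) -> str:
    """Unpadded Base64 of SHA256(key_material)."""
    return base64.b64encode(hashlib.sha256(key_material).digest()).decode().rstrip("=")

def ppid_lookup_url(host: str, key_material: bytes) -> str:
    # Resolver pattern from the quoted message:
    #   http://host/.well-known/ppid/hash
    return "http://%s/.well-known/ppid/%s" % (host, ppid_hash(key_material))
```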

We've been experimenting with much the same. With two twists. Basic principle 
is the same. 

We use:

-   namespace:id

so as to keep it short. The id is currently a SHA-1; the namespace is a 2-3 char 
identifier. We then construct from this a 'hardcoded' zone name:

namespace.fqdn-in-some-tld.

which is to have a (signed) entry for in DNS:

id.ns.namespace.fqdn-in-some-tld.

which is in fact a first-come, first-served secure dynamic dns updatable zone 
containing the public key.

Which once created allows only updating to those (still) having the private key 
of the public key that signed the initial claim of that id. 
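The name construction described above can be sketched as follows; the zone suffix below is a hypothetical stand-in for the hardcoded 'ns.namespace.fqdn-in-some-tld.':

```python
import hashlib

# Sketch of the described naming scheme; 'xx' stands in for a 2-3 char
# namespace and example.org for the hardcoded fqdn-in-some-tld.
ZONE_SUFFIX = "ns.xx.example.org."

def entry_name(public_key: bytes) -> str:
    """id.ns.namespace.fqdn-in-some-tld. with id = SHA-1 of the public key."""
    return hashlib.sha1(public_key).hexdigest() + "." + ZONE_SUFFIX
```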

We assume that loss of a private key means one simply abandons that entry in 
that namespace and creates a new one; after which you update your handles in 
XMPP/messaging land (or, in Phillip's example, Linked-In land). Part of the 
reason is that we thus allow ids which are tied to more anonymous/floating 
identifiers.

So the two twists we've made (which are not necessarily a good idea!) are that 
the id is really the SHA-1 of the public key (as we're limited to RSASHA1 only 
throughout); and secondly that we hardcode the 'fqdn-in-some-domain' postfix 
you add after the id.ns.

And we're also still somewhat in look-aside-validation sort of land - with 
respect to trust of the fqdn.tld (which is why it is currently hardcoded).

And secondly - we're clearly not protecting the identifier we add in without 
any further revealing communication. We assume a subsequent check of the 
public key in SIG as a follow-up.

Dw.


Re: [Cryptography] Security is a total system problem (was Re: Perfection versus Forward Secrecy)

2013-09-15 Thread Dirk-Willem van Gulik

Op 13 sep. 2013, om 21:23 heeft Perry E. Metzger pe...@piermont.com het 
volgende geschreven:

 On Fri, 13 Sep 2013 08:08:38 +0200 Eugen Leitl eu...@leitl.org
 wrote:
 Why e.g. SWIFT is not running on one time pads is beyond me.
 
 I strongly suspect that delivering them securely to the vast number
 of endpoints involved and then securing the endpoints as well would
..
 The problem these days is not that something like AES is not good
 enough for our purposes. The problem is that we too often build a

While most documents on SWIFT's move away from something very akin to OTP 
(called BKE) seem no longer to be on the internet, these documents:


http://web.archive.org/web/20070218160712/http://www.swift.com/index.cfm?item_id=57203
and

http://web.archive.org/web/20070928013437/http://www.swift.com/index.cfm?item_id=61595

should give you a good introduction; and outline quite clearly what 
organisational issues they were (and to this day still are) in essence trying 
to solve.

I found them quite good reading - with a lot of (often implicit) governance 
requirements which have wider applicability. And in all fairness - quite a 
good example of an 'open' PKI in that specific setting, if you postulate you 
trust SWIFT only so-so as a fair/honest broker of information - yet want to 
keep it out of the actual money path. A separation of roles/duties which some 
of the internet PKIs severely lack.

Dw.


Re: [Cryptography] Is ECC suspicious?

2013-09-06 Thread Dirk-Willem van Gulik

Op 6 sep. 2013, om 01:09 heeft Perry E. Metzger pe...@piermont.com het 
volgende geschreven:

 http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance
….
 The Suite B curves were picked some time ago. Maybe they have problems.
….
 Now, this certainly was a problem for the random number generator
 standard, but is it an actual worry in other contexts? I tend not to
 believe that but I'm curious about opinions.

Given the use, including that by the wider security/intelligence community, I'd 
expect any issues to be more with very specific curves (either tweaked to be 
that way; or, through soft means, promoted/pushed/suggested to those who by 
happenstance have an issue) than with ECC as an algorithm/technology class. 
Anything deeper than a curve would assume very aligned/top-down control and 
little political entropy - not something which 'just the' signals intelligence 
community could easily enforce on the other cats.

Dw


Re: Watermarking...

2010-04-20 Thread Dirk-Willem van Gulik

On 19 Apr 2010, at 23:29, Massimiliano Pala wrote:

 Hi all,
 
 I was wondering if any of you have some pointers on the security of
 watermarking. In particular I am interested in public-key or asymmetric
 watermarking algorithms.
 
 Also, do you know of any free-to-use (opensource/etc.) implementation
 that can be used for research-test purposes ?

I found:

http://techrepublic.com.com/1324-4-55.html
PKI based Semi-Fragile Watermark for Visual Content Authentication 
Chamidu Atupelage, Koichi Harada, Member, ACM

of use and easily hacked up.

Dw

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: HSM outage causes root CA key loss

2009-07-14 Thread Dirk-Willem van Gulik

Weger, B.M.M. de wrote:


- if they rely on the CA for signing CRLs (or whatever
   revocation mechanism they're using) then they have to find
   some other way to revoke existing certificates.

...

Seems to me that for signing CRLs it's better to have a separate
Revocation Authority (whose certificate should be issued by
the CA it is revoking for); then revoking can continue when the
CA loses its private key. The CA still may have revoking
authority as well, at least to revoke the Revocation Authority's
certificate...


Unfortunately those code paths seem rarely traveled/tested between 
implementations, and even within a single implementation they are fraught with 
caveats; so one often ends up with a (sub-)CA in the same chain as the 
cert one wants to revoke.


 Any other problems? Maybe something with key rollover or
 interoperability?

Aye - and there is another area which is even less traveled than above.

Dw



Re: Why the poor uptake of encrypted email? [Was: Re: Secrets and cell phones.]

2008-12-09 Thread Dirk-Willem van Gulik


On 8 Dec 2008, at 22:43, David G. Koontz wrote:


JOHN GALT wrote:

StealthMonger wrote:

This may help to explain the poor uptake of encrypted email.  It would
be useful to know exactly what has been discovered.  Can you provide
references?

The iconic paper explaining this is Why Johnny Can't Encrypt, available
here:  http://portal.acm.org/citation.cfm?id=1251435

Available from the authors:

http://gaudior.net/alma/johnny.pdf



A later follow-up (S/MIME; more focus on the KDC):

http://www.simson.net/clips/academic/2005.SOUPS.johnny2.pdf

is IMHO more interesting - as it explores a more realistic hostile
scenario, seems to pinpoint the core security issue better, and goes
to some length to evaluate remedial steps. And it does show that a
large swath of issues in PGP are indeed solvable/solved (now).


Thanks,

Dw



Raw RSA binary string and public key 'detection'

2008-11-20 Thread Dirk-Willem van Gulik

Been looking at the Telnic (dev.telnic.org) effort.

In essence: NAPTR DNS records which contain private details such as a
phone number. These are encrypted against the public keys of your
friends (so if you have 20 friends and 3 phone numbers visible to all
friends - you need 20 subdomains x 3 NAPTR entries under your 'master').


Aside from the practicality of this - given a raw RSA-encrypted block
and a list of public keys - is there any risk that someone could
establish which of those public keys may have been used to create that
block? I.e. something which would be done in bulk for large
populations; so the use of large tables and whatnot is quite warranted.
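One concrete risk along these lines (my illustration, not from the original mail): a raw RSA ciphertext is reduced modulo n, so it is always smaller than n. Any candidate modulus at most equal to an observed ciphertext is therefore ruled out, and many blocks from the same key narrow the candidate set - exactly the kind of bulk test the question worries about. The small integers below are toy stand-ins for real moduli.

```python
# A raw RSA ciphertext c satisfies 0 <= c < n, so any candidate modulus
# n <= c cannot have produced c. Many observed blocks from the same key
# statistically narrow down which public key (modulus) was used.

def candidates_for_block(c: int, moduli) -> set:
    """Moduli not ruled out by a single ciphertext block."""
    return {n for n in moduli if c < n}

def narrow_down(blocks, moduli) -> set:
    """Intersect the candidate sets over many observed blocks."""
    remaining = set(moduli)
    for c in blocks:
        remaining &= candidates_for_block(c, moduli)
    return remaining
```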


Thanks,

Dw



Re: once more, with feeling.

2008-09-17 Thread Dirk-Willem van Gulik

 ... discussion on CA/cert acceptance hurdles in the UI 

I am just wondering if we need a dose of PGP-style reality here.

We're really seeing 3 or 4 levels of SSL/TLS happening here - and whilst
they all appear to use the same technology - the assurances, UI,
operational regimen, 'investment' and user expectations are way different:

A)  Symbolic Locks (think bicycle locks in Amsterdam, or the little
plastic luggage locks people use) - just a bit of reassurance
that snooping is not trivial.

= so you'd just want your browser to indicate something about
SSL, just like you casually mention mime-type or port-number.

B)  Some sort of assurance that you are talking to whom you think you
are talking to - and that such is the case next time round. And
in a grown up sort of way - but no need to go the investment
to have some paid minder reassuring you that there is no
monster under the bed. E.g. some privacy, say on an online forum.

= so in this case you'd probably want a near-frictionless
or perhaps even invisible initial persistent accept and
some sort of low-key warning if the cert or chain changed
over time beyond some range.

C)  Fair assurance that you are talking with whom you think
you are talking to - that it really is that entity - and some
trust. E.g. the canonical credit-card payment case.

= behaviour as we have today.

D)  Proper TLS; where each end of the connection has a well-
defined idea of the reliability. E.g. clients authenticate
properly with an X.509 cert to a server with a cert against
an explicit list of CAs which are carefully selected
by the 'powers that be' - and with full CRLs.

Unfortunately there is currently no way for the server to indicate any
of this; or the user to indicate what his or her expectations are.

So my take is that it is pretty much impossible to get the UI to do
the right thing - until it has this information* - and even then
you have a fair chunk of education left to do :).

But without it - the entire discussion is moot.

As to technical options to accomplish this - it would not be hard
to *_socialise_* a few small technical hints: i.e. if it is a
straight self-signed server certificate with minimal data - assume
A; case C is easy; and in case 'D' one would care enough to have
a proper set-up.

That just leaves case B - and distinguishing it from a failed C.  And
that is hard. Especially as a messy B should not compromise a C.

So I guess that needs some very clear marker from the site owner. Which
could be as simple as insisting on things like a funky DN - say a CN
with the FQDN set to something like 'ad-hoc' - or a convention that
such a certificate carries just a CN, but no other O, OU, L or C fields.

And obviously one could try to boil the ocean; write a small RFC
detailing some OID to put in the certificate for cases A and B :) - and
include the few lines of openssl in the document to make your own
'B' certificate.
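Such a minimal 'case B' certificate could be generated along these lines (a sketch with placeholder filenames, not text from the original mail):

```shell
# Generate a self-signed certificate whose subject is just a CN,
# with no O, OU, L or C fields - the hypothetical 'case B' marker.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout adhoc-key.pem -out adhoc-cert.pem \
    -days 365 -subj "/CN=ad-hoc"

# Inspect the subject to confirm it carries only the CN:
openssl x509 -in adhoc-cert.pem -noout -subject
```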

Key would not be the technical aspect - but socialising it with enough
webmaster folks** that there is enough of a mass to tempt them
browser boys. And that is going to be the very hard part :)


Dw

*)  I strongly think that the current plug-ins which check whether a
certificate's fingerprint is the same from multiple vantage
points around the internet are really quite orthogonal to this
issue. So no solace there.

**) And capitalise on the fact that they need to recreate their
certificates, as most folks seem to stick to the default 365 days.



Re: On the randomness of DNS

2008-07-30 Thread Dirk-Willem van Gulik


On 30 Jul 2008, at 19:57, Pierre-Evariste Dagand wrote:


But just how GREAT is that, really? Well, we don't know. Why? Because
there isn't actually a way to test for randomness. Your DNS resolver
could be using some easily predicted random number generator like, say,
a linear congruential one, as is common in the rand() library function,
but DNS-OARC would still say it was GREAT. Believe them when they say
it isn't GREAT, though!


Well, there are some tests to judge the quality of a random number
generator. The best known being the Diehard tests:

http://en.wikipedia.org/wiki/Diehard_tests
http://stat.fsu.edu/pub/diehard/

For sure, these tests might be overkill here. Also, there must be
some tests in The Art of Computer Programming too, but I don't have it
at hand right now (shame on me).

I don't see the point of evaluating the quality of a random number
generator by statistical tests. But I might be wrong, though.



Sorry - but something like AES(static-key) encrypt of i++, or SHA1(i++),
will pass each and every one of those tests very nicely - but with a bit
of code or silicon peeking one can probably 'break' this with
relative ease.
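The point above as code (a toy sketch, not from the original mail): a "random" stream that is nothing but SHA1 of an incrementing counter will look statistically excellent, yet anyone who can peek at the construction reproduces every byte deterministically.

```python
import hashlib

# A keystream that is just SHA1(i++): statistically well-behaved output,
# but trivially predictable to anyone who knows the construction.

def sha1_counter_stream(n_blocks: int) -> bytes:
    out = b""
    for i in range(n_blocks):
        out += hashlib.sha1(i.to_bytes(8, "big")).digest()
    return out
```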


I fail to see how you could evaluate this without seeing the code (and
even then I doubt that one can properly do this -- the ?old? NSA
habit of tweaking your random generator, rather than your protocol/
algorithm, when they wanted your product upgraded to export quality -
is terribly effective and very hard to spot).


Or am I missing something ?

Dw



Re: On the randomness of DNS

2008-07-30 Thread Dirk-Willem van Gulik


On 30 Jul 2008, at 21:33, Ben Laurie wrote:


For sure, it would be better if we could check the source code and
match the implemented RNG against an already-known RNG.
But, then, there is the chicken-or-the-egg problem: how would you
ensure that a *new* RNG is a good source of randomness? (it's not a
rhetorical question, I'm curious about other approaches).


By reviewing the algorithm and thinking hard.


But even then - is that really 'possible' - or is this fundamentally a
black art?


Dw



Re: the joy of enhanced certs

2008-06-04 Thread Dirk-Willem van Gulik



On Wed, 4 Jun 2008, Perry E. Metzger wrote:


I'm thinking of starting a CA that sells super duper enhanced
security certs, where we make the company being certified sign a
document in which they promise that they're absolutely trustworthy.
To be really sure, we'll make them fax said document in on genuine
company letterhead, since no one can forge letterhead.


Sorry - not quite good enough. You lack that key thing to make this 
secure and win the war on them internet terrorists.


You totally missed the fundamental and crucial aspect of your Unique
Selling Proposition: it _has_ to be very very very expensive. And
people have to know that it was, indeed, very very expensive.


Dw



Re: [mm] delegating SSL certificates

2008-03-17 Thread Dirk-Willem van Gulik


On Mar 16, 2008, at 7:52 PM, Ben Laurie wrote:


Dirk-Willem van Gulik wrote:
So I'd argue that while x509, its CAs and its CRLs are a serious
pain to deal** with, and seem to add little value if you assume a
very diligent and experienced operational team -- they do provide a
useful 'procedural' framework and workflow-guide which is in itself
very valuable, relatively robust, and a little bit
organisationally inherently fail-safe. The latter as you are
forced to think about expiry of the assertions, what to do when a
CRL is too old, and so on.


I think there's a large gulf between the use case where the relying  
party and the CA are the same entity, and where they do not even  
have a contractual arrangement.


I think you are hitting a key point here. In a way, a CA (or some
sub-CA) is less of an authority and more of an, ideally, somewhat
consistent organizational realm.


CAs within a corporate environment may well be a good idea in some  
cases, indeed. As you know, we've been pushing on this idea at the  
Apache Software Foundation for some time now, hindered only by our  
laziness :-)


And at the same time we need to be weaned away from the hardened-shell
perimeter ideas - that of a single super-reliable root - and start to
see a CA as something like one of the Kerberos KDCs we trust, just a
NIS+ server we like, etc.


Dw



Re: delegating SSL certificates

2008-03-16 Thread Dirk-Willem van Gulik


On Mar 16, 2008, at 12:32 PM, Ben Laurie wrote:


[EMAIL PROTECTED] wrote:

So at the company I work for, most of the internal systems have
expired SSL certs, or self-signed certs.  Obviously this is bad.


You only think this is bad because you believe CAs add some value.

SSH keys aren't signed and don't expire. Is that bad?


Agreed - that (signing) in itself is not important - however in a
(large) corporate environment plain keys overall do force one to
re-invent the same kind of CA and/or CRL wheel with respect to expiry
and the lack of a managed authority.


I recently came across two installations where ssh public keys were
used to great avail (combined with a command= construct which would
launch various curses/IBM tn3270 user interfaces), in one case
combined with a commercial product where an x509 cert on a chipcard
would log in and 'unlock' a user's windows home directory/registry.
This system had been going for many, many years and had seen several
OS migrations.


With the advent of faster-moving windows/laptops - a lot of keys had
been 'lost'. Some due to the real losing of a laptop; most due to
automated upgrades wiping the user's transient home-directory/registry.


After a bit of scripting it seemed that for every key which had been
used in the last few weeks, a little over 8 keys were 'dormant'. A
quick manual sample confirmed that most of those were associated with
lost/retired equipment (hire/fire was a well-controlled HR process).
Looking at an authorized_keys file revealed little - as few, if any,
comments were filled out.


A couple of things surprised me, and/or were a serious eye-opener
to me:


A   Even very experienced sysadmins can make the conceptual
error that an old 'public key' is not 'dangerous' _because_
it is public. Therefore you do not need to keep careful
track of them or be 'super diligent' when managing your
key files.

B   The very nature of the ssh public key (esp. when generated
in an environment where the comment field is not easily
attributed to a specific user; e.g. on windows some tools
just put the text 'Generated by SShKey.exe' in that field)
is very hard to manage - and really assumes a 1:1 mapping
of a unix account to a real person.

C   The lack of expiry _combined_ with the lack of easy ways of
'documenting' an ssh key (i.e. the full key, or even the
fingerprint or bubblebabble of the ssh pub key, is a bit
painful when you need to cross a cut-and-paste barrier)
easily creates an environment where the keys start to
lag behind, or where the pain combined with the false
security of the 'A' misconception starts to conspire
against you.
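A small helper in the spirit of that scripting (hypothetical, not the original script; it assumes simple 'type base64-blob comment' authorized_keys lines with no leading option fields) which turns each key into fingerprints you can actually write down and track:

```python
import base64, hashlib

# Compute OpenSSH-style fingerprints (legacy MD5 colon-hex and modern
# SHA256 base64) for each authorized_keys line, plus its comment field.
# Assumes lines of the form 'type base64-blob comment...' (no options).

def fingerprints(authorized_keys_text: str):
    rows = []
    for line in authorized_keys_text.splitlines():
        parts = line.split()
        if len(parts) < 2 or parts[0].startswith("#"):
            continue
        blob = base64.b64decode(parts[1])
        md5hex = hashlib.md5(blob).hexdigest()
        legacy = ":".join(md5hex[i:i + 2] for i in range(0, 32, 2))
        modern = "SHA256:" + base64.b64encode(
            hashlib.sha256(blob).digest()).decode().rstrip("=")
        comment = " ".join(parts[2:]) or "(no comment)"
        rows.append((legacy, modern, comment))
    return rows
```

The fingerprint math works on any decoded blob, so the sketch does not need to parse the key itself.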

And as an aside - as one of the organisations already had a PKI rolled
out into every nook and cranny - using the Roumen Petrov patches
which add x509 to openssh* has helped solve some of the worst
excesses virtually overnight.


So I'd argue that while x509, its CAs and its CRLs are a serious
pain to deal** with, and seem to add little value if you assume a very
diligent and experienced operational team -- they do provide a useful
'procedural' framework and workflow-guide which is in itself very
valuable, relatively robust, and a little bit organisationally
inherently fail-safe. The latter as you are forced to think about
expiry of the assertions, what to do when a CRL is too old, and so on.


Or perhaps we're comparing apples and oranges; ssh is just a pure
pub/priv key pair -- whereas x509 is a management framework in which
you happen to also have embedded, and manage, a pub/priv key pair
along with a whole lot of other things.


However - as firewalls and hardening of the far-outer perimeter are
increasingly becoming ineffective, and as you increasingly look at
fine-grained controls close to the user and (end) applications -- we
do need to come to grips (much) better with the distributed management
tools which let us map those controls to the desired
social/organisational model they function within.


Thanks,

Dw.

*: http://www.roumenpetrov.info/openssh/ (and I'd love those in
openssh itself, and in solaris please :)
**: not least as they force you to tackle nasty organisational
questions such as who is really responsible for what; rather than
letting it fester into some ops-team 'we always did it like that' fudge.




Re: patent of the day

2008-01-24 Thread Dirk-Willem van Gulik

On Wed, 23 Jan 2008, Leichter, Jerry wrote:

 well be prior art, but the idea of erasing information by deliberately
 discarding a key is certainly not completely obvious except in
 retrospect.  If you look at any traditional crypto text, you won't

Hmm - it is commonly mentioned that (early) hardware-based trusted
computing environments store a small key (or part thereof, the other part
being some PIN, etc.) in their tamperproof environment (wired so as to be
erased when any tampering, x-raying, temperature shock, etc. is detected),
which during normal operation is used to decrypt some larger bit of key
material held in flash or on disk inside the secure environment.

The other scenario is that of using a multitude of public keys (with some
organisational semantics) to encrypt a backup; destruction of a specific
private key then selectively takes out a certain set of files from the
backup tape, without having to drag that tape out of the vault and erase
a small piece of it.

Dw



Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-26 Thread Dirk-Willem van Gulik


On Mon, 24 Oct 2005, cyphrpunk wrote:

 Is it possible that Skype doesn't use RSA encryption? Or if they do,
 do they do it without using any padding, and is that safe?

You may want to read the report itself:

http://www.skype.com/security/files/2005-031%20security%20evaluation.pdf

and perhaps sections 3.2.3 (about padding) and 3.2.2 (about how RSA is
used) may help with this (section 2 describes what it is used for).

Dw.



Re: [Forwarded] RealID: How to become an unperson.

2005-07-08 Thread Dirk-Willem van Gulik

On Tue, 5 Jul 2005 [EMAIL PROTECTED] wrote:

 (currently in Boston, MA, after giving fingerprints at the
 airport immigration)

And you may have then noticed the interesting effect: in Germany we have
mandatory ID cards - carried around at all times - yet we virtually never
have to show them, and then only to officials.

In the US there is no official card - yet even the lowest clerk at the
Blockbuster video store asks for one...

Dw.



Re: encrypted tapes

2005-06-09 Thread Dirk-Willem van Gulik


On Wed, 8 Jun 2005, Perry E. Metzger wrote:

 Dan Kaminsky [EMAIL PROTECTED] writes:

  Yes, because key management is easy or free.

Eh - my experience is that that is where 99% of the cost is - in the whole
set of human procedures and vetting around it. The paperwork, the auditing,
dealing with corporate power shuffles, getting 'hard' retention rules out
of the responsible people and their conflicting advisors, etc.

 If you have no other choice, pick keys for the next five years,
 changing every six months, print them on a piece of paper, and put it
 in several safe deposit boxes. Hardcode the keys in the backup

We've been building systems much like this, with the added twists that a)
each piece of data is encrypted under a key matching its retention policy,
b) every month or so certain private keys are destroyed as the data keyed
to them reaches its retention limit, and c) the keys are stored (with a
recovery scheme) on tamperproof Dallas iButtons (which have a reliable
clock) to simplify the issues around operations (destroying at the right
time) and trust (no need to trust the key maker).
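Points a) and b) amount to 'crypto-shredding' by retention class, and the bookkeeping can be sketched as below. This is a structural toy only - the hash-based keystream stands in for real encryption and an in-memory dict stands in for the iButton key store; the class names are invented.

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream; XOR, so encrypt == decrypt.
    Illustration only -- not real crypto."""
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# One key per retention class (in the real system: held on the iButton).
keys = {"retain-1y": os.urandom(32), "retain-7y": os.urandom(32)}

# Each record on the backup tape is encrypted under its class's key.
backup = [
    ("retain-1y", toy_cipher(keys["retain-1y"], b"expires soon")),
    ("retain-7y", toy_cipher(keys["retain-7y"], b"keep long term")),
]

# Monthly sweep: destroying a class key 'shreds' every record of that
# class - without ever touching the tape in the vault.
del keys["retain-1y"]

readable = [toy_cipher(keys[c], ct) for c, ct in backup if c in keys]
assert readable == [b"keep long term"]
```

The operational win is that 'destroy at the right time' becomes an action on a handful of keys rather than on terabytes of media.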

 Er, no. An error in CBC wipes out only the following block. Errors do
 not propagate past that in CBC. This is not especially worse than the
 situation right now.

And in actual practice we do not see this in the real world. We -do- see
serious issues with the compression used inside the drives, though.
Specialists can help you - and the data you get back from them can then be
decrypted; the fact that it is opaque is not a problem for those recovery
experts.
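The limited error propagation of CBC quoted above - a corrupted ciphertext block garbles only its own plaintext block and flips the corresponding bits in the next - can be demonstrated with a toy block cipher (a real cipher behaves the same way at the block level; the 'cipher' here is deliberately trivial):

```python
BLOCK = 16

def toy_E(key: bytes, block: bytes) -> bytes:
    # Toy invertible 'block cipher': XOR with the key, then reverse bytes.
    return bytes(b ^ k for b, k in zip(block, key))[::-1]

def toy_D(key: bytes, block: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block[::-1], key))

def cbc_encrypt(key, iv, pt):
    ct, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = toy_E(key, bytes(a ^ b for a, b in zip(pt[i:i+BLOCK], prev)))
        ct.append(prev)
    return ct

def cbc_decrypt(key, iv, ct):
    pt, prev = [], iv
    for c in ct:
        pt.append(bytes(a ^ b for a, b in zip(toy_D(key, c), prev)))
        prev = c
    return pt

key, iv = b"K" * BLOCK, b"I" * BLOCK
pt = [b"block-0_________", b"block-1_________",
      b"block-2_________", b"block-3_________"]
ct = cbc_encrypt(key, iv, b"".join(pt))

# Simulate a tape error: flip one bit in ciphertext block 1.
ct[1] = bytes([ct[1][0] ^ 0x01]) + ct[1][1:]
out = cbc_decrypt(key, iv, ct)

assert out[0] == pt[0]   # before the error: intact
assert out[1] != pt[1]   # the damaged block: fully garbled
assert out[2] != pt[2]   # next block: same single bit flipped
assert out[3] == pt[3]   # everything after: intact again
```

So a single media error costs at most two blocks of plaintext - which, as noted, is no worse than what the drive's own compression already does to you.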

Dw.



Re: SSL/TLS passive sniffing

2004-12-05 Thread Dirk-Willem van Gulik


On Wed, 1 Dec 2004, Anne  Lynn Wheeler wrote:

 the other attack is on the certification authorities business process

Note that in a fair number of certificate-issuing processes common in
industry, the CA (sysadmin) generates both the private key -and- the
certificate, signs it, and then exports both to the user's PC (usually as
part of a VPN or Single Sign-On setup). I've seen situations more than
once where the 'CA' keeps a copy of both on file - generally to ensure
that after the termination of an employee, or the loss of a laptop, things
'can be set right' again.

Suffice to say that this makes eavesdropping even easier.

Dw



Re: Satellite eavesdropping of 802.11b traffic

2004-05-30 Thread Dirk-Willem van Gulik
On May 27, 2004, at 12:35 PM, John Kelsey wrote:
Does anyone know whether the low-power nature of wireless LANs 
protects them from eavesdropping by satellite?  Is there some simple 
reference that would easily let me figure out whether transmitters at 
a given power are in danger of eavesdropping by satellite?

If you assume a perfect vacuum (and note that the atmosphere is fairly
opaque at 2.4 GHz) and perfect antennas, etc., then the specific
detectivity needed in space suggests a not unreasonably sized (a few m2)
and cold antenna (below 180 K) with a very reasonable NEP, which is
commercially available. Given the noise from the earth background
(assuming a black-body radiator) at 2.4 GHz, the Sun, and the likelihood
that that largish antenna catches a fair chunk of exactly that, you are
at the edge of what would be realistic. However, with some clever tricks
and processing, like a phased array, you certainly should be able to at
least detect that a short (1-2 ms), 100 kHz wide, 2.4 GHz transmission
at 0.1 W is happening - assuming you know where to look. Listening in
over a country-sized swath for prolonged periods of time is an entirely
different story. Given that you then need to be at least 3-4 orders of
magnitude better - and that at best you get a square-root improvement
from the easy things like detector size - my guess would be that
something flying or earthbound is a heck of a lot cheaper and more
realistic.
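A back-of-envelope free-space link budget illustrates why this sits at the edge: the orbit altitude (500 km) is my assumption, and atmospheric absorption (which, as noted, is significant at 2.4 GHz) and antenna gains are deliberately ignored.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_dbm = 10 * math.log10(0.1 * 1000)   # 0.1 W transmitter -> +20 dBm
loss = fspl_db(500e3, 2.4e9)           # assumed 500 km low-earth orbit
rx_dbm = tx_dbm - loss                 # isotropic antennas, no absorption

print(f"path loss ~ {loss:.0f} dB, received ~ {rx_dbm:.0f} dBm")
```

That works out to roughly 154 dB of path loss and a received signal around -134 dBm before any antenna gain; a large cold dish and clever integration buy some of that back for a one-off detection, but the 3-4 extra orders of magnitude needed for wide-area continuous monitoring are exactly what makes it unattractive.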

There are some good papers on lidar and radar detection of clouds in the
3 GHz range at 12 km, which should give you more of an idea of the spatial
resolution you could accomplish. When looking at these, bear in mind that
the 2-3 kW used is reflected by the ice particles - so what gets back is
30-40 dB less - and that you can easily use a phase-locked loop amplifier.

Dw


Re: Verisign CRL single point of failure

2004-04-01 Thread Dirk-Willem van Gulik
On Jan 9, 2004, at 8:06 PM, Rich Salz wrote:

dave kleiman wrote:

Because the client has a Certificate Revocation Checking function 
turned on
in a particular app (i.e. IE or NAV).

I don't think you understood my question.  Why is crl.verisign.com 
getting overloaded *now.*  What does the expiration of one of their CA 
certificates have to do with it?  Once you see that a cert has 
expired, there's no need whatsoever to go look at the CRL.  The point 
of a CRL is to revoke certificates prior to their expiration.
Though I have no particular experience with the virus-scan software, we've
seen exactly this behavior with a couple of medical apps built on the same
libraries. Once any cert in the bundle has expired the software -insists-
on checking with the CRL at startup, and it will hang if it cannot. When
it does get the info back, it does not cache the (negative) information;
nor does that seem to trigger any clever automated roll-over. We tried
tricking it with flags like 'superseded' and cessationOfOperation in the
reasons/cert-status mask - but to no avail. The only workaround we've
found is to remove all expired certs from the system asap.

My guess is that it is just a bug in a library; albeit a commonly used one.

Dw.
