Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Pat Farrell
On 07/28/2010 08:44 PM, Steven Bellovin wrote:
 When I look at this, though, little of the problem is inherent to
 PKI.  Rather, there are faulty communications paths.
 
 You note that at t+2-3 days, the CA read the news.  Apart from the
 question of whether or not 2-3 days is "shortly after" -- the time
 you suggest the next step takes place -- how should the CA or Realtek
 know about the problem? 
 [snip]
 The point about the communications delay is that it's inherent to
 anything involving the source company canceling anything -- whether
 it's a PKI cert, a pki cert, a self-validating URL, a KDC, or magic
 fairies who warn sysadmins not to trust certain software.

While I'm quoting Steve, his comment really drives me to a bigger break.

I'd like to build on this and make a more fundamental change. The
concept of a revocation cert/message was based on the standard practices
for things like stolen credit cards in the early 1990s. At the time, the
credit card companies published telephone book sized listings of stolen
and canceled credit cards. Merchants had the choice of looking up each
card or accepting a potential for loss.

A lot of the smart card development in the mid-90s and beyond was based
on the idea that the smart card, in itself, was the sole authorization
token/algorithm/implementation.

How about we posit that there is networking everywhere? People carry
cell phones that are serious computers and are connected to serious
networks.

When was the last time you used a paper Yellow Pages?

How about thinking of a solution that addresses 98% of all transactions
for 98% of all people in the places where 98% of business is done? At
some point, the perfect is the enemy of the good. If you have a selling
hut in the middle of nowhere, well, you probably don't have a lot of
computer power either, so computing an RSA signature is out of
the question anyway.

A risk-based approach would have an algorithm that looks at the value of
the transaction. Buying a meal at a fast food place is not worth a lot
of effort, so the definition of "shortly after" can be a second or so.
Buying a new 3D TV can have a longer time allowance and
expected/acceptable response time, perhaps enough for automated
actuarial analysis. When you are signing a contract to buy a house, you
can take a day to verify that everything is proper.
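The tiers above can be sketched in a few lines; every dollar threshold and timing here is an illustrative assumption, not anything from a standard:

```python
def verification_budget_seconds(amount_usd: float) -> int:
    """Return how long we are willing to spend verifying credentials
    (revocation status, counterparty checks) before approving a
    transaction of this value. Thresholds are invented for illustration."""
    if amount_usd < 20:        # fast food: "shortly after" is a second or so
        return 1
    if amount_usd < 5_000:     # a new 3D TV: time for automated checks
        return 60
    return 24 * 3600           # a house: take a day to verify everything

print(verification_budget_seconds(8))        # a meal
print(verification_budget_seconds(1_800))    # a TV
print(verification_budget_seconds(400_000))  # a house
```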

We have fast computers and ubiquitous networking. Why are we still
thinking about systems based on three-inch-thick paper books?

We seem to be solving a problem that no longer exists when you look at
it from first principles.

Pat

-- 
Pat Farrell
http://www.pfarrell.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fwd: Introduction, plus: Open Transactions -- digital cash library

2010-07-29 Thread Ian G

Hi Bob,

On 28/07/10 9:08 PM, R.A. Hettinga wrote:

Anyone out there with a coding.clue wanna poke inside this thing and see if 
it's an actual bearer certificate -- and not yet another book-entry --  
transaction system?


Sorry to get your hopes up ... just reading the words below, not the 
code:  it is basically modelled on the SOX/Ricardo concepts, AFAICS.


As you know, the SOX concept used (PGP) keys to make an account with the 
server/issuer Ivan, i.e., a long-term persistent relationship; call them 
Alice and Bob.  DigiCash had something like this too; it's 
essential for application robustness.


The simplest payments metaphor then is a signed instruction to transfer 
from Alice to Bob, which Ivan follows by issuing a signed receipt.  It is 
what you'd call double entry, but in Ricardo it is distinct enough to deserve 
the moniker triple-entry (not triple-signed, which is something different, 
another possible innovation).
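A minimal sketch of that flow, with HMACs standing in for real digital signatures and all keys and field names invented for illustration: Alice signs a transfer instruction, and Ivan's signed receipt embeds it, giving all three parties one shared record (the triple entry):

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    # HMAC is a stand-in for a real public-key signature (simplification).
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

alice_key, ivan_key = b"alice-secret", b"ivan-secret"

# Alice's signed instruction to Ivan: pay Bob 100 units.
instruction = {"from": "alice", "to": "bob", "amount": 100}
signed_instruction = {"body": instruction, "sig": sign(alice_key, instruction)}

# Ivan's signed receipt embeds Alice's signed instruction: one record,
# held identically by Alice, Bob, and Ivan -- the "triple entry".
receipt = {"body": signed_instruction, "sig": sign(ivan_key, signed_instruction)}

# Anyone holding the keys can re-verify both layers.
assert receipt["sig"] == sign(ivan_key, receipt["body"])
assert receipt["body"]["sig"] == sign(alice_key, receipt["body"]["body"])
```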


Then, the blinding formula/transaction is simply a replacement for the 
standard payments transaction above:  Alice withdraws a coin from Ivan 
and sends it to Bob, who deposits it with Ivan.


(Ricardo had Wagner too from around 2001, and like this author, had a 
path to add Chaum, with future extension to Brands.  The code for Chaum 
was mostly written, but wasn't factored correctly...)


Another possible clue:  the author has obviously taken on board the 
lessons of the Ricardian Contract form, and put that in there (albeit in 
XML).  I find that very encouraging; even the guys from DigiCash never 
understood that one!  So I'm guessing that they have studied their stuff.


BTW, FTR, I do not know who this is.


Cheers,
RAH
Who sees lucre down there in the mousetype and takes heart...



Lucre was 1-2k lines.  One's heart beats blood into thin air until there 
are another 1-2 orders of magnitude of body parts built on ...  This is 
looking much more like that 1-2 orders of magnitude down the track.




iang



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Alexandre Dulaunoy
On Thu, Jul 29, 2010 at 3:09 AM, Nicolas Williams
nicolas.willi...@oracle.com wrote:

 This is a rather astounding misunderstanding of the protocol.  An
 OCSPResponse does contain unauthenticated plaintext[*], but that
 plaintext says nothing about the status of the given certificates -- it
 only says whether the OCSP Responder was able to handle the request.  If
 a Responder is not able to handle requests it should respond in some
 way, and it may well not be able to authenticate the error response,
 thus the status of the responder is unauthenticated, quite distinctly
 from the status of the certificate, which is authenticated.  Obviously
 only successful responses are useful.

I agree on this, but the implementation of OCSP has to deal with
all non-definitive (to take the wording of the RFC) answers. That's
where the issue is. The exception cases mentioned in 2.3 are
all unauthenticated, and it seems rather difficult to provide an
authenticated scheme for that part, as you already mentioned in [*].

That's why malware authors are already adding fake entries for OCSP
servers in the hosts file... simple and efficient.

I just wanted to raise the point that a model like PKI relying on a complex
scheme for security will easily fail at the obvious, as the attacker
often chooses the shortest/fastest path to reach his goal.


 [*] It's not generally possible to avoid unauthenticated plaintext
    completely in cryptographic protocols.  The meaning of a given bit
    of unauthenticated plaintext must be taken into account when
    analyzing a cryptographic protocol.

-- 
--                   Alexandre Dulaunoy (adulau) -- http://www.foo.be/
--                             http://www.foo.be/cgi-bin/wiki.pl/Diary
--         "Knowledge can create problems, it is not through ignorance
--                that we can solve them" -- Isaac Asimov



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread James A. Donald

On 2010-07-29 12:18 AM, Peter Gutmann wrote:

This does away with the need for a CA,
because the link itself authenticates the cert that's used.

Then there are other variations, cryptographically generated addresses, ...
all sorts of things have been proposed.

The killer, again, is the refusal of any browser vendor to adopt any of it.


BitTorrent links have this property.  A typical BitTorrent link looks 
like 
magnet:?xt=urn:btih:2ac7956f6d81bf4bf48b642058d31912479d8d8e&dn=South+Park+S14E06+201+HDTV+XviD-FQM+%5Beztv%5D&tr=http%3A%2F%2Fdenis.stalker.h3q.com%3A6969%2Fannounce


It is the equivalent of an immutable file in Tahoe.
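The self-authenticating property is easy to sketch: the btih field is the SHA-1 of the torrent's info dictionary, so the payload can be checked against the link itself, with no CA in the loop (bencoding details omitted; the toy payload below is invented for illustration):

```python
import hashlib

def verify_magnet_payload(btih_hex: str, info_bytes: bytes) -> bool:
    """True iff the fetched info dictionary hashes to the value embedded
    in the magnet link -- the link authenticates the content."""
    return hashlib.sha1(info_bytes).hexdigest() == btih_hex.lower()

# Toy payload standing in for a real bencoded info dictionary.
info = b"d4:name8:examplee"
link_hash = hashlib.sha1(info).hexdigest()

print(verify_magnet_payload(link_hash, info))          # genuine payload
print(verify_magnet_payload(link_hash, b"tampered"))   # altered payload
```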



In the case of FF someone actually wrote the code for them, and it was
rejected.  Without support from browser vendors, it doesn't matter what cool
ideas people come up with, it's never going to get any better.


The browser vendors are married to the CAs.



Re: deliberately crashing ancient computers (was: Re: A mighty fortress is our PKI)

2010-07-29 Thread Jerry Leichter

On Jul 28, 2010, at 11:04 AM, Jonathan Thornburg wrote:

http://www.crashie.com/ - if you're feeling malicious, just include
the one-line JavaScript that will make IE6 crash; maybe eventually the
user will figure it out. (Or maybe not.)


Please stop and think about the consequences before using something
like this!  People who are still using IE6, Windows 95, etc, are
usually doing so for reasons which make sense in their lives

I agree 100% with the statement that deliberately crashing other  
people's computers is inappropriate.  Don't do that.


But the reasons you give for why there are still IE6 installations out  
there (low computer literacy, slow connections, etc.) aren't quite  
right.  Apparently there are many internally-developed applications at  
companies that are IE6-only.  Often, these were developed by outside  
consultants for customers who have no internal development staff.   
These things keep the business running, and replacing them would be a  
large expense that the companies involved are not in a position to  
incur.


One of the biggest and most visible of such applications was the one  
that the national realtors' organization used to allow its members to  
get access to listings.  They resisted doing anything about that for  
many years.  (I understand that within the last year or so, they  
finally had to respond to complaints from their members and redo the  
site.)


It will be many years before these internal applications disappear.   
They are in a class similar to embedded systems, where replacement of  
working stuff is almost never done, and support obligations on  
long-obsolete software run for decades.  Microsoft would love to  
forget that IE6 ever existed - what was once their way of dominating  
much of the Internet has turned into a millstone around their necks;  
but they can't.  (Analogies to The Ring of Sauron come to mind)


An interesting benefit that some of the businesses with IE6-only  
internal software are finding is that, if they keep their employees'  
machines IE6-only, their employees are increasingly unable to access  
most Internet sites.  Talk about perverse incentives


-- Jerry



SHA256 reduced to 112 bits?

2010-07-29 Thread Paul Wouters


Hi,

I've heard rumors of an attack on the SHA-2 family reducing the complexity of 
SHA-256 to something less than or equal to 112 bits.


This attack will apparently be announced in a few days - perhaps at Black Hat or
Def Con?

I would be interested in knowing more.

Paul



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Anne Lynn Wheeler

On 07/28/2010 11:52 PM, Pat Farrell wrote:

I'd like to build on this and make a more fundamental change. The
concept of a revocation cert/message was based on the standard practices
for things like stolen credit cards in the early 1990s. At the time, the
credit card companies published telephone book sized listings of stolen
and canceled credit cards. Merchants had the choice of looking up each
card or accepting a potential for loss.

A lot of the smart card development in the mid-90s and beyond was based
on the idea that the smart card, in itself, was the sole authorization
token/algorithm/implementation.



that was one of my points ridiculing PKI in the mid-90s ... that the CRL was a 
return to offline point-of-sale payment operation ... and seemed to motivate 
the work on OCSP.

The difference was that in the move to real-time online transactions ... it got 
a much higher quality of operation ... not only could it establish real-time 
valid/not-valid ... but also other real-time characteristics like real-time 
credit limit, recent pattern of transactions, and much more. By comparison, 
OCSP was an extremely poor man's real-time, online transaction.

smartcard payment cards started out being stand-alone stored-value to 
compensate for the extremely expensive and limited availability of 
point-of-sale connectivity in much of the world ... i.e., it was a stored-value 
operation that could be performed purely offline (the incremental cost of the 
smartcard chip was offset by the savings from not requiring a realtime, online 
transaction).

The telco economics didn't apply to the US ... as seen by the introduction of 
stored-value magstripe-based payment cards in the US that did real-time, online 
transactions ... which served the same market niche that the offline smartcard was serving 
in other parts of the world. Between the mid-90s and now, telco costs & connectivity have 
significantly changed around the world ... pervasive ubiquity of the internet, cellphone 
coverage, wireless, ... lots of things.

The common scenario in the past couple decades ... was looking to add more & 
more feature/function to smartcards to find the magical economic justification ... 
unfortunately, the increase in feature/function tended to also drive cost ... 
keeping the break-even point just out of reach.

Part of the certificateless public key work was to look at chips as a cost item (rather 
than a profit item ... since lots of the smartcard work was driven by entities looking to 
profit by smartcard uptake). The challenge was something that had stronger integrity 
than the highest-rated smartcard but at an effective fully loaded cost below magstripe (i.e. I 
had joked about taking a $500 milspec part and cost-reducing it by 3-4 orders of magnitude 
while improving the integrity). Another criterion was that it had to work within the 
time & power constraints of a (ISO 14443) contactless transit turnstile ... while 
not sacrificing any integrity & security.

By comparison ... one of the popular payment smartcards from the 90s looked at the transit 
turnstile issue ... and proposed a wireless sleeve for their contact card ... and 15-ft 
electromagnetic tunnels on the approach to each transit turnstile ... where the public 
would walk slowly thru the tunnel ... so that the transaction would have completed by the time the 
turnstile was reached.

Part of achieving lower aggregate cost than magstripe ... was that even after 
extremely aggressive cost reduction, the unit cost was still 2-3 times that of 
magstripe ... however, if the issuing frequency could be reduced (for chip)... 
it was more than recouped (i.e. magstripe unit cost is possibly only 1% of 
fully loaded issuing costs). Changing the paradigm from institutional-centric 
(i.e. institution issued) to person-centric (i.e. person uses the same unit for 
multiple purposes and with multiple institutions) ... saves significant amount 
more (replaces an issuing model with a registration model).

Turns out a supposedly big issue for a transition from an institution-centric (institution issuing) 
to person-centric paradigm ... was addressing how the institution can trust the unit 
being registered. Turns out that trust issue may have been obfuscation ... after 
providing a solution to institution trust ... there was continued big push back to moving off an 
institutional issuing (for less obvious reasons) ... some of the patent stuff (previous mentions) 
covered steps for moving to person-centric paradigm (along with addressing institutional trust 
issues). Part of it involved tweaking some of the processes ... going all the way back to while the 
chip was still part of wafer (in chip manufacturing ... and doing the tweaks in such a way that 
didn't disrupt standard chip manufacturing ... but at the same time reduced steps/costs).

--
virtualization experience starting Jan1968, online at home since Mar1970


Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread StealthMonger
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Jerry Leichter leich...@lrw.com writes:

 The only conceivable purpose for using a signature is that you can
 check it *offline*.  If you assume you can connect to the network,
 and that you can trust what you get from the network - why bother
 with a signature?  Simply check a cryptographic hash of the driver
 against an on-line database of known good drivers.

 This is right in line with Lynn Wheeler's frequent mention here that
 the use case for offline verification of certs for commerce
 basically doesn't exist.  It was a nice theory to develop 30 years
 ago, but today the rest of the framework assumes connectivity, and
 you buy nothing but additional problems by focusing on making just
 one piece work off-line.

Not quite.

Untraceable anonymity and untraceable pseudonymity remain among the
important applications of cryptography, and both depend on
store-and-forward anonymizing networks which mix traffic by using high
random latency.

The saving qualifier for your assertion is "for commerce".  True,
there is not yet a way to securely transmit and store commercial value
(money) offline, but it has not been proven impossible.

For these applications, the security has to be in the message, not the
connection.  Offline verification is essential.


 -- StealthMonger
 stealthmon...@nym.mixmin.net

 --
   stealthmail: Scripts to hide whether you're doing email, or when,
   or with whom.
 mailto:stealthsu...@nym.mixmin.net

Finger for key.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8+ http://mailcrypt.sourceforge.net/

iEYEARECAAYFAkxReuIACgkQDkU5rhlDCl7izQCfXuxcHdDT5c54EpATviI+PXCO
MFEAoI62kO/DZcwkw++BpQ4Ey5jTVro6
=6mIw
-END PGP SIGNATURE-



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 10:50:10AM +0200, Alexandre Dulaunoy wrote:
 On Thu, Jul 29, 2010 at 3:09 AM, Nicolas Williams
 nicolas.willi...@oracle.com wrote:
  This is a rather astounding misunderstanding of the protocol.  [...]
 
 I agree on this, but the implementation of OCSP has to deal with
 all non-definitive (to take the wording of the RFC) answers. That's
 where the issue is. The exception cases mentioned in 2.3 are
 all unauthenticated, and it seems rather difficult to provide an
 authenticated scheme for that part, as you already mentioned in [*].
 
 That's why malware authors are already adding fake entries for OCSP
 servers in the hosts file... simple and efficient.

A DoS attack on OCSP clients (which is all this really is) should either
cause the clients to fallback on CRLs or to fail the larger operation
(TLS handshake, whatever) altogether.  The latter makes this just a DoS.
The former makes this less than a DoS.

The real risk would be OCSP clients that don't bother with CRLs if the OCSP
Responder can't respond successfully, but which proceed anyway as if
peers' certs were valid.  If there exist such clients, don't blame OCSP.
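The policy described above might be sketched as follows (function names and status strings are invented; a real client would be driven by X.509/OCSP machinery):

```python
def cert_status(ocsp_lookup, crl_lookup, hard_fail=True):
    """Try OCSP first, fall back to CRLs; each lookup returns "good",
    "revoked", or None when no authenticated answer was obtained."""
    for lookup in (ocsp_lookup, crl_lookup):
        status = lookup()
        if status is not None:
            return status
    # Neither source answered. Hard-fail turns the attack into a mere
    # DoS; soft-fail (proceeding as if valid) is the real risk.
    return "revoked-by-policy" if hard_fail else "assumed-good"

# A blocked OCSP responder (e.g. a faked hosts-file entry) degrades to CRLs:
print(cert_status(lambda: None, lambda: "good"))   # CRL fallback
print(cert_status(lambda: None, lambda: None))     # hard-fail
```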

Nico
-- 



Obama administration seeks warrantless access to email headers.

2010-07-29 Thread Perry E. Metzger
Quoting:

  The administration wants to add just four words -- "electronic
  communication transactional records" -- to a list of items that the
  law says the FBI may demand without a judge's approval. Government
  lawyers say this category of information includes the addresses to
  which an Internet user sends e-mail; the times and dates e-mail was
  sent and received; and possibly a user's browser history. It does not
  include, the lawyers hasten to point out, the content of e-mail or
  other Internet communication.

https://www.washingtonpost.com/wp-dyn/content/article/2010/07/28/AR2010072806141.html

-- 
Perry E. Metzger  pe...@piermont.com



Persisting /dev/random state across reboots

2010-07-29 Thread Richard Salz
At shutdown, a process copies /dev/random to /var/random-seed which is 
used on reboots.
Is this a good, bad, or shrug, whatever idea?
I suppose the idea is that all startup procs look the same ?

tnx.

--
STSM, WebSphere Appliance Architect
https://www.ibm.com/developerworks/mydeveloperworks/blogs/soma/



Re: A slight modification of my comments on PKI.

2010-07-29 Thread Anne Lynn Wheeler

On 07/28/2010 10:34 PM, d...@geer.org wrote:

The design goal for any security system is that the number of
failures is small but non-zero, i.e., N > 0.  If the number of
failures is zero, there is no way to disambiguate good luck
from spending too much.  Calibration requires differing outcomes.
Regulatory compliance, on the other hand, stipulates N==0 failures
and is thus neither calibratable nor cost effective.  Whether
the cure is worse than the disease is an exercise for the reader.


another design goal for any security system might be security proportional to 
risk. the major use of SSL in the world today is hiding financial transaction 
information ... currently mostly credit card transactions. One of the issues is that the 
value of the transaction information to the merchants (paying for majority of the 
infrastructure) is the transaction profit ... which can be a dollar or two. The value of 
the transaction information to the attackers is the associated account limit/balance, 
which can be several hundred to several thousand dollars. This results in a situation 
where the attackers can afford to outspend the defenders by 100 times or more.

somewhat because of the work on the current payment transaction infrastructure 
(involving SSL, by the small client/server startup that had invented SSL), in 
the mid-90s, we were invited to participate in the x9a10 financial standard 
working group (which had been given the requirement to preserve the integrity 
of the financial infrastructure for all retail payments). the result was the 
x9.59 financial transaction standard. Part of the x9.59 financial transaction 
standard was slightly tweaking the paradigm and eliminating the value of the 
transaction information to the attackers ... which also eliminates the major 
use of SSL in the world today. It also eliminates the motivation behind the 
majority of the skimming and data breaches in the world (attempting to obtain 
financial transaction information for use in performing fraudulent financial 
transactions). Note that x9.59 didn't do anything to prevent attacks on SSL, 
skimming attacks, data breaches, etc ... it just eliminated the
major criminal financial motivation for such attacks.

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Thierry Moreau

Richard Salz wrote:
At shutdown, a process copies /dev/random to /var/random-seed which is 
used on reboots.

Is this a good, bad, or shrug, whatever idea?
I suppose the idea is that all startup procs look the same ?

tnx.


First look at http://en.wikipedia.org/wiki/Urandom

There is a tremendous value in the Linux kernel technology, including 
extensive peer review from an IT security perspective.


If you think there are security requirements not met (e.g. assurance of 
entropy characteristics, assurance of implementation configuration 
sanity), then you should state your design goals. Only then can we get 
an understanding of good, bad, or, more relevantly, improved.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 03:47:01PM -0400, Richard Salz wrote:
 At shutdown, a process copies /dev/random to /var/random-seed which is 
 used on reboots.
 Is this a good, bad, or shrug, whatever idea?

If the entropy pool has other, reasonable/fast sources of entropy at
boot time, then seeding the entropy pool at boot time with a seed
generated at shutdown time is harmless (assuming a good enough entropy
pool design).  Otherwise, this approach can be a good idea (see below).

 I suppose the idea is that all startup procs look the same ?

The idea is to get enough entropy into the entropy pool as fast as
possible at boot time, faster than the system's entropy sources might
otherwise allow.

The security of a system that works this way depends critically on
several things: a) no one reads the seed between the time it's generated
and the time it's used to seed the entropy pool, b) the seed cannot be
used twice accidentally, c) the system can cope with crashes (i.e., no
seed at boot) such as by blocking reads of /dev/random and even
/dev/urandom until enough entropy is acquired, d) the entropy pool
treats the seed as entropy from any other source and applies the normal
mixing procedure to it, e) there is a way to turn off this chaining of
entropy across boots.  (Have I missed anything?)
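Requirements (b)-(d) above can be sketched as a small boot-time routine; the toy pool and file path handling are illustrative, and a real implementation lives in the kernel and init scripts:

```python
import os
import tempfile

def reseed_at_boot(pool_mix, seed_path):
    """Consume the saved seed exactly once, mix it like any other entropy
    source (d), and write a fresh seed immediately -- not at shutdown --
    so it can't be reused (b) and a crash still leaves one behind (c)."""
    try:
        with open(seed_path, "rb") as f:
            seed = f.read()
        os.remove(seed_path)          # (b): the old seed is never used twice
    except FileNotFoundError:
        seed = b""                    # (c): crash/no clean shutdown, no seed
    if seed:
        pool_mix(seed)                # (d): normal mixing, not direct output
    with open(seed_path, "wb") as f:
        f.write(os.urandom(64))       # next boot's seed, written early

# Toy demo: the "pool" just records what got mixed in.
mixed = []
path = os.path.join(tempfile.mkdtemp(), "random-seed")
reseed_at_boot(mixed.append, path)    # first boot: nothing to consume
reseed_at_boot(mixed.append, path)    # second boot: consumes saved seed
print(len(mixed), len(mixed[0]))
```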

(a) can't really be ensured.  But one could be sufficiently confident
that (a) is true that one would want to enable this.  (d) means that
every additional bit of entropy obtained from other sources at boot time
will make it harder for an attacker that managed to read this seed to
successfully mount any attacks on you.  (e) would be for the paranoid;
for most users, most of the time, chaining entropy across reboots is
probably a very good idea.  But most importantly, on-CPU RNGs should
make this totally pointless (see previous RNG-on-CPU threads).

Nico
-- 



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Paul Wouters

On Thu, 29 Jul 2010, Richard Salz wrote:


At shutdown, a process copies /dev/random to /var/random-seed which is
used on reboots.
Is this a good, bad, or shrug, whatever idea?
I suppose the idea is that all startup procs look the same ?


Better than not.

A lot of (pseudo)randomness comes from disk or network interrupts. These are often
similar during stock system startup. It is even more important if there is no
harddisk but a flashdisk, which does not contribute to the entropy of the system. This
was a big issue for OpenWrt (a Linux distribution for Linksys routers), which booted so
similarly every time that there was not enough randomness left at all.

By saving the entropy from a longer-running system at shutdown, you increase the
entropy of the next boot by adding randomness from the previous state(s).

Paul



Re: A slight modification of my comments on PKI.

2010-07-29 Thread Anne Lynn Wheeler

for the fun of it ... from today ...

Twenty-Four More Reasons Not To Trust Your Browser's Padlock
http://blogs.forbes.com/firewall/2010/07/29/twenty-four-more-reasons-not-to-trust-your-browsers-padlock/?boxes=Homepagechannels

from above:

On stage at the Black Hat security conference Wednesday, Hansen and Sokol 
revealed 24 new security issues with SSL and TLS, the digital handshakes that 
browsers use to assure users they're at a trusted site and that their 
communication is encrypted against snoops.

... snip ...

adding further fuel to the long-ago motivation that prompted me to coin the term 
"merchant comfort certificates".

... as an aside, we were tangentially involved in the cal. data breach notification legislation. We had 
been brought in to help wordsmith the cal. electronic signature act ... and some of the participants 
were heavily involved in privacy issues. They had done in-depth consumer privacy studies, and the number 
one issue that came up was identity theft, namely the account fraud form where criminals 
use account and/or transaction information (from data breaches) to perform fraudulent financial 
transactions. It appeared that little or nothing was being done about such data breaches ... and they 
appeared to believe that the publicity from the data breach notifications would motivate corrective 
action to be taken (and as mentioned in a previous post ... we took a slightly different approach to the 
problem in the x9.59 financial transaction standard ... eliminating the ability of crooks to use such 
information for fraudulent transactions).

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Persisting /dev/random state across reboots

2010-07-29 Thread John Denker
On 07/29/2010 12:47 PM, Richard Salz wrote:

 At shutdown, a process copies /dev/random to /var/random-seed which is 
 used on reboots.  [1]

Actually it typically copies from /dev/urandom not /dev/random,
but we agree, the basic idea is to save a seed for use at the
next boot-up.

 Is this a good, bad, or shrug, whatever idea?

Before we can answer that, we must have a brief "what's your 
threat model" discussion.  As always, I define random to 
mean random enough for the application.  The demands vary 
wildly from application to application.  Interesting use cases
include:
 a) low-grade randomness: For non-adversarial applications
  such as Monte Carlo integration of a physics problem, 
  almost any RNG will do.
 b) high-grade randomness: For high-stakes adversarial
  applications, including crypto and gaming, I wouldn't trust 
  /dev/urandom at all, and details of how it gets seeded are 
  just re-arranging the deck chairs on the Titanic.

Discussion:

A) For low-grade applications, procedure [1] is well suited.
It is clearly better than nothing, although it could be
improved.

A conspicuous weakness of procedure [1] is that it gets skipped
if the machine goes down due to a software fault, or power
failure, or really anything other than an orderly shutdown.

Rather than writing the file at the last possible opportunity,
it would make at least as much sense to write it much earlier, 
perhaps immediately after boot-up.

B)  At the other extreme, for high-grade applications, /dev/random
is not (by itself) good enough, and asking how it gets seeded is 
just re-arranging deck chairs on the Titanic.

Tangential remark:  It would be nice if the random-seed file 
could be written in a way that did not deplete the amount of
entropy stored in /dev/random ... but given the existing linkage 
between /dev/random and /dev/urandom it cannot.  On the other 
hand, if you have reason to worry about this issue, you shouldn't 
be using /dev/urandom at all anyway.  Remember what von Neumann 
said about living in sin.

Tangential remark:  You could worry about how carefully we
need to read-protect the random-seed file (and all backups
thereof).  But again, if you are worried at that level of
detail, you shouldn't be using a PRNG anyway.  If it needs 
a seed, you are living in sin.

Constructive suggestion:  Use something like Turbid:
  http://www.av8n.com/turbid/
i.e. something that generates a steady stream of honest-to-
goodness entropy.

If you are not sure whether you need Turbid, you ought to use 
it.  It's cheap insurance.  The cost of implementing Turbid is 
very small compared to the cost of proving you don't need it.
