Re: Security by asking the drunk whether he's drunk

2008-12-27 Thread Jerry Leichter

On Dec 26, 2008, at 2:39 AM, Peter Gutmann wrote:


d...@geer.org writes:

I'm hoping this is just a single instance but it makes you remember
that the browser pre-trusted certificate authorities really need to be
cleaned up.


Given the more or less complete failure of commercial PKI for both SSL
web browsing and code-signing (as evidenced by the multibillion-dollar
cybercrime industry freely doing all the things that SSL certs and
code-signing were supposed to prevent them from doing), it's not so
much "cleaned up" as replaced with something that may actually work.

I just had an interesting experience with a different sort of failure:
I tried to buy a DVD from The Teaching Company (www.teach12.com).  When
I went to check out - or even when I connect to the top level at
https://www.teach12.com - I get a complaint that their cert is signed
by an unknown authority.  It turns out that they recently put an EV
certificate in place.  It's issued by "VeriSign Class 3 Extended
Validation SSL SGC CA" - which neither Safari 3.2.1 nor Firefox 3.0.5
on my Mac has ever heard of!
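
For anyone who wants to reproduce this kind of check, here's a minimal
sketch using Python's standard ssl module (an assumption on my part; any
TLS client that validates against the local trust store will do).  A
failure of the "unable to get local issuer certificate" variety usually
means the server isn't sending the intermediate CA certificate, rather
than that the root itself is untrusted:

    import socket, ssl

    host = "www.teach12.com"
    ctx = ssl.create_default_context()   # validates against the platform trust store
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("chain validated; peer subject:", tls.getpeercert()["subject"])
    except ssl.SSLCertVerificationError as err:
        # "unable to get local issuer certificate" typically points to a
        # missing intermediate cert rather than an untrusted root.
        print("validation failed:", err.verify_message)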


I got in touch with the company and actually received intelligent
responses both at their 800 number - I placed my order that way - and
from their customer service people.  Most remarkable - almost all
organizations ignore such communication.  It's ironic that those who
appear to be trying the hardest are being screwed over by the system
that's currently in place - and will inadvertently be involved in
training users to simply bypass yet another kind of bad cert warning.


(I can highly recommend the courses that The Teaching Company  
distributes, by the way.  I usually borrow them from the library, but  
I've bought a few of the best here and there - especially when they  
have sales, as they do right now.)


-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Security by asking the drunk whether he's drunk

2008-12-25 Thread Jerry Leichter

 Just one minor observation:

On Dec 22, 2008, at 5:18 AM, Peter Gutmann wrote:

This leads to a scary rule of thumb for defenders:

1. The attackers have more CPU power than any legitimate user will ever
   have, and it costs them nothing to apply it.  Any defence based on
   resource consumption is in trouble.

2. The attackers have more money than any legitimate user will ever
   have, and it costs them nothing to apply it.  Any defence built
   around financial outlay as a limiting factor is in trouble.

   Corollary: Systems that can't defend themselves against a situation
   where the financial cost of any operation (for example registering a
   new account) is effectively zero are in trouble.

This one is a bit more complicated.  Attackers have access to large
amounts of money *in relatively small units*.  No matter how many
credit card accounts you steal, it would be pretty much impossible to
create an actual, properly populated, physical storefront in a decent
shopping area.  You can be fairly confident that a physical store is
what it appears to be.


Granted, what you're discussing is on-line fraud.  My point is that
this is yet another difference between the on-line and brick-and-mortar
worlds, and one that leads us astray when we try to apply our
real-world reasonableness filters to the on-line world.  There are
many inter-related elements here.  Perhaps the biggest factor is
*time*:  On-line frauds can be set up, draw in victims, and disappear
very quickly - only to reappear someplace else.  This allows them to be
built using what is effectively the float on stolen identities - much
of which will be found and revoked by the end of a billing cycle.  The
real world has much more inertia - there are many steps involved in
building out a physical storefront, they take time, and your money has
to be good across that entire time.  Note that many real-world frauds
rely on the ability to short-cut what are normally time-consuming
procedures and disappear before the controls can kick in.  (Think of
check kiting, or of the guys from what appear to be long-established
local paving companies that pave your driveway with cheap oil and are
gone by the next morning.)


EV certificates (unsuccessfully) attempt to bring some of this real- 
world checking on line:  They are expensive, and you have to pay in  
one lump.  They're not going to accept a bunch of credit cards.  They  
check your identity, which if done right takes time *and indirectly  
checks that you actually have a history*.  Of course, the actual  
practice is different and, given the incentives in the industry -  
where there is no penalty for giving out an invalid EV certificate,  
and a reward for getting the job done quickly - this is all illusion.


Long-running frauds, while certainly not unknown (hello, Bernie
Madoff), are relatively rare:  Every day out there is another chance
to get caught.  The preferred mode of fraud will always be "get 'em
hooked, fleece 'em, get out of town" - as fast as you can.  Can we get
some of the advantages of this real-world fact in the on-line world?
The best example I know of is CMU's Perspectives effort:  If something
looks the same to many observers over a period of time, it's more
likely to be trustworthy.  Of course, if this kind of thing catches
on, it will be much harder for a startup to gain instant recognition.
The Internet's need for speed isn't compatible with safety.  Some
tradeoffs are inevitable.
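
As a toy illustration of the multi-observer idea - not the actual
Perspectives protocol, which also weighs how long each notary has seen
a key - a client-side check might look like this sketch (the notary
interface here is purely hypothetical):

    from collections import Counter

    def consistent_over_observers(fingerprints, quorum=0.9):
        # `fingerprints` is a list of certificate fingerprints for one site,
        # as reported by independent observers over some time window.  Trust
        # the key only if a large majority saw the same value.
        value, count = Counter(fingerprints).most_common(1)[0]
        return value if count / len(fingerprints) >= quorum else None

    print(consistent_over_observers(["ab:12"] * 9 + ["ff:00"]))      # -> ab:12
    print(consistent_over_observers(["ab:12"] * 5 + ["ff:00"] * 5))  # -> None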


-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs and assurance...

2008-12-18 Thread Jerry Leichter

On Dec 17, 2008, at 3:18 PM, Perry E. Metzger wrote:


I'd like to expand on a point I made a little while ago about the "just
throw everything at it, and hope the good sources drown out the bad
ones" entropy collection strategy.

The biggest problem in security systems isn't whether you're using 128
bit or 256 bit AES keys or similar trivia. The biggest problem is the
limited ability of the human mind to understand a design. This leads
to design bugs and implementation bugs. Design and implementation
flaws are the biggest failure mode for security systems, not whether
it will take all the energy in our galaxy vs. the entire visible
universe to brute force a key.

So, if you're designing any security system, the biggest thing on your
mind has to be how to validate that the system is secure. That
requires ways to know your design was correct, and ways to know you
actually implemented your design correctly.

Excellent points.

For the particular case of random generators based on mixing multiple  
sources, I would suggest that there are some obvious - if, apparently,  
little-used - testing strategies that will eliminate the most common  
failure modes:


1.  Test the combiner.  The combiner is a deterministic function.  If
you give it known inputs, the results will always be the same.  The
result is supposed to depend sensitively on all the inputs, so if you
change any input, you should get very different outputs.  This kind of
testing would have avoided the Debian fiasco.
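
A sketch of what such a test might look like, with a plain SHA-256 hash
standing in for whatever combiner the real generator actually uses:

    import hashlib

    def combine(inputs):
        # Stand-in combiner: SHA-256 over length-prefixed inputs.
        h = hashlib.sha256()
        for chunk in inputs:
            h.update(len(chunk).to_bytes(4, "big"))
            h.update(chunk)
        return h.digest()

    def test_combiner():
        base = [b"source-A" * 4, b"source-B" * 4, b"source-C" * 4]
        reference = combine(base)
        # Known-answer property: same inputs, same output, every time.
        assert combine(base) == reference
        # Sensitivity property: flip one bit of any single input and roughly
        # half of the 256 output bits should change.
        for i in range(len(base)):
            tweaked = list(base)
            tweaked[i] = bytes([tweaked[i][0] ^ 1]) + tweaked[i][1:]
            diff = sum(bin(a ^ b).count("1")
                       for a, b in zip(combine(tweaked), reference))
            assert 80 < diff < 176, "input %d barely affects the output" % i
        print("combiner known-answer and sensitivity checks passed")

    test_combiner()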


Note that knowing you have to write such a test will also discourage
throwing in all sorts of complexity you don't understand "because it
can't hurt."  It can, and has.


2.  There are many tests you can apply that will detect
*non*-randomness.  Test the *inputs* to your combiner.  If an input
consistently fails, think about whether it's adding enough value to be
worth the complexity.  If your inputs normally succeed and start
failing ... something is wrong.
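
Even something as crude as a monobit test will catch a source that has
silently gone constant; a real deployment would of course run a fuller
battery (runs, chi-square, and so on).  A sketch:

    import math, os

    def looks_nonrandom(sample: bytes, alpha=1e-6) -> bool:
        # Crude monobit test: is the ones/zeros balance wildly improbable
        # for a uniform source?  Under the null hypothesis the number of
        # one bits is Binomial(n, 1/2); use the normal approximation.
        n = len(sample) * 8
        ones = sum(bin(b).count("1") for b in sample)
        z = abs(ones - n / 2) / math.sqrt(n / 4)
        p_value = math.erfc(z / math.sqrt(2))
        return p_value < alpha

    print(looks_nonrandom(b"\x00" * 1024))    # True  -> this "source" is dead
    print(looks_nonrandom(os.urandom(1024)))  # False (with overwhelming probability)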


Since it's cheap to do, you might as well apply the same test to the  
output of the combiner - but don't expect to learn anything:  With any  
decent combiner, even fixed inputs should produce random-looking  
output.  So any problem detected this way is very serious.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-17 Thread Jerry Leichter

On Dec 16, 2008, at 12:10 PM, Simon Josefsson wrote:

...I agree with your recommendation to write an AES key to devices at
manufacturing time.  However it always comes with costs, including:

1) The cost of improving the manufacture process sufficiently well to
make it unlikely that compromised AES keys are set in the factory.

2) The cost of individualizing each device.

Each of these costs can be high enough that alternative approaches can
be cost-effective. (*) My impression is that the cost and risks in 1)
are often under-estimated, to the point where they can become a
relatively cheap attack vector.

/Simon

(*) In case anyone doubts how the YubiKey works, which I'm affiliated
with, we took the costs in 1) and 2).  But they are large costs.  We
considered requiring users to go through an initial configuration step
to set the AES key themselves.  However, the usability cost in that is
probably higher than 1) and 2).

Configuration at installation seems to be worth considering.  It's a  
matter of making that as easy as possible.  Asking users for the AES  
key is not easy - people aren't good at generating, or even entering,  
random 128-bit strings.  However, you might be able to get them to  
push a reset button - or even connect and disconnect the device - a  
number of times and use the timing as a source of entropy.  For  
something like a network interface, it might be reasonable to assume  
that an attacker is unlikely to be present at exactly the time of  
initial configuration, so simply pulling bits off the wire/out of the  
air during initialization isn't unreasonable.  In general, given the  
assumption that it's easier to keep the initialization environment  
reasonably secure than it is the general fielded environment, and that  
you can afford much more time during initial configuration than is  
likely during normal operation, all kinds of things that are marginal  
if used operationally may be workable for initial configuration.   
(Also, of course, operational use may be unattended, but in most cases  
you can assume that initial configuration is attended.)
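
A sketch of the button-press idea, with wait_for_event() standing in
for whatever hypothetical hook tells you the operator pressed the reset
button or plugged the device in.  Only the low-order timing jitter is
assumed to carry entropy, and even that should be credited
conservatively:

    import hashlib, time

    def seed_from_event_timings(wait_for_event, rounds=64):
        # Hash the high-resolution timestamps of `rounds` physical events
        # (button presses, plug/unplug cycles) into a 128-bit seed.
        h = hashlib.sha256()
        for _ in range(rounds):
            wait_for_event()                  # hypothetical blocking call
            h.update(time.perf_counter_ns().to_bytes(8, "big"))
        return h.digest()[:16]                # 128-bit seed; credit it conservatively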

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-17 Thread Jerry Leichter

On Dec 16, 2008, at 4:22 PM, Charles Jackson wrote:
I probably should not be commenting, not being a real device guy.  But,
variations in temperature and time could be expected to change SSD
timing.  Temperature changes will probably change the power supply
voltages and shift some of the thresholds in the devices.  Oscillators
will drift with changes in temperature and voltage.  Battery voltages
tend to go down over time and up with temperature.  In addition, in
some systems the clock frequency is purposely swept over something like
a 0.1% range in order to smooth out the RF emissions from the device.
(This can give a 20 or 30 dB reduction in peak emissions at a given
frequency.  There is, of course, no change in total emissions.)

Combine all of these factors, and one can envision the SSD cycles
taking varying numbers of system clock ticks and consequently the low
order bits of a counter driven by a system clock would be random.
However, one would have to test this kind of entropy source carefully
and would have to keep track of any changes in the manufacturing
processes for both the SSD and the processor chip.

Is there anyone out there who knows about device timing that can say
more?

I'm not a device guy either, but I've had reason to learn a bit more  
about SSD's than is widely understood.


SSD's are complicated devices.  Deep down, the characteristics of the  
underlying storage are very, very different from those of a disk.   
Layers of sophisticated hardware/firmware intervene to make a solid- 
state memory look like a disk.  To take a very simple example:  The  
smallest unit you can read from/write to solid state memory is several  
times the size of a disk block.  So to allow software to continue to  
read and write individual disk blocks, you have to do a layer of  
buffering and blocking/deblocking.  A much more obscure one is that  
the throughput of the memory is maximum when you are doing either all  
reads or all writes; anywhere in between slows it down.  So higher- 
performance SSD's play games with what is essentially double  
buffering:  Do all reads against a segment of memory, while sending  
writes to a separate copy as well as a look-aside buffer to satisfy  
reads to data that was recently written.  Switch the roles of the two  
segments at some point.


Put all this together and the performance visible even at the OS  
driver level will certainly show all kinds of variation.  However,  
just because there's variation doesn't mean there's entropy to be  
had!  You'd need to have a sufficiently detailed model of the inner  
workings of the SSD to be confident that the variations aren't  
predictable.  However, you're not likely to get that:  Getting good  
performance out of SSD's is a black art.  The techniques are highly  
proprietary right now, because they are what make an SSD competitive.   
Further, of course, anything you did learn would likely apply to one  
manufacturing run of one model - just about anything could change at  
any time.


So ... use with extreme caution.  Estimate conservatively.  Mix any  
apparent entropy you get with other sources.
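
In code, "use with caution and mix" might look something like the
sketch below: the raw timing samples are folded into a hash along with
other sources and credited with essentially no entropy of their own
(os.urandom stands in here for whatever other sources the system has):

    import hashlib, os, time

    def ssd_timing_sample(path="probe.tmp", blocks=256):
        # Time a series of small synchronous writes and keep the raw counter
        # values.  Whether any of this is unpredictable to an attacker is
        # exactly the open question above, so assume (close to) zero entropy.
        samples = bytearray()
        with open(path, "wb") as f:
            for _ in range(blocks):
                t0 = time.perf_counter_ns()
                f.write(os.urandom(512))
                f.flush()
                os.fsync(f.fileno())
                samples += (time.perf_counter_ns() - t0).to_bytes(8, "big")
        os.remove(path)
        return bytes(samples)

    pool = hashlib.sha256()
    pool.update(ssd_timing_sample())
    pool.update(os.urandom(32))      # stand-in for the system's other sources
    seed = pool.digest()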

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-17 Thread Jerry Leichter

On Dec 15, 2008, at 2:28 PM, Joachim Strömbergson wrote:
...One could probably do a similar comparison to the increasingly
popular idea of building virtual LANs to connect your virtualized
servers running on the same physical host.  Ethernet frame reception
time variance as well as other real physical events should take a hit
when moving into the virtualization domain.  After all, replacing
physical stuff with SW is the whole point of virtualization.

Does anybody know what VMware, Parallels etc do to support entropy for
sources like this, or is it basically a forgotten/skipped/ignored
feature?

They don't seem to be doing very much yet - and the problems are very  
real.  All sorts of algorithms assume that an instance of a running OS  
has some unique features associated with it, and at the least (a)  
those will be fairly stable over time; (b) there will never be two  
instances at the same time.  In different contexts and uses,  
virtualization breaks both of these.  The virtual image captures  
everything there is to say about the running OS and all its  
processes.  Nothing stops you from running multiple copies at once.   
Nothing stops you from saving an image and then replaying the same
machine state repeatedly.  Conversely, if something in the underlying
hardware
is made available to provide uniqueness of some kind, the ability to  
stop the VM and move it elsewhere - typically between almost any two  
instructions - means that you can't rely on this uniqueness except in  
very constrained ways.


People move to virtualization with the idea that a virtual machine is
"just like a physical machine, only more flexible."  Well - it's either
"just like", or it's "more flexible"!  It can't be both.  In fact,
"more flexible" is what sells virtualization, and the effects can be
very subtle and far-reaching.  We don't really understand them.

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-16 Thread Jerry Leichter

On Dec 15, 2008, at 2:09 PM, Perry E. Metzger wrote:

Bill Frantz fra...@pwpconsult.com writes:

I find myself in this situation with a design I'm working on. I
have an ARM chip, where each chip has two unique numbers burned
into the chip for a total of 160 bits. I don't think I can really
depend on these numbers being secret, since the chip designers
thought they would be useful for DRM. It certainly will do no harm
to hash them into the pool, and give them a zero entropy weight.

The system will be built with SSD instead of HDD, so Damien's
comment hits close to home. I hope to be able to use timing of
external devices, the system communicates with a number of these,
along with a microsecond counter to gather entropy from clock skew
between the internal clock and the clocks in those devices.

Unfortunately the system doesn't normally have a user, so UI
timings will be few and far between.

Short of building special random number generation hardware, does
anyone have any suggestions for additional sources?


Given the usual threat model for a device like this, I'd just store an
AES key at the factory and use it in counter mode (or something very
similar) as your PRNG.

Agree in general.  Just one point:


One big issue might be that if you can't store the counter across
device resets, you will need a new layer of indirection -- the obvious
one is to generate a new AES key at boot, perhaps by CBCing the real
time clock with the permanent AES key and use the new key in counter
mode for that session.

This strikes me as additional complication for little purpose.  Keep
the same AES key - in fact, it might even be useful to either store the
generated key schedules or even to generate open code for the
particular device-specific key.  Take the real time clock's value for
the upper 64 bits of the input to AES, and use a counter starting at 0
for the lower 64 bits.  As long as the precision of the RTC is
sufficient that you can never have two boots with the same value,
you're fine.  (If you actually have a bigger RTC value, you can throw
away low-order bits.)
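
A minimal sketch of that construction, using the third-party Python
"cryptography" package purely for illustration (an assumption; the real
device would use whatever AES implementation it already has, and its
actual RTC rather than time.time()):

    import os, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    DEVICE_KEY = os.urandom(16)     # stand-in for the factory-installed key

    class BootPRNG:
        def __init__(self, key, rtc_seconds):
            self._key = key
            self._rtc = rtc_seconds & 0xFFFFFFFFFFFFFFFF  # upper 64 bits of the block
            self._counter = 0                             # lower 64 bits, reset each boot

        def _next_block(self):
            block = self._rtc.to_bytes(8, "big") + self._counter.to_bytes(8, "big")
            self._counter += 1
            # Raw AES on a never-repeating block -- i.e. counter mode by hand.
            enc = Cipher(algorithms.AES(self._key), modes.ECB()).encryptor()
            return enc.update(block) + enc.finalize()

        def read(self, n):
            out = b""
            while len(out) < n:
                out += self._next_block()
            return out[:n]

    prng = BootPRNG(DEVICE_KEY, int(time.time()))
    print(prng.read(32).hex())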


Given that we *are* assuming an SSD, of course, you could presumably  
store values across boots - though there are advantages to the RTC,  
since it avoids having to have special cases for things like the  
initialization of the stored value and recovery if the SSD is replaced.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Attacking a secure smartcard

2008-12-07 Thread Jerry Leichter
I've previously mentioned Flylogic as a company that does cool attacks
on chip-level hardware protection.  In
http://www.flylogic.net/blog/?p=18, they talk about attacking the
ST16601 Smartcard - described by the vendor as offering "Very high
security features including EEPROM flash erase (bulk-erase)".  The chip
is covered by a metal mesh that, if cut or shorted, blocks operation.
However, Flylogic reports:


Using our techniques we call, “magic” (okay, it’s not magic but we’re  
not telling), we opened the bus and probed it keeping the chip alive.   
We didn’t use any kind of expensive SEM or FIB.  The equipment used  
was available back in the 90’s to the average hacker!  We didn’t even  
need a university lab.  Everything we used was commonly available for  
under $100.00 USD.
This is pretty scary when you think that they are certifying these  
devices under all kinds of certifications around the world.


-- Jerry




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: AES HDD encryption was XOR

2008-12-07 Thread Jerry Leichter

On Dec 7, 2008, at 4:10 AM, Alexander Klimov wrote:

http://www.heise-online.co.uk/security/Encrypting-hard-disk-housing-cracked--/news/112141:


With its Digittrade Security hard disk, the German vendor
Digittrade has launched another hard disk housing based on the
unsafe IM7206 controller by the Chinese manufacturer Innmax.
The German vendor prominently advertises the product's strong
128-bit AES encryption on its packaging and web page. In
practice, however, the hard disk data is only encrypted using
a primitive XOR mechanism with an identical 512-Byte block for
each sector.

Oh, but that 512-byte block is generated using Triple AES, and is
highly, highly secure!  :-)
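
More seriously, here's a toy demonstration of why a fixed per-sector
XOR block isn't encryption at all: one sector of known plaintext (and
disks are full of all-zero sectors and standard headers) hands you the
512-byte block, which then decrypts the entire drive.

    import os

    SECTOR = 512
    xor_block = os.urandom(SECTOR)                  # the vendor's entire "key"
    encrypt = lambda data: bytes(a ^ b for a, b in zip(data, xor_block))

    known_plain = b"\x00" * SECTOR                  # e.g. any all-zero sector
    recovered = encrypt(known_plain)                # ciphertext of zeros == the XOR block

    secret = b"attack at dawn".ljust(SECTOR, b" ")
    decrypted = bytes(a ^ b for a, b in zip(encrypt(secret), recovered))
    assert decrypted == secret
    print("recovered plaintext:", decrypted[:14])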


An interesting bit of wording from the site linked to above:
"According to current cryptography research, this would be virtually
impossible, even with a short key length of only 128 bits."  Although
the sentence accurately states that AES-128 is thought to be secure
within the state of current and expected cryptographic knowledge, it
propagates the meme of the "short key length of only 128 bits".  A key
length of 128 bits is beyond any conceivable brute-force attack - in
and of itself the only kind of attack for which key length, as such,
has any meaning.  But, as always, bigger *must* be better - which just
raises costs when it leads people to use AES-256, but all too often
opens the door for the many snake-oil "super-secure" cipher systems
using thousands of key bits.

   -- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Lava lamp random number generator made useful?

2008-09-19 Thread Jerry Leichter
The Lava Lamp Random Number generator (at http://www.lavarnd.org/)  
generates true random numbers from the images of a couple of lava  
lamps.  Of course, as a source of randomness for cryptographic  
purposes, it's useless because it's visible to everyone (though I  
suppose it might be used for Rabin's beacons).


At ThinkGeek, you can now, for only $6.99, buy yourself a USB-powered  
mini lava lamp (see http://www.thinkgeek.com/gadgets/lights/7825/).   
All you need is some way to watch the thing - perhaps a USB camera -  
and some software to extract random bits.  (This isn't *really* a lava  
lamp - the lamp is filled with a fluid containing many small  
reflective plastic chips, lit from below by a small incandescent bulb  
which also generates the heat that keeps the fluid circulating.  From  
any given vantage point, you get flashes as one of the plastic chips  
gets into just the right position to give you a reflected view of the  
bulb.  These should be pretty easy to extract, and should be quite   
random.  Based on observation, the bit rate won't be very high - a bit  
every couple of seconds - though perhaps you can use cameras at a  
couple of vantage points.  Still, worth it for the bragging rights.)
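
A sketch of the bit-extraction step, with read_frame() standing in for
whatever hypothetical camera interface you end up with.  A "flash" is
just a jump in mean frame brightness, and its arrival time is what gets
hashed:

    import hashlib, time

    def bits_from_flashes(read_frame, seconds=600, threshold=30):
        # read_frame() is a hypothetical call returning one grayscale frame
        # as a bytes object.  Hash the arrival times of brightness spikes;
        # expect on the order of a bit every couple of seconds, as noted above.
        h = hashlib.sha256()
        prev_mean = 255
        deadline = time.time() + seconds
        while time.time() < deadline:
            frame = read_frame()
            mean = sum(frame) // len(frame)
            if mean - prev_mean > threshold:      # a chip just caught the bulb
                h.update(time.perf_counter_ns().to_bytes(8, "big"))
            prev_mean = mean
        return h.digest()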


An alternative, also at ThinkGeek, is a USB-powered Plasma Ball (at
http://www.thinkgeek.com/geektoys/science/964e/).  The arc discharges
should be even easier to convert into a bitstream, though it's probably
a more biased source than the lava lamp, so will need more
post-processing.
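
The classic way to handle a biased (but hopefully independent) bit
source is von Neumann's extractor: look at non-overlapping pairs, emit
a bit only when the two differ, and throw the rest away.  A sketch:

    def von_neumann_debias(bits):
        # 01 -> 0, 10 -> 1, 00/11 -> nothing.  Removes bias (at a cost in
        # rate) provided the raw bits are independent.
        out = []
        for i in range(0, len(bits) - 1, 2):
            a, b = bits[i], bits[i + 1]
            if a != b:
                out.append(a)
        return out

    print(von_neumann_debias([1, 1, 0, 1, 1, 0, 0, 0, 0, 1]))   # -> [0, 1, 0]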


-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

