Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-13 Thread John Kelsey
On Sep 10, 2013, at 3:56 PM, Bill Stewart bill.stew...@pobox.com wrote:

 One point which has been mentioned, but perhaps not emphasised enough - if 
 NSA have a secret backdoor into the main NIST ECC curves, then even if the 
 fact of the backdoor was exposed - the method is pretty well known - without 
 the secret constants no-one _else_ could break ECC.
 So NSA could advocate the widespread use of ECC while still fulfilling their 
 mission of protecting US gubbmint communications from enemies foreign and 
 domestic. Just not from themselves.


I think this is completely wrong.

First, there aren't any secret constants to those curves, are there?  The 
complaint Dan Bernstein has about the NIST curves is that they (some of them) 
were generated using a "verifiably random" method, but that the seeds fed into 
that method are themselves arbitrary-looking values with no explanation of how 
they were chosen.  The idea here, if I understand it correctly, is that if the 
guys doing the generation knew of some property that made some choices of 
curve weak, they could have tried a huge number of seeds till they happened 
upon one that led to a weak curve.  If they could afford to try N seeds and do 
whatever examination of the curve was needed to check it for weakness, then the 
weak property they were looking for couldn't have had a probability much lower 
than about 1/N.  
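
For concreteness, here is a toy sketch of that seed-search concern.  The curve 
derivation and the "weak class" test below are stand-ins, not the real ANSI 
X9.62 procedure; the point is only that an attacker who knows a weak property 
of probability about 1/N expects to find a qualifying seed after about N tries:

    # Toy model: hash an arbitrary seed into a "curve parameter", and pretend
    # curves whose low 20 bits are zero fall into a secretly known weak class
    # (probability ~2**-20).  Searching seeds until one hits is the attack.
    import hashlib
    import os

    def derive_curve_parameter(seed: bytes) -> int:
        # Stand-in for the "verifiably random" derivation: hash the seed.
        return int.from_bytes(hashlib.sha256(seed).digest(), "big")

    def looks_weak(param: int, p_bits: int = 20) -> bool:
        return param % (1 << p_bits) == 0

    def search_for_weak_seed(max_tries: int = 5_000_000):
        for _ in range(max_tries):
            seed = os.urandom(20)
            if looks_weak(derive_curve_parameter(seed)):
                return seed   # publish this seed; the curve still "verifies"
        return None

    if __name__ == "__main__":
        seed = search_for_weak_seed()
        print("found weak-looking seed:", seed.hex() if seed else "none")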

I think the curves were generated in 1999 (that's the date on the document I 
could find), so we are probably talking about less than 2^{80} operations 
total.  Unlike the case with the Dual EC generator, where a backdoor could have 
been installed with no risk that anyone else could discover it, in this case, 
they would have to generate curves until one fell in some weak curve class that 
they knew about, and they would have to hope nobody else ever discovered that 
weak curve class, lest all the federal users of ECC get broken at once.  

The situation you are describing works for Dual EC DRBG, but not for the NIST 
curves, as best I understand things.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Radioactive random numbers

2013-09-13 Thread Chris Kuethe
(curse you anti-gmail-top-posting zealots...)

On Wed, Sep 11, 2013 at 3:47 PM, Dave Horsfall d...@horsfall.org wrote:

 Another whacky idea...

 Given that there is One True Source of randomness to wit radioactive
 emission, has anyone considered playing with old smoke detectors?


Yep. For fun I wrote a custom firmware for the Sparkfun Geiger counter to
do random bit or byte generation that I could mix into my system's entropy
pool. I'll eventually update the code to also work with the ExcelPhysics
APOC.

acknowledging some prior art: http://www.fourmilab.ch/hotbits/

 The ionising types are being phased out in favour of optical (at least in
 Australia) so there must be heaps of them lying around.


There are heaps of them at big-box retailers in the US, with no sign of
going away. I got a couple for $5 each.


 I know - legislative requirements, HAZMAT etc, but it ought to make for a
 good thought experiment.


Low activity sources seem to be fairly unencumbered. There are plenty of
places that will sell calibrated test sources or lumps of random ore for
educational use. Then you get to tell people funny stories about the time
you bought radioactive material on the internet, and someone else gets to
do the compliance paperwork (if necessary).

Homebrew geiger counter rigs aren't exactly practical or scalable - I don't
want to make my datacenter guys cut open a case of smoke detectors and
solder a dozen GM tubes so we can have good random numbers. A better
solution might be to use one of the various thumb-drive sized AVR-USB
boards: load in a simple firmware to emulate a serial port, and emit
samples from the onboard ADCs and RC oscillators... no soldering required.

I was going to say that it's simple to inspect the code - even the
generated assembly or the raw hex - for undesired behavior, then I
remembered the USB side is non-trivial. If you're not using the onboard USB
hardware it's much easier to verify that you're only doing an ADC sample, a
timer read, a couple of comparisons, a UART write, and nothing else
(assuming you offload the whitening to your host's entropy pool).
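
For the host side of that, a hedged sketch, assuming the pyserial package and 
a board that enumerates as /dev/ttyACM0 (both are illustrative), with all 
whitening and crediting left to the host's entropy pool:

    # Read raw ADC samples from the board over the emulated serial port, keep
    # only the LSB of each sample, and stir the packed bits into the kernel
    # pool by writing to /dev/random (which mixes without crediting entropy).
    import serial   # pip install pyserial

    DEVICE = "/dev/ttyACM0"   # hypothetical device path

    def collect_bytes(n_bytes: int = 64) -> bytes:
        out = bytearray()
        with serial.Serial(DEVICE, baudrate=115200, timeout=2) as port:
            bits, acc = 0, 0
            while len(out) < n_bytes:
                sample = port.read(1)
                if not sample:
                    continue
                acc = (acc << 1) | (sample[0] & 1)   # keep only the noisy LSB
                bits += 1
                if bits == 8:
                    out.append(acc)
                    bits, acc = 0, 0
        return bytes(out)

    if __name__ == "__main__":
        with open("/dev/random", "wb") as pool:
            pool.write(collect_bytes())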

-- 
GDB has a 'break' feature; why doesn't it have 'fix' too?
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Radioactive random numbers

2013-09-13 Thread Dan Veeneman
On 9/11/2013 6:47 PM, Dave Horsfall wrote:
 Given that there is One True Source of randomness to wit radioactive 
 emission, has anyone considered playing with old smoke detectors?
I did that a decade ago, to wit:

http://etoan.com/random-number-generation/index.html


Cheers,
Dan
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Stealthy Dopant-Level Hardware Trojans

2013-09-13 Thread Eugen Leitl

http://people.umass.edu/gbecker/BeckerChes13.pdf

Stealthy Dopant-Level Hardware Trojans

Georg T. Becker (1), Francesco Regazzoni (2), Christof Paar (1,3), and
Wayne P. Burleson (1)

1 University of Massachusetts Amherst, USA
2 TU Delft, The Netherlands and ALaRI - University of Lugano, Switzerland
3 Horst Görtz Institut for IT-Security, Ruhr-Universität Bochum, Germany

Abstract. 

In recent years, hardware Trojans have drawn the attention of governments and
industry as well as the scientific community. One of the main concerns is
that integrated circuits, e.g., for military or critical infrastructure
applications, could be maliciously manipulated during the manufacturing
process, which often takes place abroad. However, since there have been no
reported hardware Trojans in practice yet, little is known about how such a
Trojan would look like, and how difficult it would be in practice to implement
one.

In this paper we propose an extremely stealthy approach for implementing
hardware Trojans below the gate level, and we evaluate their impact on the
security of the target device. Instead of adding additional circuitry to the
target design, we insert our hardware Trojans by changing the dopant polarity
of existing transistors. Since the modified circuit appears legitimate on all
wiring layers (including all metal and polysilicon), our family of Trojans is
resistant to most detection techniques, including fine-grain optical
inspection and checking against golden chips.  We demonstrate the
effectiveness of our approach by inserting Trojans into two designs (a digital
post-processing derived from Intel's cryptographically secure RNG design used
in the Ivy Bridge processors, and a side-channel resistant SBox implementation)
and by exploring their detectability and their effects on security.

Keywords: Hardware Trojans, malicious hardware, layout modifications, Trojan
side-channel


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Radioactive random numbers

2013-09-13 Thread Thor Lancelot Simon
On Thu, Sep 12, 2013 at 11:00:47AM -0400, Perry E. Metzger wrote:
 
 In addition to getting CPU makers to always include such things,
 however, a second vital problem is how to gain trust that such RNGs
 are good -- both that a particular unit isn't subject to a hardware
 defect and that the design wasn't sabotaged. That's harder to do.

Or that a design that wasn't sabotaged intentionally wasn't sabotaged
accidentally while dropping it into place in a slightly different
product.  I've always thought highly of the design of the Hifn RNG
block, and the outside analysis of it which they published, but years
ago at Reefedge we found a bug in its integration into a popular Hifn
crypto processor that evidently had slipped through the cracks -- I
discussed it in more detail last year at
http://permalink.gmane.org/gmane.comp.security.cryptography.randombit/3020 .

Thor
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan

2013-09-13 Thread Taral
On Thu, Sep 12, 2013 at 12:04 PM, Nico Williams n...@cryptonector.com wrote:
 Note: you don't just want BTNS, you also want RFC5660 -- IPsec
 channels.  You also want to define a channel binding for such channels
 (this is trivial).

I am not convinced. It's supposed to be *better than nothing*. Packets
that are encrypted between me and whatever gateway the endpoint elects
to use are strictly better than unencrypted packets, from a security
and privacy standpoint.

Insisting that BTNS should not be used without X, Y, and Z had
better come with a detailed explanation of why BTNS without X, Y, Z
makes me *less* secure than no BTNS at all.

-- 
Taral tar...@gmail.com
Please let me know if there's any further trouble I can give you.
-- Unknown
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-13 Thread Marcus D. Leech

On 09/12/2013 10:38 PM, Thor Lancelot Simon wrote:
 The audio subsystem actually posed *two* obvious opportunities: amplifier 
 noise from channels with high final stage gain but connected by a mixer to 
 muted inputs, and clock skew between system timers and audio sample clocks.
 The former requires a lot of interaction with specific audio hardware at a 
 low level, and with a million different wirings of input to mixer to ADC, it 
 looks hard (though surely not impossible) to quickly code up anything 
 generally useful.  The latter would be easier, and it has the advantage you 
 can do it opportunistically any time the audio subsystem is doing anything 
 *else*, without even touching the actual sample data.
 Unfortunately, both of them burn power like the pumps at Fukushima, which 
 makes them poorly suited for the small systems with few other sources of 
 entropy which were one of my major targets for this.  So they are still 
 sitting on some back back back burner.  Someday, perhaps...

 Thor
There is a class of hyper-cheap USB audio dongles with very uncomplicated 
mixer models.  A small flotilla of those might get you some fault-tolerance.
My main thought on such things relates to servers, where power consumption 
isn't really much of an issue.  Similarly, these hyper-cheap ($10.00) DVB-T 
dongles based on the RTL2832U can be made to run in SDR mode, and give you a 
basebanded sample stream of a wide variety of tuned RF frequencies--put a 
terminator on the input, choose your frequency, crank up the gain, and pull 
samples until you're bored.
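
A hedged sketch of that recipe, assuming the pyrtlsdr package and leaving all 
whitening to the host's pool:

    # Tune anywhere (the input is terminated), crank the gain so front-end
    # noise dominates, grab a block of baseband I/Q samples, and keep only the
    # least-significant bit of each re-quantized I and Q value.
    from rtlsdr import RtlSdr   # pip install pyrtlsdr

    def grab_candidate_entropy(n_samples: int = 65536) -> bytes:
        sdr = RtlSdr()
        try:
            sdr.sample_rate = 2.4e6
            sdr.center_freq = 100e6   # arbitrary; the antenna input is terminated
            sdr.gain = 40.0           # high gain, let the noise floor dominate
            iq = sdr.read_samples(n_samples)   # complex floats in [-1, 1]
        finally:
            sdr.close()
        out, bits, acc = bytearray(), 0, 0
        for z in iq:
            for lsb in (int(z.real * 127) & 1, int(z.imag * 127) & 1):
                acc = (acc << 1) | lsb
                bits += 1
                if bits == 8:
                    out.append(acc)
                    bits, acc = 0, 0
        return bytes(out)   # raw candidate bits; hash before use

    print(grab_candidate_entropy(4096)[:16].hex())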



This topic has suddenly become interesting to me in my work life, so I'm 
currently looking at the sensors API for Android.  I thought I had left 
Android work behind, but it's coming back to haunt me.  I was playing with 
the sensor outputs on a Nexus tablet today, and it has an impressive array 
of sensors.  I suspect each of them could contribute a few bits/second of 
entropy without too much trouble.  More investigation is necessary.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] About those fingerprints ...

2013-09-13 Thread Jerry Leichter
[Perry - this is likely getting too far off-topic, but I've included the list 
just in case you feel otherwise.  -- J]

On Sep 12, 2013, at 12:53 AM, Andrew W. Donoho a...@ddg.com wrote:

 
 On Sep 11, 2013, at 12:13 , Jerry Leichter leich...@lrw.com wrote:
 
 On Sep 11, 2013, at 9:16 AM, Andrew W. Donoho a...@ddg.com wrote:
 Yesterday, Apple made the bold, unaudited claim that it will never save the 
 fingerprint data outside of the A7 chip.
 By announcing it publicly, they put themselves on the line for lawsuits and 
 regulatory actions all over the world if they've lied.
 
 Realistically, what would you audit?  All the hardware?  All the software, 
 including all subsequent versions?
 Jerry,
 
 
 
   First I would audit that their open source security libraries, which 
 every app has to use, are the same as I can compile from sources.
Well ... there's an interesting point here.  If it's an open source library - 
what stops you from auditing it today?

On OSX, at least, if I were to worry about this, I'd just replace the libraries 
with my own compiled versions.  Apple has a long history of being really slow 
about updating the versions of open source software they use.  Things have 
gotten a bit better, but often through an odd path:  Apple gave up on 
maintaining their own release of Java and dumped the responsibility back on 
Oracle (who've been doing a pretty miserable job on the security front).  For 
the last couple of years, Apple distributed an X client which was always behind 
the times - and there was an open source effort, XQuartz, which provided more 
up to date versions.  Recently, Apple decided to have people pull down the 
XQuartz version.  (For both Java and X, they make the process very 
straightforward for users - but the important point is that they're simply 
giving you access to someone else's code.)  They've gone this way with a 
non-open source component as well, of course - Flash.  They never built it 
themselves; now they don't even give you help in downloading it.

But ... suppose you replaced, say, OpenSSL with your own audited copy.  There 
are plenty of ways for code that *uses* it to leak information, or just misuse 
the library.  On top of that, we're mainly talking user-level code.  Much of 
the privileged code is closed source.

So ... I'm not sure where auditing gets you.  If you're really concerned about 
security, you need to use trusted code throughout.  A perfectly reasonable 
choice, perhaps - though having regularly used both Linux and MacOS, I'd much 
rather use MacOS (and give up on some level of trustworthiness) for many kinds 
of things.  (There are other things - mainly having to do with development and 
data analysis - that I'd rather do on Linux.)

 Second, the keychain on iOS devices is entirely too mysterious for this iOS 
 developer. This needs some public light shone on it. What exactly is the 
 relationship between the software stack and the ARM TPM-equivalent.
I agree with you.  I really wish they'd make (much) more information available 
about this.  But none of this is open source.  I've seen some analysis done by 
people who seem to know their stuff, and the way keys are *currently* kept on 
iOS devices is pretty good:  They are encrypted using device-specific data 
that's hard to get off the device, and the decrypted versions are destroyed 
when the device locks.  But there are inherent limits as what can be done here: 
 If you want the device to keep receiving mail when it's locked, you have to 
keep the keys used to connect to mail servers around even when it's locked.

 Third, in iOS 7, I can make a single line change and start syncing my 
 customer's keychain data through iCloud. At WWDC this year, Apple did not 
 disclose how they keep these keys secure. (As it is a busy conference, I may 
 have missed it.)
Keychain syncing was part of the old .Mac stuff, and in that case it was clear: 
They simply synced the keychain files.  As I said, there is some information 
out there about how those are secured, and as best I've been able to determine, 
they are OK.  I wish more information was available.

It's not clear whether iCloud will do something different.  Apparently Apple 
removed keychain syncing from the final pre-release version of iOS 7 - it's now 
marked as coming soon.  The suspicion is that in the post-Snowden era, they 
decided they need to do something more to get people to trust it.  (Or, I 
suppose, they may actually have found a bug)  We'll see what happens when 
they finally turn it on.

 Fourth, does Apple everywhere use the same crypto libraries as developers are 
 required to use?
Developers aren't *required* to use any particular API's.  Could there be some 
additional crypto libraries that they've kept private?  There's no way to know, 
but it's not clear why they would bother.  The issue is presumably that NSA 
might force them to include a back door in the user-visible libraries - but 
what would Apple gain, beyond 

Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan

2013-09-13 Thread Nico Williams
On Mon, Sep 09, 2013 at 10:25:03AM +0200, Eugen Leitl wrote:
 Just got word from an Openswan developer:
 
 
 To my knowledge, we never finished implementing the BTNS mode.
 
 It wouldn't be hard to do --- it's mostly just conditionally commenting out
 code.
 
 There's obviously a large potential deployment base for
 BTNS for home users, just think of Openswan/OpenWRT.

Note: you don't just want BTNS, you also want RFC5660 -- IPsec
channels.  You also want to define a channel binding for such channels
(this is trivial).

To summarize: IPsec protects discrete *packets*, not discrete packet
*flows*.  This means that -depending on configuration- you might be
using IPsec to talk to some peer at some address at one moment, and the
next you might be talking to a different peer at the same address, and
you'd never know the difference.  IPsec channels consist of ensuring
that the peer's ID never changes during the life of a given packet flow
(e.g., TCP connection).  BTNS pretty much requires IPsec configurations that
make you vulnerable in this way.  I think it should be obvious now that
IPsec channels are a necessary part of any BTNS implementation.
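
A toy illustration of that channel invariant -- not RFC 5660 itself, just the 
property it enforces; the lookup interface below is made up for the example:

    # Remember which authenticated peer ID a packet flow started with, and
    # refuse packets for that flow arriving under any other ID.
    class PeerChanged(Exception):
        pass

    class ChannelGuard:
        def __init__(self):
            self._bound = {}    # flow 5-tuple -> peer identity

        def check(self, flow, peer_id):
            bound = self._bound.setdefault(flow, peer_id)
            if bound != peer_id:
                raise PeerChanged(f"flow {flow}: peer changed {bound!r} -> {peer_id!r}")

    # The first packet of a TCP connection binds the channel; a later packet
    # that arrives under a different authenticated ID is rejected.
    guard = ChannelGuard()
    flow = ("10.0.0.1", 4433, "10.0.0.2", 22, "tcp")
    guard.check(flow, peer_id="CN=gateway-a")
    guard.check(flow, peer_id="CN=gateway-a")      # fine
    try:
        guard.check(flow, peer_id="CN=someone-else")
    except PeerChanged as e:
        print(e)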

Nico
-- 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Killing two IV related birds with one stone

2013-09-13 Thread Nico Williams
On Wed, Sep 11, 2013 at 06:51:16PM -0400, Perry E. Metzger wrote:
 It occurs to me that specifying IVs for CBC mode in protocols
 like IPsec, TLS, etc. be generated by using a block cipher in counter
 mode and that the IVs be implicit rather than transmitted kills two
 birds with one stone.
 
 The first bird is the obvious one: we now know IVs are unpredictable
 and will not repeat.
 
 The second bird is less obvious: we've just gotten rid of a covert
 channel for malicious hardware to leak information.

I like this, and I've wondered about this in the past as well.  But note
that this only works for ordered {octet, datagram} streams.  It can't
work for DTLS, for example, or GSS-API, or Kerberos, or ESP, 

This can be implemented today anywhere that explicit IVs are needed;
there's only a need for the peer to know the seed if they need to be
able to verify that you're not leaking through IVs.  Of course, we
should want nodes to verify that their peers are not leaking through
IVs.
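
A minimal sketch of the scheme, assuming the pyca/cryptography package: both 
sides derive the i-th CBC IV as the AES encryption of a counter under a 
dedicated IV key, so the IV is unpredictable, never repeats until the counter 
wraps, and never appears on the wire:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class ImplicitIVGenerator:
        def __init__(self, iv_key: bytes):
            # Encrypting a counter block with the raw block cipher is exactly
            # "a block cipher in counter mode" used as an IV generator.
            self._cipher = Cipher(algorithms.AES(iv_key), modes.ECB())
            self._counter = 0

        def next_iv(self) -> bytes:
            block = self._counter.to_bytes(16, "big")
            self._counter += 1
            enc = self._cipher.encryptor()
            return enc.update(block) + enc.finalize()

    # Both peers build the generator from the same IV key derived during key
    # exchange and keep their counters in lock-step with the record sequence
    # number -- no IV bytes are transmitted, so none can be used as a covert
    # channel.
    iv_key = os.urandom(16)    # stand-in for a key derived by the handshake
    sender, receiver = ImplicitIVGenerator(iv_key), ImplicitIVGenerator(iv_key)
    assert sender.next_iv() == receiver.next_iv()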

There are still nonces needed at key exchange and authentication time
that can leak key material / PRNG state.  I don't think you can get rid
of all covert channels...  And anyways, your peers could just use
out-of-band methods of leaking session keys and such.

BTW, Kerberos generally uses confounders instead of IVs.  Confounders
are just explicit IVs sent encrypted.  Confounders leak just as much
(but no more) than explicit IVs, so confounding is a bit pointless --
worse, it wastes resources: one extra block encryption/decryption
operation per-message.

Nico
-- 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Finding Entropy Isn't That Hard

2013-09-13 Thread Kent Borg

On 09/11/2013 07:18 PM, Perry E. Metzger wrote:
the world's routers, servers, etc. do not have good sources, 
especially at first boot time, and for customer NAT boxes and the like 
the price points are vicious. 


I agree that things like consumer NAT boxes have a tricky problem, and 
anything that needs high bandwidth random data, but otherwise routers 
and servers are not as bad off as people say.  At least in the case of 
modern servers that are running enough of an OS to include a good 
entropy-pool RNG (like Linux's urandom*).


These boxes have GHz-plus clocks, so fast that the clock doesn't exist 
anywhere in the box except on-chip.  It is multiplied up from a lower 
frequency external crystal oscillator by an analog PLL which is also 
on-chip.  This fastest clock is commonly used to drive an on-chip 
counter.  These chips also have interrupts from the outside world.  
There is real entropy in the interaction between the two.


What is the value of that counter when the interrupt is serviced?  I 
assert there is entropy in the LSB of that value.  A GHz-plus clock is 
running just too fast for someone meters (or kilometers) away to know 
its exact value.  And every time the observer might get the LSB wrong, a 
bit of entropy got by: Use that data to stir an entropy pool.
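
A user-space sketch of that idea -- the kernel's add_interrupt_randomness does 
the real thing; here the completion of a tiny sleep stands in for an interrupt:

    # Read a fast counter at each "event", keep only the LSB, and stir the
    # collected bits into a pool with a hash rather than using them raw.
    import hashlib
    import time

    def harvest_bits(n_bits: int = 256) -> bytes:
        bits = []
        for _ in range(n_bits):
            time.sleep(0.001)                        # stand-in for an interrupt
            bits.append(time.perf_counter_ns() & 1)  # LSB of the fast counter
        return bytes(
            int("".join(str(b) for b in bits[i:i+8]), 2)
            for i in range(0, n_bits, 8)
        )

    pool = hashlib.sha256()
    pool.update(harvest_bits())
    print(pool.hexdigest())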


How do we know it is hard to know the value of a GHz-plus counter? 
Because of the engineering problems suffered by people trying to build 
fast systems.  Clock distribution is difficult--there is a reason that 
high speed clock doesn't exist off-chip, the skew becomes great.  Even 
on-chip clock distribution is tricky and requires careful layout rules 
when designing the chip.  And even on this fast chip, the uses of the 
fastest clock are limited and any functions that will work on a slower 
clock will get a slower clock. Clock distribution is hard.  Hard within 
a large IC, hard on a circuit board, hard between circuit boards, hard 
between boxes, hard between equipment racks, hard between 
buildings...how far away is this nefarious observer, the one who you 
worry might be able to infer the LSB?  I think more than a few cm is too 
far away and if you don't have security at that radius, you don't have 
security.



[* Until Linux kernel 3.6 the person maintaining urandom was busily 
turning off interrupts as a source of entropy, I think because he didn't 
know how much entropy he was getting so better not to get it at all 
(huh?).  In 3.6 this was changed to use all interrupts as entropy 
sources, which is good.  This means earlier kernels aren't so 
good--though I notice that Ubuntu's kernel has the 3.6 improvement in 
their version of 3.2, so individual distributions will vary.]



-kb
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Finding Entropy Isn't That Hard

2013-09-13 Thread Kent Borg

On 09/12/2013 10:41 AM, Kent Borg wrote:
routers and servers are not as bad off as people say. 


Not that more sources is bad.  A new trustworthy HW entropy source would 
be good.  Even a suspect rdrand is worth XORing in (as Linux does on the 
machine I am using right now).


But if you thirst for more entropy, keep looking in your current 
hardware; server boxes are particularly good hunting grounds for a few 
more dribs of entropy:


 - current RPM of all the fans (the proverbial entropy-starved server 
can have a lot of fans)

 - temperatures
 - voltages
 - disk (SMART) statistics (temperatures and error counts, multiplied 
by the number of disks)


These are all things that could wear out or go wrong, which means they 
need monitoring because...you can't otherwise know what they are.  Cool, 
that's a decent definition of entropy.  Sample them regularly and hash 
them into your entropy pool.  Not a lot of bandwidth there, but if your 
RNG does a good job of hiding its internal state, and you are mixing in 
a dozen more bits here and a dozen more bits there...pretty soon you 
have made the attacker's job a lot harder.
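
A hedged sketch for a Linux server: read whatever fan, temperature, and 
voltage readings the kernel exposes under /sys/class/hwmon, hash the lot, and 
stir the digest into the pool (paths and rates will vary from box to box):

    import glob
    import hashlib
    import time

    def read_sensor_blob() -> bytes:
        blob = []
        for path in glob.glob("/sys/class/hwmon/hwmon*/*_input"):
            try:
                with open(path) as f:
                    blob.append(f"{path}={f.read().strip()}")
            except OSError:
                continue
        blob.append(str(time.monotonic_ns()))   # when we sampled matters too
        return "\n".join(blob).encode()

    def stir_once():
        digest = hashlib.sha256(read_sensor_blob()).digest()
        with open("/dev/random", "wb") as pool:
            pool.write(digest)    # mixes the pool; does not credit entropy

    if __name__ == "__main__":
        while True:
            stir_once()
            time.sleep(60)   # a few dozen fresh bits a minute is the goal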


Maybe not as sexy as a lavalamp or radioactive gizmos, but more 
practical and available now.


-kb



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Summary of the discussion so far

2013-09-13 Thread Nico Williams
On Wed, Sep 11, 2013 at 04:03:44PM -0700, Nemo wrote:
 Phillip Hallam-Baker hal...@gmail.com writes:
 
  I have attempted to produce a summary of the discussion so far for use
  as a requirements document for the PRISM-PROOF email scheme. This is
  now available as an Internet draft.
 
  http://www.ietf.org/id/draft-hallambaker-prismproof-req-00.txt
 
 First, I suggest removing all remotely political commentary and sticking
 to technical facts.  Phrases like questionable constitutional validity
 have no place in an Internet draft and harm the document, in my opinion.

Privacy relative to PRISMs is a political problem first and foremost.
The PRISM operators, if you'll recall, have a monopoly on the use of
force.  They have the rubber hoses.  No crypto can get you out of that
bind.

I'm extremely skeptical of anti-PRISM plans.  I'd start with:

 - open source protocols
 - two or more implementations of each protocol, preferably one or more
   being open source
 - build with multiple build tools, examine their output[*]
 - run on minimal OSes, on minimal hardware [**]

After that... well, you have to trust counter-parties, trusted third
parties, ...  It gets iffy real quick.

The simplest protocols to make PRISM-proof are ones where there's only
one end-point.  E.g., filesystems.  Like Tahoe-LAFS, ZFS, and so on.
One end-point - no counter-parties nor third parties to compromise.
The one end-point (or multiple instances of it) is still susceptible to
lots of attacks, including local attacks involving plain old dumb
security bugs.

Next simplest: real-time messaging (so OTR is workable).

Traffic analysis can't really be defeated, not in detail.

On the other hand, the PRISMs can't catch low-bandwidth communications
over dead drops.  The Internet is full of dead drops.  This makes one
wonder why bother with PRISMs.  Part of the answer is that as long as
the PRISMs were secret the bad guys might have used weak privacy
protection methods.  But PRISMs had to exist by the same logic that all
major WWII powers had to have atomic weapons programs (and they all
did): if it could be built, it must be, and adversaries with the
requisite resources must be assumed to have built their own.

Anti-PRISM seems intractable to me.

Nico

[*] Oops, this is really hard; only a handful of end-users will ever do
this.  The goal is to defeat the Thompson attack -- Thompson trojans
bit-rot; using multiple build tools and disassembly tools would be
one way to increase the bit-rot speed.

[**] Also insanely difficult.  Not gonna happen for most people; the
 ones who manage it will still be susceptible to traffic analysis
 and, if of interest, rubber hose cryptanalysis.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-13 Thread John Denker
Executive summary:

The soundcard on one of my machines runs at 192000 Hz.  My beat-up 
old laptop runs at 96000.  An antique server runs at only 48000. 
There are two channels and several bits of entropy per sample.
That's /at least/ a hundred thousand bits per second of real 
industrial-strength entropy -- the kind that cannot be cracked, 
not by the NSA, not by anybody, ever.
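
For concreteness, a back-of-envelope sketch, assuming the python-sounddevice 
package -- this is not turbid and does no calibration, it just shows the raw 
rates involved:

    # Grab one second of stereo audio from the default capture device and
    # count the raw LSBs it yields.  Even at a conservative fraction of a bit
    # of entropy per sample, two channels at 48-192 kHz clear 100,000 bits per
    # second with room to spare.
    import sounddevice as sd   # pip install sounddevice

    RATE = 48000        # use 96000 or 192000 if your card supports it
    CHANNELS = 2

    frames = sd.rec(RATE, samplerate=RATE, channels=CHANNELS, dtype="int16")
    sd.wait()
    lsbs = (frames & 1).flatten()    # one raw LSB per sample per channel
    print(f"{lsbs.size} raw LSBs captured in one second "
          f"({RATE} Hz x {CHANNELS} channels)")
    # Raw LSBs are not entropy until the source is calibrated and the stream
    # is hashed down accordingly -- which is the point of the calibration
    # step discussed below.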

Because of the recent surge in interest, I started working on a 
new version of turbid, the software that manages the soundcard 
and collects the entropy.  Please give me another week or so.

The interesting point is that you really want to rely on the
laws of physics.  Testing the output of a RNG can give an upper 
bound on the amount of entropy, but what we need is a lower bound, 
and only physics can provide that.  The physics only works if 
you /calibrate/ the noise source.  A major selling point of turbid
is the calibration procedure.  I'm working to make that easier for 
non-experts to use.


Concerning radioactive sources:

My friend Simplicio is an armchair cryptographer.  He has a proposal 
to replace triple-DES with quadruple-rot13.  He figures that since it
is more complicated and more esoteric, it must be better.

Simplicio uses physics ideas in the same way.  He thinks radioactivity 
is the One True Source of randomness.  He figures that since it is
more complicated and more esoteric, it must be better.

In fact, anybody who knows the first thing about the physics involved
knows that quantum noise and thermal noise are two parts of the same
elephant.  Specifically, there is only one physical process, as shown
by figure 1 here:
  http://www.av8n.com/physics/oscillator.htm
Quantum noise is the low-temperature asymptote, and thermal noise is
the high-temperature asymptote of the /same/ physical process.

So ... could we please stop talking about radioactive random number
generators and quantum random number generators?  It's embarrassing.

It is true but irrelevant that somebody could attempt a denial-of-service
attack against a thermal-noise generator by pouring liquid nitrogen
over it.  This is irrelevant several times over because:
 a) Any decrease in temperature would be readily detectable, and the 
  RNG could continue to function.  Its productivity would go down by
  a factor of 4, but that's all.
 b) It would be far more effective to pour liquid nitrogen over other
  parts of the computer, leading to complete failure.
 c) It would be even more effective (and more permanent) to pour sulfuric 
  acid over the computer.
 d) Et cetera.

The point is, if the attacker can get that close to your computer, you 
have far more things to worry about than the temperature of your noise 
source.  Mathematical cryptographers should keep in mind the proverb 
that says: If you don't have physical security, you don't have security.

To say the same thing in more positive terms:  If you have any halfway-
reasonable physical security, a thermal noise source is just fine, 
guaranteed by the laws of physics.

In practice, the nonidealities associated with radioactive noise are 
far greater than with thermal noise sources ... not to mention the cost 
and convenience issues.

As I have been saying for more than 10 years, several hundred thousand 
bits per second of industrial-strength entropy is plenty for a wide
range of practical applications.  If anybody needs more than that, we
can discuss it ... but in any case, there are a *lot* of services out 
there that would overnight become much more secure if they started 
using a good source of truly random bits.

The main tricky case is a virtual private server hosted in the cloud.
You can't add a real soundcard to a virtual machine.  My recommendation 
for such a machine is to use a high-quality PRNG and re-seed it at 
frequent intervals.  This is a chicken-and-egg situation:
 a) If you have /enough/ randomness stored onboard the VPS, you can 
  set up a secure pipe to a trusted randomness server somewhere else,
  and get more randomness that way (a sketch of this follows below).
 b) OTOH if the VPS gets pwned once, it might be pwned forever, because 
  the bad guys can watch the new random bits coming in, at which point
  the bits are no longer random.
 c) On the third hand, if the bad guys drop even one packet, ever,
  you can recover at that point.
 d) I reckon none of this is worth worrying about too much, because
  at some point the bad guys just strong-arm the hosting provider
  and capture your entire virtual machine.
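
A hedged sketch of option (a); the server URL and state path are purely 
illustrative:

    # Use locally stored randomness to bring up a TLS connection to a trusted
    # randomness server, fetch fresh bytes, and fold them into the local seed
    # by hashing old state with new bytes -- so, per point (c), an observer
    # who misses even one delivery loses track of the state.
    import hashlib
    import os
    import urllib.request

    STATE_FILE = "/var/lib/myvps/seed"                     # hypothetical path
    SERVER_URL = "https://randomness.example.net/block"    # hypothetical server

    def reseed():
        try:
            with open(STATE_FILE, "rb") as f:
                old_state = f.read()
        except FileNotFoundError:
            old_state = os.urandom(32)   # first boot: whatever we have onboard
        with urllib.request.urlopen(SERVER_URL, timeout=10) as resp:
            fresh = resp.read(64)
        new_state = hashlib.sha256(old_state + fresh).digest()
        with open(STATE_FILE, "wb") as f:
            f.write(new_state)
        with open("/dev/random", "wb") as pool:
            pool.write(new_state)        # stir the kernel pool as well

    if __name__ == "__main__":
        reseed()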
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Radioactive random numbers

2013-09-13 Thread Eugen Leitl
On Thu, Sep 12, 2013 at 08:47:16AM +1000, Dave Horsfall wrote:
 Another whacky idea...
 
 Given that there is One True Source of randomness to wit radioactive 

What makes you think that e.g. breakdown in a reverse biased
Zener diode is any less truly random? Or thermal noise in a
crappy CMOS circuit?

In fact, 
http://en.wikipedia.org/wiki/Hardware_random_number_generator#Physical_phenomena_with_quantum-random_properties
lists a lot of potential sources, some with higher rates and more
privacy than others.

 emission, has anyone considered playing with old smoke detectors?
 
 The ionising types are being phased out in favour of optical (at least in 
 Australia) so there must be heaps of them lying around.
 
 I know - legislative requirements, HAZMAT etc, but it ought to make for a 
 good thought experiment.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Perfection versus Forward Secrecy

2013-09-13 Thread Eugen Leitl
On Thu, Sep 12, 2013 at 09:33:34AM -0700, Tony Arcieri wrote:

 What's really bothered me about the phrase perfect forward secrecy is
 it's being applied to public key algorithms we know will be broken as soon
 as a large quantum computer has been built (in e.g. a decade or two).

I do not think that the spooks are too far away from open research in
QC hardware. It does not seem likely that we'll be getting real QC
any time soon, if ever.

The paranoid nuclear option remains: one time pads. There is obviously
a continuum between XORing with the output of very-large-state PRNGs and
XORing with one time pads. It should be possible to build families
of such which resist reverse-engineering of the state. While
juggling around several MByte or GByte keys is inconvenient, some
applications are well worth it.
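
For reference, the one-time-pad end of that continuum is a few lines of code; 
the hard part is generating, shipping, and destroying the pad, not the XOR:

    import os

    def otp_xor(data: bytes, pad: bytes) -> bytes:
        # Encryption and decryption are the same operation.  The pad must be
        # truly random, at least as long as the message, and never reused.
        if len(pad) < len(data):
            raise ValueError("pad must be at least as long as the message")
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"wire transfer: 1000 EUR"
    pad = os.urandom(len(message))   # in practice: pre-shared, used once, destroyed
    ciphertext = otp_xor(message, pad)
    assert otp_xor(ciphertext, pad) == message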

Why e.g. SWIFT is not running on one time pads is beyond me.

 Meanwhile people seem to think that it's some sort of technique that will
 render messages unbreakable forever.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan

2013-09-13 Thread Paul Wouters

On Thu, 12 Sep 2013, Nico Williams wrote:


Note: you don't just want BTNS, you also want RFC5660 -- IPsec
channels.  You also want to define a channel binding for such channels
(this is trivial).

To summarize: IPsec protects discrete *packets*, not discrete packet
*flows*.  This means that -depending on configuration- you might be
using IPsec to talk to some peer at some address at one moment, and the
next you might be talking to a different peer at the same address, and
you'd never know the difference.  IPsec channels consist of ensuring
that the peer's ID never changes during the life of a given packet flow
(e.g., TCP connection).  BTNS pretty much requires IPsec configurations that
make you vulnerable in this way.  I think it should be obvious now that
IPsec channels are a necessary part of any BTNS implementation.


This is exactly why BTNS went nowhere. People are trying to combine
anonymous IPsec with authenticated IPsec. Years dead-locked in channel
binding and channel upgrades. That's why I gave up on BTNS. See also
the last bit of my earlier post regarding Opportunistic Encryption.

We can use IDs to identify anonymous connections and sandbox them. If
you want authenticated IPsec, use a different loaded policy that has
nothing to do with OE IPsec. In libreswan terms:

conn anonymous
        right=yourip
        rightid=@serverid
        rightrsasigkey=0xAQ[]
        left=%any
        leftid=@anonymous
        leftrsasigkey=%fromike

conn admin
        [all your normal X.509 authentication stuff]

Merging these into one is exactly why we got transport mode,
authenticated header, IKEv2 narrowing and a bunch of BTNS drafts no
one uses.

Stop making crypto harder!

Paul
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Finding Entropy Isn't That Hard

2013-09-13 Thread Marcus Leech


 [* Until Linux kernel 3.6 the person maintaining urandom was busily turning 
 off interrupts as a source of entropy, I think because he didn't know how 
 much entropy he was getting so better not to get it at all (huh?). In 3.6 
 this was changed to use all interrupts as entropy sources, which is good. 
 This means earlier kernels aren't so good--though I notice that Ubuntu's 
 kernel has the 3.6 improvement in their version of 3.2, so individual 
 distributions will vary.]

 -kb

I'll also observe that on new mobile platforms, there are typically a flotilla 
of physical-world sensors.  The low-level drivers for these should be 
contributing entropy to the pool in the kernel.  At the apps layer, typically, 
the "raw" sensor values have been filtered by application-specific algorithms, 
so that they're less useful as entropy sources at that level.

For example, low-G accelerometers are quite noisy -- these are typically used as multi-axis rotation sensors (they use the gravity-field orientation to sense rotation).

Any physical-world sensor driver, where the sensor inherently has a bit of noise, I think has a "moral obligation" to contribute bits to the kernel entropy pool.





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Finding Entropy Isn't That Hard

2013-09-13 Thread Kent Borg

On 09/13/2013 11:59 AM, Marcus Leech wrote:
Any physical-world sensor driver, where the sensor inherently has a 
bit of noise, I think has a "moral obligation" to contribute bits to 
the kernel entropy pool.


Within limits.  Mixing the entropy pool on Linux takes work and battery 
power.


Looking at some random Android kernel source code (git commit c73c9662) 
it looks like add_interrupt_randomness() is happening for every 
interrupt (your Android device's kernel may vary), so there is probably 
plenty of entropy.  And add_interrupt_randomness() throttles to only 
feed the randomness once a second, not wasting our time or battery.


Don't carry moral obligation beyond what is reasonable!

-kb

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Stealthy Dopant-Level Hardware Trojans

2013-09-13 Thread Perry E. Metzger
On Fri, 13 Sep 2013 11:49:24 +0200 Eugen Leitl eu...@leitl.org
wrote:
 
 http://people.umass.edu/gbecker/BeckerChes13.pdf
 
 Stealthy Dopant-Level Hardware Trojans[...]

This is pretty clearly a big deal. The fact that you can skew HRNGs
just by fiddling with dopant levels is something I would have
suspected, but now that we know, I think the need for chip companies
to provide access to the raw HRNG output has become even more obvious.

It is not a question of not trusting the engineers who work on the
hardware. It is a question of not wanting to trust every
single individual in a long supply chain.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Introducing strangers. Was: Thoughts about keys

2013-09-13 Thread Eugen Leitl
On Wed, Sep 11, 2013 at 07:32:04PM +0200, Guido Witmond wrote:

  With a FOAF routing scheme with just 3 degrees of separation there
  are not that many strangers left.
 
 How do you meet people outside your circle of friends?

You don't. The message is routed through the social network, until
it reaches your destination.
 
 How do you stay anonymous? With FOAF, you have a single identity for it

By running onion routers like Tor on top of that routed network.
With FOAF I don't mean a specific system, but a generic small-world
social network, where each member is reachable in a small number
of hops.

 to work. I offer people many different identities. But all of them are
 protected, and all communication encrypted.
 
 That's what my protocol addresses. To introduce new people to one
 another, securely. You might not know the person but you are sure that
 your private message is encrypted and can only be read by that person.
 
 Of course, as it's a stranger, you don't trust them with your secrets.
 
 For example, to let people from this mailing list send encrypted mail to
 each other, without worrying about the keys. The protocol has already
 taken care of that. No fingerprint checking. No web of trust validation.
 
 
  If you add opportunistic encryption at a low transport layer, plus
  additional layers on top of you've protected the bulk of traffic.
 
 I don't just want to encrypt the bulk, I want to encrypt everything, all

With multilayer transport protection, you'll get multiple layers
of encryption for your typical connection.

 the time. It makes Tor traffic much more hidden.
 
 
 There is more
 
 The local CA (one for each website) signs both the server and client
 certificates. The client only identifies itself to the server after it
 has recognized the server certificate. This blocks phishing attempts to
 web sites (only a small TOFU risk remains). And that can be mitigated
 with a proper dose of Certificate Transparency.
 
 Kind regards, Guido Witmond,
 
 
 Please see the site for more details:
   http://eccentric-authentication.org/


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Security is a total system problem (was Re: Perfection versus Forward Secrecy)

2013-09-13 Thread Perry E. Metzger
On Fri, 13 Sep 2013 08:08:38 +0200 Eugen Leitl eu...@leitl.org
wrote:
 Why e.g. SWIFT is not running on one time pads is beyond me.

I strongly suspect that delivering them securely to the vast number
of endpoints involved and then securing the endpoints as well would
radically limit the usefulness. Note that it appears that even the
NSA generally prefers to compromise endpoints rather than attack
crypto.

The problem these days is not that something like AES is not good
enough for our purposes. The problem is that we too often build a
reinforced steel door in a paper wall.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Summary of the discussion so far

2013-09-13 Thread Perry E. Metzger
On Fri, 13 Sep 2013 15:46:58 -0500 Nico Williams
n...@cryptonector.com wrote:
 On Fri, Sep 13, 2013 at 03:17:35PM -0400, Perry E. Metzger wrote:
  On Thu, 12 Sep 2013 14:53:28 -0500 Nico Williams
  n...@cryptonector.com wrote:
   Traffic analysis can't really be defeated, not in detail.
  
  What's wrong with mix networks?
 
 First: you can probably be observed using them.

Sure, but the plan I described a few weeks ago would presumably end
with hundreds of thousands or millions of users if it worked at all.

 Second: I suspect that to be most effective the mix network also
 has to be most inconvenient (high latency, for example).

Sure, that's true for voice and such. However, for messaging
apps, that's not an issue. See my claims here:
http://www.metzdowd.com/pipermail/cryptography/2013-August/016874.html

(That was part of a three message sequence that began with these two:
http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html
and
http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html

but only the second of those two is really relevant to this
particular discussion.)

 Third: the mix network had better cross multiple jurisdictions that
 are not accustomed to cooperating with each other.  This seems very
 difficult to arrange.

That's important for onion networks, not mix networks. I understand
that the distinction isn't well understood by most, but it can be
summarized thus: an onion network depends on no one observing the
whole network to provide security, while a mix network uses
sufficient cover traffic and delay induction to prevent people from
being able to learn much even if they can observe the whole network
and control a minority of nodes.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] one time pads

2013-09-13 Thread John Kelsey
Switching from AES to one-time pads to solve your practical cryptanalysis 
problems is silly.  It replaces a tractable algorithm selection problem with a 
godawful key management problem, when key management is almost certainly the 
practical weakness in any broken system designed by non-idiots.  

--John


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Other curves and algos used in France

2013-09-13 Thread Erwann ABALEA
2013/9/10 james hughes hugh...@mac.com

 [...]
 Lastly, going a partial step seems strange also. Why do we want to put
 ourselves through this again so soon? The French government suggests 2048
 now (for both RSA and DHE), and will only last 6 years. From
  http://www.ssi.gouv.fr/IMG/pdf/RGS_B_1.pdf


They also published their own curve (a 256-bit GF(p) one), named FRP256v1 (
http://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT24668816).
But since they don't provide any detail on the parameters' choice, and the
use of this curve isn't mandatory at all, I prefer the Brainpool ones.

They're also pushing for ECKCDSA adoption, by asking HSM manufacturers to
include this mechanism. I don't know anything on this.

-- 
Erwann.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] prism proof email, namespaces, and anonymity

2013-09-13 Thread Perry E. Metzger
On Fri, 13 Sep 2013 16:55:05 -0400 John Kelsey crypto@gmail.com
wrote:
 Everyone,
 
 The more I think about it, the more important it seems that any
 anonymous email like communications system *not* include people who
 don't want to be part of it, and have lots of defenses to prevent
 its anonymous communications from becoming a nightmare for its
 participants.  If the goal is to make PRISM stop working and make
 the email part of the internet go dark for spies (which definitely
 includes a lot more than just US spies!), then this system has to
 be something that lots of people will want to use.  
 
 There should be multiple defenses against spam and phishing and
 other nasty things being sent in this system, with enough
 designed-in flexibility to deal with changes in attacker behavior
 over time.

Indeed. As I said in the message I just pointed Nico at:
http://www.metzdowd.com/pipermail/cryptography/2013-August/016874.html

Quoting myself:

   Spam might be a terrible, terrible problem in such a network since
   it could not easily be traced to a sender and thus not easily
   blocked, but there's an obvious solution to that. I've been using
   Jabber, Facebook and other services where all or essentially all
   communications require a bi-directional decision to enable messages
   for years now, and there is virtually no spam in such systems
   because of it. So, require such bi-directional friending within
   our postulated new messaging network -- authentication is handled
   by the public keys of course. 

 Some thoughts off the top of my head.  Note that while I think all
 these can be done with crypto somehow, I am not thinking of how to
 do them yet, except in very general terms.  
 
 a.  You can't freely send messages to me unless you're on my
 whitelist.  

That's my solution. As I note, it seems to work for Jabber, Facebook
and other such systems, so it may be sufficient.
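
As a toy sketch of that bi-directional whitelist -- key handling is reduced to 
fingerprints here, and a real system would verify signatures as well:

    # A message is accepted only when both parties have explicitly added each
    # other's key fingerprint, so unsolicited mail never reaches the user.
    import hashlib

    def fingerprint(pubkey_bytes: bytes) -> str:
        return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

    class Mailbox:
        def __init__(self, my_key: bytes):
            self.me = fingerprint(my_key)
            self.friends = set()          # fingerprints I have approved

        def friend(self, other: "Mailbox"):
            self.friends.add(other.me)    # one direction of the handshake

        def accept(self, sender: "Mailbox") -> bool:
            # Bi-directional check: I approved them AND they approved me.
            return sender.me in self.friends and self.me in sender.friends

    alice, bob, spammer = (Mailbox(k) for k in (b"A-key", b"B-key", b"S-key"))
    alice.friend(bob); bob.friend(alice)
    print(bob.accept(alice))     # True
    print(bob.accept(spammer))   # False -- never shown to the user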

 b.  This means an additional step of sending me a request to be
 added to your whitelist.  This needs to be costly in something the
 sender cares about--money, processing power, reputation, solving a
 captcha, rate-limits to these requests, whatever.

I'm not sure about that. Jabber doesn't really rate limit the number
of friend requests I get per second but I don't seem to get terribly
many, perhaps because fakes at most could hide some attempted phish
in a user@domain name, which isn't very useful to scammers.

 g.  The format of messages needs to be restricted to block malware,
 both the kind that wants to take over your machine and the kind
 that wants to help the attacker track you down.  Plain text email
 only?  Some richer format to allow foreign language support?  

My claim that I make in my three messages from August 25 is that it
is probably best if we stick to existing formats so that we can
re-use existing clients. My idea was that you still talk IMAP and
SMTP and Jabber to a server you control (a $40 box you get at Best Buy
or the like) using existing mail and chat clients, but that past your
server everything runs the new protocols.

In addition to the message I linked to above, see also:
http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html
http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html
for my wider proposals.

I agree this makes email delivered malware continue to be a bit of a
problem, though you could only get it from your friends.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] very little is missing for working BTNS in Openswan

2013-09-13 Thread Nico Williams
On Thu, Sep 12, 2013 at 08:28:56PM -0400, Paul Wouters wrote:

 Stop making crypto harder!

I think you're arguing that active attacks are not a concern.  That's
probably right today w.r.t. PRISMs.  And definitely wrong as to coffee
shop wifi.

The threat model is the key.  If you don't care about active attacks,
then you can get BTNS with minimal effort.  This is quite true.

At least some times we need to care about active attacks.

 On Thu, 12 Sep 2013, Nico Williams wrote:
 Note: you don't just want BTNS, you also want RFC5660 -- IPsec
 channels.  You also want to define a channel binding for such channels
 (this is trivial).
 
 This is exactly why BTNS went nowhere. People are trying to combine
 anonymous IPsec with authenticated IPsec. Years dead-locked in channel
 binding and channel upgrades. That's why I gave up on BTNS. See also
 the last bit of my earlier post regarding Opportunistic Encryption.

It's hard to know exactly why BTNS failed, but I can think of:

 - It was decades too late; it (and IPsec channels) should have been
   there from the word go (RFC1825, 1995), and even then it would have been
   too late to compete with TLS given that the latter required zero
   kernel code additions while the former required lots.

 - I only needed it as an optimization for NFS security at a time when
   few customers really cared about deploying secure NFS because Linux
   lacked mature support for it.  It's hard to justify a bunch of work
   on multiple OSes for an optimization to something few customers used
   even if they should have been using it.

 - Just do it all in user-land has pretty much won.  Any user-land
   protocol you can think of, from TLS, to DJB's MinimaLT, to -heck-
   even IKE and ESP over UDP, will be easier to implement and deploy
   than anything that requires matching kernel implementations in
   multiple OSes.

   You see this come up *all* the time in Apps WG.  People want SCTP,
   but for various reasons (NAATTTS) they can't, so they resort to
   putting an entire SCTP or SCTP-like stack in user-land and run it
   over UDP.  Heck, there's entire TCP/IP user-land stacks designed to
   go faster than any general-purpose OS kernel's TCP/IP stack does.

   Yeah, this is a variant of the first reason.

There's probably other reasons; listing them all might be useful.  These
three were probably enough to doom the project.

The IPsec channel part is not really much more complex than, say,
connected UDP sockets.  But utter simplicity four years ago was
insufficient -- it needed to have been there two decades ago.

Nico
-- 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography