Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread Tony Arcieri
On Thu, May 5, 2016 at 2:40 AM, shawn wilson  wrote:

> I wonder what the gain is for putting RNGs in the kernel.
>
A naive userspace RNG will duplicate its internal state when you fork,
which can be catastrophic in a cryptographic context. That problem can be
fixed by registering a pthread_atfork() (or similar) callback that reseeds
the userspace RNG when the process forks, but it illustrates the sorts of
sharp edges that come with userspace RNGs.
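
A minimal sketch of that fix, assuming Linux's getrandom() as the reseed
source (the rng_state and rng_reseed names here are hypothetical
placeholders, not any particular library's API):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/random.h>   /* getrandom(), Linux 3.17+ / glibc 2.25+ */

static uint8_t rng_state[32];    /* hypothetical userspace RNG state */

static void rng_reseed(void)
{
    /* Pull fresh seed material from the kernel; fail hard rather than
       run with state the parent process may also hold. */
    if (getrandom(rng_state, sizeof rng_state, 0) != (ssize_t)sizeof rng_state) {
        perror("getrandom");
        abort();
    }
}

int main(void)
{
    rng_reseed();                            /* initial seed */
    pthread_atfork(NULL, NULL, rng_reseed);  /* reseed in the child */
    /* ... fork() and use the RNG as usual ... */
    return 0;
}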

If performance is important, properly implemented userspace RNGs can be
helpful, but they're easy to screw up.

-- 
Tony Arcieri


Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread Russell Leidich
All else being equal, I would prefer to have my TRNG in the kernel, for the
aforementioned reasons of memory access security.

But in the real world, this distinction is minor. More significantly,
kernel TRNGs differ from userspace ones in their use of hardware sources of
randomness, such as network packet contents and mouse movements.
Conventionally, this is considered a good thing because it provides a
diversity of entropy sources which are difficult to model, and which must
each be modelled in a different way. By comparison, my own userspace TRNGs (Jytter
and Enranda) rely on only the CPU timer.
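
As a rough illustration of the raw input to such a TRNG (a sketch only,
not Jytter's or Enranda's actual algorithm; the deltas below are
unconditioned and far from uniform):

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc(), x86 only */

int main(void)
{
    uint64_t prev = __rdtsc();
    for (int i = 0; i < 16; i++) {
        uint64_t now = __rdtsc();
        /* Low bits of successive TSC deltas jitter with interrupts,
           cache misses, and pipeline state; a real TRNG must whiten
           this stream and bound its entropy before use. */
        printf("delta low byte: 0x%02x\n", (unsigned)((now - prev) & 0xff));
        prev = now;
    }
    return 0;
}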

I would argue, however, that using hardware randomness is fundamentally
less secure at the only level that actually matters (overall real world
systemic security), even if the ultimate source of the randomness is
perfect in the quantum sense. The reason has nothing to do with the source
itself. Rather, it's the bus between the CPU and the source which is so
horribly exploitable, to say nothing of the bugs invited by touching so
much hardware. It takes little sophistication or money to insert a probe
between the two, or better yet, to manufacture a motherboard with such a
tap built in. Sure, a CPU manufacturer could record accesses to the timer
which resides on die, but then they would have the problem of needing to
conspire with motherboard vendors to radiate that data back to the cloud,
perhaps via a network chip which "accidentally" contacts a particular IP on
rare occasion. But less conspiratorially speaking, a bus tap could be
installed in an evil maid attack using a screwdriver. For that matter, it's
not too difficult to imagine a drone which could fly into a data center and
deposit a high precision electromagnetic sensor on the outside of a server
rack, sensitive to the frequencies used on the frontside bus. At least in
principle, Fourier analysis could be used to reverse engineer the signals
travelling across the bus from the 2D slice of radiation incident to the
receiving surface of the sensor. MRI machines have been using similar radio
wave decoding math for decades, with obvious success.

However, said evil maid could not read the inputs to a timer-based TRNG so
easily, because doing so would generally require the root password or an OS
vulnerability or a JTAG connection to the CPU pads, in which case all
encryption is moot anyway. If said TRNG resided in userspace, then in
theory a security hole in an application could facilitate remote
compromise, but the same could be said of applications which read
/dev/random, then store the results in their userspace memory.

If I were to use any hardware other than the CPU timer, I would want an
encrypted connection between the hardware source and the CPU core, leaving
as little decrypted raw entropy in memory or higher level caches as
possible. For example, CPU debug registers would be preferable to a line in
the level 2 cache. There is also the question of key exchange spoofing
across that leaky bus hierarchy. And where would we get the entropy to
encrypt that connection? D'oh! Ah, but we could use trusted platform
modules! Uhm, no, because it's much easier to create weak hardware RNGs
which look solid than to engineer the CPU to poison timer-based TRNGs with
predictable timestamps, because those timestamps would stick out like sore
thumbs. And also no, because TPMs reside on the same leaky bus (usually
LPC, which is indirectly connected to PCIe), affording two attacks for the
price of one. I'm more sanguine about the sort of TRNG registers that DJ
mentioned, which are readable in userspace but reside on-die, than about
any external solution, although I don't trust them completely: weakening
them in an undetectable manner would require much less sophisticated
engineering than weakening the timestamp. The two might be combined for
greater security.

One criticism against timer-based TRNGs is that when booting very simple
devices disconnected from the network, their outputs will become more
predictable. This is probably true, but part of the validation and testing
of the TRNG would be to run it under such circumstances (probably in
relative cryostasis) and adjust the lower-bound entropy estimate accordingly. It's
much easier to perform such characterization for a timer-based TRNG than a
"kitchen sink" TRNG susceptible to the unknown statistical vagaries of a
wide diversity of hardware.
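
A toy version of that characterization, as a crude per-byte min-entropy
lower bound (real validation, e.g. SP800-90B's estimators, is far more
rigorous than this sketch):

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* H_min = -log2(p_max), where p_max is the relative frequency of the
   most common byte value in a sample gathered under worst-case (e.g.
   cold-boot, disconnected) conditions. Link with -lm. */
double min_entropy_per_byte(const uint8_t *buf, size_t n)
{
    size_t count[256] = {0}, max = 0;
    for (size_t i = 0; i < n; i++)
        count[buf[i]]++;
    for (int b = 0; b < 256; b++)
        if (count[b] > max)
            max = count[b];
    return -log2((double)max / (double)n);
}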

In other words, it's better to have weak entropy that you know to be weak,
and can scale to strength, than strong entropy which is susceptible to
unpredictable massive downspikes in quality, especially where it concerns
hardware that was never intended to behave as a TRNG, e.g. a spinning
disc. What is hard for the attacker to model is also hard for the designer
to model.

It's obviously appealing, then, to think of hybridizing timer and device
entropy. All else being equal, this would seem to be the most secure
approach. If we disregard the negative implications for bandwidth 

Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread shawn wilson
On May 5, 2016 2:22 PM,  wrote:
>

> I think this sums it up well. Today you are thrown into having to know
> what to do specifically because it's a system level problem (matching
> entropy sources to extractors to PRNGs to consuming functions).
>
> The OS kernel does one thing well, and that thing is its job: taking
> single physical instances of entropy sources, post-processing them, and
> making them available to all userland and kernel consumers.
>
> However, kernel writers cannot address the full system issue because they
> don't know what hardware they are running on. They don't know if they are
> in a VM. They don't know whether they have access to entropic data, or
> whether something else has access to the same data.
>
> So one of the "things you should know" is that if you run a modern Linux,
> Solaris, or Windows on specific CPUs in specific environments (like not in
> a VM), then it can and will serve your userland programs with
> cryptographically useful random numbers, at the cost of a fairly large
> attack surface (drivers, APIs, kernel code, timing, memory, etc.)
>
> Intel came down firmly on the side of enabling the userland. One
> instruction puts entropic state into a register of your running userland
> program. Smaller attack surface, simpler, quicker, and it serves multiple
> users whether they are running on bare metal or in a VM. You have to
> trust the VM (as you do for anything else you do in a VM). Stuff is done
> in hardware to make sure it serves multiple consumers, just as an OS does
> stuff to serve multiple consumers.
>
> A SW userland RNG is an effective method to connect entropy sources you
> know about on your system to algorithms that meet your needs. NIST's
> recent switch to requiring key strengths of 192 bits or greater has
> precipitated a few 256-bit SW SP800-90 implementations. I know: I wrote a
> couple of them, and I've reviewed a few others that were written in
> response to the NIST change.
>
> SW RNG code is also easy to take through certification. The difference
> is that you take the system through certification, not just the code
> (except for CAVS). An OS kernel writer doesn't have that advantage.
>
> So my general view is that if you are tasked with enabling random numbers
> in your application, userland is usually a better place to do it. Maybe in
> a decent library used directly by your application. Maybe with some
> trivial inline assembler. But only if you can control the entropy source
> and the sharing of it. If you can use HW features (RdRand, RdSeed, other
> entropy sources, AES-NI, Hash instructions etc.) then your SW task is
> simplified, but it assumes you know what hardware you are writing for.
> Ditto for other platforms I'm less familiar with.
>
> The mistake I have seen, particularly in certain 'lightweight' SSL
> libraries, is to say "it's our policy not to do the RNG thing; we trust
> the OS to provide entropy" and to read from /dev/urandom as a result
> (because /dev/random blocks on many platforms). They are trusting a thing
> that is not in a place where it can guarantee that entropy sources are
> available. It will work on some platforms and will certainly fail on
> others, particularly lightweight platforms with Linux kernels on CPUs
> with no deliberately designed source of entropy, which is exactly where
> lightweight SSL libraries are used most.
>

This was pretty much my thinking (though I didn't know Intel thought
similarly). If this is debatable, that's fine, as long as my view isn't
totally bat-shit-crazy :)


Re: [cryptography] You can be too secure

2016-05-05 Thread Kevin

I see what you mean :)


On 5/5/2016 2:45 PM, Ron Garret wrote:

On May 5, 2016, at 11:13 AM, Kevin  wrote:


One can never be too secure!

Actually, I learned the hard way last week that this is not true.

Four years ago I bought a 2010 MacBook Air from a private party (i.e., I've 
owned it for four years, and it was two years old when I bought it).  I did a 
clean install of OS X, and used the machine with no problems for the next four 
years.

Last week, someone apparently put an iCloud lock on the machine.  It turns out 
that wiping the hard drive does not remove the machine’s iCloud binding.  If 
the machine has been associated with an iCloud account at any time in its 
history, only the owner of the associated account (or Apple) can remove that 
binding.  And Apple will only do it if you can produce a proof-of-purchase, 
which for them is a receipt from an authorized reseller.  The iCloud lock is 
implemented in EFI firmware, so not even replacing the internal drive will 
remove it.

It gets worse: Apple refuses to contact the owner of the iCloud account that 
placed the lock.  The lock message provides no information (it simply says, 
“Machine locked pending investigation.”)  So even if the machine I bought was 
stolen (I have a lot of evidence that it wasn’t, but no proof) I can’t return 
it to its rightful owner because I have no idea who it is.  Apple knows, but 
they won’t tell me (which is understandable) nor will they contact that person 
on my behalf (which is not).  They also don’t provide any way of checking 
whether a Mac has an existing iCloud binding.  (They provide this service for 
mobile devices, but not for Macs.)  The only way to tell is to take the machine 
into an Apple store and have them check it.

IMHO that’s too secure.

rg







Re: [cryptography] You can be too secure

2016-05-05 Thread Jeffrey Walton
On Thu, May 5, 2016 at 2:45 PM, Ron Garret  wrote:
>
> On May 5, 2016, at 11:13 AM, Kevin  wrote:
>
>> One can never be too secure!
>
> Actually, I learned the hard way last week that this is not true.
>
> Four years ago I bought a 2010 MacBook Air from a private party (i.e., I've 
> owned it for four years, and it was two years old when I bought it).  I did a 
> clean install of OS X, and used the machine with no problems for the next 
> four years.
>
> Last week, someone apparently put an iCloud lock on the machine.  It turns 
> out that wiping the hard drive does not remove the machine’s iCloud binding.  
> If the machine has been associated with an iCloud account at any time in its 
> history, only the owner of the associated account (or Apple) can remove that 
> binding.  And Apple will only do it if you can produce a proof-of-purchase, 
> which for them is a receipt from an authorized reseller.  The iCloud lock is 
> implemented in EFI firmware, so not even replacing the internal drive will 
> remove it.
>
> It gets worse: Apple refuses to contact the owner of the iCloud account that 
> placed the lock.  The lock message provides no information (it simply says, 
> “Machine locked pending investigation.”)  So even if the machine I bought was 
> stolen (I have a lot of evidence that it wasn’t, but no proof) I can’t return 
> it to its rightful owner because I have no idea who it is.  Apple knows, but 
> they won’t tell me (which is understandable) nor will they contact that 
> person on my behalf (which is not).  They also don’t provide any way of 
> checking whether a Mac has an existing iCloud binding.  (They provide this 
> service for mobile devices, but not for Macs.)  The only way to tell is to 
> take the machine into an Apple store and have them check it.
>

Drag them into court... Let them spend $25,000 attempting to defend
their position. It will cost you about $50.00 to file it.

Money is the only thing corporations care about. Hit back where it hurts.

Jeff


[cryptography] You can be too secure

2016-05-05 Thread Ron Garret

On May 5, 2016, at 11:13 AM, Kevin  wrote:

> One can never be too secure!

Actually, I learned the hard way last week that this is not true.

Four years ago I bought a 2010 MacBook Air from a private party (i.e., I've 
owned it for four years, and it was two years old when I bought it).  I did a 
clean install of OS X, and used the machine with no problems for the next four 
years.

Last week, someone apparently put an iCloud lock on the machine.  It turns out 
that wiping the hard drive does not remove the machine’s iCloud binding.  If 
the machine has been associated with an iCloud account at any time in its 
history, only the owner of the associated account (or Apple) can remove that 
binding.  And Apple will only do it if you can produce a proof-of-purchase, 
which for them is a receipt from an authorized reseller.  The iCloud lock is 
implemented in EFI firmware, so not even replacing the internal drive will 
remove it.

It gets worse: Apple refuses to contact the owner of the iCloud account that 
placed the lock.  The lock message provides no information (it simply says, 
“Machine locked pending investigation.”)  So even if the machine I bought was 
stolen (I have a lot of evidence that it wasn’t, but no proof) I can’t return 
it to its rightful owner because I have no idea who it is.  Apple knows, but 
they won’t tell me (which is understandable) nor will they contact that person 
on my behalf (which is not).  They also don’t provide any way of checking 
whether a Mac has an existing iCloud binding.  (They provide this service for 
mobile devices, but not for Macs.)  The only way to tell is to take the machine 
into an Apple store and have them check it.

IMHO that’s too secure.

rg



Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread dj
> On 05/05/16 09:40 AM, shawn wilson wrote:
>> Just reflecting on the Linux RNG thread a bit ago, is there any
>> technical reason to have RNG in kernel space?
>
> The procurement of an RNG source for crypto is always a *system* design
> issue.
>
> The expectation that a kernel offering (intended for a wide range of CPU
> architectures, each of which is deployed in its own range of systems)
> can solve this system issue is IMHO naive.
>
> Thus, kernel space vs user space makes little difference.
>
> This being said, the kernel developers appear to make good faith efforts
> to adapt to the ever evolving digital electronics paradigms prevailing
> in a few mainstream system architectures. Is this effective versus some
> criteria for RNG quality? Is this good enough for you?
>
> It's your duty to figure that out, I guess.
>
> Regards,
>
> - Thierry Moreau
>

I think this sums it up well. Today you are thrown into having to know
what to do specifically because it's a system level problem (matching
entropy sources to extractors to PRNGs to consuming functions).

The OS kernel does one thing well, and that thing is its job: taking
single physical instances of entropy sources, post-processing them, and
making them available to all userland and kernel consumers.

However, kernel writers cannot address the full system issue because they
don't know what hardware they are running on. They don't know if they are
in a VM. They don't know whether they have access to entropic data, or
whether something else has access to the same data.

So one of the "things you should know" is that if you run a modern Linux,
Solaris, or Windows on specific CPUs in specific environments (like not in
a VM), then it can and will serve your userland programs with
cryptographically useful random numbers, at the cost of a fairly large
attack surface (drivers, APIs, kernel code, timing, memory, etc.)

Intel came down firmly on the side of enabling the userland. One
instruction puts entropic state into a register of your running userland
program. Smaller attack surface, simpler, quicker, and it serves multiple
users whether they are running on bare metal or in a VM. You have to
trust the VM (as you do for anything else you do in a VM). Stuff is done
in hardware to make sure it serves multiple consumers, just as an OS does
stuff to serve multiple consumers.
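
In rough outline, that one-instruction path via the compiler intrinsic
looks like this (a sketch, assuming a CPU with RDRAND, i.e. Ivy Bridge or
later, and gcc/clang with -mrdrnd; the retry loop follows Intel's
published guidance):

#include <immintrin.h>   /* _rdrand64_step() */
#include <stdint.h>
#include <stdio.h>

/* Returns 1 on success. RDRAND can transiently underflow, so retry a
   bounded number of times before declaring hardware failure. */
static int rdrand64_retry(uint64_t *out)
{
    unsigned long long v;
    for (int i = 0; i < 10; i++) {
        if (_rdrand64_step(&v)) {
            *out = v;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    uint64_t r;
    if (rdrand64_retry(&r))
        printf("rdrand: %016llx\n", (unsigned long long)r);
    return 0;
}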

A SW userland RNG is an effective method to connect entropy sources you
know about on your system to algorithms that meet your needs. NIST's
recent switch to requiring key strengths of 192 bits or greater has
precipitated a few 256-bit SW SP800-90 implementations. I know: I wrote a
couple of them, and I've reviewed a few others that were written in
response to the NIST change.
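
For a feel of the shape such an implementation wraps (seed, deterministic
expansion, ratchet), here is a toy hash-based sketch using OpenSSL's
SHA256(); it is emphatically not a conforming SP800-90 implementation:

#include <openssl/sha.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t v[SHA256_DIGEST_LENGTH];   /* internal state */
    uint64_t counter;                  /* blocks generated since seeding */
} toy_drbg;

void toy_drbg_seed(toy_drbg *d, const uint8_t *seed, size_t len)
{
    SHA256(seed, len, d->v);
    d->counter = 0;
}

void toy_drbg_generate(toy_drbg *d, uint8_t *out, size_t outlen)
{
    uint8_t block[SHA256_DIGEST_LENGTH + 8], digest[SHA256_DIGEST_LENGTH];
    while (outlen > 0) {
        /* output block = SHA256(V || counter), then ratchet V forward
           so earlier outputs cannot be recovered from later state */
        memcpy(block, d->v, SHA256_DIGEST_LENGTH);
        memcpy(block + SHA256_DIGEST_LENGTH, &d->counter, 8);
        SHA256(block, sizeof block, digest);
        size_t n = outlen < sizeof digest ? outlen : sizeof digest;
        memcpy(out, digest, n);
        out += n;
        outlen -= n;
        SHA256(d->v, SHA256_DIGEST_LENGTH, digest);   /* ratchet */
        memcpy(d->v, digest, SHA256_DIGEST_LENGTH);
        d->counter++;
    }
}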

SW RNG code is also easy to take through certification. The difference
is that you take the system through certification, not just the code
(except for CAVS). An OS kernel writer doesn't have that advantage.

So my general view is that if you are tasked with enabling random numbers
in your application, userland is usually a better place to do it. Maybe in
a decent library used directly by your application. Maybe with some
trivial inline assembler. But only if you can control the entropy source
and the sharing of it. If you can use HW features (RdRand, RdSeed, other
entropy sources, AES-NI, Hash instructions etc.) then your SW task is
simplified, but it assumes you know what hardware you are writing for.
Ditto for other platforms I'm less familiar with.

The mistake I have seen, particularly in certain 'lightweight' SSL
libraries, is to say "it's our policy not to do the RNG thing; we trust
the OS to provide entropy" and to read from /dev/urandom as a result
(because /dev/random blocks on many platforms). They are trusting a thing
that is not in a place where it can guarantee that entropy sources are
available. It will work on some platforms and will certainly fail on
others, particularly lightweight platforms with Linux kernels on CPUs
with no deliberately designed source of entropy, which is exactly where
lightweight SSL libraries are used most.
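
That fragile pattern is essentially the sketch below; it compiles and
runs everywhere, which is exactly why its silent failure on
entropy-starved platforms goes unnoticed:

#include <stdint.h>
#include <stdio.h>

/* The naive pattern: read /dev/urandom and assume it is well seeded.
   On an entropy-starved embedded system early in boot, this happily
   returns data whether or not the pool was ever properly seeded. */
int get_random_bytes_naive(uint8_t *out, size_t n)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f)
        return -1;
    size_t got = fread(out, 1, n, f);
    fclose(f);
    return got == n ? 0 : -1;
}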

DJ







Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread Kevin
I personally feel that this is overkill. However, it is always a good
idea to cover all of your bases, so I would never say that it's a bad
idea. One can never be too secure!



On 5/5/2016 5:40 AM, shawn wilson wrote:


Just reflecting on the Linux RNG thread a bit ago, is there any
technical reason to have RNG in kernel space? There are things like
haveged which seem to work really well, and putting or changing code in
any kernel can be a bit of a battle (as it should be with code as
complex as that involving crypto - wouldn't want people missing an
exploit your new system exposes and accepting it*). So I wonder what
the gain is for putting RNGs in the kernel.


The only argument I can think of against this is non-technical: if
you rely on users to pick their RNG implementation, they are liable to
get it wrong. This may be valid, but I'm still curious about the
technical reasons for RNG in kernel space.


Also, if kernel space is really necessary, I'd think publishing as a
dkms-type package would gain more traction for getting into mainline
(but this is probably OT here)


* Obviously that same argument can be made of userspace programs, but 
I'd much prefer my exploits happen at a less privileged ring whenever 
possible :)






Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread Michael Greene
One technical reason could be that at least some of the entropy sources are 
also in the kernel, so it makes some sense to put the RNG there, too. It'd 
probably be more implementation effort to be able to use the same entropy 
sources in a userspace tool. 

Another justification could be that it is more difficult to modify kernel 
memory than it is to modify userspace memory, so it might be considered more 
trustworthy. 


On May 5, 2016 2:40:51 AM PDT, shawn wilson  wrote:
>Just reflecting on the Linux RNG thread a bit ago, is there any
>technical reason to have RNG in kernel space? There are things like
>haveged which seem to work really well, and putting or changing code in
>any kernel can be a bit of a battle (as it should be with code as
>complex as that involving crypto - wouldn't want people missing an
>exploit your new system exposes and accepting it*). So I wonder what
>the gain is for putting RNGs in the kernel.
>
>The only argument I can think of against this is non-technical: if you
>rely on users to pick their RNG implementation, they are liable to get
>it wrong. This may be valid, but I'm still curious about the technical
>reasons for RNG in kernel space.
>
>Also, if kernel space is really necessary, I'd think publishing as a
>dkms-type package would gain more traction for getting into mainline
>(but this is probably OT here)
>
>* Obviously that same argument can be made of userspace programs, but
>I'd much prefer my exploits happen at a less privileged ring whenever
>possible :)
>
>
>
>

-- 
Michael Greene
Software Engineer 
mgre...@securityinnovation.com


[cryptography] Kernel space vs userspace RNG

2016-05-05 Thread shawn wilson
Just reflecting on the Linux RNG thread a bit ago, is there any technical
reason to have RNG in kernel space? There are things like haveged which
seem to work really well, and putting or changing code in any kernel can be
a bit of a battle (as it should be with code as complex as that involving
crypto - wouldn't want people missing an exploit your new system exposes
and accepting it*). So I wonder what the gain is for putting RNGs in the
kernel.

The only argument I can think of against this is non-technical: if you
rely on users to pick their RNG implementation, they are liable to get it
wrong. This may be valid, but I'm still curious about the technical reasons
for RNG in kernel space.

Also, if kernel space is really necessary, I'd think publishing as a
dkms-type package would gain more traction for getting into mainline (but
this is probably OT here)

* Obviously that same argument can be made of userspace programs, but I'd
much prefer my exploits happen at a less privileged ring whenever possible
:)