Re: random(4) and VMs

2018-09-18 Thread Sandy Harris
On Tue, Sep 18, 2018 at 7:03 PM John Denker  wrote:

> > Is a fix that only deals with a subset of the problem worth
> > considering? Just patch the VM support code so that any time a VM is
> > either booted or re-started after a save, the host system drops in
> > some entropy, ...
>
> Good solutions already exist for that subset of the problem.
>
> Configure your VM so that each guest has a virtual /dev/hwrng
> I know this works for qemu.
> I imagine it works for other VMs.
>
> If you find this unsatisfactory, please explain.

It may still leave a VM that is snapshotted & restarted vulnerable to
replay since the random state is saved & restored. I'm not sure this
is much of a problem since an attacker would presumably need
privileged access to the host to exploit it & if he has that, all is
lost anyway.


random(4) and VMs

2018-09-18 Thread Sandy Harris
Getting the random driver well initialised early enough is a hard
problem, at least on some machines.

Solutions have been proposed by various people. If I understand them
right, Ted Ts'o suggests modifying the boot loader to provide some
entropy & John Denker suggests that every machine should be
provisioned with some entropy in the kernel image at install time.
Both are general solutions, but I think both would require updating
the entropy store later. As far as I know, neither has yet been
implemented as accepted patches.

Is a fix that only deals with a subset of the problem worth
considering? Just patch the VM support code so that any time a VM is
either booted or re-started after a save, the host system drops in
some entropy. This looks relatively easy to do, at least for Linux
VMs, and some of the code might be the same as what the more general
approaches would need.


Re: Fostering linux community collaboration on hardware accelerators

2017-10-11 Thread Sandy Harris
I shortened the cc list radically. If the discussion continues, it may
be a good idea to add people back. I added John Gilmore since I cite
one of his posts below.

Jonathan Cameron  wrote:

> On behalf of Huawei, I am looking into options to foster a wider community
> around the various ongoing projects related to Accelerator support within
> Linux.  The particular area of interest to Huawei is that of harnessing
> accelerators from userspace, but in a collaborative way with the kernel
> still able to make efficient use of them, where appropriate.
>
> We are keen to foster a wider community than one just focused on
> our own current technology.

Good stuff, but there are problems. e.g. see the thread starting
with my message here:
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg27274.html

My perspective is that of a crypto guy working on general-purpose
CPUs, anything from 32-bit ARM up. There are certainly problems for
devices with massive loads like a high-end router or with more limited
CPUs that I will not even pretend to address.

For me, far & away the biggest issue is having a good source of random
numbers; more-or-less all crypto depends on that. The Linux random(4)
RNG gets close, but there are cases where it may not be well
initialized soon enough on some systems. If a system provides a
hardware RNG, I will certainly use it to feed random(4). I do not care
nearly as much about anything else that might be in a hardware crypto
accelerator.

Separate accelerator devices require management, separating accesses
by different kernel threads or by user processes if they are allowed
to play, keeping them from seeing each other's keys, perhaps saving &
restoring state sometimes. Things that can be built into the CPU --
an RNG instruction or register, AES instructions, Intel's carry-less
multiplication instruction which accelerates the GHASH step of AES-GCM,
probably some I have not heard of -- require less management and are
usable by any process, assuming either compiler support or some
assembler code. As a software guy, I'd far rather the hardware
designers gave me those than anything that needs a driver.


Re: Re: [PATCH 0/6] Add support for ECDSA algorithm

2017-08-22 Thread Sandy Harris
On Tue, Aug 22, 2017 at 12:14 PM, Tudor Ambarus
 wrote:
> Hi, Herbert,
>
> On 02/02/2017 03:57 PM, Herbert Xu wrote:
>>
>> Yes but RSA had an in-kernel user in the form of module signature
>> verification.  We don't add algorithms to the kernel without
>> actual users.  So this patch-set needs to come with an actual
>> in-kernel user of ECDSA.
>
>
> ECDSA can be used by the kernel module signing facility too. Is there
> any interest in using ECDSA by the kernel module signing facility?

I'd say keep it simple wherever possible; adding an algorithm should
need "is required by" not just "can be used by".

Even then, there is room for questions. In particular, whether such a
fragile algorithm should be trusted at all, let alone for signatures
on infrastructure modules that the whole OS will trust.
https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm#Security


Re: [Freedombox-discuss] Hardware Crypto

2017-08-13 Thread Sandy Harris
Showing only the key parts of the message:

> From: John Gilmore 

An exceedingly knowledgeable guy, one we should probably take seriously.
https://en.wikipedia.org/wiki/John_Gilmore_(activist)

> Most hardware crypto accelerators are useless, ...
> ... you might as well have
> just computed the answer in userspace using ordinary instructions.

A strong claim, but one I'm inclined to believe. In the cases where it
applies, it may be a problem for much of the Linux crypto work.

Some CPUs have special instructions to speed up some crypto
operations, and not just AES. For example, Intel has them for several
hashes and for elliptic curve calculations:
https://software.intel.com/en-us/articles/intel-sha-extensions
https://en.wikipedia.org/wiki/CLMUL_instruction_set
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/polynomial-multiplication-instructions-paper.pdf

These move the goalposts; if doing it "using ordinary instructions" is
sometimes faster than hardware, then doing it with
application-specific instructions is even more likely to be faster.

> Even if a /dev/crypto interface existed and was faster for some kinds
> of operations than just doing the crypto manually, the standard crypto
> libraries would have to be portably tuned to detect when to use
> hardware and when to use software.  The libraries generally use
> hardware if it's available, since they were written with the
> assumption that nobody would bother with hardware crypto if it was
> slower than software.
>
> "Just make it fast for all cases" is hard when the hardware is poorly
> designed.  When the hardware is well designed, it *is* faster for all
> cases.  But that's uncommon.
>
> Making this determination in realtime would be a substantial
> enhancement to each crypto library.  Since it'd have to be written
> portably (or the maintainers of the portable crypto libraries won't
> take it back), it couldn't assume any particular timings of any
> particular driver, either in hardware or software.  So it would have
> to run some fraction of the calls (perhaps 1%) in more than one
> driver, and time each one, and then make decisions on which driver to
> use by default for the other 99% of the calls.  The resulting times
> differ dramatically, based on many factors, ...
>
> One advantage of running some of the calls using both hardware and
> software is that the library can check that the results match exactly,
> and abort with a clear message.  That would likely have caught some bugs
> that snuck through in earlier crypto libraries.

I'm not at all sure I'd want run-time testing of this since, at least
as a general rule, introducing complications to crypto code is rarely
a good idea. Such tests at development time seem like a fine idea,
though; do we have those already?

What about testing when it is time to decide on kernel configuration;
include a particular module or not? Another issue is whether the
module choice is all-or-nothing; if there is a hardware RNG can one
use that without loading the rest of the code for the crypto
accelerator?


Re: [Freedombox-discuss] Hardware Crypto

2017-08-10 Thread Sandy Harris
To me it seems obvious that if the hardware provides a real RNG, that
should be used to feed random(4). This solves a genuine problem and,
even if calls to the hardware are expensive, overall overhead will not
be high because random(4) does not need huge amounts of input.

I'm much less certain hardware acceleration is worthwhile for ciphers
& hashes, except where the CPU itself includes instructions to speed
them up.


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-23 Thread Sandy Harris
Sandy Harris <sandyinch...@gmail.com> wrote:

> The biggest problem with random(4) is that you cannot generate good
> output without a good seed & just after boot, ...
>
> The only really good solution I know of is to find a way to provide a
> chunk of randomness early in the boot process. John Denker has a good
> discussion of doing this by modifying the kernel image & Ted talks of
> doing it via the boot loader. ...

Would it be enough to have a kernel module that does more-or-less what
the current shell scripts do, but earlier in the boot process? Throw
the stored data into the random(4) driver at module init time & update
it periodically later. This would not help much for first boot on a
new system, unless its store could be updated during install; Denker's
point that you need each system provisioned differently is important.
However it looks like it would be enough on other boots.

It also looks like it might be easier to implement & test. In
particular it is an isolated do-one-thing-well tool; the programmer
only needs to worry about his or her module, not several different
boot loaders or the procedures that distros have for CD images or
manufacturers for device setup.


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-18 Thread Sandy Harris
On Tue, Jul 18, 2017 at 5:08 PM, Theodore Ts'o  wrote:

> I've been trying to take the best features and suggestions from your
> proposal and integrating them into /dev/random already.

A good approach.

> Things that I've chosen not take is basically because I disbelieve
> that the Jitter RNG is valid. ...

The biggest problem with random(4) is that you cannot generate good
output without a good seed & just after boot, especially first boot on
a new system, you may not have enough entropy. A user space process
cannot do it soon enough and all the in-kernel solutions (unless you
have a hardware RNG) pose difficulties.

The only really good solution I know of is to find a way to provide a
chunk of randomness early in the boot process. John Denker has a good
discussion of doing this by modifying the kernel image & Ted talks of
doing it via the boot loader. Neither looks remarkably easy. Other
approaches like making the kernel read a seed file or passing a
parameter on the kernel command line have been suggested but, if I
recall right, rejected.

As I see it, the questions about Jitter, or any other in-kernel
generator based on timing, are whether it is good enough to be useful
until we have one of the above solutions or useful as a
defense-in-depth trick after we have one. I'd say yes to both.

There's been a lot of analysis. Stephan has a detailed rationale & a
lot of test data in his papers & the Havege papers also discuss
getting entropy from timer operations. I'd say the best paper is
McGuire et al:
https://static.lwn.net/images/conf/rtlws11/random-hardware.pdf

There is enough there to convince me that grabbing some (256?) bits
from such a generator early in the initialization is worthwhile.

> So I have been trying to do the evolution thing already.
> ...

> I'm obviously biased, but I don't see I see the Raison d'Etre for
> merging LRNG into the kernel.

Nor I.


Re: [kernel-hardening] Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Sandy Harris
On Tue, Jun 20, 2017 at 5:49 AM, Jeffrey Walton  wrote:
> On Tue, Jun 20, 2017 at 5:36 AM, Theodore Ts'o  wrote:
>> On Tue, Jun 20, 2017 at 10:53:35AM +0200, Jason A. Donenfeld wrote:

>>> > Suppressing all messages for all configurations cast a wider net than
>>> > necessary. Configurations that could potentially be detected and fixed
>>> > likely will go unnoticed. If the problem is not brought to light, then
>>> > it won't be fixed.

> Are there compelling reasons a single dmesg warning cannot be provided?
>
> A single message avoids spamming the logs. It also informs the system
> owner of the problem. An individual or organization can then take
> action based on their risk posture. Finally, it avoids the kernel
> making policy decisions for a user or organization.

I'd say the best solution is to have no configuration option
specifically for these messages. Always give some, but let
DEBUG_KERNEL control how many.

If DEBUG_KERNEL is not set, emit exactly one message & ignore any
other errors of this type. On some systems, that message may have to
be ignored, on some it might start an incremental process where one
problem gets fixed only to have another crop up & on some it might
prompt the admin to explore further by compiling with DEBUG_KERNEL.

If DEBUG_KERNEL is set, emit a message for every error of this type.


Re: get_random_bytes returns bad randomness before seeding is complete

2017-06-03 Thread Sandy Harris
Stephan's driver, the HAVEGE system & several others purport to
extract entropy from a series of timer calls. Probably the best
analysis is in the McGuire et al. paper at
https://static.lwn.net/images/conf/rtlws11/random-hardware.pdf & the
simplest code in my user-space driver at
https://github.com/sandy-harris/maxwell The only kernel-space code I
know of is Stephan's.

If the claim that such calls give entropy is accepted (which I think
it should be) then if we get one bit per call, need 100 or so bits &
space the calls 100 ns apart, loading up a decent chunk of startup
entropy takes about 10,000 ns or 10 microseconds which looks like an
acceptable delay. Can we just do that very early in the boot process?

Of course this will fail on systems with no high-res timer. Are there
still some of those? On a system that lacks the realtime library's
nanosecond timer but has the POSIX standard microsecond timer, it
might take about 1000 times as long, implying a delay in the
milliseconds. Would that be acceptable in those cases?


Re: [kernel-hardening] Re: get_random_bytes returns bad randomness before seeding is complete

2017-06-02 Thread Sandy Harris
The only sensible & general solution for the initialisation problem
that I have seen is John Denker's.
http://www.av8n.com/computer/htm/secure-random.htm#sec-boot-image

If I read that right, it would require only minor kernel changes &
none to the API Ted & others are worrying about. It would be secure
except against an enemy who can read your kernel image or interfere
with your install process. Assuming permissions are set sensibly, that
means an enemy who already has root & such an enemy has lots of much
easier ways to break things, so we need not worry about that case.

The difficulty is that it would require significant changes to
installation scripts. Still, since it is a general solution to a real
problem, it might be better to implement that rather than work on the
other suggestions in the thread.


Re: [PATCH] crypto: Allow ecb(cipher_null) in FIPS mode

2017-04-22 Thread Sandy Harris
On Sat, Apr 22, 2017 at 3:54 PM, Sandy Harris <sandyinch...@gmail.com> wrote:

> In the FreeS/WAN project, back around the turn of the century,
> we refused to implement several things required by the RFCs

Link to documentation:
http://www.freeswan.org/freeswan_trees/freeswan-2.06/doc/compat.html#dropped


Re: [PATCH] crypto: Allow ecb(cipher_null) in FIPS mode

2017-04-22 Thread Sandy Harris
On Sat, Apr 22, 2017 at 2:56 AM, Stephan Müller  wrote:

> Am Freitag, 21. April 2017, 17:25:41 CEST schrieb Stephan Müller:

> Just for the records: for FIPS 140-2 rules, cipher_null is to be interpreted
> as a memcpy on SGLs. Thus it is no cipher even though it sounds like one.
>
> cipher_null is also needed for seqiv which is required for rfc4106(gcm(aes)),
> which is an approved cipher. Also, it is needed for authenc() which uses it
> for copying the AAD from src to dst.
>
> That said, cipher_null must not be used for "encryption" operation but rather
> for handling data that is not subjected to FIPS 140-2 rules.

In the FreeS/WAN project, back around the turn of the century,
we refused to implement several things required by the RFCs
because we thought they were insecure: null cipher, single
DES & 768-bit DH Group 1.

At that time, not having DES did cause some problems in
interoperating with other IPsec implementations, but I
doubt it would today. Neither of the other dropped items
caused any problems at all.

Today I'd say drop all of those plus the 1024-bit Group 2,
and then look at whether others should go as well. As of
2001 or so, the 1536-bit Group 5 was very widely used,
so dropping it might well be problematic, but I am not
certain if it is either secure or widely used now.


Re: [PATCH] crypto: sun4i-ss: support the Security System PRNG

2016-11-17 Thread Sandy Harris
Add Ted Ts'o to the cc list. Shouldn't he be included on anything affecting
the random(4) driver?

On Tue, Oct 18, 2016 at 8:34 AM, Corentin Labbe
 wrote:

> From: LABBE Corentin 
>
> The Security System have a PRNG.
> This patch add support for it as an hwrng.

Which is it? A PRNG & a HW RNG are quite different things.
It would, in general, be a fairly serious error to treat a PRNG
as a HWRNG.

If it is just a prng (which it appears to be from a quick look
at your code) then it is not clear it is useful since the
random(4) driver already has two PRNGs. It might be
but I cannot tell.
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] poly1305: generic C can be faster on chips with slow unaligned access

2016-11-02 Thread Sandy Harris
On Wed, Nov 2, 2016 at 4:09 PM, Herbert Xu  wrote:

> On Wed, Nov 02, 2016 at 06:58:10PM +0100, Jason A. Donenfeld wrote:
>> On MIPS chips commonly found in inexpensive routers, this makes a big
>> difference in performance.
>>
>> Signed-off-by: Jason A. Donenfeld 
>
> Can you give some numbers please? What about other architectures
> that your patch impacts?

In general it is not always clear that using whatever crypto hardware
is available is a good idea. Not all such hardware is fast, some CPUs
are fast on their own, some CPUs have AES instructions, and even if
the hardware is faster than the CPU, the context-switch overheads may
exceed the advantage.

Ideally the patch development or acceptance process would be
testing this, but I think it might be difficult to reach that ideal.

The exception is a hardware RNG; that should always be used unless
it is clearly awful. It cannot do harm, speed is not much of an issue,
and it solves the hardest problem in the random(4) driver, making
sure of correct initialisation before any use.


Re: Entropy sources (was: /dev/random - a new approach)

2016-08-25 Thread Sandy Harris
On Thu, Aug 25, 2016 at 5:30 PM, H. Peter Anvin <h...@zytor.com> wrote:

> The network stack is a good source of entropy, *once it is online*.
> However, the most serious case is while the machine is still booting,
> when the network will not have enabled yet.
>
> -hpa

One possible solution is at:
https://github.com/sandy-harris/maxwell

A small (< 700 lines) daemon that gets entropy from timer imprecision
and variations in time for arithmetic (cache misses, interrupts, etc.)
and pumps it into /dev/random. Make it the first userspace program
started and all should be covered. Directory above includes a PDF doc
with detailed rationale and some discussion of alternate solutions.

Of course if you are dealing with a system-on-a-chip or low-end
embedded CPU & the timer is really inadequate, this will not work
well. Conceivably it would work well enough, but we could not know that
without a detailed analysis of each chip in question.


Fwd: maxwell RNG

2016-07-31 Thread Sandy Harris
Version 2 of my maxwell(8) rng is now at:
https://github.com/sandy-harris/maxwell

It is a fairly simple program to gather entropy from timer
fluctuations & feed results into the Linux random(4) device. Small
--only 600 lines so reasonably easy to audit -- cheap & effective.

It has not changed much since the original (2012) release. The main
changes now are a new interface (cleaner command-line option set) and
a variety of cleanups to both code & documentation.


Re: [PATCH v3 1/4] crypto: add template handling for RNGs

2016-07-18 Thread Sandy Harris
On Mon, Jul 18, 2016 at 3:14 AM, Herbert Xu  wrote:
> Stephan Mueller  wrote:

>> This patch adds the ability to register templates for RNGs. RNGs are
>> "meta" mechanisms using raw cipher primitives. Thus, RNGs can now be
>> implemented as templates to allow the complete flexibility the kernel
>> crypto API provides.

I do not see why this might be desirable, let alone necessary.
Security-critical code should be kept as simple as possible.
Don't we need just one good RNG?


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-19 Thread Sandy Harris
On Sun, Jun 19, 2016 at 3:36 PM, Pavel Machek  wrote:

>> The following patch set provides a different approach to /dev/random ...
>
> Dunno. It is very similar to existing rng, AFAICT.

I do not think so. A lot of the basic principles are the same of course,
but Stephan is suggesting some real changes. On the other hand, I'm
not sure all of them are good ideas & Ted has already incorporated
some into the driver, so it is debatable how much here is really useful.

> And at the very least, constants in existing RNG could be tuned
> to provide "entropy at the boot time".

No, this is a rather hard problem & just tweaking definitely will
not solve it. Ted's patches, Stephan's, mine, the grsecurity
stuff and the kernel hardening project all have things that
might help, but as far as I can see there is no complete
in-kernel solution yet.

Closest thing I have seen to a solution are Denker's suggestions at:
http://www.av8n.com/computer/htm/secure-random.htm#sec-boot-image

Those, though, require changes to build & installation methods
& it might be hard to get distros & device vendors to do it.

> So IMO this should be re-done as tweaks to existing design, not as
> completely new RNG.

I agree, & I think Stephan has already done some of that.


Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-17 Thread Sandy Harris
David Jaša  wrote:

>
> BTW when looking at an old BSI's issue with Linux urandom that Jarod
> Wilson tried to solve with this series:
> https://www.spinics.net/lists/linux-crypto/msg06113.html
> I was thinking:
> 1) wouldn't it help for large urandom consumers if kernel created a DRBG
> instance for each of them? It would likely enhance performance and solve
> BSI's concern of predicting what numbers could other urandom consumers
> obtain at cost of memory footprint
> and then, after reading paper associated with this series:
> 2) did you evaluate use of intermediate DRBG fed by primary generator to
> instantiate per-node DRBG's? It would allow initialization of all
> secondary DRBGs right after primary generator initialization.

Theodore Ts'o, the random maintainer, already has a patch that
seems to deal with this issue. He has posted more than one
version & I'm not sure this is the best or latest, but ...
https://lkml.org/lkml/2016/5/30/22


Could this be applied to random(4)?

2016-05-27 Thread Sandy Harris
A theoretical paper on getting provably excellent randomness from two
relatively weak input sources.
https://www.sciencenews.org/article/new-technique-produces-real-randomness


Re: AES-NI: slower than aes-generic?

2016-05-26 Thread Sandy Harris
On Thu, May 26, 2016 at 2:49 PM, Stephan Mueller  wrote:

> Then, the use of the DRBG offers users to choose between a Hash/HMAC and CTR
> implementation to suit their needs. The DRBG code is agnostic of the
> underlying cipher. So, you could even use Blowfish instead of AES or whirlpool
> instead of SHA -- these changes are just one entry in drbg_cores[] away
> without any code change.

Not Blowfish in anything like the code you describe! It has only
64-bit blocks which might or might not be a problem, but it also has
an extremely expensive key schedule which would be awful if you want
to rekey often.

I'd say if you want a block cipher there you can quite safely restrict
the interface to ciphers with the same block & key sizes as AES.
Implement AES and one of the other finalists (I'd pick Serpent) to
test, and others can add the remaining finalists or national standards
like Korean ARIA or the Japanese one if they want them.


Re: AES-NI: slower than aes-generic?

2016-05-26 Thread Sandy Harris
Stephan Mueller  wrote:

> for the DRBG and the LRNG work I am doing, I also test the speed of the DRBG.
> The DRBG can be considered as a form of block chaining mode on top of a raw
> cipher.
>
> What I am wondering is that when encrypting 256 16 byte blocks, I get a speed
> of about 170 MB/s with the AES-NI driver. When using the aes-generic or aes-
> asm, I get up to 180 MB/s with all else being equal. Note, that figure
> includes a copy_to_user of the generated data.

Why are you using AES? Granted, it is a reasonable idea, but when Ted
replaced the non-blocking pool with a DBRG, he used a different cipher
(I think chacha, not certain) and I think chose not to use the crypto
library implementation to avoid kernel bloat.

So he has adopted one of your better ideas. Why not follow his
lead on how to implement it?


Re: better patch for linux/bitops.h

2016-05-05 Thread Sandy Harris
On Wed, May 4, 2016 at 11:50 PM, Theodore Ts'o  wrote:

> Instead of arguing over who's "sane" or "insane", can we come up with
> a agreed upon set of tests, and a set of compiler and compiler
> versions ...

I completely fail to see why tests or compiler versions should be
part of the discussion. The C standard says the behaviour in
certain cases is undefined, so a standard-compliant compiler
can generate more-or-less any code there.

As long as any of portability, reliability or security are among our
goals, any code that can give undefined behaviour should be
considered problematic.

> But instead of arguing over what works and doesn't, let's just create
> the the test set and just try it on a wide range of compilers and
> architectures, hmmm?

No. Let's just fix the code so that undefined behaviour cannot occur.

Creating test cases for a fix and trying them on a range of systems
would be useful, perhaps essential, work. Doing tests without a fix
would be a complete waste of time.


Re: random(4) changes

2016-04-26 Thread Sandy Harris
On Mon, Apr 25, 2016 at 12:06 PM, Andi Kleen <a...@firstfloor.org> wrote:

> Sandy Harris <sandyinch...@gmail.com> writes:
>
> There is also the third problem of horrible scalability of /dev/random
> output on larger systems, for which patches are getting ignored.

I did not write that. I think Andi is quoting himself here, not me.

> https://lkml.org/lkml/2016/2/10/716
>
> Ignoring problems does not make them go away.
>
> -Andi
> --
> a...@linux.intel.com -- Speaking for myself only


random(4) changes

2016-04-22 Thread Sandy Harris
Stephan has recently proposed some extensive changes to this driver,
and I proposed a quite different set earlier. My set can be found at:
https://github.com/sandy-harris

This post tries to find the bits of both proposals that seem clearly
worth doing and entail neither large implementation problems nor large
risk of throwing out any babies with the bathwater.

Unfortunately, nothing here deals with the elephant in the room -- the
distinctly hard problem of making sure the driver is initialised well
enough & early enough. That needs a separate post, probably a separate
thread. I do not find Stephan's solution to this problem plausible and
my stuff does not claim to deal with it, though it includes some
things that might help.

I really like Stephan's idea of simplifying the interrupt handling,
replacing the multiple entropy-gathering calls in the current driver
with one routine called for all interrupts. See section 1.2 of his
doc. That seems to me a much cleaner design, easier both to analyse
and to optimise as a fast interrupt handler. I also find Stephan's
arguments that this will work better on modern systems -- VMs,
machines with SSDs, etc. -- quite plausible.

Note, though, that I am only talking about the actual interrupt
handling, not the rest of Stephan's input handling code: the parity
calculation and XORing the resulting single bit into the entropy pool.
I'd be happier, at least initially, with a patch that only implemented
a single-source interrupt handler that gave 32 or 64 bits to existing
input-handling code.

Stephan: would you want to provide such a patch?
Ted: would you be inclined to accept it?

I also quite like Stephan's idea of replacing the two output pools
with a NIST-approved DBRG, mainly because this would probably make
getting various certifications easier. I also like the idea of using
crypto lib code for that since it makes both testing & maintenance
easier. This strikes me, though, as a do-when-convenient sort of
cleanup task, not at all urgent unless there are specific
certifications we need soon.

As for my proposals, I of course think they are full of good ideas,
but there's only one I think is really important.

In the current driver -- and I think in Stephan's, though I have not
looked at his code in any detail, only his paper -- heavy use of
/dev/urandom or the kernel get_random_bytes() call can deplete the
entropy available to /dev/random. That can be a serious problem in
some circumstances, but I think I have a fix.

You have an input pool (I) plus a blocking pool (B) & a non-blocking
pool (NB). The problem is what to do when NB must produce a lot of
output but you do not want to deplete I too much. B & NB might be
replaced by DRBGs and the problem would not change.

B must be reseeded before every /dev/random output, NB after some
number of output blocks. I used #define SAFE_OUT 503 but some other
number might be better depending on how NB is implemented & how
paranoid/conservative one feels.

B can only produce one full-entropy output, suitable for /dev/random,
per reseed, but B and NB are basically the same design, so B can also
produce SAFE_OUT reasonably good random numbers per reseed. Use those
to reseed NB and you reduce the load on I for reseeding NB by a factor
of SAFE_OUT: instead of drawing on I once per NB reseed (every
SAFE_OUT outputs), you draw on I only to reseed B (every
SAFE_OUT*SAFE_OUT outputs).

This does need analysis by cryptographers, but at a minimum it is
basically plausible and, even with some fairly small value for
SAFE_OUT, it greatly alleviates the problem.
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-22 Thread Sandy Harris
On Thu, Apr 21, 2016 at 10:51 PM, Theodore Ts'o <ty...@mit.edu> wrote:

> I still have a massive problem with the claims that the "Jitter" RNG
> provides any amount of entropy.  Just because you and I might not be
> able to analyze it doesn't mean that somebody else couldn't.  After
> all, DUAL-EC DRNG was very complicated and hard to analyze.  So would
> be something like
>
>AES(NSA_KEY, COUNTER++)
>
> Very hard to analyze indeed.  Shall we run statistical tests?  They'll
> pass with flying colors.
>
> Secure?  Not so much.
>
> - Ted

Jitter, havege and my maxwell(8) all claim to get entropy from
variations in timing of simple calculations, and the docs for
all three give arguments that there really is some entropy
there.

Some of those arguments are quite strong. Mine are in
the PDF at:
https://github.com/sandy-harris/maxwell

I find any of those plausible as an external RNG feeding
random(4), though a hardware RNG or Turbid is preferable.


Re: [PATCH] crypto: implement DH primitives under akcipher API

2016-03-02 Thread Sandy Harris
Salvatore Benedetto  wrote:

>> > > > +static int dh_check_params_length(unsigned int p_len)
>> > > > +{
>> > > > +   switch (p_len) {
>> > > > +   case 768:
>> > > > +   case 1024:
>> > > > +   case 1536:
>> > > > +   case 2048:
>> > > > +   case 3072:
>> > > > +   case 4096:
>> > > > +   return 0;
>> > > > +   }
>> > > > +   return -EINVAL;
>> > > > +}
As far back as 1999, the FreeS/WAN project refused to
implement the 768-bit IPsec group 1 (even though it was
the only one required by the RFCs) because it was not thought
secure enough. I think the most-used group was 1536-bit
group 5; it wasn't in the original RFCs but nearly everyone
implemented it.

>> And besides, I would like to disallow all < 2048 right from the start.

I'm not up-to-date on the performance of attacks. You may be right,
or perhaps the minimum should be even higher. Certainly there is
no reason to support 768 or 1024-bit groups.

On the other hand, we should consider keeping the 1536-bit
group since it is very widely used, likely including by people
we'll want to interoperate with.

> Hmm.. What range would you suggest?

There are at least two RFCs which define additional groups.
Why not just add some or all of those?
https://tools.ietf.org/html/rfc3526
https://tools.ietf.org/html/rfc5114


Re: [PATCH 1/7] A couple of generated files

2016-03-01 Thread Sandy Harris
This set of patches, plus some later ones that simplify the
code and get rid of one major bug, is now at:
https://github.com/sandy-harris

Directory for these changes is random.gcm

An out-of-kernel test program for an older version
is in random.test

On Sat, Nov 7, 2015 at 1:50 PM, Sandy Harris <sandyinch...@gmail.com> wrote:

> There are two groups of changes, each controlled by a config
> variable. Default for both is 'n'.
>
> CONFIG_RANDOM_INIT: initialise the pools with data from
> /dev/urandom on the machine that compiles the kernel.
> Comments for the generator program scripts/gen_random.c
> have details.
>
> The main change in random.c is adding conditionals
> to make it use the random data if CONFIG_RANDOM_INIT
> is set. There is also a trivial fix updating a reference to an
> obsoleted RFC in a comment, and I added some sanity-check
> #if tests for odd #define parameter values.
>
> This is a fairly simple change. I do not think it needs a config
> variable; it should just be the default. However I put it under
> config control for testing.
>
> CONFIG_RANDOM_GCM controls a much larger and
> less clearly desirable set of changes. It switches
> compilation between random.c and a heavily
> modified version random_gcm.c
>
> This uses the hash from AES-GCM instead of SHA-1,
> and that allows a lot of other changes. The main
> design goal was to decouple the two output pools
> so that heavy use of the nonblocking pool cannot
> deplete entropy in the input pool. The nonblocking
> pool usually rekeys from the blocking pool instead.
> random_gcm.c has extensive comments on both
> the rationale for this approach & the details of my
> implementation.
>
> random_gcm.c is not close to being a finished
> product, in particular my code is not yet well
> integrated with existing driver code.
>
> Most of the code was developed and has been
> fairly well tested outside the kernel.
> Test program is at:
> https://github.com/sandy-harris/random.test
>
> I just dropped a large chunk of that code into
> a copy of random.c, made modifications to
> make the style match better & to get it to
> compile in the kernel context, then deleted
> a few chunks of existing driver code and
> replaced them with calls to my stuff.
>
> Proper integration would involve both
> replacing more of the existing code with
> new and moving a few important bits of
> the existing code into some of my functions.
> In particular, my stuff does not yet block
> in the right places.


Re: ipsec impact on performance

2015-12-03 Thread Sandy Harris
This article is old (turn of the century) but it may have numbers
worth comparing to
http://www.freeswan.org/freeswan_trees/CURRENT-TREE/doc/performance.html


Re: A new, fast and "unbreakable" encryption algorithm

2015-11-18 Thread Sandy Harris
On Wed, Nov 18, 2015 at 12:10 AM, Ismail Kizir  wrote:

> I've developed a new encryption algorithm, which dynamically changes
> the key according to plaintext and practically impossible to break.

There is a very long history of crypto whose authors consider it
secure being quickly broken. This happens to nearly all methods
devised by amateurs and quite a few from professionals.

Despite that, amateurs like me & (I presume) you keep trying.
This is probably a good thing. Here's one of mine:
https://aezoo.compute.dtu.dk/doku.php?id=enchilada

> I also opened to public with MIT dual License.

This is excellent. Many people make claims for their
algorithm without publishing details, which is ludicrous
since no-one can analyze it without those details. You
have avoided that pitfall.

> I will present a paper on a Turkish National Inet-tr 2015 Symposium

A paper describing the design would make analysis
much easier than doing it from source code, and
like every other algorithm yours will need lots of
analysis before it might become sensible for
people to trust it.

I suggest you subscribe to the crypto list:
http://www.metzdowd.com/mailman/listinfo/cryptography

Once your paper is published, post a link there
to invite analysis.


[PATCH 3/7] Initialise pools randomly if CONFIG_RANDOM_INIT=y

2015-11-07 Thread Sandy Harris
Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
---
 drivers/char/random.c | 50 ++
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d0da5d8..e222e0f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -231,7 +231,7 @@
  * not be attributed to the Phil, Colin, or any of authors of PGP.
  *
  * Further background information on this topic may be obtained from
- * RFC 1750, "Randomness Recommendations for Security", by Donald
+ * RFC 4086, "Randomness Requirements for Security", by Donald
  * Eastlake, Steve Crocker, and Jeff Schiller.
  */
 
@@ -275,13 +275,19 @@
 /*
  * Configuration information
  */
+#ifdef CONFIG_RANDOM_INIT
+
+#include <generated/random_init.h>
+
+#else
 #define INPUT_POOL_SHIFT   12
 #define INPUT_POOL_WORDS   (1 << (INPUT_POOL_SHIFT-5))
 #define OUTPUT_POOL_SHIFT  10
 #define OUTPUT_POOL_WORDS  (1 << (OUTPUT_POOL_SHIFT-5))
-#define SEC_XFER_SIZE  512
-#define EXTRACT_SIZE   10
+#endif
 
+#define EXTRACT_SIZE   10
+#define SEC_XFER_SIZE  512
 #define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
@@ -296,6 +302,27 @@
 #define ENTROPY_SHIFT 3
 #define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT)
 
+/* sanity checks */
+
+#if ((ENTROPY_SHIFT+INPUT_POOL_SHIFT) >= 16)
+#ifndef CONFIG_64BIT
+#error *_SHIFT values problematic for credit_entropy_bits()
+#endif
+#endif
+
+#if ((INPUT_POOL_WORDS%16) || (OUTPUT_POOL_WORDS%16))
+#error Pool size not divisible by 16, which code assumes
+#endif
+
+#if (INPUT_POOL_WORDS < 32)
+#error Input pool less than a quarter of default size
+#endif
+
+#if (INPUT_POOL_WORDS < OUTPUT_POOL_WORDS)
+#error Strange configuration, input pool smalller than output
+#endif
+
+
 /*
  * The minimum number of bits of entropy before we wake up a read on
  * /dev/random.  Should be enough to do a significant reseed.
@@ -442,16 +469,23 @@ struct entropy_store {
 };
 
 static void push_to_pool(struct work_struct *work);
+
+#ifndef CONFIG_RANDOM_INIT
 static __u32 input_pool_data[INPUT_POOL_WORDS];
 static __u32 blocking_pool_data[OUTPUT_POOL_WORDS];
 static __u32 nonblocking_pool_data[OUTPUT_POOL_WORDS];
+#endif
 
 static struct entropy_store input_pool = {
.poolinfo = &poolinfo_table[0],
.name = "input",
.limit = 1,
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
-   .pool = input_pool_data
+#ifdef CONFIG_RANDOM_INIT
+   .pool = pools,
+#else
+   .pool = input_pool_data,
+#endif
 };
 
 static struct entropy_store blocking_pool = {
@@ -460,7 +494,11 @@ static struct entropy_store blocking_pool = {
.limit = 1,
.pull = &input_pool,
.lock = __SPIN_LOCK_UNLOCKED(blocking_pool.lock),
+#ifdef CONFIG_RANDOM_INIT
+   .pool = pools + INPUT_POOL_WORDS,
+#else
.pool = blocking_pool_data,
+#endif
.push_work = __WORK_INITIALIZER(blocking_pool.push_work,
push_to_pool),
 };
@@ -470,7 +508,11 @@ static struct entropy_store nonblocking_pool = {
.name = "nonblocking",
.pull = &input_pool,
.lock = __SPIN_LOCK_UNLOCKED(nonblocking_pool.lock),
+#ifdef CONFIG_RANDOM_INIT
+   .pool = pools + INPUT_POOL_WORDS + OUTPUT_POOL_WORDS,
+#else
.pool = nonblocking_pool_data,
+#endif
.push_work = __WORK_INITIALIZER(nonblocking_pool.push_work,
push_to_pool),
 };
-- 
2.5.0



[PATCH 6/7] Produces generated/random_init.h for random driver

2015-11-07 Thread Sandy Harris
Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
---
 scripts/gen_random.c | 260 +++
 1 file changed, 260 insertions(+)
 create mode 100644 scripts/gen_random.c

diff --git a/scripts/gen_random.c b/scripts/gen_random.c
new file mode 100644
index 000..07b447f
--- /dev/null
+++ b/scripts/gen_random.c
@@ -0,0 +1,260 @@
+/*
+ * Program to select random numbers for initialising things
+ * in the random(4) driver.
+ *
+ * A different implementation of basically the same idea is
+ * one of several kernel security enhancements at
+ * https://grsecurity.net/
+ *
+ * This program:
+ *
+ *limits the range of Hamming weights
+ *every byte has at least one bit 1, one 0
+ *different every time it runs
+ *
+ * data from /dev/urandom
+ * results suitable for inclusion by random.c
+ * writes to stdout, expecting makefile to redirect
+ *
+ * makefile should also delete the output file after it is
+ * used in compilation of random.c. This is more secure; it
+ * forces the file to be rebuilt and a new version used in
+ * every compile. It also prevents an enemy just reading an
+ * output file in the build directory and getting the data
+ * that is in use in the current kernel. This is not full
+ * protection since they might look in the kernel image,
+ * but it seems to be the best we can do.
+ *
+ * This falls well short of the ideal initialisation solution,
+ * which would give every installation (rather than every
+ * compiled kernel) a different seed. For that, see John
+ * Denker's suggestions at:
+ * http://www.av8n.com/computer/htm/secure-random.htm#sec-boot-image
+ *
+ * On the other hand, neither sort of seed is necessary if
+ *either  you have a trustworthy hardware RNG
+ *or  you have secure stored data
+ * In those cases, the device can easily be initialised well; the
+ * only difficulty is to ensure this is done early enough.
+ *
+ * Inserting random data at compile time can do no harm and may
+ * sometimes make attacks harder. It is not an ideal solution, and
+ * not always necessary, but cheap and probably the best we can do
+ * during the build (rather than install) process.
+ *
+ * This is certainly done early enough and the data is random
+ * enough, but it is not necessarily secret enough.
+ *
+ * In some cases -- for example, a firewall machine that compiles
+ * its own kernel -- this alone might be enough to ensure secure
+ * initialisation, since only an enemy who already has root could
+ * discover this data. Of course even in those cases it should not
+ * be used alone, only as one layer of a defense in depth.
+ *
+ * In other cases -- a kernel that is compiled once then used in
+ * a Linux distro or installed on many devices -- this is likely
+ * of very little value. It complicates an attack somewhat, but
+ * it clearly will not stop a serious attacker and may not even
+ * slow them down much.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * Configuration information
+ * moved from random.c
+ */
+#define INPUT_POOL_SHIFT   12
+#define INPUT_POOL_WORDS   (1 << (INPUT_POOL_SHIFT-5))
+#define OUTPUT_POOL_SHIFT  10
+#define OUTPUT_POOL_WORDS  (1 << (OUTPUT_POOL_SHIFT-5))
+
+#define TOTAL_POOL_WORDS  (INPUT_POOL_WORDS + 2*OUTPUT_POOL_WORDS)
+
+typedef uint32_t u32 ;
+
+int accept(u32) ;
+int hamming(u32);
+void do_block( int, char * ) ;
+void usage(void) ;
+
+int urandom ;
+
+int main(int argc, char **argv)
+{
+   if( (urandom = open("/dev/urandom", O_RDONLY)) == -1 )  {
+   fprintf(stderr, "gen_random_init: no /dev/urandom, cannot continue\n") ;
+   exit(1) ;
+   }
+   printf("/* File generated by gen_random_init.c */\n\n") ;
+   /*
+* print our constants into output file
+* ensuring random.c has the same values
+*/
+   printf("#define INPUT_POOL_WORDS %d\n", INPUT_POOL_WORDS) ; 
+   printf("#define OUTPUT_POOL_WORDS %d\n", OUTPUT_POOL_WORDS) ;
+   printf("#define INPUT_POOL_SHIFT %d\n\n", INPUT_POOL_SHIFT) ; 
+ 
+   /*
+* Initialise the pools with random data
+* This is done unconditionally
+*/
+   do_block( TOTAL_POOL_WORDS, "pools" ) ;
+
+#ifdef CONFIG_RANDOM_GCM
+
+#define ARRAY_ROWS  8  /* 4 pools get 2 constants each */
+#define ARRAY_WORDS (4 * ARRAY_ROWS)   /* 32-bit words, 128-bit constants */
+
+/*
+ * If we are using the GCM hash, set up an array of random
+ * constants for it.
+ *
+ * The choice of 32 words (eight 128-bit rows, 1024 bits) for
+ * this is partly arbitrary, partly reasoned. 256 bits would
+ * almost certainly be enough, but 1024 is convenient.
+ *
+ * The AES-GCM hash initialises its accumulator all-zero and uses
+ * a 128-bit multiplier, H. I chose instead to use two constants,
+ * one to initialise the accumulator 

[PATCH 5/7] Conditionals for CONFIG_RANDOM_INIT and CONFIG_RANDOM_GCM

2015-11-07 Thread Sandy Harris
Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
---
 drivers/char/Makefile | 25 -
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index d8a7579..7d095e5 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -2,7 +2,30 @@
 # Makefile for the kernel character device drivers.
 #
 
-obj-y  += mem.o random.o
+obj-y  += mem.o
+
+ifeq ($(CONFIG_RANDOM_GCM),y)
+  random_c = random_gcm.c
+  random_o = random_gcm.o
+  random_no= random.o
+else
+  random_c = random.c
+  random_o = random.o
+  random_no= random_gcm.o
+endif
+obj-y  += $(random_o)
+
+# remove the generated file after use so that
+# a fresh one is built (by scripts/gen_random)
+# for every compile
+# remove random_no so it will not get linked
+ifeq ($(CONFIG_RANDOM_INIT),y)
+init-file = include/generated/random_init.h
+$(random_o): $(random_c) $(init-file)
+   $(CC) $< -o $@
+   $(Q) rm --force $(init-file) $(random_no)
+endif
+
 obj-$(CONFIG_TTY_PRINTK)   += ttyprintk.o
 obj-y  += misc.o
 obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
-- 
2.5.0



[PATCH 7/7] Create generated/random_init.h, used by random driver

2015-11-07 Thread Sandy Harris
Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
---
 Kbuild   | 21 +
 scripts/Makefile |  1 +
 2 files changed, 22 insertions(+)

diff --git a/Kbuild b/Kbuild
index f55cefd..494c665 100644
--- a/Kbuild
+++ b/Kbuild
@@ -5,6 +5,7 @@
 # 2) Generate timeconst.h
 # 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
 # 4) Check for missing system calls
+# 5) Generate random_init.h
 
 # Default sed regexp - multiline due to syntax constraints
 define sed-y
 missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
 
 # Keep these three files during make clean
 no-clean-files := $(bounds-file) $(offsets-file) $(timeconst-file)
+
+#
+# 5) Generate random_init.h
+
+ifdef CONFIG_RANDOM_INIT
+init-file := include/generated/random_init.h
+used-file := scripts/gen_random
+source-file := $(used-file).c
+always  += $(init-file)
+targets  += $(init-file)
+$(init-file) : $(used-file)
+   $(Q) $(used-file) > $(init-file)
+ifdef CONFIG_RANDOM_GCM
+$(used-file) : $(source-file)
+   $(CC) $< -DCONFIG_RANDOM_GCM -o $@
+else
+$(used-file) : $(source-file)
+   $(CC) $< -o $@
+endif
+endif
diff --git a/scripts/Makefile b/scripts/Makefile
index 1b26617..3cea546 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -18,6 +18,7 @@ hostprogs-$(CONFIG_BUILDTIME_EXTABLE_SORT) += sortextable
 hostprogs-$(CONFIG_ASN1)+= asn1_compiler
 hostprogs-$(CONFIG_MODULE_SIG)  += sign-file
 hostprogs-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += extract-cert
+hostprogs-$(CONFIG_RANDOM_INIT) += gen_random
 
 HOSTCFLAGS_sortextable.o = -I$(srctree)/tools/include
 HOSTCFLAGS_asn1_compiler.o = -I$(srctree)/include
-- 
2.5.0



[PATCH 1/7] A couple of generated files

2015-11-07 Thread Sandy Harris
Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
---
 .gitignore | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/.gitignore b/.gitignore
index fd3a355..dd80bfd 100644
--- a/.gitignore
+++ b/.gitignore
@@ -112,3 +112,6 @@ all.config
 
 # Kdevelop4
 *.kdev4
+
+certs/x509_certificate_list
+scripts/gen_random
-- 
2.5.0



Re: [PATCH 1/7] A couple of generated files

2015-11-07 Thread Sandy Harris
On Sat, Nov 7, 2015 at 12:01 PM, Jason Cooper <ja...@lakedaemon.net> wrote:
> On Sat, Nov 07, 2015 at 09:30:36AM -0500, Sandy Harris wrote:
>> Signed-off-by: Sandy Harris <sandyinch...@gmail.com>
>> ---
>>  .gitignore | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/.gitignore b/.gitignore
>> index fd3a355..dd80bfd 100644
>> --- a/.gitignore
>> +++ b/.gitignore
>> @@ -112,3 +112,6 @@ all.config
>>
>>  # Kdevelop4
>>  *.kdev4
>> +
>> +certs/x509_certificate_list
>> +scripts/gen_random
>
> Is there a .gitignore file in scripts/ ?
>

Yes, though I wasn't aware of that.
I guess gen_random should be there instead of in the global file.


Re: [PATCH 1/7] A couple of generated files

2015-11-07 Thread Sandy Harris
Jason Cooper <ja...@lakedaemon.net> wrote:

> I know we talked  about this series offlist, but we need to fill in
> folks who are seeing it for the first time.  Usually, this is done with
> a coverletter (--coverletter for git format-patch).

Yes, your help plus the O'Reilly book got me using git without
too many errors, but I'm still getting things wrong & missing
the cover letter was one.

> No need to resend
> before receiving feedback, but would you mind replying with a
> description of the problem you're attempting to solve and how the series
> solves it?

There are two groups of changes, each controlled by a config
variable. Default for both is 'n'.

CONFIG_RANDOM_INIT: initialise the pools with data from
/dev/urandom on the machine that compiles the kernel.
Comments for the generator program scripts/gen_random.c
have details.

The main change in random.c is adding conditionals
to make it use the random data if CONFIG_RANDOM_INIT
is set. There is also a trivial fix updating a reference to an
obsoleted in a comment, and I added some sanity-check
#if tests for odd #define parameter values.

This is a fairly simple change. I do not think it needs a config
variable; it should just be the default. However I put it under
config control for testing.

CONFIG_RANDOM_GCM controls a much larger and
less clearly desirable set of changes. It switches
compilation between random.c and a heavily
modified version random_gcm.c

This uses the hash from AES-GCM instead of SHA-1,
and that allows a lot of other changes. The main
design goal was to decouple the two output pools
so that heavy use of the nonblocking pool cannot
deplete entropy in the input pool. The nonblocking
pool usually rekeys from the blocking pool instead.
random_gcm.c has extensive comments on both
the rationale for this approach & the details of my
implementation.

random_gcm.c is not close to being a finished
product, in particular my code is not yet well
integrated with existing driver code.

Most of the code was developed and has been
fairly well tested outside the kernel.
Test program is at:
https://github.com/sandy-harris/random.test

I just dropped a large chunk of that code into
a copy of random.c, made modifications to
make the style match better & to get it to
compile in the kernel context, then deleted
a few chunks of existing driver code and
replaced them with calls to my stuff.

Proper integration would involve both
replacing more of the existing code with
new and moving a few important bits of
the existing code into some of my functions.
In particular, my stuff does not yet block
in the right places.


Re: [PATCH v2 1/4] Crypto: Crypto driver support aes/des/des3 for rk3288

2015-11-06 Thread Sandy Harris
On Thu, Nov 5, 2015 at 8:17 PM, Zain Wang  wrote:
> The names registered are:
> ecb(aes) cbc(aes) ecb(des) cbc(des) ecb(des3_ede) cbc(des3_ede)
> You can alloc tags above in your case.

Why on Earth are you allowing DES? Here's a reference from around the
turn of the century on why the FreeS/WAN project refused to implement
it then:
http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/politics.html#desnotsecure

In 1998 a $200,000-odd purpose-built machine using FPGAs could break
DES in a few days. Moore's Law applies; my guess would be that today
you could break it in hours for well under $10,000 using either GPUs
or Intel's Xeon Phi.

Even if you have to implement DES because you need it as a component
for 3DES and some standards still require 3DES, single DES should not
be exposed in the user interface.


Randomness for crypto, github repositories

2015-10-21 Thread Sandy Harris
I've just created github repositories for two projects:

https://github.com/sandy-harris/random.test

Test program for things I want to add to the Linux random(4) driver. I
am proposing a fairly radical rewrite. This gives an executable test
program for my new code, not a driver.

https://github.com/sandy-harris/maxwell

A demon to feed random(4) with entropy derived from the timer.
Intended mainly for use on limited systems which may lack other good
sources.


Re: [Cryptography] Randomness for crypto, github repositories

2015-10-21 Thread Sandy Harris
On Wed, Oct 21, 2015 at 1:06 PM,   wrote:

> I've only looked at it briefly, but I have a question.. Are you trying to
> use the GCM Galois multiply as an entropy extractor?

Yes, the basic idea is to use a series of GCM multiplies over the pool
data to replace the hashing of that data in the current driver. There
are complications; each hash uses two quasi-constants -- initialiser
and GCM multiplier -- and hashes a counter along with the pool data.
The counter changes on every iteration and is sometimes changed more
drastically, and the constants are sometimes updated

> I don't know of any proof that it is a good extractor for any class of
> entropic data. That doesn't mean there isn't one, but I've not heard of
> one.

Good question. It seems to me that if it is secure for its
authentication usage, where it replaces an HMAC, then it should be
safe in this application. But no, I don't have a proof & the question
is worth some analysis.


Re: [PATCH 2/2] ath9k: export HW random number generator

2015-07-28 Thread Sandy Harris
On Mon, Jul 27, 2015 at 7:01 AM, Stephan Mueller <smuel...@chronox.de> wrote:

> This one does not look good for a claim that the RNG produces white noise. An
> RNG that is wired up to /dev/hwrng should produce white noise. Either by
> having an appropriate noise source or by conditioning the output of the noise
> source.

Yes.

> When conditioning the output, you have to be careful about the entropy claim.

A very good analysis of how to deal with this is in Denker's Turbid paper:
http://www.av8n.com/turbid/

In particular, see section 4.2 on Saturation

> However, the hwrandom framework does not provide any conditioning logic.

At first sight, this sounds like a blunder to me, but I have not
looked at hwrandom at all. Is there a rationale?

For example, not building conditioning into that driver would make
perfect sense if the output were just being fed into the random(4)
which does plenty of mixing. The only problem then would be to make
sure of giving random(4) reasonable entropy estimates.


Re: [PATCH v5 06/14] crypto: marvell/CESA: add DES support

2015-06-17 Thread Sandy Harris
On Tue, Jun 16, 2015 at 5:59 AM, Boris Brezillon
<boris.brezil...@free-electrons.com> wrote:

> Add support for DES operations.

Why on Earth should we do that? DES is demonstrably insecure. The only
possible excuse for allowing it anywhere in a modern code base is that
you need it to implement triple DES, and even that should by now be
deprecated in favour of more modern ciphers which are much faster and
thought to be more secure.

Here's documentation from around the turn of the century
http://www.freeswan.org/freeswan_trees/freeswan-1.5/doc/DES.html

Moore's Law applies, so the $200,000 FPGA machine that broke DES in
days in 1998 might be dirt cheap today. Certainly breaking DES on one
of today's clusters would be fast and cheap as well, given that it
took only a few months in 1998 using the Internet as the connection
fabric.
http://www.interhack.net/pubs/des-key-crack/


Re: [PATCH] random: add random_initialized command line param

2015-05-19 Thread Sandy Harris
On Mon, May 18, 2015 at 6:58 PM, Herbert Xu <herb...@gondor.apana.org.au> wrote:

> Stephan Mueller <smuel...@chronox.de> wrote:
>
>> I hear more and more discussions about recommendations to use AES 256 and
>> not AES 128.

Or perhaps other ciphers with 256-bit keys. Salsa, ChaCha and several of
the Caesar candidates support those.

>> These kind of recommendations will eventually also affect the entropy
>> requirements for noise sources. This is my motivation for the patch:
>> allowing different user groups to set the minimum bar for the nonblocking
>> pool to *higher* levels (the examples for 80 to 112 bits or 100 to 125
>> bits shall just show that there are active revisions of entropy
>> requirements).

> Does anyone need to raise this from 128 today? If not then this
> patch is pointless.

There is an RFC for ChaCha in IETF protocols
https://www.rfc-editor.org/rfc/rfc7539.txt
That RFC is new, issued this month, so it will probably be a while
before we need to worry about it.

I do think finding a way to support changing the init requirement from
128 to 256 bits will be useful at some point. However, I doubt it is
urgent since few protocols need it now. On the other hand, IPsec and
TLS both include AES-256, I think.

When we do do it, I see no reason to support anything other than 128
and 256, and I am not sure about retaining 128. Nor do I see any
reason this should be a command-line option rather than just a
compile-time constant.


Re: Counter Size in CTR mode for AES Driver

2015-04-11 Thread Sandy Harris
sri sowj srisowj4li...@gmail.com wrote:

 I have seen multiple open source drivers for AES(CTR) mode for
 different Crypto Hardware Engines, I was not really sure on
 countersize,nonce etc.
 Please can any one provide some info on the following

Not what you asked for, but in case it is useful here is the counter
management code from a version of the random(4) driver that
I am working on:


/*
 * 128-bit counter to mix in when hashing
 */

static u32 iter_count = 0 ;
static spinlock_t counter_lock ;

/*
 * constants are from SHA-1
 * ones in counter[] are used only once, in initialisation
 * then random data is mixed in there
 */
#define COUNTER_DELTA 0x67452301

static u32 counter[] = {0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0} ;

/*
 * Code is based on my own work in the Enchilada cipher:
 * https://aezoo.compute.dtu.dk/doku.php?id=enchilada
 *
 * Mix operations so Hamming weight changes more than for a simple
 * counter. This may not be strictly necessary, but a simple counter
 * can be considered safe only if you trust the crypto completely.
 * Low Hamming weight differences in inputs do allow some attacks on
 * block ciphers or hashes and the high bits of a large counter that
 * is only incremented do not change for aeons.
 *
 * The extra code here is cheap insurance.
 * Somewhat nonlinear since it uses +, XOR and rotation.
 *
 * For discussion, see mailing list thread starting at:
 * http://www.metzdowd.com/pipermail/cryptography/2014-May/021345.html
 */
static void count(void)
{
	spin_lock( &counter_lock ) ;

	/*
	 * Limit the switch to < 256 cases
	 * should work with any CPU & compiler
	 *
	 * Five constants used, all primes
	 * roughly evenly spaced, around 50, 100, 150, 200, 250
	 */
	switch( iter_count ){
	/*
	 * mix three array elements
	 * each element is used twice
	 * once on left, once on right
	 * pattern is circular
	 */
	case 47:
		counter[1] += counter[2] ;
		break ;
	case 101:
		counter[2] += counter[3] ;
		break ;
	case 197:
		counter[3] += counter[1] ;
		break ;
	/*
	 * inject counter[0] into that loop
	 * loop and counter[0] use +=
	 * so use ^= here
	 */
	case 149:
		counter[1] ^= counter[0] ;
		break ;
	/*
	 * restart loop
	 * include a rotation for nonlinearity
	 */
	case 251:
		counter[0] = ROTL( counter[0], 5) ;
		iter_count = -1 ;
		break ;
	/*
	 * for 247 out of every 252 iterations
	 * the switch does nothing
	 */
	default:
		break ;
	}
	/*
	 * counter[0] is almost purely a counter
	 * uses += instead of ++ to change Hamming weight more
	 * nothing above affects it, except the rotation
	 */
	counter[0] += COUNTER_DELTA ;
	iter_count++ ;

	spin_unlock( &counter_lock ) ;
}


Re: crypto: zeroization of sensitive data in af_alg

2014-11-10 Thread Sandy Harris
On Sun, Nov 9, 2014 at 5:33 PM, Stephan Mueller smuel...@chronox.de wrote:

 while working on the AF_ALG interface, I saw no active zeroizations of memory
 that may hold sensitive data that is maintained outside the kernel crypto API
 cipher handles. ...

 I think I found the location for the first one: hash_sock_destruct that should
 be enhanced with a memset(0) of ctx-result.

See also a thread titled memset() in crypto code? on the linux
crypto list. The claim is that gcc can optimise memset() away so you
need a different function to guarantee the intended results. There's a
patch to the random driver that uses a new function
memzero_explicit(), and one of the newer C standards has a different
function name for the purpose.
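The memzero_explicit() idea referred to above can be sketched as writing
through a volatile pointer, so the compiler cannot prove the stores dead
and elide them. This is only an illustrative sketch under that assumption;
the function name here is not the actual kernel implementation.

```c
#include <stddef.h>

/* Illustrative sketch of a non-elidable memset-to-zero: the volatile
 * qualifier forces the compiler to emit every store. */
static void memzero_explicit_sketch(void *s, size_t n)
{
	volatile unsigned char *p = s;

	while (n--)
		*p++ = 0;
}
```

A caller would use it exactly like memset(s, 0, n) on a buffer that held
key material, just before the buffer goes out of scope.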


Re: memset() in crypto code?

2014-10-07 Thread Sandy Harris
I have started a thread about this on the gcc help mailing list
https://gcc.gnu.org/ml/gcc-help/2014-10/msg00047.html

We might consider replacing memzero_explicit with memset_s() since
that is in the C11 standard, albeit I think as optional. IBM, Apple,
NetBSD, ... have that.
https://mail-index.netbsd.org/tech-userlevel/2012/02/24/msg006125.html


Re: memset() in crypto code?

2014-10-06 Thread Sandy Harris
On Mon, Oct 6, 2014 at 1:44 PM, Jason Cooper ja...@lakedaemon.net wrote:

 On Sat, Oct 04, 2014 at 11:09:40PM -0400, Sandy Harris wrote:
 There was recently a patch to the random driver to replace memset()
 because, according to the submitter, gcc sometimes optimises memset()
 away which might leave data unnecessarily exposed. The solution
 suggested was a function called memzero_explicit(). There was a fair
 bit of discussion and the patch was accepted.

 Do you have a pointer?

https://lkml.org/lkml/2014/8/25/497


Re: memset() in crypto code?

2014-10-06 Thread Sandy Harris
On Mon, Oct 6, 2014 at 1:44 PM, Jason Cooper ja...@lakedaemon.net wrote:

 On Sat, Oct 04, 2014 at 11:09:40PM -0400, Sandy Harris wrote:

 There was recently a patch to the random driver to replace memset()
 because, according to the submitter, gcc sometimes optimises memset()
 away ...

 memzero_explicit() is a good start, ...

As I see it, memzero_explicit() is a rather ugly kluge, albeit an
acceptable one in the circumstances.

A real fix would make memset() do the right thing reliably; if the
programmer puts in memset( x, 0, nbytes) then the memory should be
cleared, no ifs or buts. I do not know or care if that means changes
in the compiler or in the library code or even both, but the fix
should make the standard library code work right, not require adding a
new function and expecting everyone to use it.


memset() in crypto code?

2014-10-04 Thread Sandy Harris
There was recently a patch to the random driver to replace memset()
because, according to the submitter, gcc sometimes optimises memset()
away which might leave data unnecessarily exposed. The solution
suggested was a function called memzero_explicit(). There was a fair
bit of discussion and the patch was accepted.

In the crypto directory of the kernel source I have:

$ grep memset *.c | wc -l
133
$

I strongly suspect some of these should be fixed.


RFC possible changes for Linux random device

2014-09-15 Thread Sandy Harris
I have started a thread with the above title on Perry's crypto list. Archive at:
http://www.metzdowd.com/pipermail/cryptography/2014-September/022795.html

First message was:

I have some experimental code to replace parts of random.c It is not
finished but far enough along to seek comment. It does compile with
either gcc or clang, run and produce reasonable-looking results but is
not well-tested. splint(1) complains about parts of it, but I do not
think it is indicating any real problems.

Next two posts will be the main code and a support program it uses.

I change nothing on the input side; the entropy collection and
estimation parts of existing code are untouched. The hashing and
output routines, though, are completely replaced, and much of the
initialisation code is modified.

It uses the 128-bit hash from AES-GCM instead of 160-bit SHA-1.
Changing the hash allows other changes. One design goal was improved
decoupling so that heavy use of /dev/urandom does not deplete the
entropy pool for /dev/random. Another was simpler mixing in of
additional data in various places.


Re: Testing the PRNG driver of the Allwinner Security System A20

2014-07-02 Thread Sandy Harris
On Tue, Jul 1, 2014 at 7:14 AM, Corentin LABBE
clabbe.montj...@gmail.com wrote:

 I am writing the PRNG driver for the Allwinner Security System SoC A20.

The datasheet my search turned up (v1, Feb. 2013) just says:  160-bit
hardware PRNG with 192-bit seed and gives no other details. Do you
have more info, perhaps from a more recent version or talking to the
company?

 I didn't know how to test it, so ...

Unless you have much more info, I see no point in enabling it or
writing a driver. You need a true hardware RNG to seed it, so you need
random(4) /dev/random anyway and can just use /dev/urandom for PRNG
requirements.

Using this device might have an advantage if it is much faster or less
resource-hungry than urandom, but I see nothing in its documentation
that indicates it is. Anyway, do your applications need that? And, if
so, would an application-specific PRNG be better yet?

Then there is the crucial question of trusting the device. Kerckhoffs' Principle
(http://en.citizendium.org/wiki/Kerckhoffs%27_Principle)
has been a maxim for cryptographers since the 19th century; no-one
should even consider trusting it until full design details are made
public and reviewed.

Even then, there might be serious doubts, since hardware can be very
subtly sabotaged and an RNG is a tempting target for an intelligence
agency.
(http://arstechnica.com/security/2013/09/researchers-can-slip-an-undetectable-trojan-into-intels-ivy-bridge-cpus/)
That article discusses Intel and the NSA, but similar worries apply
elsewhere. Allwinner is a fabless company, so you also need to worry
about whatever fab they use.


Re: Freescale SEC1 and AEAD / IPSEC

2014-04-25 Thread Sandy Harris
On Thu, Apr 24, 2014 at 9:34 AM, leroy christophe
christophe.le...@c-s.fr wrote:

 I'm progressing well on the SEC1 implementation. I now have HASH (non HMAC)
 , DES and AES working properly.

Why DES? More than a decade ago, the first Linux IPsec implementation
refused to do single DES because it was known to be insecure.
http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/faq.html#noDES.faq

At that time, we had to have DES internally so we could implement triple DES.
We just did not provide single DES in the user interface or the list of cipher
suites we would negotiate.

Today you could consider dropping DES entirely since AES is so widespread,
far faster than triple DES, and thought secure. As of 2001 or so, it roughly
doubled IPsec throughput:
http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/performance.html#perf.more
Today it probably does even better; there has been a lot of work on
optimising it.

 So it is now time to look at AEAD.

A fine idea. AES-GCM is the first obvious candidate. Newer work includes:
http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-00

There is a competition for new AEAD methods, still in early stages so it
does not yet give any solid results you could implement. Likely worth a
look, though:
http://competitions.cr.yp.to/caesar.html
https://aezoo.compute.dtu.dk/doku.php

 I don't know much yet about crypto algorithm so forgive me if I ask stupid
 questions.

An area that was a problem a decade ago was forward secrecy.

http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/interop.html#req.features
FreeS/WAN default is to provide perfect forward secrecy. ... The PFS
settings on the two ends must match. There is no provision in the
protocol for negotiating whether to use PFS; you need to either set
both ends to use it or set them both not to.

The difference is what happens if an enemy gets the keys used in
authenticating connection setup. Without PFS, that lets him obtain
actual encryption keys and read everything sent on those connections,
possibly including old messages he has archived. With PFS he gets no
message content without doing a man-in-the-middle attack (using the
authentication keys to trick servers into giving him data that lets
him get encryption keys), he has to do another such attack every time
you change keys, and there is no way to do one for old messages.

I do not know if the revised RFCs have fixed this, but I am very much
of the opinion that PFS should be the default everywhere.


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-30 Thread Sandy Harris
Theodore Ts'o ty...@mit.edu wrote:

 Fundamentally, what worries me about this scheme (actually, causes the
 hair on the back of my neck to rise up on end) is this statement in
 your documentation[1]:

When looking at the sequence of time deltas gathered
during testing [D] , no pattern can be detected. Therefore, the
fluctuation and the resulting distribution are not based on a
repeating pattern and must be considered random.

 [1] http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html

 Just because we can't detect a pattern does **not** mean that it is
 not based on a repeating pattern, and therefore must be considered
 random.  We can't detect a pattern in RDRAND, so does that mean it's
 automatically random?  Why, no.
 ...
 It may be that there is some very complex state which is hidden inside
 the the CPU execution pipeline, the L1 cache, etc., etc.  But just
 because *you* can't figure it out, and just because *I* can't figure
 it out doesn't mean that it is ipso facto something which a really
 bright NSA analyst working in Fort Meade can't figure out.  (Or heck,
 a really clever Intel engineer who has full visibility into the
 internal design of an Intel CPU)

 Now, it may be that in practice, an adversary won't be able to carry
 out a practical attack ...

It seems worth noting here that Ted's reasons for skepticism
apply not just to Stephan's Jitter generator, but to others such
as Havege (already in Debian) which are based on differences
in speed of arithmetic operations, presumably due to cache
& TLB misses, pipeline stalls, etc. Also to ones based on
variations in delays from timer calls such as my maxwell(8).

It is also worth mentioning that, while Stephan has done
thorough testing on a range of CPUs, others have test
& rationale info as well. The Havege papers have a lot,
my maxwell paper has a little, and there's:
McGuire, Okech & Schiesser,
Analysis of inherent randomness of the Linux kernel,
http://lwn.net/images/conf/rtlws11/random-hardware.pdf

I know my stuff is not an adequate answer to Ted, but
I suspect some of the others may be.


Fwd: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-14 Thread Sandy Harris
Stephan Mueller smuel...@chronox.de wrote:

Paper has:

 the time delta is partitioned into chunks of 1 bit starting at the
lowest bit ... The 64 1 bit chunks of the time value are XORed with
each other to form a 1 bit value.

As I read that, you are just taking the parity. Why not use that
simpler description & possibly one of several possible optimised
algorithms for the task:
http://graphics.stanford.edu/~seander/bithacks.html

 I am fully aware that the bit operation is inefficient. Yet it is
 deliberately inefficient, because that folding loop performs actual
 work for the RNG (the collapse of 64 bits into one bit) and at the very
 same time, it is the fixed instruction set over which I measure the time
 variations.

 Thus, the folding loop can be considered as the entropy source ...

 As the RNG is not primarily about speed, the folding operation should
 stay inefficient.

OK, that makes sense.

If what you are doing is not a parity computation, then you need a
better description so people like me do not misread it.

 It is not a parity computation that the folding loop performs. The code
 XORs each individual bit of the 64 bit stream with each other, whereas
 your cited document implies an ANDing of the bits (see section
 Computing parity the naive way of the cited document).

No. The AND is used in a trick; x & (x-1) clears the lowest bit set
in x, so each iteration removes one set bit. The code there just counts
that way for efficiency.

Parity asks whether the number of set bits is odd or even. For
example this is another way to find the parity of x.

   for( p = 0; x ; x >>= 1 )
 p ^= (x & 1) ;

From your description (I haven't looked at the code) you are
computing parity. If so, say that. If not, explain.
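The "optimised algorithms" mentioned earlier in the thread compute the
same parity by XOR-folding, halving the word each step. A minimal sketch
(the function name is illustrative, not from either poster's code):

```c
#include <stdint.h>

/* Parity of a 64-bit word by XOR-folding: each step XORs the top half
 * onto the bottom half, so after six steps bit 0 holds the parity of
 * all 64 input bits. Six operations instead of a 64-iteration loop. */
static unsigned parity64(uint64_t x)
{
	x ^= x >> 32;
	x ^= x >> 16;
	x ^= x >> 8;
	x ^= x >> 4;
	x ^= x >> 2;
	x ^= x >> 1;
	return (unsigned)(x & 1);
}
```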

This appears to be missing the cryptographically strong
mixing step which most RNG code includes. If that is
what you are doing, you need to provide some strong
arguments to justify it.

 The key here is that there is no cryptographic function needed for
 mixing as in fact I am not really mixing things. I am simply
 concatenating the new bit to the previously obtained bits. That is it.

 The argument for doing that is that the time deltas are independent of
 each other. ...

 ... each bit from the folding operation therefore contains
 one bit of entropy. So, why should I mix it even further with some
 crypto function?

That does make sense, but in security code I tend to prefer a
belt-and-suspenders approach. Even believing that each
individual bit is truly random, I'd still mix some just in case.

 Can you please help me understand why you think that a whitening
 function (cryptographic or not) is needed in the case of the CPU Jitter
 RNG, provided that I can show that each individual bit coming from the
 folding operation has one bit of entropy?

Basically, sheer paranoia. I'd mix and whiten just on general
principles. Since efficiency is not a large concern, there is little
reason not to.

On the other hand, most RNGs use a hash because they need
to distill some large amount of low-entropy input into a smaller
high-entropy output. With high input entropy, you do not need
the hash and can choose some cheaper mixer.

 I will present the RNG at the Linux Symposium in Ottawa this year.
 

I live in Ottawa, ...

 As mentioned before, I would really like to meet you there to have a cup
 of coffee over that matter.

Sounds good. Ted, will you be around?


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-14 Thread Sandy Harris
On Mon, Oct 14, 2013 at 9:38 AM, Sandy Harris sandyinch...@gmail.com wrote:

 Stephan Mueller smuel...@chronox.de wrote:

 Can you please help me understand why you think that a whitening
 function (cryptographic or not) is needed in the case of the CPU Jitter
 RNG, provided that I can show that each individual bit coming from the
 folding operation has one bit of entropy?

 Basically, sheer paranoia. I'd mix and whiten just on general
 principles. Since efficiency is not a large concern, there is little
 reason not to.

 On the other hand, most RNGs use a hash because they need
 to distill some large amount of low-entropy input into a smaller
 high-entropy output. With high input entropy, you do not need
 the hash and can choose some cheaper mixer.

You could use strong mixing/whitening:

Feed into random(4) and let it do the mixing.

Use some early outputs from your RNG to key an AES
instance. Then encrypt later outputs; this gives a 64 in 64
out mixer that is cryptographically strong but perhaps a bit
slow in the context.

Alternately, quite a few plausible components for fast cheap
mixing are readily available.

The Aria cipher has one that is 128 in 128 out. It multiplies
a 128-bit object by a fixed Boolean matrix, makes every
output bit depend on many input bits. It is fairly cheap,
used in every round and the cipher is acceptably fast.

The column transform from AES is 32 in 32 out and makes
every output byte depend on every input byte. It is fast; has
to be since it is used four times in every round.

A two-way pseudo-Hadamard transform (PHT) is 2n bits in
and 2n out, requires only two additions, makes both n-bit
outputs depend on both inputs.

PHT can be applied recursively to mix 4n, 8n, ...
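The two-way PHT described above is simple enough to show in full. This is
a sketch of the standard transform on 32-bit halves, under the usual
mod-2^32 arithmetic; it is not taken from the maxwell or Enchilada source:

```c
#include <stdint.h>

/* Two-way pseudo-Hadamard transform: two additions, and both outputs
 * depend on both inputs.  a' = a + b, b' = a + 2b (mod 2^32).
 * Invertible, so no entropy is lost in the mixing. */
static void pht32(uint32_t *a, uint32_t *b)
{
	*a += *b;	/* a' = a + b */
	*b += *a;	/* b' = a + 2b */
}
```

Applied recursively (pairs, then pairs of pairs), this mixes 4n, 8n, ...
bits as the text says.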

My QHT is 32 in 32 out, makes every /bit/ of output
depend on every bit of input. It is a tad expensive;
two multiplications  two modulo operations. File
qht.c at:
ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/

To mix 64 bits, I'd use two qht() calls to mix the 32-bit
halves then a two-way PHT.


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-14 Thread Sandy Harris
On Mon, Oct 14, 2013 at 10:40 AM, Stephan Mueller smuel...@chronox.de wrote:

 Another thing: when you start adding whitening functions, other people
 are starting (and did -- thus I added section 4.3 to my documentation)
 to complain that you hide your weaknesses behind the whiteners. I simply
 want to counter that argument and show that RNG produces white noise
 without a whitener.

Yes, you absolutely have to test the unwhitened input entropy, and
provide a way for others to test it so they can have confidence in your
code and it can be tested again if it is going to be used on some new
host. You do a fine job of that; your paper has the most detailed
analysis I have seen. Bravo.

However, having done that, I see no reason not to add mixing.
Using bit() for getting one bit of input and rotl(x) for rotating
left one bit, your code is basically, with 64-bit x:

   for( i=0, x = 0 ; i < 64; i++, x = rotl(x) )
x |= bit()

Why not declare some 64-bit constant C with a significant
number of bits set and do this:

   for( i=0, x = 0 ; i < 64; i++, x = rotl(x) ) // same loop control
  if( bit() ) x ^= C ;

This makes every output bit depend on many input bits
and costs almost nothing extra.
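The suggested loop can be sketched as a complete function. Here bit() is
replaced by an array parameter so the sketch is self-contained and
testable, and the constant is purely illustrative (odd, 38 bits set); the
thread deliberately leaves the actual constant open:

```c
#include <stdint.h>

/* Rotate left by one; n is always 1 here so there is no shift-width UB. */
#define ROTL64(x, n) (((x) << (n)) | ((x) >> (64 - (n))))

/* Illustrative mixing constant: odd, roughly half the bits set. */
#define MIX_C 0x9E3779B97F4A7C15ULL

/* For each of 64 input bits, rotate the accumulator and, if the bit is
 * set, XOR in the constant, so every output bit depends on many input
 * bits rather than exactly one. */
static uint64_t mix64(const unsigned char bits[64])
{
	uint64_t x = 0;
	int i;

	for (i = 0; i < 64; i++) {
		x = ROTL64(x, 1);
		if (bits[i])
			x ^= MIX_C;
	}
	return x;
}
```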

In the unlikely event that the overhead here matters,
your deliberately inefficient parity calculation in bit()
could easily be made faster to compensate.


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-14 Thread Sandy Harris
On Mon, Oct 14, 2013 at 11:26 AM, Stephan Mueller smuel...@chronox.de wrote:

Why not declare some 64-bit constant C with a significant

 Which constant would you take? The CRC twist values? The SHA-1 initial
 values? Or the first few from SHA-256?

The only essential requirement is that it not be something stupidly
regular like a 64-bit string 0x.

I'd pick an odd number so the low bit always changes, and a
constant with about half the bits set, maybe 24 < n < 40 or
some such. I'm not certain either of those is strictly required
but I'd do them anyway.


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-14 Thread Sandy Harris
Stephan Mueller smuel...@chronox.de wrote:

 [quoting me]

 ...your code is basically, with 64-bit x:

   for( i=0, x = 0 ; i < 64; i++, x = rotl(x) )
x |= bit()

 Why not declare some 64-bit constant C with a significant
number of bits set and do this:

   for( i=0, x = 0 ; i < 64; i++, x = rotl(x) ) // same loop control
  if( bit() ) x ^= C ;

This makes every output bit depend on many input bits
and costs almost nothing extra.

 Ok, let me play a bit with that. Maybe I can add another flag to the
 allocation function so that the caller can decide whether to use that.
 If the user is another RNG, you skip that mixing function, otherwise you
 should take it.

I'd say just do it. It is cheap enough and using it does no harm
even where it is not strictly needed. Adding a flag just gives the
calling code a chance to get it wrong. Better not to take that risk
if you don't have to.


Re: [PATCH] CPU Jitter RNG: inclusion into kernel crypto API and /dev/random

2013-10-11 Thread Sandy Harris
On Fri, Oct 11, 2013 at 2:38 PM, Stephan Mueller smuel...@chronox.de wrote:

I like the basic idea. Here I'm alternately reading the email and the
page you link to  commenting on both.

A nitpick in the paper is that you cite RFC 1750. That was superceded
some years back by RFC 4086
http://tools.ietf.org/html/rfc4086

(Ted's comments in the actual driver had the same problem last
I looked. That is excusable since they were written long ago.)

I think you may be missing some other citations that should be
there, to previous work along similar lines. One is the HAVEGE
work, another:
McGuire, Okech & Schiesser,
Analysis of inherent randomness of the Linux kernel,
http://lwn.net/images/conf/rtlws11/random-hardware.pdf

Paper has:

 the time delta is partitioned into chunks of 1 bit starting at the lowest bit
 ... The 64 1 bit chunks of the time value are XORed with each other to
 form a 1 bit value.

As I read that, you are just taking the parity. Why not use that simpler
description & possibly one of several possible optimised algorithms
for the task: http://graphics.stanford.edu/~seander/bithacks.html

If what you are doing is not a parity computation, then you need a
better description so people like me do not misread it.

A bit later you have:

 After obtaining the 1 bit folded and unbiased time stamp,
 how is it mixed into the entropy pool? ... The 1 bit folded
 value is XORed with 1 bit from the entropy pool.

This appears to be missing the cryptographically strong
mixing step which most RNG code includes. If that is
what you are doing, you need to provide some strong
arguments to justify it.

Sometimes doing without is justified; for example my
code along these lines
ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/
does more mixing than I see in yours, but probably
not enough overall. That's OK because I am just
feeding into /dev/random which has plenty of
mixing.

It is OK for your code too if you are feeding into
/dev/random, but it looks problematic if your code
is expected to stand alone.

Ah! You talk about whitening a bit later. However,
you seem to make it optional, up to the user. I
cannot see that that is a good idea.

At the very least I think you need something like
the linear transform from the ARIA cipher -- fast
and cheap, 128 bits in  128 out and it makes
every output bit depend on every input bit. That
might not be enough, though.

You require compilation without optimisation. How does
that interact with kernel makefiles? Can you avoid
undesirable optimisations in some other way, such as
volatile declartions?

 I am asking whether this RNG would good as an inclusion into the Linux
 kernel for:

 - kernel crypto API to provide a true random number generator as part of
 this API (see [2] appendix B for a description)

My first reaction is no. We have /dev/random for the userspace
API and there is a decent kernel API too. I may change my
mind here as I look more at your appendix & maybe the code.

 - inclusion into /dev/random as an entropy provider of last resort when
 the entropy estimator falls low.

Why only 'of last resort'? If it can provide good entropy, we should
use it often.

 I will present the RNG at the Linux Symposium in Ottawa this year. There
 I can give a detailed description of the design and testing.

I live in Ottawa, don't know if I'll make it to the Symposium this
year. Ted; I saw you at one Symposium; are you coming this
year?


Re: [PATCH][RFC] CPU Jitter random number generator (resent)

2013-05-22 Thread Sandy Harris
Stephan Mueller smuel...@chronox.de wrote:

 Ted is right that the non-deterministic behavior is caused by the OS
 due to its complexity. ...

   For VM's, it means we should definitely use
  paravirtualization to get randomness from the host OS.
 ...

 That is already in place at least with KVM and Xen as QEMU can pass
 through access to the host /dev/random to the guest. Yet, that approach
 is dangerous IMHO because you have one central source of entropy for
 the host and all guests. One guest can easily starve all other guests
 and the host of entropy. I know that is the case in user space as well.

Yes, I have always thought that random(4) had a problem in that
area; over-using /dev/urandom can affect /dev/random. I've never
come up with a good way to fix it, though.

 That is why I am offering an implementation that is able to
 decentralize the entropy collection process. I think it would be wrong
 to simply update /dev/random with another seed source of the CPU
 jitter -- it could be done as one aspect to increase the entropy in
 the system. I think users should slowly but surely instantiate their own
 instance of an entropy collector.

I'm not sure that's a good idea. Certainly for many apps just seeding
a per-process PRNG well is enough, and a per-VM random device
looks essential, though there are at least two problems possible
because random(4) was designed before VMs were at all common
so it is not clear it can cope with that environment. The host
random device may be overwhelmed, and the guest entropy may
be inadequate or mis-estimated because everything it relies on --
devices, interrupts, ... -- is virtualised.

I want to keep the current interface where a process can just
read /dev/random or /dev/urandom as required. It is clean,
simple and moderately hard for users to screw up. It may
need some behind-the-scenes improvements to handle new
loads, but I cannot see changing the interface itself.

 I would personally think that precisely for routers, the approach
 fails, because there may be no high-resolution timer. At least trying
 to execute my code on a Raspberry Pi resulted in a failure: the
 initial jent_entropy_init() call returned with the indication that
 there is no high-res timer.

My maxwell(8) uses the hi-res timer by default but also has a
compile-time option to use the lower-res timer if required. You
still get entropy, just not as much.

This affects more than just routers. Consider using Linux on
a tablet PC or in a web server running in a VM. Neither needs
the realtime library; in fact adding that may move them away
from their optimisation goals.

  What I'm against is relying only on solutions such as HAVEGE or
  replacing /dev/random with something scheme that only relies on CPU
  timing and ignores interrupt timing.

 My question is how to incorporate some of that into /dev/random.
 At one point, timing info was used along with other stuff. Some
 of that got deleted later. What is the current state? Should we
 add more?

 Again, I would like to suggest that we look beyond a central entropy
 collector like /dev/random. I would like to suggest to consider
 decentralizing the collection of entropy.

I'm with Ted on this one.

--
Who put a stop payment on my reality check?
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH][RFC] CPU Jitter random number generator (resent)

2013-05-21 Thread Sandy Harris
I very much like the basic notion here. The existing random(4) driver
may not get enough entropy in a VM or on a device like a Linux router
and I think work such as yours or HAVEGE
(http://www.irisa.fr/caps/projects/hipsor/) are important research.
The paper by McGuire et al., "Analysis of inherent randomness of the
Linux kernel" (http://lwn.net/images/conf/rtlws11/random-hardware.pdf),
seems to show that this is a fine source of more entropy.

On the other hand, I am not certain you are doing it in the right
place. My own attempt (ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/)
put it in a demon that just feeds /dev/random, probably also not the
right place. haveged(8) (http://www.issihosts.com/haveged/) also puts
it in a demon process. It may, as you suggest, belong in the kernel
instead, but I think there are arguments both ways.

Could we keep random(4) mostly as is and rearrange your code to just
give it more entropy? I think the large entropy pool in the existing
driver is essential since we sometimes want to generate things like a
2 Kbit PGP key and it is not clear to me that your driver is entirely
trustworthy under such stress.

On Tue, May 21, 2013 at 2:44 AM, Stephan Mueller smuel...@chronox.de wrote:
 Hi,

 [1] patch at http://www.chronox.de/jent/jitterentropy-20130516.tar.bz2

 A new version of the CPU Jitter random number generator is released at
 http://www.chronox.de/ . The heart of the RNG is about 30 lines of easy
 to read code. The readme in the main directory explains the different
 code files. A changelog can be found on the web site.

 In a previous attempt (http://lkml.org/lkml/2013/2/8/476), the first
 iteration received comments for the lack of tests, documentation and
 entropy assessment. All these concerns have been addressed. The
 documentation of the CPU Jitter random number generator
 (http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html and PDF at
 http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf -- the graphs and
 pictures are better in PDF) offers a full analysis of:

 - the root cause of entropy

 - a design of the RNG

 - statistical tests and analyses

 - entropy assessment and explanation of the flow of entropy

 The document also explains the core concept to have a fully
 decentralized entropy collector for every caller in need of entropy.

 Also, this RNG is well suitable for virtualized environments.
 Measurements on OpenVZ and KVM environments have been conducted as
 documented. As the Linux kernel is starved of entropy in virtualized as
 well as server environments, new sources of entropy are vital.

 The appendix of the documentation contains example use cases by
 providing link code to the Linux kernel crypto API, libgcrypt and
 OpenSSL. Links to other cryptographic libraries should be
 straightforward to implement. These implementations follow the concept of
 decentralized entropy collection.

 The man page provided with the source code explains the use of the API
 of the CPU Jitter random number generator.

 The test cases used to compile the documentation are available at the
 web site as well.

 Note: for the kernel crypto API, please read the provided Kconfig file
 for the switches and which of them are recommended in regular
 operation. These switches must currently be set manually in the
 Makefile.

 Ciao
 Stephan

 Signed-off-by: Stephan Mueller smuel...@chronox.de



-- 
Who put a stop payment on my reality check?


Re: [PATCH][RFC] CPU Jitter random number generator (resent)

2013-05-21 Thread Sandy Harris
On Tue, May 21, 2013 at 3:01 PM, Theodore Ts'o ty...@mit.edu wrote:

 I continue to be suspicious about claims that userspace timing
 measurements are measuring anything other than OS behaviour.

Yes, but they do seem to contain some entropy. See links in the
original post of this thread, the havege stuff and especially the
McGuire et al paper.

  But that
 doesn't mean that they shouldn't exist.  Personally, I believe you
 should try to collect as much entropy as you can, from as many places
 as you can.

Yes.

  For VM's, it means we should definitely use
 paravirtualization to get randomness from the host OS.

Yes, I have not worked out the details but it seems clear that
something along those lines would be a fine idea.

 For devices like Linux routers, what we desperately need is hardware
 assist;  [or] mix
 in additional timing information either at kernel device driver level,
 or from systems such as HAVEGE.

 What I'm against is relying only on solutions such as HAVEGE or
 replacing /dev/random with some scheme that only relies on CPU
 timing and ignores interrupt timing.

My question is how to incorporate some of that into /dev/random.
At one point, timing info was used along with other stuff. Some
of that got deleted later. What is the current state? Should we
add more?

--
Who put a stop payment on my reality check?


Re: [RFC][PATCH] Entropy generator with 100 kB/s throughput

2013-02-10 Thread Sandy Harris
On Sun, Feb 10, 2013 at 1:50 PM, Theodore Ts'o ty...@mit.edu wrote:

 On Sun, Feb 10, 2013 at 01:46:18PM +0100, Stephan Mueller wrote:

 However, the CPU has timing jitter in the execution of instructions. And
 I try to harvest that jitter. The good thing is that this jitter is
 always present and can be harvested on demand.

 How do you know, though, that this is what you are harvesting?
 ...
 And what's your proof that your entropy source really is an entropy
 source?

One paper that seems to show there is some randomness in
such measurements is McGuire, Okech & Schiesser,
"Analysis of inherent randomness of the Linux kernel",
http://lwn.net/images/conf/rtlws11/random-hardware.pdf

They do two clock calls with a usleep() between them, take the
low bit of the difference, and pack the bits unmixed into
per byte, even with interrupts disabled. The same paper
shows that simple arithmetic sequences give some
apparent entropy, due to TLB misses, interrupts, etc.

There are lots of caveats in how this should be used and
it is unclear how much real entropy it gives, but it seems
clear it gives some.

My own program to feed into random(4) is based on
such things:
ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/

HAVEGE also uses them
http://www.irisa.fr/caps/projects/hipsor/
& there is a haveged daemon for Linux
http://www.issihosts.com/haveged/

random(4) also mixed in timer data at one point,
which seems the correct thing for it to do. Later
I heard something about that code having been
removed. What is the current status?


Re: [RFC][PATCH] Entropy generator with 100 kB/s throughput

2013-02-10 Thread Sandy Harris
On Sun, Feb 10, 2013 at 2:32 PM, Stephan Mueller smuel...@chronox.de wrote:

 On 10.02.2013 19:50:02, +0100, Theodore Ts'o ty...@mit.edu wrote:

 Given all your doubts on the high-precision timer, how can you
 reasonably state that the Linux kernel RNG is good then?

 The data from add_timer_randomness the kernel feeds into the input_pool
 is a concatenation of the event value, the jiffies and the get_cycles()
 value. The events hardly contains any entropy, the jiffies a little bit
 due to the coarse resolution of 250 or 1000 Hz. Only the processor
 cycles value provides real entropy.

There are multiple sources of entropy, though. There are reasons
not to fully trust any of them -- keystroke statistics can be predicted
if the enemy knows the language, the enemy might be monitoring the
network, there is no keyboard or mouse on a headless server, a
diskless machine has no disk timing entropy and one with an
SSD or intelligent RAID controller very little. However, with
multiple sources and conservative estimates, it is reasonable
to hope there is enough entropy coming in somewhere.

It is much harder to trust a system with single source of
entropy, perhaps impossible for something that is likely to
be deployed on the whole range of things Linux runs on,
from a cell phone with a single 32-bit CPU all the way to
beowulf-based supercomputers with thousands of
multicore chips.

Moreover, random(4) has both a large entropy pool (or
three, to be more precise) and strong crypto in the
mixing. If it /ever/ gets a few hundred bits of real
entropy then no-one without the resources of a
major government and/or a brilliant unpublished
attack on SHA-1 can even hope to break it.

In the default Linux setup, it gets a few K bits of
reasonably good entropy from the initialisation
scripts, so attacks look impossible unless the
enemy already has root privileges or has
physical access to boot the machine from
other media & look at Linux storage.


Re: [PATCH] crypto/arc4: now arc needs blockcipher support

2012-06-26 Thread Sandy Harris
On Wed, Jun 27, 2012 at 12:13 AM, Sebastian Andrzej Siewior
sebast...@breakpoint.cc wrote:
 Since commit ce6dd368 ("crypto: arc4 - improve performance by adding
 ecb(arc4)") we need to pull in a blkcipher.

 |ERROR: crypto_blkcipher_type [crypto/arc4.ko] undefined!
 |ERROR: blkcipher_walk_done [crypto/arc4.ko] undefined!
 |ERROR: blkcipher_walk_virt [crypto/arc4.ko] undefined!

 Signed-off-by: Sebastian Andrzej Siewior sebast...@breakpoint.cc
 ---

 On a side note: do we pull in the blkcipher block mode for each cipher now to
 gain some extra performance like the openssl project? I was under the
 impression that is in general not worth it.

Arc4 is a stream cipher, NOT a block cipher. They are completely different
things, and the requirements for using them securely are different. In
particular, modes like ECB apply to block ciphers, not to stream ciphers.

Unless these changes have been thoroughly analyzed by several
people who actually know crypto, they should be immediately reverted.


Re: RFC: redesigning random(4)

2011-12-11 Thread Sandy Harris
On Thu, Sep 29, 2011 at 2:46 PM, Sandy Harris sandyinch...@gmail.com wrote:

 I have been thinking about how random(4) might be redesigned ...

 ... make the input
 pool use Skein (or another SHA-3 candidate) and the output pools a
 modified counter-mode AES.

I now actually have most of the code for that and a substantial
rationale document, both in a first draft sort of state.

I have worked out how to use a block cipher in a way that has
the hard-to-invert property and does not either lose state when
it rekeys or encrypt successive counter values with a small
Hamming difference. It is fairly complex.

 Currently the driver uses SHA-1 for all three. ...

Having looked at the block cipher method in some detail, I've now
concluded that it is better to just use a hash which is non-invertible
by design and does not make analysis more difficult.

I may eventually have code  rationale for that too, but almost
certainly not soon.


A fiddle for /dev/random

2011-12-08 Thread Sandy Harris
The program below gives somewhat random initialisers for the
three pools in the random(4) driver. It is sort of a compile-time
equivalent of dumping /var/run/random-seed in.

If /dev/urandom is present on the development machine
(as it should be in nearly all cases), data from it is used.
If not, the fallback is to use data from the digits of pi
that initialise Blowfish, mixed into a somewhat random order.

This is not particularly useful since it gives no real entropy.

On the other hand, since we are going to allocate storage
anyway, it costs almost nothing to initialise it randomly.
It may be of some small value as a defense-in-depth
sort of thing, making attacks more complex even if, by
itself, it cannot prevent any.

/*
Program to select numbers for initialising things:
limits the range of Hamming weights;
every byte has at least one bit 1, one bit 0;
different every time it runs.

Writes to stdout, expecting the makefile to redirect.
Results are suitable for inclusion by random(4).
*/

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/*
Choose from a range of Hamming weights around 16
*/
#define MIN  11
#define MAX (32-MIN)

unsigned *data ;

int accept(unsigned) ;
int hamming(unsigned) ;
void outarray( int, int, char *) ;
int fillarray( unsigned *, int) ;

int main(int argc, char **argv)
{
	int urandom, i, a, b, c, n, nbytes ;

	/*
	set up defaults
	correspond to random(4) defaults
	*/
	a = 128 ;
	b = 32 ;
	c = 0 ;
	/*
	arguments if given are
	argv[1] - INPUT_POOL_WORDS
	argv[2] - OUTPUT_POOL_WORDS
	argv[3] - size of constants[]
	*/
	switch(argc){
	// each of these falls through to the next
	case 4:
		c = atoi(argv[3]) ;
	case 3:
		b = atoi(argv[2]) ;
	case 2:
		a = atoi(argv[1]) ;
	case 1:
		// does nothing
		break ;
	default:
		fprintf(stderr, "getrand: bad arguments\n") ;
		exit(1) ;
	}
	// array size in 32-bit words
	n = a + (2*b) + c ;
	nbytes = n*4 ;
	if( (data = malloc(nbytes)) == NULL){
		fprintf(stderr, "getrand: malloc() fails\n") ;
		exit(1) ;
	}
	// normal case: development machine has /dev/urandom
	if( (urandom = open("/dev/urandom", O_RDONLY)) != -1 )  {
		// fill data[] with random material
		read(urandom,data,nbytes) ;
		// replace any entries that fail criteria
		for( i = 0 ; i < n ; i++ )
			while( !accept(data[i]) )
				read(urandom,data+i,4) ;
	}
	// no /dev/urandom, use digits of pi
	else if( (i = fillarray(data,n)) != n )  {
		fprintf(stderr, "getrand: no urandom & not enough digits of pi\n") ;
		exit(1) ;
	}

	// output
	printf("#define INPUT_POOL_WORDS %d\n", a) ;
	printf("#define OUTPUT_POOL_WORDS %d\n\n", b) ;
	outarray( 0, a, "input_pool_data") ;
	outarray( a, b, "blocking_pool_data") ;
	outarray( a+b, b, "nonblocking_pool_data") ;
	if(c)
		outarray( a+(2*b), c, "constants" ) ;
	exit(0) ;
}

void outarray( int start, int n, char *p)
{
	int i, end, last ;
	end = start + n ;
	last = end - 1 ;
	printf("static unsigned %s[%d] = {\n", p, n) ;
	for( i = start ; i < end ; i++ ){
		printf("\t0x%08xL", data[i]) ;
		if( i != last )
			printf(",") ;
		printf("\n") ;
	}
	printf("\t} ;\n\n") ;
}

int accept(unsigned u)
{
	int h, i ;
	char *p ;
	// reject low or high Hamming weights
	h = hamming(u) ;
	if( (h < MIN) || (h > MAX) )
		return(0) ;
	// at least one 1 and at least one 0 per byte
	for( i = 0, p = (char *) &u ; i < 4 ; i++, p++ ){
		switch(*p)  {
		case '\0':
		case '\377':	// all bits set
			return(0) ;
		default:
			break ;
		}
	}
	return(1) ;
}

/*
Kernighan's method
http://graphics.stanford.edu/~seander/bithacks.html
*/
int hamming(unsigned x)
{
	int h ;
	for (h = 0; x; h++)
		x &= (x-1) ; // clear the least significant bit set
	return(h) ;
}

/*
digits of pi
from Paul Kocher's code for
Schneier's Blowfish

used if no /dev/urandom
*/
static 

RFC: redesigning random(4)

2011-09-29 Thread Sandy Harris
I have been thinking about how random(4) might be redesigned and I now
have enough thoughts to put them on electrons and send them out. This
is all first-cut stuff, though I have thought about it some. It needs
comment, criticism and revision.

A few things I think we absolutely want to keep. At least the existing
input mixing code including its entropy estimation, the three-pool
structure, and the use (at least sometimes) of a hash for the mixing.
I think the large pool plus hashing is obviously superior to Yarrow,
where the pool is a single hash context. For our purposes, though
perhaps not elsewhere, it also seems better than Fortuna which
complicates things with multiple input pools; we do not need those.

I would change some things fairly radically, though. Hence this
message and an invitation to discussion.

I would define all pools in terms of 64-bit objects, make the input
pool use Skein (or another SHA-3 candidate) and the output pools a
modified counter-mode AES.

Currently the driver uses SHA-1 for all three. Sometime next year, the
SHA-3 competition will declare a winner. Using that might make getting
various certifications easier, so we should probably do that when we
can. The simplest way to do that might be to keep the three-pool setup
and make them all use a new hash, but I'll argue for making the two
output pools use a block cipher instead.

All five of the finalists in the SHA-3 competition use some variant
of a wide-pipe strategy. The internal state of the hash is larger
than the output size by a factor of two or more, and (in most, though
I think JH just takes raw bits from state) there is an output function
that crunches the final state into the output. It appears Ted was
ahead of his time with the trick or folding an SHA-1 output in half to
get 80 output bits. It also appears that Ted's trick is no longer
needed; the hashes do it for us now.

All the finalists have an -256 and an -512 mode. Either way, we can
have a get256() function that gets 256 hashed bits. Things like
Skein-512-256 can be used directly; that has 512-bit internal state
and 256-bit output. For another hash, we might have to use the version
with 512-bit output and do our own folding to 256. However, it is
straightforward to have a get256() function either way. That should be
the only output function from the primary pool, and its output should
never go outside the driver. It should be used only to initialise or
update the other pools.

The get256() function should start by mixing in some new entropy. At
one point, Ted had an add_timer_randomness() call for this. I recall
some discussion about taking it out and I don't see it now. As I see
it, every extraction of main pool entropy should add something new
first. Perhaps add_timer_randomness() is enough, but we might look
around the kernel to see if there's more that could be got -- is the
process or file descriptor table random enough to be worth throwing
in, for example?

get256() should also mix at least the hash value worth of bits back
into the pool at the end. Currently this is done in extract_buf() with
a call to mix_pool_bytes(). I'd do it rather differently.

I suggest a new mixing primitive, to be used for all pools. It would
actually work with structure members, but here is simple code to
explain it:

#define SIZE 32   // must be even
#define DELTA 7   // must be odd, no common factor with SIZE

static u64 buffer[SIZE] ;
static u64 *p = buffer ;
static u64 *end = buffer + SIZE ;

// start q part way round the cycle, ahead of p
static u64 *q = buffer + (SIZE/2) + DELTA ;

void mix64( u64 *data)
{
	*p ^= *data ;
	// pseudo-Hadamard transform
	*p += *q ;
	*q += *p ;
	// increment pointers
	p += DELTA ;
	q += DELTA ;
	// wrap around if needed
	if( p >= end ) p -= SIZE ;
	if( q >= end ) q -= SIZE ;
}

This has some nice properties. It is only used for
internally-generated data, to mix hash or cipher outputs back into
pools. It may be more efficient than mix_pool_bytes() which is
designed to handle external inputs and works one byte at a time. It is
somewhat non-linear because it mixes XORs with addition.

The starting point for q is chosen so that after SIZE/2 iterations,
every word in the pool is changed. In the example, with SIZE = 32 and
DELTA = 7, p runs through all words in 32 iterations. After 16
iterations, p has increased by (16*7)%32 = 16. For the second 16, it
starts at 16+7. That is just where q starts for the first 16, so
between them p and q update every word in the first 16 iterations.
This works for any values of SIZE and DELTA, provided SIZE is even and
the two have no common factors.

get256() should end with four calls to mix64(), mixing all its data back in.

I would make the two output pools AES-128 contexts, 22 64-bit words
each, and use a variant of counter mode to generate the outputs.
Counter mode has been rather extensively analyzed, notably in the
Schneier et al. Yarrow paper.

random(4) overheads question

2011-09-26 Thread Sandy Harris
I'm working on a demon that collects timer randomness, distills it
some, and pushes the results into /dev/random.

My code produces the random material in 32-bit chunks. The current
version sends it to /dev/random 32 bits at a time, doing a write() and
an entropy-update ioctl() for each chunk. Obviously I could add some
buffering and write fewer and larger chunks. My questions are whether
that is worth doing and, if so, what the optimum write() size is
likely to be.

I am not overly concerned about overheads on my side of the interface,
unless they are quite large. My concern is whether doing many small
writes wastes kernel resources.


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Sandy Harris
On Thu, Sep 8, 2011 at 9:11 PM, Steve Grubb sgr...@redhat.com wrote:

 The system being low on entropy is another problem that should be addressed. 
 For our
 purposes, we cannot say take it from TPM or RDRND or any plugin board. We 
 have to have
 the mathematical analysis that goes with it, we need to know where the 
 entropy comes
 from, and a worst case entropy estimation.

Much of that is in the driver code's comments or previous email
threads. For example,
this thread covers many of the issues:
http://yarchive.net/comp/linux/dev_random.html
There are plenty of others as well.

 It has to be documented in detail.

Yes. But apart from code comments, what documentation
are we talking about? Googling for /dev/random on tldp.org
turns up nothing that treats this in any detail.


 The only
 way we can be certain is if its based on system events. Linux systems are 
 constantly
 low on entropy and this really needs addressing. But that is a separate 
 issue. For
 real world use, I'd recommend everyone use a TPM chip + rngd and you'll never 
 be short
 on random numbers.

Yes. Here's something I wrote on the Debian Freedombox list:

| No problem on a typical Linux desktop; it does not
| do much crypto and /dev/random gets input from
| keyboard & mouse movement, disk delays, etc.
| However, it might be a major problem for a plug
| server that does more crypto, runs headless, and
| use solid state storage.

| Some plug computers may have a hardware RNG,
| which is the best solution, but we cannot count on
| that in the general case.

| Where the plug has a sound card equivalent, and
| it isn't used for sound, there is a good solution
| using circuit noise in the card as the basis for
| a hardware RNG.
| http://www.av8n.com/turbid/paper/turbid.htm

| A good academic paper on the problem is:
| https://db.usenix.org/publications/library/proceedings/sec98/gutmann.html

| However, his software does not turn up in
| the Ubuntu repository. Is it in Debian?
| Could it be?

| Ubuntu, and I assume Debian, does have
| Havege, another researcher's solution
| to the same problem.
| http://www.irisa.fr/caps/projects/hipsor/

Some of that sort of discussion should be in the documentation.
I'm not sure how much currently is.

 But in the case where we are certifying the OS, we need the
 mathematical argument to prove that unaided, things are correct.

No, we cannot prove that unaided, things are correct if
by "correct" you mean urandom output is safe against all
conceivable attacks and by "unaided" you mean without
new entropy inputs. It is a PRNG, so without reseeding it
must be breakable in theory; that comes with the territory.

That need not be a problem, though. We cannot /prove/
that any of the ciphers or hashes in widespread use are
correct either. In fact, we can prove the opposite; they
are all provably breakable by an opponent with enough
resources, for extremely large values of "enough".

Consider a block cipher like AES: there are three known
attacks that must break it in theory -- brute force search
for the key, or reduce the cipher to a set of equations
then feed in some known plaintext/ciphertext pairs and
solve for the key, or just collect enough known pairs to
build a codebook that breaks the cipher. We know the
brute force and codebook attacks are astronomically
expensive, and there are good arguments that algebra
is as well, but they all work in theory. Despite that, we
can use AES with reasonable confidence and with
certifications from various government bodies.

There are similar arguments for confidence in urandom.
The simplest are the size of the state relative to the
outputs and the XOR that reduces 160 bits of SHA-1
output to 80 of generator output. More detailed discussion is
in the first thread I cited above.

Barring a complete failure of SHA-1, an enemy who wants to
infer the state from outputs needs astronomically large amounts
of both data and effort.


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sandy Harris
Jarod Wilson ja...@redhat.com wrote:

 Ted Ts'o wrote:

 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.
 ...

 But only if you've set the sysctl to a non-zero value, ...

 But again, I want to stress that out of the box, there's absolutely no
 change to the way urandom behaves, no blocking, this *only* kicks in if you
 twiddle the sysctl because you have some sort of security requirement that
 mandates it.

So it only breaks things on systems with high security requirements?


Re: IPsec performance (in)dependent on ingress rate?

2011-09-04 Thread Sandy Harris
On Thu, Sep 1, 2011 at 10:24 PM, Adam Tisovsky tisov...@gmail.com wrote:

 I’m doing some benchmarks of IPsec performance on a Cisco router and I
 have experienced the situation described below. My question is whether
 anybody has performed similar tests on Linux (StrongSWAN, OpenSWAN,…)
 or any other security gateway and can tell how did it behave.

There is some info for FreeS/WAN, ancestor of the ones you mention:
http://www.freeswan.org/freeswan_trees/freeswan-2.06/doc/performance.html


Re: [PATCH] random: add blocking facility to urandom

2011-09-04 Thread Sandy Harris
On Fri, Sep 2, 2011 at 10:37 PM, Jarod Wilson ja...@redhat.com wrote:

 Certain security-related certifications and their respective review
 bodies have said that they find use of /dev/urandom for certain
 functions, such as setting up ssh connections, is acceptable, but if and
 only if /dev/urandom can block after a certain threshold of bytes have
 been read from it with the entropy pool exhausted. ...

 At present, urandom never blocks, even after all entropy has been
 exhausted from the entropy input pool. random immediately blocks when
 the input pool is exhausted. Some use cases want behavior somewhere in
 between these two, where blocking only occurs after some number have
 bytes have been read following input pool entropy exhaustion. Its
 possible to accomplish this and make it fully user-tunable, by adding a
 sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
 the out-of-the-box configuration, urandom behaves as it always has, but
 with a threshold value set, we'll block when its been exceeded.

Is it possible to calculate what that threshold should be? The Yarrow
paper includes arguments about the frequency of rekeying required to
keep a block cipher based generator secure. Is there any similar
analysis for the hash-based pool? (If not, should we switch to a
block cipher?)

/dev/urandom should not block unless both it has produced enough
output since the last rekey that it requires a rekey and there is not
enough entropy in the input pool to drive that rekey.

But what is a reasonable value for "enough" in that sentence?


Re: [PATCH v2 2/2] crypto, x86: SSSE3 based SHA1 implementation for x86-64

2011-08-08 Thread Sandy Harris
On Mon, Aug 8, 2011 at 1:48 PM, Locktyukhin, Maxim
maxim.locktyuk...@intel.com wrote:

 20 (and more) cycles per byte shown below are not reasonable numbers for SHA-1
 - ~6 c/b (as can be seen in some of the results for Core2) is the expected 
 results ...

Ten years ago, on Pentium II, one benchmark showed 13 cycles/byte for SHA-1.
http://www.freeswan.org/freeswan_trees/freeswan-2.06/doc/performance.html#perf.estimate