Re: [PATCH] crypto: use timespec64 for jent_get_nstime

2016-06-20 Thread Stephan Mueller
On Friday, 17 June 2016, 17:59:41, Arnd Bergmann wrote:

Hi Arnd,

> The jent_get_nstime() function uses __getnstimeofday() to get
> something similar to a 64-bit nanosecond counter. As we want
> to get rid of struct timespec to fix the y2038 overflow,
> this patch changes the code to use __getnstimeofday64()
> instead, which returns a timespec64 structure.
> 
> Nothing changes about the algorithm, but it looks like it
> might be better to use
> 
>  *out = ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
> 
> or even
> 
>  *out = ktime_get_raw_fast_ns();

I tested ktime_get_raw_fast_ns() and it works perfectly well for the use case, 
i.e. the RNG behavior is indistinguishable from using RDTSC on x86.
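
For reference, a minimal sketch of what the fallback could look like with that 
suggestion (assuming the function keeps its current name and signature in 
crypto/jitterentropy-kcapi.c; illustration only, not a final patch):

static void jent_get_nstime(__u64 *out)
{
	/*
	 * Monotonic raw clock, NMI-safe and not affected by settimeofday().
	 * The y2038-safe timespec-based alternative would instead be:
	 *	struct timespec64 ts;
	 *	__getnstimeofday64(&ts);
	 *	*out = ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
	 */
	*out = ktime_get_raw_fast_ns();
}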

Which time source is used for this function? I am wondering about 
architectures other than x86.

Note, this function is used as a fallback when random_get_entropy is not 
implemented. In addition, the Jitter RNG has an online health test which will 
catch a failure of the time stamp operation. Hence, even if this function 
is not suitable on one architecture or another, it should not hurt.

PS: If someone is interested in the test code or the test results, let me 
know.

Ciao
Stephan


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Stephan Mueller
On Tuesday, 21 June 2016, 01:12:55, Theodore Ts'o wrote:

Hi Theodore,

> On Mon, Jun 20, 2016 at 09:00:49PM +0200, Stephan Mueller wrote:
> > The time stamp maintenance is the exact cause for the correlation: one HID
> > event triggers:
> > 
> > - add_interrupt_randomness which takes high-res time stamp, Jiffies and
> > some pointers
> > 
> > - add_input_randomness which takes high-res time stamp, Jiffies and HID
> > event value
> > 
> > The same applies to disk events. My suggestion is to get rid of the double
> > counting of time stamps for one event.
> > 
> > And I guess I do not need to stress that correlation of data that is
> > supposed to be entropic is not good :-)
> 
> What is your concern, specifically?  If it is in the entropy
> accounting, there is more entropy in HID event interrupts, so I don't
> think adding the extra 1/64th bit of entropy is going to be problematic.

My concern is that interrupts have *much* more entropy than 1/64th bit. With a 
re-evaluation of the entropy assumed for interrupts, we will serve *all* systems 
much better and not just systems with HIDs.

As said, I think we heavily penalize server-type and VM environments relative to 
desktop systems by crediting entropy on a large scale to HIDs and, conversely, to a 
much lesser degree to interrupts.
> 
> If it is that there are two timestamps that are closely correlated
> being added into the pool, the add_interrupt_randomness() path is
> going to mix that timestamp with the interrupt timings from 63 other
> interrupts before it is mixed into the input pool, while the
> add_input_randomness() mixes it directly into the pool.  So if you
> think there is a way this could be leveraged into an attack, please give
> specifics --- but I think we're on pretty solid ground here.

I am not saying that there is an active attack vector. All I want is to 
re-evaluate the entropy credited to one interrupt, which can only be done if we 
drop the HID time stamp collection.

Ciao
Stephan


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 09:00:49PM +0200, Stephan Mueller wrote:
> 
> The time stamp maintenance is the exact cause for the correlation: one HID 
> event triggers:
> 
> - add_interrupt_randomness which takes high-res time stamp, Jiffies and some 
> pointers
> 
> - add_input_randomness which takes high-res time stamp, Jiffies and HID event 
> value
> 
> The same applies to disk events. My suggestion is to get rid of the double 
> counting of time stamps for one event.
> 
> And I guess I do not need to stress that correlation of data that is supposed 
> to be entropic is not good :-)

What is your concern, specifically?  If it is in the entropy
accounting, there is more entropy in HID event interrupts, so I don't
think adding the extra 1/64th bit of entropy is going to be problematic.

If it is that there are two timestamps that are closely correlated
being added into the pool, the add_interrupt_randomness() path is
going to mix that timestamp with the interrupt timings from 63 other
interrupts before it is mixed into the input pool, while the
add_input_randomness() mixes it directly into the pool.  So if you
think there is a way this could be leveraged into an attack, please give
specifics --- but I think we're on pretty solid ground here.
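
For readers following along, a much-simplified sketch of the two paths being 
compared. The helper names approximate the real drivers/char/random.c 
interfaces; estimate_from_deltas() is a stand-in for the delta-based estimate 
done by add_timer_randomness():

/* per-CPU scratch pool filled on every interrupt */
struct fast_pool {
	__u32		pool[4];
	unsigned short	count;
};

/* add_interrupt_randomness()-style path: stirred locally first */
static void irq_path_event(struct fast_pool *fp, __u32 cycles, __u32 now, __u32 ip)
{
	fast_mix(fp, cycles, now, ip);		/* cheap per-CPU mixing only */
	if (++fp->count >= 64) {		/* ~64 IRQs before the input pool sees it */
		mix_pool_bytes(&input_pool, fp->pool, sizeof(fp->pool));
		credit_entropy_bits(&input_pool, 1);
		fp->count = 0;
	}
}

/* add_input_randomness()-style path: the same timestamp goes straight in */
static void input_path_event(__u32 cycles, __u32 now, __u32 value)
{
	__u32 sample[3] = { cycles, now, value };

	mix_pool_bytes(&input_pool, sample, sizeof(sample));
	credit_entropy_bits(&input_pool, estimate_from_deltas(value));
}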

Cheers,

- Ted



Re: [PATCH 01/28] crypto: omap-aes: Fix registration of algorithms

2016-06-20 Thread Herbert Xu
On Mon, Jun 20, 2016 at 03:11:54PM +0300, Tero Kristo wrote:
>
> Did you check the rest of the series? I only got feedback for this
> and patch #2 of the series; shall I repost the remainder of the
> series as a whole or...?

Yes please repost them.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v9 2/3] crypto: kpp - Add DH software implementation

2016-06-20 Thread Herbert Xu
On Mon, Jun 20, 2016 at 11:43:28AM +, Benedetto, Salvatore wrote:
>
> > While you're at it, it would be nice if you could make the encoded format
> > little-endian, that way we can make test vectors for all kpp algorithms use
> > the same format.
> > 
> 
> The input format is the same for DH and ECDH.
> Only the software implementation of ECC requires little-endian format.

What I mean is to get rid of the DH-specific code in testmgr.c
and directly encode the secret into the test vector rather than
calling the helper.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 05:49:17PM +0200, Stephan Mueller wrote:
> 
> Is speed everything we should care about? What about:
> 
> - offloading of crypto operation from the CPU

In practice CPU offload is not helpful, and in fact, in most cases is
harmful, when one is only encrypting a tiny amount of data.  That's
because the cost of setup and teardown, not to mention key scheduling,
dominates.  This is less of an issue with the SIMD / AVX
optimizations --- but that's because these are CPU instructions, and
there really isn't any CPU offloading going on.

> - potentially additional security features a hardware cipher may provide like 
> cache coloring attack resistance?

Um have you even taken a *look* at how ChaCha20 is implemented?
*What* cache coloring attack is possible at all, period?

Hint: where are the lookup tables?  Where are the data-dependent
memory accesses in the ChaCha20 core?
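
To make the point concrete: the whole ChaCha20 core is built from quarter-rounds 
of the following shape (a sketch of the round function, not the kernel's 
chacha20_block() verbatim) -- adds, XORs and fixed-distance rotates on 32-bit 
words, with no table lookups and no data-dependent addressing:

static inline void chacha_quarterround(u32 *a, u32 *b, u32 *c, u32 *d)
{
	*a += *b; *d ^= *a; *d = rol32(*d, 16);
	*c += *d; *b ^= *c; *b = rol32(*b, 12);
	*a += *b; *d ^= *a; *d = rol32(*d, 8);
	*c += *d; *b ^= *c; *b = rol32(*b, 7);
}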

- Ted


[PATCH] crypto : async implementation for sha1-mb

2016-06-20 Thread Megha Dey
From: Megha Dey 

Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm. This patch introduces
an async interface for the inner algorithm as well.

Signed-off-by: Megha Dey 
Signed-off-by: Tim Chen 
---
 arch/x86/crypto/sha-mb/sha1_mb.c | 182 ++-
 crypto/mcryptd.c | 124 --
 include/crypto/internal/hash.h   |  12 +--
 include/crypto/mcryptd.h |   8 +-
 4 files changed, 163 insertions(+), 163 deletions(-)

diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index 0a46491..ec2e76e 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -80,10 +80,10 @@ struct sha1_mb_ctx {
 static inline struct mcryptd_hash_request_ctx
*cast_hash_to_mcryptd_ctx(struct sha1_hash_ctx *hash_ctx)
 {
-   struct shash_desc *desc;
+   struct ahash_request *areq;
 
-   desc = container_of((void *) hash_ctx, struct shash_desc, __ctx);
-   return container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   areq = container_of((void *) hash_ctx, struct ahash_request, __ctx);
+   return container_of(areq, struct mcryptd_hash_request_ctx, areq);
 }
 
 static inline struct ahash_request
@@ -93,7 +93,7 @@ static inline struct ahash_request
 }
 
 static void req_ctx_init(struct mcryptd_hash_request_ctx *rctx,
-   struct shash_desc *desc)
+   struct ahash_request *areq)
 {
rctx->flag = HASH_UPDATE;
 }
@@ -375,9 +375,9 @@ static struct sha1_hash_ctx *sha1_ctx_mgr_flush(struct 
sha1_ctx_mgr *mgr)
}
 }
 
-static int sha1_mb_init(struct shash_desc *desc)
+static int sha1_mb_init(struct ahash_request *areq)
 {
-   struct sha1_hash_ctx *sctx = shash_desc_ctx(desc);
+   struct sha1_hash_ctx *sctx = ahash_request_ctx(areq);
 
hash_ctx_init(sctx);
sctx->job.result_digest[0] = SHA1_H0;
@@ -395,7 +395,7 @@ static int sha1_mb_init(struct shash_desc *desc)
 static int sha1_mb_set_results(struct mcryptd_hash_request_ctx *rctx)
 {
int i;
-   struct  sha1_hash_ctx *sctx = shash_desc_ctx(&rctx->desc);
+   struct  sha1_hash_ctx *sctx = ahash_request_ctx(&rctx->areq);
__be32  *dst = (__be32 *) rctx->out;
 
for (i = 0; i < 5; ++i)
@@ -427,7 +427,7 @@ static int sha_finish_walk(struct mcryptd_hash_request_ctx 
**ret_rctx,
 
}
sha_ctx = (struct sha1_hash_ctx *)
-   shash_desc_ctx(&rctx->desc);
+   ahash_request_ctx(&rctx->areq);
kernel_fpu_begin();
sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx,
rctx->walk.data, nbytes, flag);
@@ -519,11 +519,10 @@ static void sha1_mb_add_list(struct 
mcryptd_hash_request_ctx *rctx,
mcryptd_arm_flusher(cstate, delay);
 }
 
-static int sha1_mb_update(struct shash_desc *desc, const u8 *data,
- unsigned int len)
+static int sha1_mb_update(struct ahash_request *areq)
 {
struct mcryptd_hash_request_ctx *rctx =
-   container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   container_of(areq, struct mcryptd_hash_request_ctx, areq);
struct mcryptd_alg_cstate *cstate =
this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
 
@@ -539,7 +538,7 @@ static int sha1_mb_update(struct shash_desc *desc, const u8 
*data,
}
 
/* need to init context */
-   req_ctx_init(rctx, desc);
+   req_ctx_init(rctx, areq);
 
nbytes = crypto_ahash_walk_first(req, &rctx->walk);
 
@@ -552,7 +551,7 @@ static int sha1_mb_update(struct shash_desc *desc, const u8 
*data,
rctx->flag |= HASH_DONE;
 
/* submit */
-   sha_ctx = (struct sha1_hash_ctx *) shash_desc_ctx(desc);
+   sha_ctx = (struct sha1_hash_ctx *) ahash_request_ctx(areq);
sha1_mb_add_list(rctx, cstate);
kernel_fpu_begin();
sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
@@ -579,11 +578,10 @@ done:
return ret;
 }
 
-static int sha1_mb_finup(struct shash_desc *desc, const u8 *data,
-unsigned int len, u8 *out)
+static int sha1_mb_finup(struct ahash_request *areq)
 {
struct mcryptd_hash_request_ctx *rctx =
-   container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   container_of(areq, struct mcryptd_hash_request_ctx, areq);
struct mcryptd_alg_cstate *cstate =
this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
 
@@ -598,7 +596,7 @@ static int sha1_mb_finup(struct shash_desc *desc, const u8 
*data,
}
 
/* need to init cont

Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread George Spelvin
> With that being said, wouldn't it make sense to:
> 
> - Get rid of the entropy heuristic entirely and just assume a fixed value of 
> entropy for a given event?

What does that gain you?  You can always impose an upper bound, but *some*
evidence that it's not a metronome is nice to have.

> - remove the high-res time stamp and the jiffies collection in 
> add_disk_randomness and add_input_randomness to not run into the correlation 
> issue?

Again, you can argue for setting the estimate to zero, but why *remove*
the timestamp?  Maybe you lose nothing, maybe you lose something, but it's
definitely a monotonic decrease.

> - In addition, let us credit the remaining information zero bits of entropy 
> and just use it to stir the input_pool.

Unfortunately, that is of limited use.  We mustn't remove more bits (of data,
as well as entropy) from the input pool than there are bits of entropy coming 
in.

So the extra uncounted entropy never goes anywhere and does very little good.
So any time the input pool is "full" (by counted entropy), then the uncounted
entropy has been squeezed out and thrown away.

> - Conversely, as we now would not have the correlation issue any more, let us 
> change the add_interrupt_randomness to credit each received interrupt one bit 
> of entropy or something in this vicinity?  Only if random_get_entropy returns 
> 0, let us drop the credited entropy rate to something like 1/10th or 1/20th 
> bit per event.

Basically, do you have a convincing argument that *every* interrupt has
this?  Even those coming from strongly periodic signals like audio DMA
buffer fills?

> Hence, we cannot estimate the entropy level at runtime. All we can do is 
> have a good conservative estimate. And for such an estimate, I feel that 
> throwing lots of code at that problem is not helpful.

I agree that the efficiency constraints preclude having a really
good solution.  But is it worth giving up?

For example, suppose we come up with a decent estimator, but only use it
when we're low on entropy.  When things are comfortable, underestimate.

For example, a low-overhead entropy estimator can be derived from
Maurer's universal test.  There are all sorts of conditions required to
get an accurate measurement of entropy, but violating them produces
a conservative *underestimate*, which is just fine for an on-line
entropy estimator.  You can hash non-binary inputs to save table space;
collisions cause an entropy underestimate.  You can use a limited-range
age counter (e.g. 1 byte); wraps cause entropy underestimate.  You need
to initialize the history table before measurements are accurate, but
initializing everything to zero causes an initial entropy underestimate.
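
As a rough illustration of the idea (an assumption about one possible shape, 
not existing kernel code): keep a small table of when each (hashed) sample 
value was last seen, and credit log2 of that age per sample. Collisions, the 
small table and the wrapping 8-bit counters all bias the estimate downward, 
which is the safe direction for crediting entropy:

#define EST_TABLE_BITS	8
#define EST_TABLE_SIZE	(1 << EST_TABLE_BITS)

struct maurer_est {
	u8 last_seen[EST_TABLE_SIZE];	/* low 8 bits of the sample index */
	u8 now;				/* running sample counter, wraps */
};

/* returns a conservative per-sample entropy credit, in whole bits */
static unsigned int maurer_credit(struct maurer_est *e, u32 sample)
{
	u32 bucket = hash_32(sample, EST_TABLE_BITS);
	u8 age = e->now - e->last_seen[bucket];	/* counter wrap -> underestimate */

	e->last_seen[bucket] = e->now++;
	return age ? ilog2(age) : 0;	/* zero-filled table -> initial underestimate */
}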


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Stephan Mueller
On Monday, 20 June 2016, 14:44:03, George Spelvin wrote:

Hi George,

> > With that being said, wouldn't it make sense to:
> > 
> > - Get rid of the entropy heuristic entirely and just assume a fixed value
> > of entropy for a given event?
> 
> What does that gain you?  You can always impose an upper bound, but *some*
> evidence that it's not a metronome is nice to have.

You are right, but that is an online test -- and I suggest we have one. 
However, the heuristic, with its fractional-bit bookkeeping and the 
asymptotic calculation in credit_entropy_bits, is a bit over the top, 
considering that we know that the input data is far from accurate.
> 
> > - remove the high-res time stamp and the jiffies collection in
> > add_disk_randomness and add_input_randomness to not run into the
> > correlation issue?
> 
> Again, you can argue for setting the estimate to zero, but why *remove*
> the timestamp?  Maybe you lose nothing, maybe you lose something, but it's
> definitely a monotonic decrease.

The time stamp maintenance is the exact cause for the correlation: one HID 
event triggers:

- add_interrupt_randomness which takes high-res time stamp, Jiffies and some 
pointers

- add_input_randomness which takes high-res time stamp, Jiffies and HID event 
value

The same applies to disk events. My suggestion is to get rid of the double 
counting of time stamps for one event.

And I guess I do not need to stress that correlation of data that is supposed 
to be entropic is not good :-)

> 
> > - In addition, let us credit the remaining information zero bits of
> > entropy
> > and just use it to stir the input_pool.
> 
> Unfortunately, that is of limited use.  We mustn't remove more bits (of
> data, as well as entropy) from the input pool than there are bits of
> entropy coming in.

I am not saying that we should take more bits out of the input pool. All I am 
suggesting is to remove the correlation and, in the end, credit the entropy 
source that is definitely available on all systems with higher entropy rates.
> 
> So the extra uncounted entropy never goes anywhere and does very little
> good. So any time the input pool is "full" (by counted entropy), then the
> uncounted entropy has been squeezed out and thrown away.
> 
> > - Conversely, as we now would not have the correlation issue any more, let
> > us change the add_interrupt_randomness to credit each received interrupt
> > one bit of entropy or something in this vicinity?  Only if
> > random_get_entropy returns 0, let us drop the credited entropy rate to
> > something like 1/10th or 1/20th bit per event.
> 
> Basically, do you have a convincing argument that *every* interrupt has
> this?  Even those coming from strongly periodic signals like audio DMA
> buffer fills?

I am sure I cannot be as convincing as you would like because, in the end, entropy 
is relative.

But look at my measurements for my LRNG, tested with and without a tickless 
kernel. I see timing variations showing ten times the entropy rate I suggest 
here. Besides, the analysis I did for my Jitter RNG cannot be discarded 
either.
> 
> > Hence, we cannot estimate the entropy level at runtime. All we can do is
> > have a good conservative estimate. And for such an estimate, I feel that
> > throwing lots of code at that problem is not helpful.
> 
> I agree that the efficiency constraints preclude having a really
> good solution.  But is it worth giving up?
> 
> For example, suppose we come up with a decent estimator, but only use it
> when we're low on entropy.  When things are comfortable, underestimate.
> 
> For example, a low-overhead entropy estimator can be derived from
> Maurer's universal test.  There are all sorts of conditions required to

I am not suggesting any test. Entropy cannot be measured. All we can measure 
are statistics. And I cannot see why one statistical test is better than the 
other. Thus, let us have the easiest statistical test there is: use a fixed, 
but appropriate entropy value for one event and be done with it.

All that statistical tests could be used for are health tests of the noise 
source.

> get an accurate measurement of entropy, but violating them produces
> a conservative *underestimate*, which is just fine for an on-line
> entropy estimator.  You can hash non-binary inputs to save table space;
> collisions cause an entropy underestimate.  You can use a limited-range
> age counter (e.g. 1 byte); wraps cause entropy underestimate.  You need
> to initialize the history table before measurements are accurate, but
> initializing everything to zero causes an initial entropy underestimate.


Ciao
Stephan


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread H. Peter Anvin
On 06/20/16 08:49, Stephan Mueller wrote:
> On Monday, 20 June 2016, 11:01:47, Theodore Ts'o wrote:
> 
> Hi Theodore,
> 
>>
>> So simply doing chacha20 encryption in a tight loop in the kernel
>> might not be a good proxy for what would actually happen in real life
>> when someone calls getrandom(2).  (Another good question to ask is
>> when someone might be needing to generate millions of 256-bit session
>> keys per second, when the D-H setup, even if you were using ECCDH,
>> would be largely dominating the time for the connection setup anyway.)
> 
> Is speed everything we should care about? What about:
> 
> - offloading of crypto operation from the CPU
> 

This sounds like a speed optimization (and very unlikely to be a win given
the usage).

> - potentially additional security features a hardware cipher may provide like 
> cache coloring attack resistance?

How about burning that bridge when and if we get to it?  It sounds very
hypothetical.

I guess I could add in some comments here about how a lot of these
problems can be eliminated by offloading an entire DRNG into hardware,
but I don't think it is productive.

-hpa




Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-20 Thread Stephan Mueller
On Monday, 20 June 2016, 13:07:32, Austin S. Hemmelgarn wrote:

Hi Austin,

> On 2016-06-18 12:31, Stephan Mueller wrote:
> > On Saturday, 18 June 2016, 10:44:08, Theodore Ts'o wrote:
> > 
> > Hi Theodore,
> > 
> >> At the end of the day, with these devices you really badly need a
> >> hardware RNG.  We can't generate randomness out of thin air.  The only
> >> thing you really can do requires user space help, which is to generate
> >> keys lazily, or as late as possible, so you can gather as much entropy
> >> as you can --- and to feed in measurements from the WiFi (RSSI
> >> measurements, MAC addresses seen, etc.)  This won't help much if you
> >> have an FBI van parked outside your house trying to carry out a
> >> TEMPEST attack, but hopefully it provides some protection against a
> >> remote attacker who isn't trying to carry out an on-premises attack.
> > 
> > All my measurements on such small systems like MIPS or smaller/older ARMs
> > do not seem to support that statement :-)
> 
> Was this on real hardware, or in a virtual machine/emulator?  Because if
> it's not on real hardware, you're harvesting entropy from the host
> system, not the emulated one.  While I haven't done this with MIPS or
> ARM systems, I've taken similar measurements on SPARC64, x86_64, and
> PPC64 systems comparing real hardware and emulated hardware, and the
> emulated hardware _always_ has higher entropy, even when running the
> emulator on an identical CPU to the one being emulated and using KVM
> acceleration and passing through all the devices possible.
> 
> Even if you were testing on real hardware, I'm still rather dubious, as
> every single test I've ever done on any hardware (SPARC, PPC, x86, ARM,
> and even PA-RISC) indicates that you can't harvest entropy as
> effectively from a smaller CPU compared to a large one, and this effect
> is significantly more pronounced on RISC systems.

It was on real hardware. As part of my Jitter RNG project, I tested all major 
CPUs from small to big -- see Appendix F [1]. For MIPS/ARM, see the trailing 
part of the big table.

[1] http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf

Ciao
Stephan


Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-20 Thread Austin S. Hemmelgarn

On 2016-06-18 12:31, Stephan Mueller wrote:

> On Saturday, 18 June 2016, 10:44:08, Theodore Ts'o wrote:
>
> Hi Theodore,
>
>> At the end of the day, with these devices you really badly need a
>> hardware RNG.  We can't generate randomness out of thin air.  The only
>> thing you really can do requires user space help, which is to generate
>> keys lazily, or as late as possible, so you can gather as much entropy
>> as you can --- and to feed in measurements from the WiFi (RSSI
>> measurements, MAC addresses seen, etc.)  This won't help much if you
>> have an FBI van parked outside your house trying to carry out a
>> TEMPEST attack, but hopefully it provides some protection against a
>> remote attacker who isn't trying to carry out an on-premises attack.
>
> All my measurements on such small systems like MIPS or smaller/older ARMs do
> not seem to support that statement :-)
Was this on real hardware, or in a virtual machine/emulator?  Because if 
it's not on real hardware, you're harvesting entropy from the host 
system, not the emulated one.  While I haven't done this with MIPS or 
ARM systems, I've taken similar measurements on SPARC64, x86_64, and 
PPC64 systems comparing real hardware and emulated hardware, and the 
emulated hardware _always_ has higher entropy, even when running the 
emulator on an identical CPU to the one being emulated and using KVM 
acceleration and passing through all the devices possible.


Even if you were testing on real hardware, I'm still rather dubious, as 
every single test I've ever done on any hardware (SPARC, PPC, x86, ARM, 
and even PA-RISC) indicates that you can't harvest entropy as 
effectively from a smaller CPU compared to a large one, and this effect 
is significantly more pronounced on RISC systems.



Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread Stephan Mueller
On Monday, 20 June 2016, 11:01:47, Theodore Ts'o wrote:

Hi Theodore,

> 
> So simply doing chacha20 encryption in a tight loop in the kernel
> might not be a good proxy for what would actually happen in real life
> when someone calls getrandom(2).  (Another good question to ask is
> when someone might be needing to generate millions of 256-bit session
> keys per second, when the D-H setup, even if you were using ECCDH,
> would be largely dominating the time for the connection setup anyway.)

Is speed everything we should care about? What about:

- offloading of crypto operation from the CPU

- potentially additional security features a hardware cipher may provide like 
cache coloring attack resistance?

Ciao
Stephan


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Stephan Mueller
On Monday, 20 June 2016, 11:28:38, Theodore Ts'o wrote:

Hi Theodore,

> On Mon, Jun 20, 2016 at 07:51:59AM +0200, Stephan Mueller wrote:
> > - Correlation of noise sources: as outlined in [1] chapter 1, the three
> > noise sources of the legacy /dev/random implementation have a high
> > correlation. Such correlation is due to the fact that a HID/disk event at
> > the same time produces an IRQ event. The time stamps (which deliver the
> > majority of the entropy) of both events are correlated. I would think that
> > the maintenance of the fast_pools partially breaks that correlation to
> > some degree though, yet how much the correlation is broken is unknown.
> 
> We add more entropy for disk events only if they are rotational (i.e.,
> not flash devices), and the justification for this is Don Davis's
> "Cryptographic randomness from air turbulence in disk drives" paper.
> There has also been work showing that by measuring disk completion
> times, you can actually notice when someone walks by the server
> (because the vibrations from footsteps affect disk head settling
> times).  In fact how you mount HDD's to isolate them from vibration
> can make a huge difference to the overall performance of your system.
> 
> As far as HID's are concerned, I will note that in 99.99% of the
> systems, if you have direct physical access to the system, you
> probably are screwed from a security perspective anyway.  Yes, one
> could imagine systems where the user might have access to keyboard and
> the mouse, and not be able to do other interesting things (such as
> inserting a BadUSB device into one of the ports, rebooting the system
> into single user mode, etc.).  But now you have to assume the user can
> actually manipulate the input devices down to the jiffies (1ms) or cycle
> counter (nanosecond) level of granularity...
> 
> All of this being said, I will freely admit that the heuristics of
> entropy collection are by far one of the weaker aspects of the system.

With that being said, wouldn't it make sense to:

- Get rid of the entropy heuristic entirely and just assume a fixed value of 
entropy for a given event?

- remove the high-res time stamp and the jiffies collection in 
add_disk_randomness and add_input_randomness to not run into the correlation 
issue?

- In addition, let us credit the remaining information zero bits of entropy 
and just use it to stir the input_pool.

- Conversely, as we now would not have the correlation issue any more, let us 
change the add_interrupt_randomness to credit each received interrupt one bit 
of entropy or something in this vicinity? Only if random_get_entropy returns 
0, let us drop the credited entropy rate to something like 1/10th or 1/20th 
bit per event.

> Ultimately there is no way to be 100% accurate with **any** entropy
> system, since ENCRYPT(NSA_KEY, COUNTER++) has zero entropy, but good
> luck finding an entropy estimation system that can detect that.

+1 here

To me, all the entropy heuristic today is only a more elaborate test against 
the failure of a noise source. And this is how I use the 1st/2nd/3rd 
derivative in my code: it is just a failure test.

Hence, we cannot estimate the entropy level at runtime. All we can do is 
have a good conservative estimate. And for such an estimate, I feel that 
throwing lots of code at that problem is not helpful.
> 
> > - The delivery of entropic data from the input_pool to the
> > (non)blocking_pools is not atomic (for lack of a better word), i.e. one
> > block of data with a given entropy content is injected into the
> > (non)blocking_pool where the output pool is still locked (the user cannot
> > obtain data during that injection time). With Ted's new patch set, two 64
> > bit blocks from the fast_pools are injected into the ChaCha20 DRNG. So,
> > it is clearly better than previously. But still, with the blocking_pool,
> > we face that issue. The reason for that issue is outlined in [1] 2.1. In
> > the pathological case with an active attack, /dev/random could have a
> > security strength of 2 * 128 bits and not 2^128 bits when reading 128
> > bits out of it (the numbers are for illustration only, it is a bit better
> > as /dev/random is woken up at random_read_wakeup_bits intervals -- but
> > that number can be set to dangerous low levels down to 8 bits).
> 
> I believe what you are referring to is addressed by the avalanche
> reseeding feature.  Yes, that can be turned down by
> random_read_wakeup_bits, but (a) this requires root (at which point
> the game is up), and (b) in practice no one touches that knob.  You're
> right that it would probably be better to reject attempts to set that
> number to too-small a number, or perhaps remove the knob entirely.
> 
> Personally, I don't really use /dev/random, nor would I recommend it
> for most application programmers.  At this point, getrandom(2) really
> is the preferred interface unless you have some very specialized
> needs.

I fully agree. But there are use cases for /de

Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 07:51:59AM +0200, Stephan Mueller wrote:
> 
> - Correlation of noise sources: as outlined in [1] chapter 1, the three noise 
> sources of the legacy /dev/random implementation have a high correlation. 
> Such 
> correlation is due to the fact that a HID/disk event at the same time 
> produces 
> an IRQ event. The time stamps (which deliver the majority of the entropy) of both 
> events are correlated. I would think that the maintenance of the fast_pools 
> partially breaks that correlation to some degree though, yet how much the 
> correlation is broken is unknown.

We add more entropy for disk events only if they are rotational (i.e.,
not flash devices), and the justification for this is Don Davis's
"Cryptographic randomness from air turbulence in disk drives" paper.
There has also been work showing that by measuring disk completion
times, you can actually notice when someone walks by the server
(because the vibrations from footsteps affect disk head settling
times).  In fact how you mount HDD's to isolate them from vibration
can make a huge difference to the overall performance of your system.

As far as HID's are concerned, I will note that in 99.99% of the
systems, if you have direct physical access to the system, you
probably are screwed from a security perspective anyway.  Yes, one
could imagine systems where the user might have access to keyboard and
the mouse, and not be able to do other interesting things (such as
inserting a BadUSB device into one of the ports, rebooting the system
into single user mode, etc.).  But now you have to assume the user can
actually manipulate the input devices down to the jiffies (1ms) or cycle
counter (nanosecond) level of granularity...

All of this being said, I will freely admit that the heuristics of
entropy collection are by far one of the weaker aspects of the system.
Ultimately there is no way to be 100% accurate with **any** entropy
system, since ENCRYPT(NSA_KEY, COUNTER++) has zero entropy, but good
luck finding an entropy estimation system that can detect that.

> - The delivery of entropic data from the input_pool to the 
> (non)blocking_pools 
> is not atomic (for lack of a better word), i.e. one block of data with a 
> given entropy content is injected into the (non)blocking_pool where the 
> output 
> pool is still locked (the user cannot obtain data during that injection 
> time). 
> With Ted's new patch set, two 64 bit blocks from the fast_pools are injected 
> into the ChaCha20 DRNG. So, it is clearly better than previously. But still, 
> with the blocking_pool, we face that issue. The reason for that issue is 
> outlined in [1] 2.1. In the pathological case with an active attack, 
> /dev/random could have a security strength of 2 * 128 bits and not 2^128 
> bits when reading 128 bits out of it (the numbers are for illustration only, 
> it is a bit better as /dev/random is woken up at random_read_wakeup_bits 
> intervals -- but that number can be set to dangerous low levels down to 8 
> bits).

I believe what you are referring to is addressed by the avalanche
reseeding feature.  Yes, that can be turned down by
random_read_wakeup_bits, but (a) this requires root (at which point
the game is up), and (b) in practice no one touches that knob.  You're
right that it would probably be better to reject attempts to set that
number to too-small a number, or perhaps remove the knob entirely.

Personally, I don't really use /dev/random, nor would I recommend it
for most application programmers.  At this point, getrandom(2) really
is the preferred interface unless you have some very specialized
needs.  


Cheers,

- Ted


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 01:19:17PM +0800, Herbert Xu wrote:
> On Mon, Jun 20, 2016 at 01:02:03AM -0400, Theodore Ts'o wrote:
> > 
> > It's work that I'm not convinced is worth the gain?  Perhaps I
> > shouldn't have buried the lede, but repeating a paragraph from later
> > in the message:
> > 
> >So even if the AVX optimized is 100% faster than the generic version,
> >it would change the time needed to create a 256 byte session key from
> >1.68 microseconds to 1.55 microseconds.  And this is ignoring the
> >extra overhead needed to set up AVX, the fact that this will require
> >the kernel to do extra work doing the XSAVE and XRESTORE because of
> >the use of the AVX registers, etc.
> 
> We do have figures on the efficiency of the accelerated chacha
> implementation on 256-byte requests (I've picked the 8-block
> version):

Sorry, I typo'ed this.  s/bytes/bits/.  256 bits / 32 bytes is the
much more common amount that someone might be trying to extract, to
get a 256 **bit** session key.

And also note my comments about how we need to permute the key
directly, and not just go through the set_key abstraction.  And when
you did your benchmarks, how often was XSAVE / XRESTORE happening ---
in between every single block operation?

Remember, what we're talking about for getrandom(2) in the most common
case is a syscall, extracting 32 bytes' worth of keystream, ***NOT***
XOR'ing it with a plaintext buffer, and then permuting the key.

So simply doing chacha20 encryption in a tight loop in the kernel
might not be a good proxy for what would actually happen in real life
when someone calls getrandom(2).  (Another good question to ask is
when someone might be needing to generate millions of 256-bit session
keys per second, when the D-H setup, even if you were using ECCDH,
would be largely dominating the time for the connection setup anyway.)
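
In other words, the getrandom(2) hot path is roughly the following (a simplified 
sketch of the scheme being discussed, not the literal patch; it assumes the 
chacha20_block()/CHACHA20_BLOCK_SIZE helpers from crypto/chacha20_generic.c):

static u32 crng_state[16];	/* constants | 256-bit key | counter | nonce */

static void crng_extract_32(u8 *out)
{
	u8 block[CHACHA20_BLOCK_SIZE];		/* one 64-byte keystream block */

	chacha20_block(crng_state, block);
	memcpy(out, block, 32);			/* keystream only, no plaintext XOR */

	/*
	 * Re-key directly from the unused half of the keystream: cheap
	 * backtrack protection without a setkey()/XSAVE round trip.
	 */
	memcpy(&crng_state[4], block + 32, 32);
}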

Cheers,

- Ted


Re: [PATCH] crypto: caam - replace deprecated EXTRA_CFLAGS

2016-06-20 Thread Herbert Xu
On Thu, Jun 16, 2016 at 04:32:55PM +0300, Tudor Ambarus wrote:
> EXTRA_CFLAGS is still supported but its usage is deprecated.
> 
> Signed-off-by: Tudor Ambarus 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 01/28] crypto: omap-aes: Fix registration of algorithms

2016-06-20 Thread Tero Kristo

On 07/06/16 13:48, Herbert Xu wrote:

On Wed, Jun 01, 2016 at 11:56:02AM +0300, Tero Kristo wrote:

From: Lokesh Vutla 

Algorithms can be registered only once. So skip registration of
algorithms if already registered (i.e. in case we have two AES cores
in the system.)

Signed-off-by: Lokesh Vutla 
Signed-off-by: Tero Kristo 


Patch applied.  Thanks.



Thanks,

Did you check the rest of the series? I only got feedback for this and 
patch #2 of the series; shall I repost the remainder of the series as a 
whole or...?


-Tero


Re: [PATCH] crypto: fix semicolon.cocci warnings

2016-06-20 Thread Herbert Xu
On Wed, Jun 15, 2016 at 07:13:25PM +0800, kbuild test robot wrote:
> crypto/drbg.c:1637:39-40: Unneeded semicolon
> 
> 
>  Remove unneeded semicolon.
> 
> Generated by: scripts/coccinelle/misc/semicolon.cocci
> 
> CC: Stephan Mueller 
> Signed-off-by: Fengguang Wu 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 0/2] Add SHA-3 algorithm and test vectors.

2016-06-20 Thread Herbert Xu
On Fri, Jun 17, 2016 at 10:30:34AM +0530, Raveendra Padasalagi wrote:
> This patchset adds a software implementation of the SHA-3 algorithm.
> It is based on the original implementation
> pushed in patch https://lwn.net/Articles/518415/ with
> additional changes to match the padding rules specified
> in the SHA-3 specification.
> 
> This patchset also includes changes in the tcrypt module to
> add support for testing the SHA-3 algorithms, plus the related
> test vectors for basic testing.
> 
> The Broadcom Secure Processing Unit-2 (SPU-2) engine supports
> offloading of SHA-3 operations in hardware; in order to
> add SHA-3 support to the SPU-2 driver we needed to have the
> software implementation and test framework in place.
> 
> The patchset is based on the v4.7-rc1 tag and is tested on the
> Broadcom NorthStar2 SoC.
> 
> The patch set can be fetched from iproc-sha3-v2 branch
> of https://github.com/Broadcom/arm64-linux.git
> 
> Changes since v1:
>  - Renamed MODULE_ALIAS to MODULE_ALIAS_CRYPTO

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [patch] crypto: drbg - fix an error code in drbg_init_sym_kernel()

2016-06-20 Thread Herbert Xu
On Fri, Jun 17, 2016 at 12:16:19PM +0300, Dan Carpenter wrote:
> We accidentally return PTR_ERR(NULL) which is success but we should
> return -ENOMEM.
> 
> Fixes: 355912852115 ('crypto: drbg - use CTR AES instead of ECB AES')
> Signed-off-by: Dan Carpenter 
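
For context, the pattern being fixed looks roughly like this (a sketch, not the 
literal drbg_init_sym_kernel() code):

	sk_tfm = crypto_alloc_skcipher("ctr(aes)", 0, 0);
	if (IS_ERR(sk_tfm))
		return PTR_ERR(sk_tfm);	/* fine: sk_tfm is an ERR_PTR here */

	req = skcipher_request_alloc(sk_tfm, GFP_KERNEL);
	if (!req)
		return PTR_ERR(req);	/* bug: PTR_ERR(NULL) == 0 == "success" */
					/* fix: return -ENOMEM; */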

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: caam: fix misspelled upper_32_bits

2016-06-20 Thread Herbert Xu
On Thu, Jun 16, 2016 at 11:05:46AM +0200, Arnd Bergmann wrote:
> An endianness fix mistakenly used higher_32_bits() instead of
> upper_32_bits(), and that doesn't exist:
> 
> drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
> drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of 
> function 'higher_32_bits' [-Werror=implicit-function-declaration]
>   *offset = cpu_to_caam_dma(ptr);
> 
> Signed-off-by: Arnd Bergmann 
> Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam 
> endianness")

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


RE: [PATCH v9 2/3] crypto: kpp - Add DH software implementation

2016-06-20 Thread Benedetto, Salvatore


> -----Original Message-----
> From: Herbert Xu [mailto:herb...@gondor.apana.org.au]
> Sent: Monday, June 20, 2016 12:13 PM
> To: Benedetto, Salvatore 
> Cc: linux-crypto@vger.kernel.org
> Subject: Re: [PATCH v9 2/3] crypto: kpp - Add DH software implementation
> 
> On Fri, Jun 17, 2016 at 03:37:44PM +0100, Salvatore Benedetto wrote:
> >
> > +int crypto_dh_encode_key(char *buf, unsigned int len, const struct dh
> > +*params) {
> > +   struct kpp_secret *secret = (struct kpp_secret *)buf;
> 
> You can't do that.  The memory under buf may not be aligned so you need to
> copy it.

OK, I'll fix that and send a new version unless you have other comments.

Thanks for reviewing.

Salvatore


RE: [PATCH v9 2/3] crypto: kpp - Add DH software implementation

2016-06-20 Thread Benedetto, Salvatore


> -----Original Message-----
> From: Herbert Xu [mailto:herb...@gondor.apana.org.au]
> Sent: Monday, June 20, 2016 12:15 PM
> To: Benedetto, Salvatore 
> Cc: linux-crypto@vger.kernel.org
> Subject: Re: [PATCH v9 2/3] crypto: kpp - Add DH software implementation
> 
> On Fri, Jun 17, 2016 at 03:37:44PM +0100, Salvatore Benedetto wrote:
> >  * Implement MPI based Diffie-Hellman under kpp API
> >
> > +struct dh {
> > +   void *key;
> > +   void *p;
> > +   void *g;
> > +   unsigned int key_size;
> > +   unsigned int p_size;
> > +   unsigned int g_size;
> > +};
> > +
> > +int crypto_dh_key_len(const struct dh *params); int
> > +crypto_dh_encode_key(char *buf, unsigned int len, const struct dh
> > +*params); int crypto_dh_decode_key(const char *buf, unsigned int len,
> > +struct dh **params);
> 
> While you're at it, it would be nice if you could make the encoded format
> little-endian, that way we can make test vectors for all kpp algorithms use
> the same format.
> 

The input format is the same for DH and ECDH.
Only the software implementation of ECC requires little-endian format.

Regards,
Salvatore

> Thanks,
> --
> Email: Herbert Xu  Home Page:
> http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto : async implementation for sha1-mb

2016-06-20 Thread Herbert Xu
On Fri, Jun 17, 2016 at 12:28:30PM -0700, Megha Dey wrote:
>
> -static void sha1_mb_async_exit_tfm(struct crypto_tfm *tfm)
> -{
> - struct sha1_mb_ctx *ctx = crypto_tfm_ctx(tfm);
> + ahash_request_set_tfm(areq, child);
> + ahash_request_set_callback(areq, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);

You need to fix them all, not just the one that I picked out.

> @@ -874,11 +893,13 @@ static struct ahash_alg sha1_mb_async_alg = {
>   .cra_name   = "sha1",
>   .cra_driver_name= "sha1_mb",
>   .cra_priority   = 200,
> - .cra_flags  = CRYPTO_ALG_TYPE_AHASH | 
> CRYPTO_ALG_ASYNC,
> + .cra_flags  = CRYPTO_ALG_TYPE_AHASH |
> + CRYPTO_ALG_ASYNC,

Please don't include unrelated changes such as white-space fixes
like this.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v9 2/3] crypto: kpp - Add DH software implementation

2016-06-20 Thread Herbert Xu
On Fri, Jun 17, 2016 at 03:37:44PM +0100, Salvatore Benedetto wrote:
>  * Implement MPI based Diffie-Hellman under kpp API
>
> +struct dh {
> + void *key;
> + void *p;
> + void *g;
> + unsigned int key_size;
> + unsigned int p_size;
> + unsigned int g_size;
> +};
> +
> +int crypto_dh_key_len(const struct dh *params);
> +int crypto_dh_encode_key(char *buf, unsigned int len, const struct dh 
> *params);
> +int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh 
> **params);

While you're at it, it would be nice if you could make the encoded
format little-endian, that way we can make test vectors for all
kpp algorithms use the same format.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v9 2/3] crypto: kpp - Add DH software implementation

2016-06-20 Thread Herbert Xu
On Fri, Jun 17, 2016 at 03:37:44PM +0100, Salvatore Benedetto wrote:
>
> +int crypto_dh_encode_key(char *buf, unsigned int len, const struct dh 
> *params)
> +{
> + struct kpp_secret *secret = (struct kpp_secret *)buf;

You can't do that.  The memory under buf may not be aligned so
you need to copy it.
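
Something along these lines would avoid the misaligned access (sketch only, 
assuming the kpp_secret type/len fields from the patch):

	struct kpp_secret secret;

	/* buf may be arbitrarily aligned, so copy instead of casting */
	memcpy(&secret, buf, sizeof(secret));
	/* ... then validate secret.type / secret.len as before ... */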

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v8 1/3] crypto: scatterwak - Add scatterwalk_sg_copychunks

2016-06-20 Thread Herbert Xu
On Wed, Jun 15, 2016 at 05:52:42PM +0300, Tudor Ambarus wrote:
> This patch adds the function scatterwalk_sg_copychunks which writes
> a chunk of data from a scatterwalk to another scatterwalk.
> It will be used by caam driver to remove the leading zeros
> for the output data of the RSA algorithm, after the computation completes.
> 
> Signed-off-by: Tudor Ambarus 

Not only does dropping leading zeroes lead to questions about
information leaks, but it makes every one of our RSA drivers
look crap.

So I'm going to change this and make it not skip leading zeroes.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Pavel Machek
Hi!

> > On Sun 2016-06-19 17:58:41, Stephan Mueller wrote:
> > > Hi Herbert, Ted,
> > > 
> > > The following patch set provides a different approach to /dev/random which
> > > I call Linux Random Number Generator (LRNG) to collect entropy within the
> > > Linux kernel. The main improvements compared to the legacy /dev/random are
> > > to provide sufficient entropy during boot time as well as in virtual
> > > environments and when using SSDs. A secondary design goal is to limit the
> > > impact of the entropy collection on massively parallel systems and also to
> > > allow the use of accelerated cryptographic primitives. Also, all steps of
> > > the entropic data processing are testable. Finally, massive performance
> > > improvements are visible at /dev/urandom and get_random_bytes.
> > 
> > Dunno. It is very similar to the existing rng, AFAICT. And at the very
> > least, constants in the existing RNG could be tuned to provide "entropy at
> > boot time".
> 
The key differences, and thus the main concerns I have with the current design, 
are the following items. Changing them would be an intrusive change, and as of 
now I have not seen intrusive changes being accepted. This led me 
to develop a competing algorithm.

Well, intrusive changes are hard to do, but replacing whole subsystems
is even harder -- and rightly so.

- There was a debate around my approach assuming one bit of entropy per 
received IRQ. I am really wondering about that discussion when there is a much 
bigger "forecast" problem with the legacy /dev/random: how can we credit HIDs 
up to 11 bits of entropy when the user (a potential adversary) triggers these 
events? I am sure I would be shot down with such an approach if I delivered 
it with a new implementation.

Well, if you can demonstrate that 11 bits is too much, go ahead... I'm sure
that is rather easy to adjust.

But I believe that 1) the user is not normally an adversary and 2) the
best thing for him to do would still be "pressing nothing". It will be
hard to press keys (etc.) with great enough precision...

Best regards,
Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html