Re: [RFC PATCH v2] crypto: Add IV generation algorithms

2017-01-04 Thread Binoy Jayan
Hi Herbert,

On 2 January 2017 at 12:23, Herbert Xu  wrote:
> On Mon, Jan 02, 2017 at 12:16:45PM +0530, Binoy Jayan wrote:
>
> Right.  The actual number of underlying tfms that do the work
> won't change compared to the status quo.  We're just structuring
> it such that if the overall scheme is supported by the hardware
> then we can feed more than one sector at a time to it.

I was thinking of continuing to have the IV generation algorithms as template
ciphers instead of regular 'skcipher' algorithms, as it is easier to inherit
the parameters from the underlying cipher (e.g. aes), such as cra_blocksize,
cra_alignmask, ivsize, chunksize, etc.

Usually, the underlying cipher for a template cipher is instantiated
in the following function:

skcipher_instance:skcipher_alg:init()

Since the number of such cipher instances depends on the key count, which is
not known at the time the cipher is created (it is passed as an argument to
the setkey API), their creation has to be delayed until the setkey operation
of the template cipher. But as Mark pointed out, users of this cipher may get
confused if the creation of the underlying cipher fails while doing a 'setkey'
on the template cipher. I was wondering if I could create a single instance of
the cipher and assign it to tfms[0], then allocate the remaining instances
when the setkey operation is later called with the encoded key_count, so that
errors during cipher creation are uncovered earlier.
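The idea above - allocate only the first underlying instance eagerly so
creation errors surface early, and defer the rest until the key count becomes
known at setkey time - can be sketched in plain C. All names below are
hypothetical stand-ins for illustration, not the kernel crypto API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for an underlying cipher handle (tfm). */
struct tfm { int ready; };

struct iv_ctx {
	struct tfm **tfms;
	unsigned int tfm_count;
};

/* Stand-in for allocating one underlying cipher; NULL on failure. */
static struct tfm *alloc_tfm(void)
{
	struct tfm *t = malloc(sizeof(*t));
	if (t)
		t->ready = 1;
	return t;
}

/* At instance-creation time: allocate only tfms[0], so a broken
 * underlying cipher is discovered before any setkey is attempted. */
static int ctx_init(struct iv_ctx *ctx)
{
	ctx->tfms = calloc(1, sizeof(*ctx->tfms));
	if (!ctx->tfms)
		return -1;
	ctx->tfms[0] = alloc_tfm();
	ctx->tfm_count = 1;
	return ctx->tfms[0] ? 0 : -1;
}

/* At setkey time, when key_count finally becomes known: grow the
 * array and allocate the remaining instances lazily. */
static int ctx_setkey(struct iv_ctx *ctx, unsigned int key_count)
{
	struct tfm **tfms = realloc(ctx->tfms, key_count * sizeof(*tfms));
	unsigned int i;

	if (!tfms)
		return -1;
	ctx->tfms = tfms;
	for (i = ctx->tfm_count; i < key_count; i++) {
		ctx->tfms[i] = alloc_tfm();
		if (!ctx->tfms[i])
			return -1;
		ctx->tfm_count++;
	}
	return 0;
}
```

The split keeps the common failure mode (the underlying cipher simply not
existing) visible at creation time, while only the per-key growth can fail
inside setkey.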

Thanks,
Binoy
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: console noise after commit c1e9b3b0eea

2017-01-04 Thread Shannon Nelson

Resurrecting an old thread, pulled out of
http://www.spinics.net/lists/linux-crypto/msg19192.html


On Wed, Apr 20, 2016 at 9:18 AM, Anatoly Pugachev wrote:
> On Wed, Apr 20, 2016 at 1:33 AM, Sowmini Varadhan wrote:
>>
>> Hi Anatoly,
>>
>> after commit c1e9b3b0eea1 ("hwrng: n2 - Attach on T5/M5, T7/M7 SPARC
>> CPUs") I get a *lot* of console noise on my T5-2, of the form:
>>
>> n2rng f028f21c: Selftest failed on unit 0
>> n2rng f028f21c: Test buffer slot 0 [0x]
>> n2rng f028f21c: Test buffer slot 1 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 2 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 3 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 4 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 5 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 6 [0xe63f56d6a22eb116]
>> n2rng f028f21c: Test buffer slot 7 [0xe63f56d6a22eb116]
>>
>> Why/when is your commit needed on my T5-2?
>>
>> I'm not sure how this was tested, but if you need to revise it and test
>> on sparc, please let me know - I think it needs more work on sparc.
>
> Sowmini,
>
> the patch/commit is actually quite trivial; it just adds device_id
> matches for the newer T5/M7 CPUs to the n2rng_match structure. Without
> this patch, n2rng does not work on these newer CPUs. It works well on my
> T5-2 LDOM (tested with rng-tools and gpg --gen-key). I don't have an M7
> machine to test it with.

Anatoly, I think your LDOM is why you don't see the problem.  Yes, your
patch works just fine when running in a client LDOM, but we see a
problem when running this on sparc "bare metal".  Did you test this on
bare metal so that the self-test would run?

It seems there's an issue with the self-test on the newer hardware, and
the driver will never stop trying to retest it.  I'm contemplating a
patch to limit the self-test attempts, at least until we can figure out
the root of the issue.

sln

> Why the n2rng selftest fails on your machine - I've no idea... Just to
> silence it, you can blacklist this module, since it does not work on
> your hardware anyway.
>
> Can you please send me "prtconf -pv" output from your machine, as well
> as information on how you run linux - as an LDOM container or on bare
> metal on the T5-2?
>
> My T5-2 is on one of the latest firmware releases (run from a solaris
> 11.3 control domain):
>
> root@deimos:/home/sysadmin# prtdiag -v
>  FW Version 
> Sun System Firmware 9.5.3 2015/11/25 09:50
>
> sysadmin@deimos:~$ ldm -V
>
> Logical Domains Manager (v 3.3.0.0.17)
> Hypervisor control protocol v 1.12
> Using Hypervisor MD v 1.4
>
> System PROM:
> Hostconfig  v. 1.6.3    @(#)Hostconfig 1.6.3 2015/11/25 08:57
> Hypervisor  v. 1.15.3   @(#)Hypervisor 1.15.3 2015/11/11 17:15
> OpenBoot    v. 4.38.3   @(#)OpenBoot 4.38.3 2015/11/11 10:38
>
> Can you please check which firmware release your T5-2 server is running,
> and possibly update it? I'm not sure it would help, but anyway.
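The bounded self-test retry Shannon is contemplating might look roughly like
the sketch below. Everything here is hypothetical (a stand-in stub, a made-up
retry cap), not the actual n2rng driver code:

```c
#include <assert.h>

#define SELFTEST_MAX_ATTEMPTS 3	/* hypothetical retry cap */

/* Stand-in for the hardware self-test: consumes one precomputed
 * result per attempt; 0 means the unit passed. */
static int selftest_stub(const int *results, int attempt)
{
	return results[attempt];
}

/* Retry the self-test a bounded number of times instead of forever,
 * so a flaky or unsupported unit cannot spam the console endlessly. */
static int run_selftest_bounded(const int *results)
{
	int attempt, err = -1;

	for (attempt = 0; attempt < SELFTEST_MAX_ATTEMPTS; attempt++) {
		err = selftest_stub(results, attempt);
		if (err == 0)
			return 0;	/* unit is healthy */
	}
	return err;		/* give up and mark the unit failed */
}
```

The key behavioral change versus the reported symptom is the `return err`
after the loop: the driver stops retesting instead of looping indefinitely.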




Re: [PATCH 1/5] ARM: wire up HWCAP2 feature bits to the CPU modalias

2017-01-04 Thread Ard Biesheuvel
On 2 January 2017 at 23:40, Russell King - ARM Linux
 wrote:
> On Mon, Jan 02, 2017 at 09:06:04PM +, Ard Biesheuvel wrote:
>> On 31 October 2016 at 16:13, Russell King - ARM Linux
>>  wrote:
>> > On Sat, Oct 29, 2016 at 11:08:36AM +0100, Ard Biesheuvel wrote:
>> >> On 18 October 2016 at 11:52, Ard Biesheuvel  
>> >> wrote:
>> >> > Wire up the generic support for exposing CPU feature bits via the
>> >> > modalias in /sys/device/system/cpu. This allows udev to automatically
>> >> > load modules for things like crypto algorithms that are implemented
>> >> > using optional instructions.
>> >> >
>> >> > Signed-off-by: Ard Biesheuvel 
>> >> > ---
>> >> >  arch/arm/Kconfig  |  1 +
>> >> >  arch/arm/include/asm/cpufeature.h | 32 
>> >> >  2 files changed, 33 insertions(+)
>> >> >
>> >>
>> >> Russell,
>> >>
>> >> do you have any concerns regarding this patch? If not, I will drop it
>> >> into the patch system.
>> >
>> > It's still something I need to look at... I've been offline last week,
>> > and sort-of offline the previous week, so I'm catching up.
>> >
>>
>> Hi Russell,
>>
>> Any thoughts yet?
>
> None, and the patch is well buried now that it'll take me a while to
> find... back in mid-October?  Yea, I'll have to drop everything and
> go digging through my mailboxes to find it... and I'm just catching
> up (again) after a week and a bit's time offline - yep, it's wonderful
> timing.  Sorry, no time to look at it right now, you're not the only
> one wanting my attention at the moment.
>

No worries. It is not exactly urgent, but it is a useful enhancement
nonetheless.

> Please try again in about a week's time - don't leave it a few months,
> and please include the patch.
>

OK
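For context, the mechanism under discussion: the kernel exports a CPU
"modalias" string in sysfs that encodes the feature bits, and udev matches
module aliases against it to autoload, e.g., crypto modules only on CPUs that
have the optional instructions. A rough shell illustration - the sysfs path
is the real location, but the fallback string is a made-up sample for
machines that do not expose the file:

```shell
# Read the CPU modalias string that udev matches module aliases against.
# The fallback after || is an illustrative sample, not a real value.
modalias=$(cat /sys/devices/system/cpu/modalias 2>/dev/null \
    || echo "cpu:type:arm:-feature:,0008,000a")
echo "$modalias"

# udev autoloads any module whose declared alias glob matches this
# string; that is how an optional-instruction crypto module gets loaded
# only where the instruction actually exists.
case "$modalias" in
    cpu:type:*) echo "looks like a CPU modalias" ;;
    *)          echo "unexpected format" ;;
esac
```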


[PATCH] crypto: arm64/aes - add scalar implementation

2017-01-04 Thread Ard Biesheuvel
This adds a scalar implementation of AES, based on the precomputed tables
that are exposed by the generic AES code. Since rotates are cheap on arm64,
this implementation only uses the 4 core tables (of 1 KB each), and avoids
the prerotated ones, reducing the D-cache footprint by 75%.

On Cortex-A57, this code manages 13.0 cycles per byte, which is ~34% faster
than the generic C code. (Note that this is still >13x slower than the code
that uses the optional ARMv8 Crypto Extensions, which manages <1 cycle per
byte.)

Signed-off-by: Ard Biesheuvel 
---

Raw performance data after the patch, which was generated on a 2 GHz
Cortex-A57 (AMD Seattle B1).

 arch/arm64/crypto/Kconfig   |   4 +
 arch/arm64/crypto/Makefile  |   3 +
 arch/arm64/crypto/aes-cipher-core.S | 126 
 arch/arm64/crypto/aes-cipher-glue.c |  69 +++
 4 files changed, 202 insertions(+)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 0bf0f531f539..0826f8e599a6 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -41,6 +41,10 @@ config CRYPTO_CRC32_ARM64_CE
depends on KERNEL_MODE_NEON && CRC32
select CRYPTO_HASH
 
+config CRYPTO_AES_ARM64
+   tristate "AES core cipher using scalar instructions"
+   select CRYPTO_AES
+
 config CRYPTO_AES_ARM64_CE
tristate "AES core cipher using ARMv8 Crypto Extensions"
depends on ARM64 && KERNEL_MODE_NEON
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 9d2826c5fccf..a893507629eb 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -44,6 +44,9 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
 obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
 chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
 
+obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
+aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
+
 AFLAGS_aes-ce.o:= -DINTERLEAVE=4
 AFLAGS_aes-neon.o  := -DINTERLEAVE=4
 
diff --git a/arch/arm64/crypto/aes-cipher-core.S 
b/arch/arm64/crypto/aes-cipher-core.S
new file mode 100644
index ..22d1bc46feba
--- /dev/null
+++ b/arch/arm64/crypto/aes-cipher-core.S
@@ -0,0 +1,126 @@
+/*
+ * Scalar AES core transform
+ *
+ * Copyright (C) 2017 Linaro Ltd 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+
+   .text
+   .align  5
+
+   rk  .reqx0
+   out .reqx1
+   in  .reqx2
+   rounds  .reqx3
+   tt  .reqx4
+   lt  .reqx2
+
+   .macro  __hround, out0, out1, in0, in1, in2, in3, t0, t1, enc
+   ldp \out0, \out1, [rk], #8
+
+   ubfxw13, \in0, #0, #8
+   ubfxw14, \in1, #8, #8
+   ldr w13, [tt, w13, uxtw #2]
+   ldr w14, [tt, w14, uxtw #2]
+
+   ubfxw15, \in2, #16, #8
+   ubfxw16, \in3, #24, #8
+   ldr w15, [tt, w15, uxtw #2]
+   ldr w16, [tt, w16, uxtw #2]
+
+   .if \enc
+   ubfxw17, \in1, #0, #8
+   ubfxw18, \in2, #8, #8
+   .else
+   ubfxw17, \in3, #0, #8
+   ubfxw18, \in0, #8, #8
+   .endif
+   ldr w17, [tt, w17, uxtw #2]
+   ldr w18, [tt, w18, uxtw #2]
+
+   .if \enc
+   ubfx\t0, \in3, #16, #8
+   ubfx\t1, \in0, #24, #8
+   .else
+   ubfx\t0, \in1, #16, #8
+   ubfx\t1, \in2, #24, #8
+   .endif
+   ldr \t0, [tt, \t0, uxtw #2]
+   ldr \t1, [tt, \t1, uxtw #2]
+
+   eor \out0, \out0, w13
+   eor \out1, \out1, w17
+   eor \out0, \out0, w14, ror #24
+   eor \out1, \out1, w18, ror #24
+   eor \out0, \out0, w15, ror #16
+   eor \out1, \out1, \t0, ror #16
+   eor \out0, \out0, w16, ror #8
+   eor \out1, \out1, \t1, ror #8
+   .endm
+
+   .macro  fround, out0, out1, out2, out3, in0, in1, in2, in3
+   __hround\out0, \out1, \in0, \in1, \in2, \in3, \out2, \out3, 1
+   __hround\out2, \out3, \in2, \in3, \in0, \in1, \in1, \in2, 1
+   .endm
+
+   .macro  iround, out0, out1, out2, out3, in0, in1, in2, in3
+   __hround\out0, \out1, \in0, \in3, \in2, \in1, \out2, \out3, 0
+   __hround\out2, \out3, \in2, \in1, \in0, \in3, \in1, \in0, 0
+   .endm
+
+   .macro  do_crypt, round, ttab, ltab
+   ldp w5, w6, [in]
+   ldp w7, w8, [in, #8]
+  
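The 75% D-cache saving claimed in the commit message rests on the fact that
each "prerotated" T-table is just a word-rotation of the core table, so a
cheap `ror` per lookup (as in the `eor ..., ror #24/#16/#8` sequence above)
can stand in for 3 KB of extra tables. A small C illustration; the four
sample words are the first entries of the standard AES encryption T-table,
and the rotation amounts follow one common table convention:

```c
#include <assert.h>
#include <stdint.h>

static inline uint32_t ror32(uint32_t v, unsigned int n)
{
	return (v >> n) | (v << (32 - n));
}

/* First four entries of the standard AES Te0 table (each full table
 * is 256 entries = 1 KB); enough for the demonstration. */
static const uint32_t t0[4] = {
	0xc66363a5, 0xf87c7c84, 0xee777799, 0xf67b7b8d,
};

/* Column mix using four separate tables t1..t3 that are prerotated
 * copies of t0 (t1[i] = ror32(t0[i], 8), and so on). */
static uint32_t mix_prerotated(const uint32_t *t1, const uint32_t *t2,
			       const uint32_t *t3,
			       int b0, int b1, int b2, int b3)
{
	return t0[b0] ^ t1[b1] ^ t2[b2] ^ t3[b3];
}

/* Same result from the single core table plus rotates - the trick
 * that shrinks the lookup-table footprint from 4 KB to 1 KB. */
static uint32_t mix_rotated(int b0, int b1, int b2, int b3)
{
	return t0[b0] ^ ror32(t0[b1], 8) ^
	       ror32(t0[b2], 16) ^ ror32(t0[b3], 24);
}
```

On arm64 the rotate is free (it folds into the `eor` operand), so the
single-table form costs no extra instructions while touching a quarter of
the cache lines.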

Re: [PATCH] crypto: Replaced gcc specific attributes with macros from compiler.h

2017-01-04 Thread Gideon D'souza
Any update on this patch? Should I base it on another tree? This one was
based on Linus's tree right when he released 4.10-rc2.

Should I send it closer to the next merge window?

On Sat, Dec 31, 2016 at 9:26 PM,   wrote:
> From: Gideon Israel Dsouza 
>
> Continuing from this commit: 52f5684c8e1e
> ("kernel: use macros from compiler.h instead of __attribute__((...))")
>
> I submitted 4 total patches. They are part of task I've taken up to
> increase compiler portability in the kernel. I've cleaned up the
> subsystems under /kernel /mm /block and /security, this patch targets
> /crypto.
>
> There is <linux/compiler.h>, which provides macros for various gcc-specific
> constructs, e.g. __weak for __attribute__((weak)). I've cleaned up all
> instances of gcc-specific attributes with the right macros for the crypto
> subsystem.
>
> I had to make one additional change in compiler-gcc.h for the case when
> one wants to use __attribute__((aligned)) without specifying an alignment
> factor. Per the gcc docs, this results in the largest alignment for
> that data type on the target machine, so I've named the macro
> __aligned_largest. Please advise if another name is more appropriate.
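A compilable sketch of the conversion described above. The macro definitions
here mimic the kernel's compiler-gcc.h style but are written out locally for
illustration, not copied from it:

```c
#include <assert.h>
#include <stdalign.h>

/* Local stand-ins for the <linux/compiler.h>-style macros. */
#define __maybe_unused		__attribute__((unused))
#define __aligned_largest	__attribute__((aligned))

/* Before: raw gcc attribute spelled out at the use site. */
static int legacy_helper(void) __attribute__((unused));
static int legacy_helper(void) { return 1; }

/* After: the portable macro spelling the patch converts to. */
static int modern_helper(void) __maybe_unused;
static int modern_helper(void) { return 2; }

/* aligned with no factor: gcc picks the largest useful alignment
 * for the type on the target machine - hence "__aligned_largest". */
struct blob {
	char data[13];
} __aligned_largest;
```

Both spellings are equivalent to gcc; the macro form is what keeps the tree
greppable and portable to other compilers that define the macros differently.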
>
> Signed-off-by: Gideon Israel Dsouza 
> ---
>  crypto/ablkcipher.c  | 5 +++--
>  crypto/acompress.c   | 3 ++-
>  crypto/aead.c| 3 ++-
>  crypto/ahash.c   | 3 ++-
>  crypto/akcipher.c| 3 ++-
>  crypto/blkcipher.c   | 7 ---
>  crypto/cts.c | 5 +++--
>  crypto/kpp.c | 3 ++-
>  crypto/pcbc.c| 3 ++-
>  crypto/rng.c | 3 ++-
>  crypto/scompress.c   | 3 ++-
>  crypto/shash.c   | 9 +
>  crypto/skcipher.c| 3 ++-
>  include/linux/compiler-gcc.h | 1 +
>  14 files changed, 34 insertions(+), 20 deletions(-)
>
> diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
> index d676fc5..d880a48 100644
> --- a/crypto/ablkcipher.c
> +++ b/crypto/ablkcipher.c
> @@ -19,6 +19,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>
>  #include 
> @@ -394,7 +395,7 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, 
> struct crypto_alg *alg)
>  #endif
>
>  static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg 
> *alg)
> -   __attribute__ ((unused));
> +   __maybe_unused;
>  static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg 
> *alg)
>  {
> struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
> @@ -468,7 +469,7 @@ static int crypto_givcipher_report(struct sk_buff *skb, 
> struct crypto_alg *alg)
>  #endif
>
>  static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
> -   __attribute__ ((unused));
> +   __maybe_unused;
>  static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
>  {
> struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
> diff --git a/crypto/acompress.c b/crypto/acompress.c
> index 887783d..47d1162 100644
> --- a/crypto/acompress.c
> +++ b/crypto/acompress.c
> @@ -20,6 +20,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -50,7 +51,7 @@ static int crypto_acomp_report(struct sk_buff *skb, struct 
> crypto_alg *alg)
>  #endif
>
>  static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
> -   __attribute__ ((unused));
> +   __maybe_unused;
>
>  static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
>  {
> diff --git a/crypto/aead.c b/crypto/aead.c
> index 3f5c5ff..f794b30 100644
> --- a/crypto/aead.c
> +++ b/crypto/aead.c
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>
>  #include "internal.h"
> @@ -132,7 +133,7 @@ static int crypto_aead_report(struct sk_buff *skb, struct 
> crypto_alg *alg)
>  #endif
>
>  static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
> -   __attribute__ ((unused));
> +   __maybe_unused;
>  static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
>  {
> struct aead_alg *aead = container_of(alg, struct aead_alg, base);
> diff --git a/crypto/ahash.c b/crypto/ahash.c
> index 2ce8bcb..e58c497 100644
> --- a/crypto/ahash.c
> +++ b/crypto/ahash.c
> @@ -23,6 +23,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>
>  #include "internal.h"
> @@ -493,7 +494,7 @@ static int crypto_ahash_report(struct sk_buff *skb, 
> struct crypto_alg *alg)
>  #endif
>
>  static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
> -   __attribute__ ((unused));
> +   __maybe_unused;
>  static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
>  {
> seq_printf(m, "type : ahash\n");
> diff --git a/crypto/akcipher.c b/crypto/akcipher.c
> index def301e..cfbdb06 100644
> --- a/crypto/akcipher.c
> +++ b/crypto/akcipher.c
> @@ -17,6 +17,7 @@
>