Re: [PATCH] crypto: x86/twofish-3way - Fix %rbp usage

2017-12-18 Thread Ingo Molnar

* Eric Biggers  wrote:

> There may be a small overhead caused by replacing 'xchg REG, REG' with
> the needed sequence 'mov MEM, REG; mov REG, MEM; mov REG, REG' once per
> round.  But, counterintuitively, when I tested "ctr-twofish-3way" on a
> Haswell processor, the new version was actually about 2% faster.
> (Perhaps 'xchg' is not as well optimized as plain moves.)

XCHG has implicit LOCK semantics on all x86 CPUs, so that's not a surprising 
result I think.

Thanks,

Ingo


[PATCH v2] crypto: AF_ALG - limit mask and type

2017-12-18 Thread Stephan Müller
The user space interface allows specifying the type and mask fields used
to allocate the cipher. As user space can precisely select the desired
cipher by using either its name or its driver name, additional selection
options for the cipher are neither necessary nor relevant for user space.

This fixes a bug where user space is able to cause one cipher to be
registered multiple times, potentially exhausting kernel memory.

Reported-by: syzbot 
Cc: 
Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 1e5353f62067..4f4cfc5a7ef3 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -150,7 +150,6 @@ EXPORT_SYMBOL_GPL(af_alg_release_parent);
 
 static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 {
-   const u32 forbidden = CRYPTO_ALG_INTERNAL;
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct sockaddr_alg *sa = (void *)uaddr;
@@ -176,9 +175,12 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
if (IS_ERR(type))
return PTR_ERR(type);
 
-   private = type->bind(sa->salg_name,
-sa->salg_feat & ~forbidden,
-sa->salg_mask & ~forbidden);
+   /*
+    * The use of the salg_feat and salg_mask fields is forbidden as they
+    * expose too much of the low-level handling, which is not suitable
+    * for hostile code.
+    */
+   private = type->bind(sa->salg_name, 0, 0);
if (IS_ERR(private)) {
module_put(type->owner);
return PTR_ERR(private);
-- 
2.14.3
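
For context, the salg_feat and salg_mask values zeroed by the patch above are
the fields user space fills in struct sockaddr_alg when binding an AF_ALG
socket. A minimal sketch of such a bind follows; the "skcipher"/"cbc(aes)"
names are placeholders chosen for illustration, not taken from this thread:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	/* The cipher is selected precisely by type and name (or driver name);
	 * after the patch above, any salg_feat/salg_mask supplied here is
	 * ignored by the kernel, which binds with 0, 0. */
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",
	};
	int tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);

	if (tfmfd < 0 || bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("AF_ALG bind");
		return 1;
	}
	close(tfmfd);
	return 0;
}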




Re: [PATCH RESEND] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Li Kun



On 2017/12/19 3:12, Eric Biggers wrote:

On Mon, Dec 18, 2017 at 01:40:36PM +, Li Kun wrote:

alg_setkey does not check whether keylen is zero, so the key may be
ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
is passed down to the lower functions as the key.

If the lower functions only check the key against NULL, ZERO_SIZE_PTR
passes that check and causes a NULL pointer dereference, so it is better
to intercept the invalid parameters in the upper functions.

This patch is also suitable to fix CVE-2017-15116 for stable trees.

Signed-off-by: Li Kun 
Cc: sta...@vger.kernel.org
---
  crypto/af_alg.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 337cf38..10f22f3 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -210,6 +210,8 @@ static int alg_setkey(struct sock *sk, char __user *ukey,
u8 *key;
int err;
  
+	if (!keylen)

+   return -EINVAL;
key = sock_kmalloc(sk, keylen, GFP_KERNEL);
if (!key)
return -ENOMEM;
--

If the length is 0 then why is the underlying ->setkey() method dereferencing
the pointer?  Which algorithm does this happen for?  Checking for NULL makes no
sense; the length needs to be checked instead.
drbg_kcapi_reset has wrongly treated (length == 0) && (data != NULL) as a
valid combination of parameters, which has been fixed by Herbert in
commit 8fded59 ("crypto: drbg - Convert to new rng interface").
But I think, to avoid such things happening again, maybe we could
intercept the length-0 error at the entry of setkey?


static int drbg_kcapi_reset(struct crypto_rng *tfm, u8 *seed, unsigned int slen)
{
	...
	if (0 < slen) {
		drbg_string_fill(&seed_string, seed, slen);
		return drbg_instantiate(drbg, &seed_string, coreref, pr);
	} else {
		struct drbg_gen *data = (struct drbg_gen *)seed;
		/* allow invocation of API call with NULL, 0 */
		if (!data)
			return drbg_instantiate(drbg, NULL, coreref, pr);
		drbg_set_testdata(drbg, data->test_data);
		/* linked list variable is now local to allow modification */
		drbg_string_fill(&seed_string, data->addtl->buf,
				 data->addtl->len);
		return drbg_instantiate(drbg, &seed_string, coreref, pr);
	}
}


Eric


--
Best Regards
Li Kun
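
For context on why the "if (!data)" check quoted above does not catch a
zero-length key: a zero-byte kernel allocation does not return NULL. The
sketch below illustrates this with the real ZERO_SIZE_PTR/ZERO_OR_NULL_PTR
helpers from <linux/slab.h>; the function demo_setkey_path() is made up for
illustration and is not the actual alg_setkey() -> type->setkey() code path:

#include <linux/slab.h>
#include <linux/types.h>
#include <linux/errno.h>

static int demo_setkey_path(unsigned int keylen)
{
	/* keylen == 0: kmalloc() returns ZERO_SIZE_PTR, a small non-NULL
	 * sentinel defined in <linux/slab.h>, rather than NULL. */
	u8 *key = kmalloc(keylen, GFP_KERNEL);

	if (!key)			/* ZERO_SIZE_PTR is non-NULL, so this passes */
		return -ENOMEM;

	if (ZERO_OR_NULL_PTR(key)) {	/* this helper does catch it */
		kfree(key);		/* kfree(ZERO_SIZE_PTR) is safe */
		return -EINVAL;
	}

	/* ... otherwise the pointer would be handed to ->setkey(), and any
	 * dereference of it (e.g. the drbg_gen cast above) faults. */
	kfree(key);
	return 0;
}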



[PATCH] crypto: x86/twofish-3way - Fix %rbp usage

2017-12-18 Thread Eric Biggers
From: Eric Biggers 

Using %rbp as a temporary register breaks frame pointer convention and
breaks stack traces when unwinding from an interrupt in the crypto code.

In twofish-3way, we can't simply replace %rbp with another register
because there are none available.  Instead, we use the stack to hold the
values that %rbp, %r11, and %r12 were holding previously.  Each of these
values represents the half of the output from the previous Feistel round
that is being passed on unchanged to the following round.  They are only
used once per round, when they are exchanged with %rax, %rbx, and %rcx.

As a result, we free up 3 registers (one per block) and can reassign
them so that %rbp is not used, and additionally %r14 and %r15 are not
used so they do not need to be saved/restored.

There may be a small overhead caused by replacing 'xchg REG, REG' with
the needed sequence 'mov MEM, REG; mov REG, MEM; mov REG, REG' once per
round.  But, counterintuitively, when I tested "ctr-twofish-3way" on a
Haswell processor, the new version was actually about 2% faster.
(Perhaps 'xchg' is not as well optimized as plain moves.)

Reported-by: syzbot 
Signed-off-by: Eric Biggers 
---
 arch/x86/crypto/twofish-x86_64-asm_64-3way.S | 112 ++-
 1 file changed, 60 insertions(+), 52 deletions(-)

diff --git a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
index 1c3b7ceb36d2..e7273a606a07 100644
--- a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+++ b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
@@ -55,29 +55,31 @@
 #define RAB1bl %bl
 #define RAB2bl %cl
 
+#define CD0 0x0(%rsp)
+#define CD1 0x8(%rsp)
+#define CD2 0x10(%rsp)
+
+# used only before/after all rounds
 #define RCD0 %r8
 #define RCD1 %r9
 #define RCD2 %r10
 
-#define RCD0d %r8d
-#define RCD1d %r9d
-#define RCD2d %r10d
-
-#define RX0 %rbp
-#define RX1 %r11
-#define RX2 %r12
+# used only during rounds
+#define RX0 %r8
+#define RX1 %r9
+#define RX2 %r10
 
-#define RX0d %ebp
-#define RX1d %r11d
-#define RX2d %r12d
+#define RX0d %r8d
+#define RX1d %r9d
+#define RX2d %r10d
 
-#define RY0 %r13
-#define RY1 %r14
-#define RY2 %r15
+#define RY0 %r11
+#define RY1 %r12
+#define RY2 %r13
 
-#define RY0d %r13d
-#define RY1d %r14d
-#define RY2d %r15d
+#define RY0d %r11d
+#define RY1d %r12d
+#define RY2d %r13d
 
 #define RT0 %rdx
 #define RT1 %rsi
@@ -85,6 +87,8 @@
 #define RT0d %edx
 #define RT1d %esi
 
+#define RT1bl %sil
+
 #define do16bit_ror(rot, op1, op2, T0, T1, tmp1, tmp2, ab, dst) \
movzbl ab ## bl,tmp2 ## d; \
movzbl ab ## bh,tmp1 ## d; \
@@ -92,6 +96,11 @@
op1##l T0(CTX, tmp2, 4),dst ## d; \
op2##l T1(CTX, tmp1, 4),dst ## d;
 
+#define swap_ab_with_cd(ab, cd, tmp)   \
+   movq cd, tmp;   \
+   movq ab, cd;\
+   movq tmp, ab;
+
 /*
  * Combined G1 & G2 function. Reordered with help of rotates to have moves
  * at begining.
@@ -110,15 +119,15 @@
/* G1,2 && G2,2 */ \
do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 0, x ## 0); \
do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 0, y ## 0); \
-   xchgq cd ## 0, ab ## 0; \
+   swap_ab_with_cd(ab ## 0, cd ## 0, RT0); \
\
do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 1, x ## 1); \
do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 1, y ## 1); \
-   xchgq cd ## 1, ab ## 1; \
+   swap_ab_with_cd(ab ## 1, cd ## 1, RT0); \
\
do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 2, x ## 2); \
do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 2, y ## 2); \
-   xchgq cd ## 2, ab ## 2;
+   swap_ab_with_cd(ab ## 2, cd ## 2, RT0);
 
 #define enc_round_end(ab, x, y, n) \
addl y ## d,x ## d; \
@@ -168,6 +177,16 @@
decrypt_round3(ba, dc, (n*2)+1); \
decrypt_round3(ba, dc, (n*2));
 
+#define push_cd()  \
+   pushq RCD2; \
+   pushq RCD1; \
+   pushq RCD0;
+
+#define pop_cd()   \
+   popq RCD0;  \
+   popq RCD1;  \
+   popq RCD2;
+
 #define inpack3(in, n, xy, m) \
movq 4*(n)(in), xy ## 0; \
xorq w+4*m(CTX),xy ## 0; \
@@ -223,11 +242,8 @@ ENTRY(__twofish_enc_blk_3way)
 *  %rdx: src, RIO
 *  %rcx: bool, if true: xor output
 */
-   pushq %r15;
-   pushq %r14;
pushq %r13;
pushq %r12;
-   pushq %rbp;
pushq %rbx;
 
pushq %rcx; /* bool xor */
@@ -235,40 +251,36 @@ ENTRY(__twofish_enc_blk_3way)
 
inpack_enc3();
 
-   encrypt_cycle3(RAB, RCD, 0);
-   encrypt_cycle3(RAB, RCD, 1);
-   encrypt_cycle3(RAB, RCD, 2);
-   encrypt_cycle3(RAB, RCD, 3);
-   encrypt_cycle3(RAB, RCD, 4);
-   encrypt_cycle3(RAB, RCD, 5);
-   encrypt_cycle3(RAB, RCD, 6);
- 

Re: [PATCH RESEND] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Eric Biggers
On Mon, Dec 18, 2017 at 01:40:36PM +, Li Kun wrote:
> alg_setkey does not check whether keylen is zero, so the key may be
> ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
> is passed down to the lower functions as the key.
> 
> If the lower functions only check the key against NULL, ZERO_SIZE_PTR
> passes that check and causes a NULL pointer dereference, so it is better
> to intercept the invalid parameters in the upper functions.
> 
> This patch is also suitable to fix CVE-2017-15116 for stable trees.
> 
> Signed-off-by: Li Kun 
> Cc: sta...@vger.kernel.org
> ---
>  crypto/af_alg.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/crypto/af_alg.c b/crypto/af_alg.c
> index 337cf38..10f22f3 100644
> --- a/crypto/af_alg.c
> +++ b/crypto/af_alg.c
> @@ -210,6 +210,8 @@ static int alg_setkey(struct sock *sk, char __user *ukey,
>   u8 *key;
>   int err;
>  
> + if (!keylen)
> + return -EINVAL;
>   key = sock_kmalloc(sk, keylen, GFP_KERNEL);
>   if (!key)
>   return -ENOMEM;
> -- 

If the length is 0 then why is the underlying ->setkey() method dereferencing
the pointer?  Which algorithm does this happen for?  Checking for NULL makes no
sense; the length needs to be checked instead.

Eric


Re: RFC: Crypto: Race in updating available writable socket memory.

2017-12-18 Thread Stephan Mueller
On Monday, 18 December 2017 at 15:26:00 CET, Jonathan Cameron wrote:

Hi Jonathan,

> Hi All,
> 
> I came across this one but am not sure how heavyweight a fix we want to
> apply.
> 
> I was getting failures on af_alg_readable in af_alg_get_rsgl, which made
> little sense as I had wmem set to a very large value.  This was caused by a
> race between ctx->rcvused += err in af_alg_get_rsgl and
> ctx->rcvused -= rsgl->sg_num_bytes in af_alg_free_areq_sgls, which was being
> called from an interrupt thread.  Once in a large number of cycles the
> counter would wrap downwards.
> 
> Testing was using the AIO interface and libkcapi on the userspace side and a
> (hopefully) soon to be posted new Hisilicon 161x crypto accelerator driver.
> 
> So it seems that this field needs some protection.
> 1. Could make it an atomic_t (works fine, but feels rather inelegant!)
> 2. Leverage the complexity seen in sock.c if it is needed...
> 
> So the question is whether making it an atomic_t is the right way to go.

Considering that there is only one location for an increment, one location for
a decrement and one location that uses the counter (where it is interpreted as
an int), I would feel that using an atomic_t would fully suffice here.

Would you care to send a patch?

Ciao
Stephan
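
A minimal sketch of what such an atomic_t conversion could look like. The
field and function names (rcvused, af_alg_get_rsgl, af_alg_free_areq_sgls,
af_alg_rcvbuf) follow crypto/af_alg.c as discussed in this thread, but the
trimmed struct and the helpers below are illustrative only, not the patch
that was eventually merged:

#include <linux/atomic.h>
#include <linux/types.h>
#include <net/sock.h>

struct af_alg_ctx_sketch {
	/* ... other af_alg_ctx fields ... */
	atomic_t rcvused;		/* was: unsigned int rcvused; */
};

/* af_alg_get_rsgl() side: account for newly attached receive SG memory */
static inline void rcvused_add(struct af_alg_ctx_sketch *ctx, int bytes)
{
	atomic_add(bytes, &ctx->rcvused);
}

/* af_alg_free_areq_sgls() side: may run from an interrupt thread, so the
 * read-modify-write must not race with the add above */
static inline void rcvused_sub(struct af_alg_ctx_sketch *ctx,
			       unsigned int sg_num_bytes)
{
	atomic_sub(sg_num_bytes, &ctx->rcvused);
}

/* Simplified readable check in the style of af_alg_rcvbuf(): a plain
 * atomic_read() is enough on the reader side, no locking needed. */
static inline bool rcvused_readable(struct sock *sk,
				    struct af_alg_ctx_sketch *ctx)
{
	return atomic_read(&ctx->rcvused) < sk->sk_rcvbuf;
}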


RFC: Crypto: Race in updating available writable socket memory.

2017-12-18 Thread Jonathan Cameron
Hi All,

I came across this one but am not sure how heavyweight a fix we want to apply.

I was getting failures on af_alg_readable in af_alg_get_rsgl, which made little
sense as I had wmem set to a very large value.  This was caused by a race between
ctx->rcvused += err in af_alg_get_rsgl and
ctx->rcvused -= rsgl->sg_num_bytes in af_alg_free_areq_sgls, which was being
called from an interrupt thread.  Once in a large number of cycles the counter
would wrap downwards.

Testing was using the AIO interface and libkcapi on the userspace side and a
(hopefully) soon to be posted new Hisilicon 161x crypto accelerator driver.

So it seems that this field needs some protection.
1. Could make it an atomic_t (works fine, but feels rather inelegant!)
2. Leverage the complexity seen in sock.c if it is needed...

So the question is whether making it an atomic_t is the right way to go.

Thanks,

Jonathan  


[PATCH RESEND] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Li Kun
alg_setkey does not check whether keylen is zero, so the key may be
ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
is passed down to the lower functions as the key.

If the lower functions only check the key against NULL, ZERO_SIZE_PTR
passes that check and causes a NULL pointer dereference, so it is better
to intercept the invalid parameters in the upper functions.

This patch is also suitable to fix CVE-2017-15116 for stable trees.

Signed-off-by: Li Kun 
Cc: sta...@vger.kernel.org
---
 crypto/af_alg.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 337cf38..10f22f3 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -210,6 +210,8 @@ static int alg_setkey(struct sock *sk, char __user *ukey,
u8 *key;
int err;
 
+   if (!keylen)
+   return -EINVAL;
key = sock_kmalloc(sk, keylen, GFP_KERNEL);
if (!key)
return -ENOMEM;
-- 
1.8.3.4
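
For completeness, the path described above is reachable from user space
through the ALG_SET_KEY socket option on an AF_ALG socket. Below is a hedged
sketch of that call sequence; the "rng"/"drbg_nopr_ctr_aes128" names are
placeholders, and on kernels that already carry the drbg fix mentioned
earlier in the thread this simply returns an error rather than crashing:

#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "rng",
		.salg_name   = "drbg_nopr_ctr_aes128",
	};
	int fd = socket(AF_ALG, SOCK_SEQPACKET, 0);

	if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	/* Zero-length key: without the check added above, alg_setkey()
	 * allocates a zero-byte buffer and hands the resulting
	 * ZERO_SIZE_PTR down to the type's setkey handler. */
	setsockopt(fd, SOL_ALG, ALG_SET_KEY, "", 0);

	close(fd);
	return 0;
}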



Re: [PATCH] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Li Kun



On 2017/12/18 20:00, Greg KH wrote:

On Mon, Dec 18, 2017 at 11:09:23AM +, Li Kun wrote:

alg_setkey does not check whether keylen is zero, so the key may be
ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
is passed down to the lower functions as the key.

If the lower functions only check the key against NULL, ZERO_SIZE_PTR
passes that check and causes a NULL pointer dereference, so it is better
to intercept the invalid parameters in the upper functions.

This patch is also suitable to fix CVE-2017-15116 for stable trees.

Signed-off-by: Li Kun 
---
  crypto/af_alg.c | 2 ++
  1 file changed, 2 insertions(+)



This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
 https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

Sorry, I will resend this patch with "Cc: sta...@vger.kernel.org".





--
Best Regards
Li Kun



Re: [PATCH] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Greg KH
On Mon, Dec 18, 2017 at 11:09:23AM +, Li Kun wrote:
> alg_setkey does not check whether keylen is zero, so the key may be
> ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
> is passed down to the lower functions as the key.
> 
> If the lower functions only check the key against NULL, ZERO_SIZE_PTR
> passes that check and causes a NULL pointer dereference, so it is better
> to intercept the invalid parameters in the upper functions.
> 
> This patch is also suitable to fix CVE-2017-15116 for stable trees.
> 
> Signed-off-by: Li Kun 
> ---
>  crypto/af_alg.c | 2 ++
>  1 file changed, 2 insertions(+)



This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.




[PATCH] crypto: af_alg - add keylen checking to avoid NULL ptr passing down

2017-12-18 Thread Li Kun
alg_setkey does not check whether keylen is zero, so the key may be
ZERO_SIZE_PTR when keylen is 0, which passes copy_from_user's check and
is passed down to the lower functions as the key.

If the lower functions only check the key against NULL, ZERO_SIZE_PTR
passes that check and causes a NULL pointer dereference, so it is better
to intercept the invalid parameters in the upper functions.

This patch is also suitable to fix CVE-2017-15116 for stable trees.

Signed-off-by: Li Kun 
---
 crypto/af_alg.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 337cf38..10f22f3 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -210,6 +210,8 @@ static int alg_setkey(struct sock *sk, char __user *ukey,
u8 *key;
int err;
 
+   if (!keylen)
+   return -EINVAL;
key = sock_kmalloc(sk, keylen, GFP_KERNEL);
if (!key)
return -ENOMEM;
-- 
1.8.3.4