Re: [PATCH] crypto: AF_ALG - limit mask and type

2017-12-12 Thread Stephan Mueller
Am Dienstag, 12. Dezember 2017, 09:57:37 CET schrieb Eric Biggers:

Hi Eric,

> Hi Stephan,
> 
> On Tue, Dec 12, 2017 at 07:09:08AM +0100, Stephan Müller wrote:
> > Hi Herbert,
> > 
> > you see the reported problem by simply using
> > 
> > sa.salg_mask = 0x;
> > 
> > Note, I am not fully sure about whether CRYPTO_AF_ALG_ALLOWED_MASK and
> > CRYPTO_AF_ALG_ALLOWED_TYPE have the correct value. But I think that all
> > that user space should reach is potentially the ASYNC flag and the
> > cipher types flags.
> > 
> > ---8<---
> > 
> > The user space interface allows specifying the type and the mask field
> > used to allocate the cipher. Only a subset of the type and mask is
> > considered relevant to be set by user space if needed at all.
> > 
> > This fixes a bug where user space is able to cause one cipher to be
> > registered multiple times potentially exhausting kernel memory.
> > 
> > Reported-by: syzbot 
> > Cc: 
> > Signed-off-by: Stephan Mueller 
> 
> The syzkaller reproducer triggered a crash in crypto_remove_spawns().  Is it
> possible the bug is still there somewhere, while this patch just makes it
> inaccessible through AF_ALG?

I think the issue is that the syzkaller generates a vast amount of registered 
ciphers. At one point in time, I would think that some implied limit is 
overflown. But I cannot say for sure.

Yet, it is definitely a bug to have more than one instance of the same cipher 
implementation registered.

> 
> Anyway, we definitely should expose as few algorithm flags to AF_ALG as
> possible.  There are just way too many things that can go wrong with
> exposing arbitrary flags.

Absolutely, I would even say that we should not expose any mask/type at all. 
I.e. the patch I offered here should be changed to set the mask/type to zero 
in all cases.
> 
> However, why do the check in every af_alg_type.bind() method instead of just
> once in alg_bind()?

You are quite right, that is the right place to add this code as it contains 
already some verification there.
> 
> If it can be done without breaking users, it also would be nice if we would
> actually validate the flags and return -EINVAL if unknown flags are
> specified. Otherwise users cannot test for whether specific flags are
> supported.

If we (and we need to hear Herbert) conclude that these values should not be 
exposed in the first place, I think we should not return any error but simply 
set it to zero.

If Herbert concludes that some flags are necessary, we should build a white-
list and return an error for any flag that is not in the white list.
> 
> Also, note that even after this fix there are still ways to register an
> arbitrarily large number of algorithms.  There are two classes of problems.
> 
> First, it can happen that a template gets instantiated for a request but the
> resulting algorithm does not exactly match the original request, so making
> the same request again will instantiate the template again.  This could
> happen by specifically requesting an untested algorithm (type=0,
> mask=CRYPTO_ALG_TESTED), which your patch fixes. However this can also
> happen in cases where neither the final ->cra_name nor the final
> ->cra_driver_name matches what was requested. For example asking for
> "cryptd(sha1)" results in .cra_name = "sha1" and .cra_driver_name =
> "cryptd(sha1-avx2)", or asking for "xts(ecb(aes))" results in .cra_name =
> "xts(aes)" and .cra_driver_name = "xts(ecb-aes-aesni)".
> 
> Probably the crypto API needs to be taught how to find the instantiated
> templates correctly.

Maybe a name mangling should be removed. A template/cipher has only two names, 
period. Either you use exactly these names or you will not find a cipher.
> 
> Second, you can just keep choosing different combinations of algorithms when
> instantiating templates, taking advantage of the fact that templates can be
> nested and some take multiple parameters, so the number of possible
> combinations grows exponentially.  I don't know how to easily solve this. 
> Perhaps crypto_free_skcipher(), crypto_free_ahash(), etc. should unregister
> the algorithm if it was created from a template and nothing else is using
> it; then the number of algorithms someone could instantiate via AF_ALG at a
> given time would be limited by their number of file descriptors.

There could be a large set of permutations of ciphers, I agree. However, do 
you think that in case all of them are registered, we have an issue? The goal 
is that if one template/cipher combo is registered once, any subsequent 
allocation of that combo should reuse the registered instance.

PS: The cipher allocation function has another long-standing bug which could 
be viewed as a DoS via AF_ALG: Assume you do not yet have gcm(aes) allocated. 
Now, AF_ALG allocates gcm_base(ctr(aes), ghash), the registered cipher 
instance will have *both*, the name and the driver name to be set to 
gcm_base(ctr(aes), 

Re: [PATCH] crypto: AF_ALG - limit mask and type

2017-12-12 Thread Eric Biggers
Hi Stephan,

On Tue, Dec 12, 2017 at 07:09:08AM +0100, Stephan Müller wrote:
> Hi Herbert,
> 
> you see the reported problem by simply using
> 
> sa.salg_mask = 0x;
> 
> Note, I am not fully sure about whether CRYPTO_AF_ALG_ALLOWED_MASK and
> CRYPTO_AF_ALG_ALLOWED_TYPE have the correct value. But I think that all
> that user space should reach is potentially the ASYNC flag and the
> cipher types flags.
> 
> ---8<---
> 
> The user space interface allows specifying the type and the mask field
> used to allocate the cipher. Only a subset of the type and mask is
> considered relevant to be set by user space if needed at all.
> 
> This fixes a bug where user space is able to cause one cipher to be
> registered multiple times potentially exhausting kernel memory.
> 
> Reported-by: syzbot 
> Cc: 
> Signed-off-by: Stephan Mueller 

The syzkaller reproducer triggered a crash in crypto_remove_spawns().  Is it
possible the bug is still there somewhere, while this patch just makes it
inaccessible through AF_ALG?

Anyway, we definitely should expose as few algorithm flags to AF_ALG as
possible.  There are just way too many things that can go wrong with exposing
arbitrary flags.

However, why do the check in every af_alg_type.bind() method instead of just
once in alg_bind()?

If it can be done without breaking users, it also would be nice if we would
actually validate the flags and return -EINVAL if unknown flags are specified.
Otherwise users cannot test for whether specific flags are supported.

Also, note that even after this fix there are still ways to register an
arbitrarily large number of algorithms.  There are two classes of problems.

First, it can happen that a template gets instantiated for a request but the
resulting algorithm does not exactly match the original request, so making the
same request again will instantiate the template again.  This could happen by
specifically requesting an untested algorithm (type=0, mask=CRYPTO_ALG_TESTED),
which your patch fixes.  However this can also happen in cases where neither the
final ->cra_name nor the final ->cra_driver_name matches what was requested.
For example asking for "cryptd(sha1)" results in .cra_name = "sha1" and
.cra_driver_name = "cryptd(sha1-avx2)", or asking for "xts(ecb(aes))" results in
.cra_name = "xts(aes)" and .cra_driver_name = "xts(ecb-aes-aesni)".

Probably the crypto API needs to be taught how to find the instantiated
templates correctly.

Second, you can just keep choosing different combinations of algorithms when
instantiating templates, taking advantage of the fact that templates can be
nested and some take multiple parameters, so the number of possible combinations
grows exponentially.  I don't know how to easily solve this.  Perhaps
crypto_free_skcipher(), crypto_free_ahash(), etc. should unregister the
algorithm if it was created from a template and nothing else is using it; then
the number of algorithms someone could instantiate via AF_ALG at a given time
would be limited by their number of file descriptors.

Eric


[PATCH] crypto: AF_ALG - limit mask and type

2017-12-11 Thread Stephan Müller
Hi Herbert,

you see the reported problem by simply using

sa.salg_mask = 0x;

Note, I am not fully sure about whether CRYPTO_AF_ALG_ALLOWED_MASK and
CRYPTO_AF_ALG_ALLOWED_TYPE have the correct value. But I think that all
that user space should reach is potentially the ASYNC flag and the
cipher types flags.

---8<---

The user space interface allows specifying the type and the mask field
used to allocate the cipher. Only a subset of the type and mask is
considered relevant to be set by user space if needed at all.

This fixes a bug where user space is able to cause one cipher to be
registered multiple times potentially exhausting kernel memory.

Reported-by: syzbot 
Cc: 
Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 7 +++
 crypto/algif_aead.c | 2 ++
 crypto/algif_hash.c | 2 ++
 crypto/algif_rng.c  | 2 ++
 crypto/algif_skcipher.c | 2 ++
 include/crypto/if_alg.h | 1 +
 include/linux/crypto.h  | 3 +++
 7 files changed, 19 insertions(+)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 1e5353f62067..16cfbde64048 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1172,6 +1172,13 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, 
int flags,
 }
 EXPORT_SYMBOL_GPL(af_alg_get_rsgl);
 
+void af_alg_restrict_type_mask(u32 *type, u32 *mask)
+{
+   *type &= CRYPTO_AF_ALG_ALLOWED_TYPE;
+   *mask &= CRYPTO_AF_ALG_ALLOWED_MASK;
+}
+EXPORT_SYMBOL_GPL(af_alg_restrict_type_mask);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 9d73be28cf01..5d21db83bdfd 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -463,6 +463,8 @@ static void *aead_bind(const char *name, u32 type, u32 mask)
if (!tfm)
return ERR_PTR(-ENOMEM);
 
+   af_alg_restrict_type_mask(, );
+
aead = crypto_alloc_aead(name, type, mask);
if (IS_ERR(aead)) {
kfree(tfm);
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 76d2e716c792..f7660e80cd05 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -419,6 +419,8 @@ static void *hash_bind(const char *name, u32 type, u32 mask)
if (!tfm)
return ERR_PTR(-ENOMEM);
 
+   af_alg_restrict_type_mask(, );
+
hash = crypto_alloc_ahash(name, type, mask);
if (IS_ERR(hash)) {
kfree(tfm);
diff --git a/crypto/algif_rng.c b/crypto/algif_rng.c
index 150c2b6480ed..33a7064996f2 100644
--- a/crypto/algif_rng.c
+++ b/crypto/algif_rng.c
@@ -116,6 +116,8 @@ static struct proto_ops algif_rng_ops = {
 
 static void *rng_bind(const char *name, u32 type, u32 mask)
 {
+   af_alg_restrict_type_mask(, );
+
return crypto_alloc_rng(name, type, mask);
 }
 
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9954b078f0b9..0a4987aa9d5c 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -314,6 +314,8 @@ static void *skcipher_bind(const char *name, u32 type, u32 
mask)
if (!tfm)
return ERR_PTR(-ENOMEM);
 
+   af_alg_restrict_type_mask(, );
+
skcipher = crypto_alloc_skcipher(name, type, mask);
if (IS_ERR(skcipher)) {
kfree(tfm);
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 6abf0a3604dc..8ade69d46025 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -250,5 +250,6 @@ struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk,
 int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
struct af_alg_async_req *areq, size_t maxsize,
size_t *outlen);
+void af_alg_restrict_type_mask(u32 *type, u32 *mask);
 
 #endif /* _CRYPTO_IF_ALG_H */
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 78508ca4b108..0d7694673fff 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -70,6 +70,9 @@
 #define CRYPTO_ALG_DYING   0x0040
 #define CRYPTO_ALG_ASYNC   0x0080
 
+#define CRYPTO_AF_ALG_ALLOWED_MASK 0x00ff
+#define CRYPTO_AF_ALG_ALLOWED_TYPE 0x00ff
+
 /*
  * Set this bit if and only if the algorithm requires another algorithm of
  * the same type to handle corner cases.
-- 
2.14.3