Re: [RFC PATCH 4.10 1/6] crypto/sha256: Refactor the API so it can be used without shash

2016-12-23 Thread Andy Lutomirski
On Fri, Dec 23, 2016 at 6:22 PM, Andy Lutomirski  wrote:
> There are some pieces of kernel code that want to compute SHA256
> directly without going through the crypto core.  Adjust the exported
> API to decouple it from the crypto core.
>
> I suspect this will very slightly speed up the SHA256 shash operations
> as well by reducing the amount of indirection involved.
>

I should also mention: there's a nice potential cleanup that's
possible on top of this.  Currently, most of the accelerated SHA256
implementations just swap out the block function.  Another approach to
enabling this would be to restructure sha256_update along the lines
of:

sha256_block_fn_t fn = arch_sha256_block_fn(len);
sha256_base_do_update(sctx, data, len, fn);

The idea is that arch code can decide whether to use an accelerated
block function based on context (x86, for example, can't always use
xmm regs) and on length (on x86, using the accelerated versions for
short inputs is very slow due to the state save/restore that happens),
and the core code can then just use whatever it returns.

This would allow a lot of the boilerplate that this patch was forced
to modify to be deleted outright.
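As a rough sketch of what such a hook could look like (purely
illustrative: the helper name comes from the snippet above, while the
two block-function names and the length threshold are made up, not part
of this series; irq_fpu_usable() stands in for x86's "can I use xmm regs
here" check):

	/* Hypothetical per-arch hook: pick a block function per call. */
	static sha256_block_fn_t arch_sha256_block_fn(unsigned int len)
	{
		/*
		 * For short inputs, or when SIMD registers aren't usable in
		 * this context, the save/restore overhead isn't worth it.
		 */
		if (len < 256 || !irq_fpu_usable())
			return sha256_block_generic;

		return sha256_block_simd;
	}

sha256_update() itself could then stay generic and just pass the
returned function to sha256_base_do_update(), letting the per-arch glue
drop its own update/finup wrappers.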

--Andy


[RFC PATCH 4.10 3/6] bpf: Use SHA256 instead of SHA1 for bpf digests

2016-12-23 Thread Andy Lutomirski
BPF digests are intended to be used to avoid reloading programs that
are already loaded.  For use cases (CRIU?) where untrusted programs
are involved, intentional hash collisions could cause the wrong BPF
program to execute.  Additionally, if BPF digests are ever used
in-kernel to skip verification, a hash collision could give privilege
escalation directly.

SHA1 is no longer considered adequately collision-resistant (see, for
example, all the major browsers dropping support for SHA1
certificates).  Use SHA256 instead.

I moved the digest field to keep all of the bpf program metadata in
the same cache line.

Cc: Daniel Borkmann 
Cc: Alexei Starovoitov 
Signed-off-by: Andy Lutomirski 
---
 include/linux/filter.h | 11 +++
 init/Kconfig   |  1 +
 kernel/bpf/core.c  | 41 +++--
 3 files changed, 11 insertions(+), 42 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 702314253797..23df2574e30c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -14,7 +14,8 @@
 #include 
 #include 
 #include 
-#include 
+
+#include 
 
 #include 
 
@@ -408,11 +409,11 @@ struct bpf_prog {
kmemcheck_bitfield_end(meta);
enum bpf_prog_type  type;   /* Type of BPF program */
u32 len;/* Number of filter blocks */
-   u32 digest[SHA_DIGEST_WORDS]; /* Program digest */
struct bpf_prog_aux *aux;   /* Auxiliary fields */
struct sock_fprog_kern  *orig_prog; /* Original BPF program */
unsigned int(*bpf_func)(const void *ctx,
const struct bpf_insn *insn);
+   u8  digest[SHA256_DIGEST_SIZE]; /* Program digest */
/* Instructions for interpreter */
union {
struct sock_filter  insns[0];
@@ -519,12 +520,6 @@ static inline u32 bpf_prog_insn_size(const struct bpf_prog *prog)
return prog->len * sizeof(struct bpf_insn);
 }
 
-static inline u32 bpf_prog_digest_scratch_size(const struct bpf_prog *prog)
-{
-   return round_up(bpf_prog_insn_size(prog) +
-   sizeof(__be64) + 1, SHA_MESSAGE_BYTES);
-}
-
 static inline unsigned int bpf_prog_size(unsigned int proglen)
 {
return max(sizeof(struct bpf_prog),
diff --git a/init/Kconfig b/init/Kconfig
index 223b734abccd..5a4e2d99cc38 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1634,6 +1634,7 @@ config BPF_SYSCALL
bool "Enable bpf() system call"
select ANON_INODES
select BPF
+   select CRYPTO_SHA256_LIB
default n
help
  Enable the bpf() system call that allows to manipulate eBPF
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 1eb4f1303756..911993863799 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -148,22 +148,18 @@ void __bpf_prog_free(struct bpf_prog *fp)
 
 int bpf_prog_calc_digest(struct bpf_prog *fp)
 {
-   const u32 bits_offset = SHA_MESSAGE_BYTES - sizeof(__be64);
-   u32 raw_size = bpf_prog_digest_scratch_size(fp);
-   u32 ws[SHA_WORKSPACE_WORDS];
-   u32 i, bsize, psize, blocks;
+   struct sha256_state sha;
+   u32 i, psize;
struct bpf_insn *dst;
bool was_ld_map;
-   u8 *raw, *todo;
-   __be32 *result;
-   __be64 *bits;
+   u8 *raw;
 
-   raw = vmalloc(raw_size);
+   psize = bpf_prog_insn_size(fp);
+   raw = vmalloc(psize);
if (!raw)
return -ENOMEM;
 
-   sha_init(fp->digest);
-   memset(ws, 0, sizeof(ws));
+   sha256_init(&sha);
 
/* We need to take out the map fd for the digest calculation
 * since they are unstable from user space side.
@@ -188,30 +184,7 @@ int bpf_prog_calc_digest(struct bpf_prog *fp)
}
}
 
-   psize = bpf_prog_insn_size(fp);
-   memset(&raw[psize], 0, raw_size - psize);
-   raw[psize++] = 0x80;
-
-   bsize  = round_up(psize, SHA_MESSAGE_BYTES);
-   blocks = bsize / SHA_MESSAGE_BYTES;
-   todo   = raw;
-   if (bsize - psize >= sizeof(__be64)) {
-   bits = (__be64 *)(todo + bsize - sizeof(__be64));
-   } else {
-   bits = (__be64 *)(todo + bsize + bits_offset);
-   blocks++;
-   }
-   *bits = cpu_to_be64((psize - 1) << 3);
-
-   while (blocks--) {
-   sha_transform(fp->digest, todo, ws);
-   todo += SHA_MESSAGE_BYTES;
-   }
-
-   result = (__force __be32 *)fp->digest;
-   for (i = 0; i < SHA_DIGEST_WORDS; i++)
-   result[i] = cpu_to_be32(fp->digest[i]);
-
+   sha256_finup(&sha, raw, psize, fp->digest);
vfree(raw);
return 0;
 }
-- 
2.9.3


[RFC PATCH 4.10 2/6] crypto/sha256: Make the sha256 library functions selectable

2016-12-23 Thread Andy Lutomirski
This will let other kernel code call into sha256_init(), etc. without
pulling in the core crypto code.
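For illustration, a caller would then only need something like the
following (a sketch, not part of this patch; the function and buffer
names are made up):

	#include <crypto/sha.h>

	static void example_sha256(const u8 *data, unsigned int len,
				   u8 digest[SHA256_DIGEST_SIZE])
	{
		struct sha256_state sctx;

		sha256_init(&sctx);
		sha256_update(&sctx, data, len);  /* can be called repeatedly */
		sha256_final(&sctx, digest);
	}

with only CRYPTO_SHA256_LIB selected and no dependency on CRYPTO_HASH.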

Signed-off-by: Andy Lutomirski 
---
 crypto/Kconfig  | 8 
 crypto/Makefile | 2 +-
 crypto/sha256_generic.c | 4 
 include/crypto/sha.h| 4 
 4 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 160f08e721cc..85a2b3440c2b 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -10,6 +10,13 @@ config XOR_BLOCKS
 source "crypto/async_tx/Kconfig"
 
 #
+# Cryptographic algorithms that are usable without the Crypto API.
+# None of these should have visible config options.
+#
+config CRYPTO_SHA256_LIB
+   bool
+
+#
 # Cryptographic API Configuration
 #
 menuconfig CRYPTO
@@ -763,6 +770,7 @@ config CRYPTO_SHA512_MB
 
 config CRYPTO_SHA256
tristate "SHA224 and SHA256 digest algorithm"
+   select CRYPTO_SHA256_LIB
select CRYPTO_HASH
help
  SHA256 secure hash standard (DFIPS 180-2).
diff --git a/crypto/Makefile b/crypto/Makefile
index b8f0e3eb0791..d147d4c911f5 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -71,7 +71,7 @@ obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
-obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+obj-$(CONFIG_CRYPTO_SHA256_LIB) += sha256_generic.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
 obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index f2747893402c..9df71ac66dc4 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -261,6 +261,8 @@ void sha256_final(struct sha256_state *sctx, u8 *out)
 }
 EXPORT_SYMBOL(sha256_final);
 
+#ifdef CONFIG_CRYPTO_HASH
+
 static int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
@@ -328,6 +330,8 @@ static void __exit sha256_generic_mod_fini(void)
 module_init(sha256_generic_mod_init);
 module_exit(sha256_generic_mod_fini);
 
+#endif /* CONFIG_CRYPTO_HASH */
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm");
 
diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index 2b6978471605..381ba7fa5e3f 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -96,6 +96,8 @@ extern int crypto_sha1_update(struct shash_desc *desc, const u8 *data,
 extern int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
 unsigned int len, u8 *hash);
 
+#ifdef CONFIG_CRYPTO_SHA256_LIB
+
 static inline void sha256_init(struct sha256_state *sctx)
 {
sctx->state[0] = SHA256_H0;
@@ -121,6 +123,8 @@ static inline void sha256_finup(struct sha256_state *sctx, const u8 *data,
sha256_final(sctx, hash);
 }
 
+#endif /* CONFIG_CRYPTO_SHA256_LIB */
+
 extern int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
  unsigned int len);
 
-- 
2.9.3



[RFC PATCH 4.10 1/6] crypto/sha256: Refactor the API so it can be used without shash

2016-12-23 Thread Andy Lutomirski
There are some pieces of kernel code that want to compute SHA256
directly without going through the crypto core.  Adjust the exported
API to decouple it from the crypto core.

I suspect this will very slightly speed up the SHA256 shash operations
as well by reducing the amount of indirection involved.

Cc: Ard Biesheuvel 
Cc: Herbert Xu 
Signed-off-by: Andy Lutomirski 
---
 arch/arm/crypto/sha2-ce-glue.c  | 10 ---
 arch/arm/crypto/sha256_glue.c   | 23 ++-
 arch/arm/crypto/sha256_neon_glue.c  | 34 +++--
 arch/arm64/crypto/sha2-ce-glue.c| 13 
 arch/arm64/crypto/sha256-glue.c | 59 +
 arch/x86/crypto/sha256_ssse3_glue.c | 46 +
 arch/x86/purgatory/purgatory.c  |  2 +-
 arch/x86/purgatory/sha256.c | 25 ++--
 arch/x86/purgatory/sha256.h | 22 --
 crypto/sha256_generic.c | 50 +++
 include/crypto/sha.h| 29 ++
 include/crypto/sha256_base.h| 40 -
 12 files changed, 184 insertions(+), 169 deletions(-)
 delete mode 100644 arch/x86/purgatory/sha256.h

diff --git a/arch/arm/crypto/sha2-ce-glue.c b/arch/arm/crypto/sha2-ce-glue.c
index 0755b2d657f3..8832c2f85591 100644
--- a/arch/arm/crypto/sha2-ce-glue.c
+++ b/arch/arm/crypto/sha2-ce-glue.c
@@ -38,7 +38,7 @@ static int sha2_ce_update(struct shash_desc *desc, const u8 *data,
return crypto_sha256_arm_update(desc, data, len);
 
kernel_neon_begin();
-   sha256_base_do_update(desc, data, len,
+   sha256_base_do_update(sctx, data, len,
  (sha256_block_fn *)sha2_ce_transform);
kernel_neon_end();
 
@@ -48,17 +48,19 @@ static int sha2_ce_finup(struct shash_desc *desc, const u8 *data,
 static int sha2_ce_finup(struct shash_desc *desc, const u8 *data,
 unsigned int len, u8 *out)
 {
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
if (!may_use_simd())
return crypto_sha256_arm_finup(desc, data, len, out);
 
kernel_neon_begin();
if (len)
-   sha256_base_do_update(desc, data, len,
+   sha256_base_do_update(sctx, data, len,
  (sha256_block_fn *)sha2_ce_transform);
-   sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
+   sha256_base_do_finalize(sctx, (sha256_block_fn *)sha2_ce_transform);
kernel_neon_end();
 
-   return sha256_base_finish(desc, out);
+   return crypto_sha2_final(desc, out);
 }
 
 static int sha2_ce_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
index a84e869ef900..405a29a9a9d3 100644
--- a/arch/arm/crypto/sha256_glue.c
+++ b/arch/arm/crypto/sha256_glue.c
@@ -36,27 +36,34 @@ asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
 int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
 unsigned int len)
 {
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
/* make sure casting to sha256_block_fn() is safe */
BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-   return sha256_base_do_update(desc, data, len,
+   sha256_base_do_update(sctx, data, len,
(sha256_block_fn *)sha256_block_data_order);
+   return 0;
 }
 EXPORT_SYMBOL(crypto_sha256_arm_update);
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int sha256_arm_final(struct shash_desc *desc, u8 *out)
 {
-   sha256_base_do_finalize(desc,
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   sha256_base_do_finalize(sctx,
(sha256_block_fn *)sha256_block_data_order);
-   return sha256_base_finish(desc, out);
+   return crypto_sha2_final(desc, out);
 }
 
 int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
 {
-   sha256_base_do_update(desc, data, len,
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   sha256_base_do_update(sctx, data, len,
  (sha256_block_fn *)sha256_block_data_order);
-   return sha256_final(desc, out);
+   return crypto_sha2_final(desc, out);
 }
 EXPORT_SYMBOL(crypto_sha256_arm_finup);
 
@@ -64,7 +71,7 @@ static struct shash_alg algs[] = { {
.digestsize =   SHA256_DIGEST_SIZE,
.init   =   sha256_base_init,
.update =   crypto_sha256_arm_update,
-   .final  =   sha256_final,
+   .final  =   sha256_arm_final,
.finup  =   crypto_sha256_arm_finup,
.descsize   =   sizeof(struct sha256_state),
.base   

[RFC PATCH 4.10 4/6] bpf: Avoid copying the entire BPF program when hashing it

2016-12-23 Thread Andy Lutomirski
The sha256 helpers can consume a message incrementally, so there's no need
to allocate a buffer to store the whole blob to be hashed.

This may be a slight slowdown for very long messages because gcc can't
inline the sha256_update() calls.  For reasonably-sized programs,
however, this should be a considerable speedup as vmalloc() is quite
slow.
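The property being relied on here (a minimal illustration with made-up
buffer names, not code from the patch) is that feeding data to the
sha256 helpers in pieces produces the same digest as hashing it in one
call:

	struct sha256_state a, b;
	u8 d1[SHA256_DIGEST_SIZE], d2[SHA256_DIGEST_SIZE];

	sha256_init(&a);
	sha256_update(&a, buf, len);			/* one shot */
	sha256_final(&a, d1);

	sha256_init(&b);
	sha256_update(&b, buf, len / 2);		/* same bytes, two calls */
	sha256_update(&b, buf + len / 2, len - len / 2);
	sha256_final(&b, d2);

	/* d1 and d2 are identical */

so the per-instruction loop below can hash each sanitized insn as it
goes instead of staging a full copy in vmalloc()ed memory.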

Cc: Daniel Borkmann 
Cc: Alexei Starovoitov 
Signed-off-by: Andy Lutomirski 
---
 kernel/bpf/core.c | 34 ++
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 911993863799..1c2931f505af 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -149,43 +149,37 @@ void __bpf_prog_free(struct bpf_prog *fp)
 int bpf_prog_calc_digest(struct bpf_prog *fp)
 {
struct sha256_state sha;
-   u32 i, psize;
-   struct bpf_insn *dst;
+   u32 i;
bool was_ld_map;
-   u8 *raw;
-
-   psize = bpf_prog_insn_size(fp);
-   raw = vmalloc(psize);
-   if (!raw)
-   return -ENOMEM;
 
sha256_init(&sha);
 
/* We need to take out the map fd for the digest calculation
 * since they are unstable from user space side.
 */
-   dst = (void *)raw;
for (i = 0, was_ld_map = false; i < fp->len; i++) {
-   dst[i] = fp->insnsi[i];
+   struct bpf_insn insn = fp->insnsi[i];
+
if (!was_ld_map &&
-   dst[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
-   dst[i].src_reg == BPF_PSEUDO_MAP_FD) {
+   insn.code == (BPF_LD | BPF_IMM | BPF_DW) &&
+   insn.src_reg == BPF_PSEUDO_MAP_FD) {
was_ld_map = true;
-   dst[i].imm = 0;
+   insn.imm = 0;
} else if (was_ld_map &&
-  dst[i].code == 0 &&
-  dst[i].dst_reg == 0 &&
-  dst[i].src_reg == 0 &&
-  dst[i].off == 0) {
+  insn.code == 0 &&
+  insn.dst_reg == 0 &&
+  insn.src_reg == 0 &&
+  insn.off == 0) {
was_ld_map = false;
-   dst[i].imm = 0;
+   insn.imm = 0;
} else {
was_ld_map = false;
}
+
+   sha256_update(&sha, (const u8 *)&insn, sizeof(insn));
}
 
-   sha256_finup(&sha, raw, psize, fp->digest);
-   vfree(raw);
+   sha256_final(&sha, fp->digest);
return 0;
 }
 
-- 
2.9.3



[RFC PATCH 4.10 5/6] bpf: Rename fdinfo's prog_digest to prog_sha256

2016-12-23 Thread Andy Lutomirski
This makes it easier to add another digest algorithm down the road
if needed.  It also serves to force any programs that might have
been written against a kernel that had 'prog_digest' to be updated.

This shouldn't violate any stable API policies, as no released
kernel has ever had 'prog_digest'.

Cc: Daniel Borkmann 
Cc: Alexei Starovoitov 
Signed-off-by: Andy Lutomirski 
---
 kernel/bpf/syscall.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e89acea22ecf..956370b80296 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -694,7 +694,7 @@ static void bpf_prog_show_fdinfo(struct seq_file *m, struct file *filp)
seq_printf(m,
   "prog_type:\t%u\n"
   "prog_jited:\t%u\n"
-  "prog_digest:\t%s\n"
+  "prog_sha256:\t%s\n"
   "memlock:\t%llu\n",
   prog->type,
   prog->jited,
-- 
2.9.3



[RFC PATCH 4.10 6/6] net: Rename TCA*BPF_DIGEST to ..._SHA256

2016-12-23 Thread Andy Lutomirski
This makes it easier to add another digest algorithm down the road if
needed.  It also serves to force any programs that might have been
written against a kernel that had the old field name to notice the
change and make any necessary changes.

This shouldn't violate any stable API policies, as no released kernel
has ever had TCA*BPF_DIGEST.

Cc: Daniel Borkmann 
Cc: Alexei Starovoitov 
Signed-off-by: Andy Lutomirski 
---
 include/uapi/linux/pkt_cls.h   | 2 +-
 include/uapi/linux/tc_act/tc_bpf.h | 2 +-
 net/sched/act_bpf.c| 2 +-
 net/sched/cls_bpf.c| 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h
index cb4bcdc58543..ac6b300c1550 100644
--- a/include/uapi/linux/pkt_cls.h
+++ b/include/uapi/linux/pkt_cls.h
@@ -397,7 +397,7 @@ enum {
TCA_BPF_NAME,
TCA_BPF_FLAGS,
TCA_BPF_FLAGS_GEN,
-   TCA_BPF_DIGEST,
+   TCA_BPF_SHA256,
__TCA_BPF_MAX,
 };
 
diff --git a/include/uapi/linux/tc_act/tc_bpf.h b/include/uapi/linux/tc_act/tc_bpf.h
index a6b88a6f7f71..eae18a7430eb 100644
--- a/include/uapi/linux/tc_act/tc_bpf.h
+++ b/include/uapi/linux/tc_act/tc_bpf.h
@@ -27,7 +27,7 @@ enum {
TCA_ACT_BPF_FD,
TCA_ACT_BPF_NAME,
TCA_ACT_BPF_PAD,
-   TCA_ACT_BPF_DIGEST,
+   TCA_ACT_BPF_SHA256,
__TCA_ACT_BPF_MAX,
 };
 #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
index 1c60317f0121..3868a66d0b24 100644
--- a/net/sched/act_bpf.c
+++ b/net/sched/act_bpf.c
@@ -123,7 +123,7 @@ static int tcf_bpf_dump_ebpf_info(const struct tcf_bpf *prog,
nla_put_string(skb, TCA_ACT_BPF_NAME, prog->bpf_name))
return -EMSGSIZE;
 
-   nla = nla_reserve(skb, TCA_ACT_BPF_DIGEST,
+   nla = nla_reserve(skb, TCA_ACT_BPF_SHA256,
  sizeof(prog->filter->digest));
if (nla == NULL)
return -EMSGSIZE;
diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index adc776048d1a..6fc110321621 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -555,7 +555,7 @@ static int cls_bpf_dump_ebpf_info(const struct cls_bpf_prog *prog,
nla_put_string(skb, TCA_BPF_NAME, prog->bpf_name))
return -EMSGSIZE;
 
-   nla = nla_reserve(skb, TCA_BPF_DIGEST, sizeof(prog->filter->digest));
+   nla = nla_reserve(skb, TCA_BPF_SHA256, sizeof(prog->filter->digest));
if (nla == NULL)
return -EMSGSIZE;
 
-- 
2.9.3



[RFC PATCH 4.10 0/6] Switch BPF's digest to SHA256

2016-12-23 Thread Andy Lutomirski
Since there are plenty of uses for the new-in-4.10 BPF digest feature
that would be problematic if malicious users could produce collisions,
the BPF digest should be collision-resistant.  SHA-1 is no longer
considered collision-resistant, so switch it to SHA-256.

The actual switchover is trivial.  Most of this series consists of
cleanups to the SHA256 code to make it usable as a standalone library
(since BPF should not depend on crypto).

The cleaned up library is much more user-friendly than the SHA-1 code,
so this also significantly tidies up the BPF digest code.

This is intended for 4.10.  If this series misses 4.10 and nothing
takes its place, then we'll have an unpleasant ABI stability
situation.

Andy Lutomirski (6):
  crypto/sha256: Refactor the API so it can be used without shash
  crypto/sha256: Make the sha256 library functions selectable
  bpf: Use SHA256 instead of SHA1 for bpf digests
  bpf: Avoid copying the entire BPF program when hashing it
  bpf: Rename fdinfo's prog_digest to prog_sha256
  net: Rename TCA*BPF_DIGEST to ..._SHA256

 arch/arm/crypto/sha2-ce-glue.c  | 10 +++---
 arch/arm/crypto/sha256_glue.c   | 23 +-
 arch/arm/crypto/sha256_neon_glue.c  | 34 ++--
 arch/arm64/crypto/sha2-ce-glue.c| 13 
 arch/arm64/crypto/sha256-glue.c | 59 +++---
 arch/x86/crypto/sha256_ssse3_glue.c | 46 ---
 arch/x86/purgatory/purgatory.c  |  2 +-
 arch/x86/purgatory/sha256.c | 25 ++-
 arch/x86/purgatory/sha256.h | 22 -
 crypto/Kconfig  |  8 +
 crypto/Makefile |  2 +-
 crypto/sha256_generic.c | 54 +++
 include/crypto/sha.h| 33 ---
 include/crypto/sha256_base.h| 40 +++
 include/linux/filter.h  | 11 ++-
 include/uapi/linux/pkt_cls.h|  2 +-
 include/uapi/linux/tc_act/tc_bpf.h  |  2 +-
 init/Kconfig|  1 +
 kernel/bpf/core.c   | 63 +
 kernel/bpf/syscall.c|  2 +-
 net/sched/act_bpf.c |  2 +-
 net/sched/cls_bpf.c |  2 +-
 22 files changed, 225 insertions(+), 231 deletions(-)
 delete mode 100644 arch/x86/purgatory/sha256.h

-- 
2.9.3



Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread George Spelvin
Hannes Frederic Sowa wrote:
> On 24.12.2016 00:39, George Spelvin wrote:
>> We just finished discussing why 8 bytes isn't enough.  If you only
>> feed back 8 bytes, an attacker who can do 2^64 computation can find it
>> (by guessing and computing forward to verify the guess) and recover the
>> previous state.  You need to feed back at least as much output as your
>> security target.  For /dev/urandom's ChaCha20, that's 256 bits.

> I followed the discussion but it appeared to me that this has the
> additional constraint of time until the next reseeding event happens,
> which is 300s (under the assumption that calls to get_random_int happen
> regularly, which I expect right now). After that the existing reseeding
> mechanism will ensure enough backtracking protection. The number of
> bytes can easily be increased here, given that reseeding was shown to be
> quite fast already and we produce enough output. But I am not sure if
> this is a bit overengineered in the end?

I'm not following your description of how the time-based and call-based
mechanisms interact, but for any mix-back, you should either do enough
or none at all.  (Also called "catastrophic reseeding".)

For example, two mix-backs of 64 bits gives you 65 bit security, not 128.
(Because each mixback can be guessed and verified separately.)

> Also agreed. Given your solution below to prandom_u32, I do think it
> might also work without the seqlock now.

It's not technically a seqlock; in particular, the reader doesn't
spin.  But the write side and the general logic are so similar that
it's a good mental model.

Basically, assume a 64-byte buffer.  The reader has gone through
32 bytes of it, and has 32 left, and when he reads another 8 bytes,
has to distinguish three cases:

1) No update; we read the old bytes and there are now 32 - 8 = 24 bytes left.
2) Update completed while we weren't looking.  There are now new
   bytes in the buffer, and we took 8 leaving 64 - 8 = 56.
3) Update in progress at the time of the read.  We don't know if we
   are seeing old bytes or new bytes, so we have to assume the worst
   and not proceed unless 32 >= 8, but assume at the end there are
   64 - 8 = 56 new bytes left.
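To make that write/read discipline concrete, a bare-bones kernel-style
sketch (the names and the fallback policy are made up; a real reader
would additionally track how many of the remaining bytes it may still
use in case 3):

	/* Hypothetical names throughout; "pool" is the 64-byte buffer. */
	static unsigned int seq;	/* even = stable, odd = refill in progress */
	static u8 pool[64];

	/* writer: refill the buffer with fresh random bytes */
	WRITE_ONCE(seq, seq + 1);	/* odd: update in progress */
	smp_wmb();
	memcpy(pool, fresh, sizeof(pool));
	smp_wmb();
	WRITE_ONCE(seq, seq + 1);	/* even again: update complete */

	/* reader: try to take n bytes at offset pos */
	before = READ_ONCE(seq);
	smp_rmb();
	memcpy(out, pool + pos, n);
	smp_rmb();
	if ((before & 1) || READ_ONCE(seq) != before)
		goto refill;		/* cases 2/3: buffer changed (or changing) under us */
	pos += n;			/* case 1: old bytes, just advance */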

> I wouldn't have added a disable irqs, but given that I really like your
> proposal, I would take it in my todo branch and submit it when net-next
> window opens.

If you want that, I have a pile of patches to prandom I really
should push upstream.  Shall I refresh them and send them to you?


commit 4cf1b3d9f4fbccc29ffc2fe4ca4ff52ea77253f1
Author: George Spelvin 
Date:   Mon Aug 31 00:05:00 2015 -0400

net: Use prandom_u32_max()

It's slightly more efficient than "prandom_u32() % range"

The net/802 code was already efficient, but prandom_u32_max() is simpler.

In net/batman-adv/bat_iv_ogm.c, batadv_iv_ogm_fwd_send_time() got changed
from picking a random number of milliseconds and converting to jiffies to
picking a random number of jiffies, since the number of milliseconds (and
thus the conversion to jiffies) is a compile-time constant.  The equivalent
code in batadv_iv_ogm_emit_send_time was not changed, because the number
of milliseconds is variable.

In net/ipv6/ip6_flowlabel.c, ip6_flowlabel had htonl(prandom_u32()),
which is silly.  Just cast to __be32 without permuting the bits.

net/sched/sch_netem.c got adjusted to only need one call to prandom_u32
instead of 2.  (Assuming skb_headlen can't exceed 512 MiB, which is
hopefully safe for some time yet.)

Signed-off-by: George Spelvin 

commit 9c8fb80e1fd2be42c35cab1af27187d600fd85e3
Author: George Spelvin 
Date:   Sat May 24 15:20:47 2014 -0400

mm/swapfile.c: Use prandom_u32_max()

It's slightly more efficient than "prandom_u32() % range"

Signed-off-by: George Spelvin 

commit 2743eb01e5c5958fd88ae78d19c5fea772d4b117
Author: George Spelvin 
Date:   Sat May 24 15:19:53 2014 -0400

lib: Use prandom_u32_max()

It's slightly more efficient than "prandom_u32() % range"

Signed-off-by: George Spelvin 

commit 6a5e91bf395060a3351bfe5efc40ac20ffba2c1b
Author: George Spelvin 
Date:   Sat May 24 15:18:50 2014 -0400

fs/xfs: Use prandom_u32_max()

It's slightly more efficient than "prandom_u32() % range".

Also changed the last argument of xfs_error_test() from "unsigned long"
to "unsigned", since the code never did support values > 2^32, and
the largest value ever passed is 100.

The code could be improved even further by passing in 2^32/rf rather
than rf, but I'll leave that to some XFS developers.

Signed-off-by: George Spelvin 

commit 6f6d485d9179ca6ec4e30caa06ade0e0c6931810
Author: George Spelvin 
Date:   Sat May 24 15:00:17 2014 -0400

fs/ubifs: Use 

Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread Hannes Frederic Sowa
Hi,

On 24.12.2016 00:39, George Spelvin wrote:
> Hannes Frederic Sowa wrote:
>> In general this looks good, but bitbuffer needs to be protected from
>> concurrent access, thus needing at least some atomic instruction and
>> disabling of interrupts for the locking if done outside of
>> get_random_long. Thus I liked your previous approach more where you
>> simply embed this in the already necessary get_random_long and aliased
>> get_random_long as get_random_bits(BITS_PER_LONG) accordingly, IMHO.
> 
> It's meant to be part of the same approach, and I didn't include locking
> because that's a requirement for *any* solution, and isn't specific
> to the part I was trying to illustrate.
> 
> (As for re-using the name "get_random_long", that was just so
> I didn't have to explain it.  Call it "get_random_long_internal"
> if you like.)
> 
> Possible locking implementations include:
> 1) Use the same locking as applies to get_random_long_internal(), or
> 2) Make bitbuffer a per-CPU variable (note that we currently have 128
>bits of per-CPU state in get_random_int_hash[]), and this is all a
>fast-path to bypass heavier locking in get_random_long_internal().

I understood that this is definitely a solvable problem.

>>> But, I just realized I've been overlooking something glaringly obvious...
>>> there's no reason you can't compute multiple blocks in advance.
>>
>> In the current code, at the cost of per-CPU allocations and thus memory.
> 
> Yes, but on 4-core machines it's still not much, and 4096-core
> behemoths have RAM to burn.
> 
>> In the extract_crng case, couldn't we simply use 8 bytes of the 64 byte
>> return block to feed it directly back into the state chacha? So we pass
>> on 56 bytes into the pcpu buffer, and consume 8 bytes for the next
>> state. This would make the window max shorter than the anti
>> backtracking protection right now from 300s to 14 get_random_int calls.
>> Not sure if this is worth it.
> 
> We just finished discussing why 8 bytes isn't enough.  If you only
> feed back 8 bytes, an attacker who can do 2^64 computation can find it
> (by guessing and computing forward to verify the guess) and recover the
> previous state.  You need to feed back at least as much output as your
> security target.  For /dev/urandom's ChaCha20, that's 256 bits.

I followed the discussion but it appeared to me that this has the
additional constraint of time until the next reseeding event happens,
which is 300s (under the assumption that calls to get_random_int happen
regularly, which I expect right now). After that the existing reseeding
mechanism will ensure enough backtracking protection. The number of
bytes can easily be increased here, given that reseeding was shown to be
quite fast already and we produce enough output. But I am not sure if
this is a bit overengineered in the end?

>>> For example, suppose we gave each CPU a small pool to minimize locking.
>>> When one runs out and calls the global refill, it could refill *all*
>>> of the CPU pools.  (You don't even need locking; there may be a race to
>>> determine *which* random numbers the reader sees, but they're equally
>>> good either way.)
> 
>> Yes, but still disabled interrupts, otherwise the same state could be
>> used twice on the same CPU. Uff, I think we have this problem in
>> prandom_u32.
> 
> There are some locking gotchas, but it is doable lock-free.
> 
> Basically, it's a seqlock.  The writer increments it once (to an odd
> number) before starting to overwrite the buffer, and a second time (to
> an even number) after.  "Before" and "after" mean smp_wmb().
> 
> The reader can use this to figure out how much of the data in the buffer
> is safely fresh.  The full sequence of checks is a bit intricate,
> but straightforward.
> 
> I didn't discuss the locking because I'm confident it's solvable,
> not because I wasn't aware it has to be solved.

Also agreed. Given your solution below to prandom_u32, I do think it
might also work without the seqlock now.

> As for prandom_u32(), what's the problem?  Are you worried that
> get_cpu_var disables preemption but not interrupts, and so an
> ISR might return the same value as process-level code?

Yes, exactly those were my thoughts.

> First of all, that's not a problem because prandom_u32() doesn't
> have security guarantees.  Occasionally returning a duplicate number
> is okay.
>
> Second, if you do care, that could be trivially fixed by throwing
> a barrier() in the middle of the code.  (Patch appended; S-o-B
> if anyone wants it.)

I wouldn't have added a disable irqs, but given that I really like your
proposal, I would take it in my todo branch and submit it when net-next
window opens.

> diff --git a/lib/random32.c b/lib/random32.c
> index c750216d..6bee4a36 100644
> --- a/lib/random32.c
> +++ b/lib/random32.c
> [...]

Thanks,
Hannes



Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread George Spelvin
Hannes Frederic Sowa wrote:
> In general this looks good, but bitbuffer needs to be protected from
> concurrent access, thus needing at least some atomic instruction and
> disabling of interrupts for the locking if done outside of
> get_random_long. Thus I liked your previous approach more where you
> simply embed this in the already necessary get_random_long and aliased
> get_random_long as get_random_bits(BITS_PER_LONG) accordingly, IMHO.

It's meant to be part of the same approach, and I didn't include locking
because that's a requirement for *any* solution, and isn't specific
to the part I was trying to illustrate.

(As for re-using the name "get_random_long", that was just so
I didn't have to explain it.  Call it "get_random_long_internal"
if you like.)

Possible locking implementations include:
1) Use the same locking as applies to get_random_long_internal(), or
2) Make bitbuffer a per-CPU variable (note that we currently have 128
   bits of per-CPU state in get_random_int_hash[]), and this is all a
   fast-path to bypass heavier locking in get_random_long_internal().

>> But, I just realized I've been overlooking something glaringly obvious...
>> there's no reason you can't compute multiple blocks in advance.
>
> In the current code, at the cost of per-CPU allocations and thus memory.

Yes, but on 4-core machines it's still not much, and 4096-core
behemoths have RAM to burn.

> In the extract_crng case, couldn't we simply use 8 bytes of the 64 byte
> return block to feed it directly back into the state chacha? So we pass
> on 56 bytes into the pcpu buffer, and consume 8 bytes for the next
> state. This would make the window max shorter than the anti
> backtracking protection right now from 300s to 14 get_random_int calls.
> Not sure if this is worth it.

We just finished discussing why 8 bytes isn't enough.  If you only
feed back 8 bytes, an attacker who can do 2^64 computation can find it
(by guessing and computing forward to verify the guess) and recover the
previous state.  You need to feed back at least as much output as your
security target.  For /dev/urandom's ChaCha20, that's 256 bits.

>> For example, suppose we gave each CPU a small pool to minimize locking.
>> When one runs out and calls the global refill, it could refill *all*
>> of the CPU pools.  (You don't even need locking; there may be a race to
>> determine *which* random numbers the reader sees, but they're equally
>> good either way.)

> Yes, but still disabled interrupts, otherwise the same state could be
> used twice on the same CPU. Uff, I think we have this problem in
> prandom_u32.

There are some locking gotchas, but it is doable lock-free.

Basically, it's a seqlock.  The writer increments it once (to an odd
number) before starting to overwrite the buffer, and a second time (to
an even number) after.  "Before" and "after" mean smp_wmb().

The reader can use this to figure out how much of the data in the buffer
is safely fresh.  The full sequence of checks is a bit intricate,
but straightforward.

I didn't discuss the locking because I'm confident it's solvable,
not because I wasn't aware it has to be solved.

As for prandom_u32(), what's the problem?  Are you worried that
get_cpu_var disables preemption but not interrupts, and so an
ISR might return the same value as process-level code?

First of all, that's not a problem because prandom_u32() doesn't
have security guarantees.  Occasionally returning a duplicate number
is okay.

Second, if you do care, that could be trivially fixed by throwing
a barrier() in the middle of the code.  (Patch appended; S-o-B
if anyone wants it.)


diff --git a/lib/random32.c b/lib/random32.c
index c750216d..6bee4a36 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -55,16 +55,29 @@ static DEFINE_PER_CPU(struct rnd_state, net_rand_state);
  *
  * This is used for pseudo-randomness with no outside seeding.
  * For more random results, use prandom_u32().
+ *
+ * The barrier() is to allow prandom_u32() to be called from interrupt
+ * context without locking.  An interrupt will run to completion,
+ * updating all four state variables.  The barrier() ensures that
+ * the interrupted code will compute a different result.  Either it
+ * will have written s1 and s2 (so the interrupt will start with
+ * the updated values), or it will use the values of s3 and s4
+ * updated by the interrupt.
+ *
+ * (The same logic applies recursively to nested interrupts, trap
+ * handlers, and NMIs.)
  */
 u32 prandom_u32_state(struct rnd_state *state)
 {
+   register u32 x;
 #define TAUSWORTHE(s, a, b, c, d) ((s & c) << d) ^ (((s << a) ^ s) >> b)
-   state->s1 = TAUSWORTHE(state->s1,  6, 13, 4294967294U, 18U);
-   state->s2 = TAUSWORTHE(state->s2,  2, 27, 4294967288U,  2U);
-   state->s3 = TAUSWORTHE(state->s3, 13, 21, 4294967280U,  7U);
-   state->s4 = TAUSWORTHE(state->s4,  3, 12, 4294967168U, 13U);
+   x  = state->s1 = TAUSWORTHE(state->s1,  6, 13,   

Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Jason A. Donenfeld
On Fri, Dec 23, 2016 at 7:19 PM, Hannes Frederic Sowa
 wrote:
> Factoring out sha3

Per the other thread, you probably don't actually want SHA3, because
it's slow in software. You want SHA2. If you want something faster and
better, then Blake2 is most certainly the way to go. I'll be
submitting some patches in a month or so adding Blake2 for future use
in the tree.

Jason


Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread Hannes Frederic Sowa
Hi,

On Fri, 2016-12-23 at 13:26 -0500, George Spelvin wrote:
> (Cc: list trimmed slightly as the topic is wandering a bit.)
> 
> Hannes Frederic Sowa wrote:
> > On Thu, 2016-12-22 at 19:07 -0500, George Spelvin wrote:
> > > Adding might_lock() annotations will improve coverage a lot.
> > 
> > Might be hard to find the correct lock we take later down the code
> > path, but if that is possible, certainly.
> 
> The point of might_lock() is that you don't have to.  You find the
> worst case (most global) lock that the code *might* take if all the
> buffer-empty conditions are true, and tell lockdep "assume this lock is
> taken all the time".

Yes, of course. Probably indicating input_pool's and primary_crng's
locks are good to indicate here, but on NUMA other locks might be
taken.

> > > Hannes Frederic Sowa wrote:
> > > > Yes, that does look nice indeed. Accounting for bits instead of bytes
> > > > shouldn't be a huge problem either. Maybe it gets a bit more verbose in
> > > > case you can't satisfy a request with one batched entropy block and have
> > > > to consume randomness from two.
> 
> For example, here's a simple bit-buffer implementation that wraps around
> a get_random_long.  The bitbuffer is of the form "1", where the
> x bits are valid, and the position of the msbit indicates how many bits
> are valid.
> 
> extern unsigned long get_random_long();
> static unsigned long bitbuffer = 1;   /* Holds 0..BITS_PER_LONG-1 bits */
> unsigned long get_random_bits(unsigned char bits)
> {
>   /* We handle bits == BITS_PER_LONG,and not bits == 0 */
>   unsigned long mask = -1ul >> (BITS_PER_LONG - bits);
>   unsigned long val;
> 
>   if (bitbuffer > mask) {
>   /* Request can be satisfied out of the bit buffer */
>   val = bitbuffer;
>   bitbuffer >>= bits;
>   } else {
>   /*
>* Not enough bits, but enough room in bitbuffer for the
>* leftovers.  avail < bits, so avail + 64 <= bits + 63.
>*/
>   val = get_random_long();
>   bitbuffer = bitbuffer << (BITS_PER_LONG - bits)
>   | val >> 1 >> (bits-1);
>   }
>   return val & mask;
> }

In general this looks good, but bitbuffer needs to be protected from
concurrent access, thus needing at least some atomic instruction and
disabling of interrupts for the locking if done outside of
get_random_long. Thus I liked your previous approach more where you
simply embed this in the already necessary get_random_long and aliased
get_random_long as get_random_bits(BITS_PER_LONG) accordingly, IMHO.

> > When we hit the chacha20 without doing a reseed we only mutate the
> > state of chacha, but being an invertible function in its own, a
> > proposal would be to mix parts of the chacha20 output back into the
> > state, which, as a result, would cause slowdown because we couldn't
> > propagate the complete output of the cipher back to the caller (looking
> > at the function _extract_crng).
> 
> Basically, yes.  Half of the output goes to rekeying itself.

Okay, thanks and understood. :)

> But, I just realized I've been overlooking something glaringly obvious...
> there's no reason you can't compute multiple blocks in advance.

In the current code, at the cost of per-CPU allocations and thus memory.

> The standard assumption in antibacktracking is that you'll *notice* the
> state capture and stop trusting the random numbers afterward; you just
> want the output *before* to be secure.  In other words, cops busting
> down the door can't find the session key used in the message you just sent.

Yep, analogous to forward secrecy and future secrecy (self healing) is
provided by catastrophic reseeding from /dev/random.

In the extract_crng case, couldn't we simply use 8 bytes of the 64 byte
return block to feed it directly back into the state chacha? So we pass
on 56 bytes into the pcpu buffer, and consume 8 bytes for the next
state. This would make the window max shorter than the anti
backtracking protection right now from 300s to 14 get_random_int calls.
Not sure if this is worth it.

> So you can compute and store random numbers ahead of need.
> 
> This can amortize the antibacktracking as much as you'd like.
> 
> For example, suppose we gave each CPU a small pool to minimize locking.
> When one runs out and calls the global refill, it could refill *all*
> of the CPU pools.  (You don't even need locking; there may be a race to
> determine *which* random numbers the reader sees, but they're equally
> good either way.)

Yes, but still disabled interrupts, otherwise the same state could be
used twice on the same CPU. Uff, I think we have this problem in
prandom_u32.

> > Or are you referring that the anti-backtrack protection should happen
> > in every call from get_random_int?
> 
> If you ask for anti-backtracking without qualification, that's the
> goal, since you don't know how long will elapse until the next call.  

Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread George Spelvin
(Cc: list trimmed slightly as the topic is wandering a bit.)

Hannes Frederic Sowa wrote:
> On Thu, 2016-12-22 at 19:07 -0500, George Spelvin wrote:
>> Adding might_lock() annotations will improve coverage a lot.
>
> Might be hard to find the correct lock we take later down the code
> path, but if that is possible, certainly.

The point of might_lock() is that you don't have to.  You find the
worst case (most global) lock that the code *might* take if all the
buffer-empty conditions are true, and tell lockdep "assume this lock is
taken all the time".

>> Hannes Frederic Sowa wrote:
>>> Yes, that does look nice indeed. Accounting for bits instead of bytes
>>> shouldn't be a huge problem either. Maybe it gets a bit more verbose in
>>> case you can't satisfy a request with one batched entropy block and have
>>> to consume randomness from two.

For example, here's a simple bit-buffer implementation that wraps around
a get_random_long.  The bitbuffer is of the form "1xxx...x", where the
x bits are valid, and the position of the msbit indicates how many bits
are valid.

extern unsigned long get_random_long();
static unsigned long bitbuffer = 1; /* Holds 0..BITS_PER_LONG-1 bits */
unsigned long get_random_bits(unsigned char bits)
{
/* We handle bits == BITS_PER_LONG, and not bits == 0 */
unsigned long mask = -1ul >> (BITS_PER_LONG - bits);
unsigned long val;

if (bitbuffer > mask) {
/* Request can be satisfied out of the bit buffer */
val = bitbuffer;
bitbuffer >>= bits;
} else {
/*
 * Not enough bits, but enough room in bitbuffer for the
 * leftovers.  avail < bits, so avail + 64 <= bits + 63.
 */
val = get_random_long();
bitbuffer = bitbuffer << (BITS_PER_LONG - bits)
| val >> 1 >> (bits-1);
}
return val & mask;
}
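To make the sentinel encoding concrete, a worked trace (illustrative
only, not part of the function above):

	bitbuffer == 1				/* no cached bits */
	get_random_bits(8):			/* fast path misses */
		val = get_random_long();
		bitbuffer = 1ul << 56 | val >> 8;	/* sentinel + 56 cached bits */
		return val & 0xff;
	get_random_bits(8):			/* served from the cache */
		val = bitbuffer;		/* low 8 bits are the result */
		bitbuffer >>= 8;		/* sentinel drops to bit 48 */
		return val & 0xff;

After six more such 8-bit requests the buffer is back to bitbuffer == 1
and the next call fetches a fresh long.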

> When we hit the chacha20 without doing a reseed we only mutate the
> state of chacha, but being an invertible function in its own, a
> proposal would be to mix parts of the chacha20 output back into the
> state, which, as a result, would cause slowdown because we couldn't
> propagate the complete output of the cipher back to the caller (looking
> at the function _extract_crng).

Basically, yes.  Half of the output goes to rekeying itself.
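As a loose sketch of what that looks like (hypothetical names; this is
not the actual _extract_crng code):

	u8 block[64];				/* one ChaCha20 output block */

	chacha_generate_block(state, block);	/* made-up helper */
	memcpy(state->key, block, 32);		/* first half becomes the next key */
	memcpy(out, block + 32, 32);		/* only the second half leaves the generator */
	memzero_explicit(block, sizeof(block));

Going backward from a captured state then means guessing the 256-bit key
that was overwritten, rather than simply running the generator in reverse.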

But, I just realized I've been overlooking something glaringly obvious...
there's no reason you can't compute multiple blocks in advance.

The standard assumption in antibacktracking is that you'll *notice* the
state capture and stop trusting the random numbers afterward; you just
want the output *before* to be secure.  In other words, cops busting
down the door can't find the session key used in the message you just sent.

So you can compute and store random numbers ahead of need.

This can amortize the antibacktracking as much as you'd like.

For example, suppose we gave each CPU a small pool to minimize locking.
When one runs out and calls the global refill, it could refill *all*
of the CPU pools.  (You don't even need locking; there may be a race to
determine *which* random numbers the reader sees, but they're equally
good either way.)

> Or are you referring that the anti-backtrack protection should happen
> in every call from get_random_int?

If you ask for anti-backtracking without qualification, that's the
goal, since you don't know how long will elapse until the next call.  

It's like fsync().  There are lots of more limited forms of "keep my
data safe in case of a crash", but the most basic one is "if we lost
power the very instant the call returned, the data would be safe."


Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Hannes Frederic Sowa
On 23.12.2016 17:42, Andy Lutomirski wrote:
> On Fri, Dec 23, 2016 at 8:23 AM, Andy Lutomirski  wrote:
>> On Fri, Dec 23, 2016 at 3:59 AM, Daniel Borkmann  
>> wrote:
>>> On 12/23/2016 11:59 AM, Hannes Frederic Sowa wrote:

 On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:
>
> On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:
>>
>> On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
>>>
>>> [...]
>>>
>> The hashing is not a proper sha1 neither, unfortunately. I think that
>> is why it will have a custom implementation in iproute2?
>
>
> Still trying to catch up on this admittedly bit confusing thread. I
> did run automated tests over couple of days comparing the data I got
> from fdinfo with the one from af_alg and found no mismatch on the test
> cases varying from min to max possible program sizes. In the process
> of testing, as you might have seen on netdev, I found couple of other
> bugs in bpf code along the way and fixed them up as well. So my question,
> do you or Andy or anyone participating in claiming this have any
> concrete data or test cases that suggests something different? If yes,
> I'm very curious to hear about it and willing fix it up, of course.
> When I'm back from pto I'll prep and cook up my test suite to be
> included into the selftests/bpf/, should have done this initially,
> sorry about that. I'll also post something to expose the alg, that
> sounds fine to me.


 Looking into your code closer, I noticed that you indeed seem to do the
 finalization of sha-1 by hand by aligning and padding the buffer
 accordingly and also patching in the necessary payload length.

 Apologies for my side for claiming that this is not correct sha1
 output, I was only looking at sha_transform and its implementation and
 couldn't see the padding and finalization round with embedding the data
 length in there and hadn't thought of it being done manually.

 Anyway, is it difficult to get the sha finalization into some common
 code library? It is not very bpf specific and crypto code reviewers
 won't find it there at all.
>>>
>>>
>>> Yes, sure, I'll rework it that way (early next year when I'm back if
>>> that's fine with you).
>>
>> Can we make it SHA-256 before 4.10 comes out, though?  This really
>> looks like it will be used in situations where collisions matter and
>> it will be exposed to malicious programs, and SHA-1 should not be used
>> for new designs for this purpose because it simply isn't long enough.
>>
>> Also, a SHA-1 digest isn't a pile of u32s, so u32 digest[...] is very
>> misleading.  That should be u8 or, at the very least, __be32.
>>
>> I realize that there isn't a sha-256 implementation in lib, but would
>> it really be so bad to make the bpf digest only work (for now) when
>> crypto is enabled?  I would *love* to see the crypto core learn how to
>> export simple primitives for direct use without needing the whole
>> crypto core, and this doesn't seem particularly hard to do, but I
>> don't think that's 4.10 material.
> 
> I'm going to try to send out RFC patches for all of this today or
> tomorrow.  It doesn't look bad at all.

Factoring out sha3 to lib/ and use it as standalone and in crypto api
doesn't seem hard, yep. I also proposed this to Daniel offlist.

Bye,
Hannes



Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Andy Lutomirski
On Fri, Dec 23, 2016 at 8:23 AM, Andy Lutomirski  wrote:
> On Fri, Dec 23, 2016 at 3:59 AM, Daniel Borkmann  wrote:
>> On 12/23/2016 11:59 AM, Hannes Frederic Sowa wrote:
>>>
>>> On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:

 On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:
>
> On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
>>
>> [...]
>>
> The hashing is not a proper sha1 neither, unfortunately. I think that
> is why it will have a custom implementation in iproute2?


 Still trying to catch up on this admittedly bit confusing thread. I
 did run automated tests over couple of days comparing the data I got
 from fdinfo with the one from af_alg and found no mismatch on the test
 cases varying from min to max possible program sizes. In the process
 of testing, as you might have seen on netdev, I found couple of other
 bugs in bpf code along the way and fixed them up as well. So my question,
 do you or Andy or anyone participating in claiming this have any
 concrete data or test cases that suggests something different? If yes,
 I'm very curious to hear about it and willing fix it up, of course.
 When I'm back from pto I'll prep and cook up my test suite to be
 included into the selftests/bpf/, should have done this initially,
 sorry about that. I'll also post something to expose the alg, that
 sounds fine to me.
>>>
>>>
>>> Looking into your code closer, I noticed that you indeed seem to do the
>>> finalization of sha-1 by hand by aligning and padding the buffer
>>> accordingly and also patching in the necessary payload length.
>>>
>>> Apologies for my side for claiming that this is not correct sha1
>>> output, I was only looking at sha_transform and its implementation and
>>> couldn't see the padding and finalization round with embedding the data
>>> length in there and hadn't thought of it being done manually.
>>>
>>> Anyway, is it difficult to get the sha finalization into some common
>>> code library? It is not very bpf specific and crypto code reviewers
>>> won't find it there at all.
>>
>>
>> Yes, sure, I'll rework it that way (early next year when I'm back if
>> that's fine with you).
>
> Can we make it SHA-256 before 4.10 comes out, though?  This really
> looks like it will be used in situations where collisions matter and
> it will be exposed to malicious programs, and SHA-1 should not be used
> for new designs for this purpose because it simply isn't long enough.
>
> Also, a SHA-1 digest isn't a pile of u32s, so u32 digest[...] is very
> misleading.  That should be u8 or, at the very least, __be32.
>
> I realize that there isn't a sha-256 implementation in lib, but would
> it really be so bad to make the bpf digest only work (for now) when
> crypto is enabled?  I would *love* to see the crypto core learn how to
> export simple primitives for direct use without needing the whole
> crypto core, and this doesn't seem particularly hard to do, but I
> don't think that's 4.10 material.

I'm going to try to send out RFC patches for all of this today or
tomorrow.  It doesn't look bad at all.

--Andy


Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Andy Lutomirski
On Fri, Dec 23, 2016 at 3:59 AM, Daniel Borkmann  wrote:
> On 12/23/2016 11:59 AM, Hannes Frederic Sowa wrote:
>>
>> On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:
>>>
>>> On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:

 On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
>
> [...]
>
 The hashing is not a proper sha1 neither, unfortunately. I think that
 is why it will have a custom implementation in iproute2?
>>>
>>>
>>> Still trying to catch up on this admittedly bit confusing thread. I
>>> did run automated tests over couple of days comparing the data I got
>>> from fdinfo with the one from af_alg and found no mismatch on the test
>>> cases varying from min to max possible program sizes. In the process
>>> of testing, as you might have seen on netdev, I found couple of other
>>> bugs in bpf code along the way and fixed them up as well. So my question,
>>> do you or Andy or anyone participating in claiming this have any
>>> concrete data or test cases that suggests something different? If yes,
>>> I'm very curious to hear about it and willing fix it up, of course.
>>> When I'm back from pto I'll prep and cook up my test suite to be
>>> included into the selftests/bpf/, should have done this initially,
>>> sorry about that. I'll also post something to expose the alg, that
>>> sounds fine to me.
>>
>>
>> Looking into your code closer, I noticed that you indeed seem to do the
>> finalization of sha-1 by hand by aligning and padding the buffer
>> accordingly and also patching in the necessary payload length.
>>
>> Apologies for my side for claiming that this is not correct sha1
>> output, I was only looking at sha_transform and its implementation and
>> couldn't see the padding and finalization round with embedding the data
>> length in there and hadn't thought of it being done manually.
>>
>> Anyway, is it difficult to get the sha finalization into some common
>> code library? It is not very bpf specific and crypto code reviewers
>> won't find it there at all.
>
>
> Yes, sure, I'll rework it that way (early next year when I'm back if
> that's fine with you).

Can we make it SHA-256 before 4.10 comes out, though?  This really
looks like it will be used in situations where collisions matter and
it will be exposed to malicious programs, and SHA-1 should not be used
for new designs for this purpose because it simply isn't long enough.

Also, a SHA-1 digest isn't a pile of u32s, so u32 digest[...] is very
misleading.  That should be u8 or, at the very least, __be32.

I realize that there isn't a sha-256 implementation in lib, but would
it really be so bad to make the bpf digest only work (for now) when
crypto is enabled?  I would *love* to see the crypto core learn how to
export simple primitives for direct use without needing the whole
crypto core, and this doesn't seem particularly hard to do, but I
don't think that's 4.10 material.

--Andy


Re: George's crazy full state idea (Re: HalfSipHash Acceptable Usage)

2016-12-23 Thread Hannes Frederic Sowa
On Thu, 2016-12-22 at 19:07 -0500, George Spelvin wrote:
> Hannes Frederic Sowa wrote:
> > A lockdep test should still be done. ;)
> 
> Adding might_lock() annotations will improve coverage a lot.

Might be hard to find the correct lock we take later down the code
path, but if that is possible, certainly.

> > Yes, that does look nice indeed. Accounting for bits instead of bytes
> > shouldn't be a huge problem either. Maybe it gets a bit more verbose in
> > case you can't satisfy a request with one batched entropy block and have
> > to consume randomness from two.
> 
> The bit granularity is also for the callers' convenience, so they don't
> have to mask again.  Whether get_random_bits rounds up to byte boundaries
> internally or not is something else.
> 
> When the current batch runs low, I was actually thinking of throwing
> away the remaining bits and computing a new batch of 512.  But it's
> whatever works best at implementation time.
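
A bit-granular batch consumer along those lines might be sketched as follows;
the 512-bit batch size comes from the discussion above, while extract_batch(),
the internal rounding to byte boundaries, and the lack of locking are
assumptions for illustration:

/*
 * Sketch of handing out entropy with bit granularity from a 512-bit batch.
 * extract_batch() stands in for whatever refills the pool (e.g. the
 * ChaCha-based CRNG).
 */
#include <stdint.h>
#include <string.h>

#define BATCH_BYTES 64                          /* 512 bits per refill */

struct entropy_batch {
        uint8_t data[BATCH_BYTES];
        unsigned int bits_left;
};

extern void extract_batch(uint8_t *buf, size_t len);    /* assumed refill hook */

/* nbits must be 1..64; unused bits of a partially consumed batch are simply
 * thrown away when a request can no longer be satisfied. */
static uint64_t get_random_bits(struct entropy_batch *b, unsigned int nbits)
{
        unsigned int want = (nbits + 7) & ~7u;  /* consume whole bytes internally */
        uint64_t out = 0;
        unsigned int off;

        if (want > b->bits_left) {
                extract_batch(b->data, sizeof(b->data));
                b->bits_left = BATCH_BYTES * 8;
        }

        off = (BATCH_BYTES * 8 - b->bits_left) / 8;
        memcpy(&out, b->data + off, want / 8);
        b->bits_left -= want;

        /* Mask to the requested width so callers don't have to. */
        return nbits < 64 ? out & ((1ULL << nbits) - 1) : out;
}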
> 
> > > > It could only mix the output back in every two calls, in which case
> > > > you can backtrack up to one call but you need to do 2^128 work to
> > > > backtrack farther.  But yes, this is getting excessively complicated.
> > > No, if you're willing to accept limited backtrack, this is a perfectly
> > > acceptable solution, and not too complicated.  You could do it phase-less
> > > if you like; store the previous output, then after generating the new
> > > one, mix in both.  Then overwrite the previous output.  (But doing two
> > > rounds of a crypto primitive to avoid one conditional jump is stupid,
> > > so forget that.)
> > Can you quickly explain why we lose the backtracking capability?
> 
> Sure.  An RNG is (state[i], output[i]) = f(state[i-1]).  The goal of
> backtracking is to compute output[i], or better yet state[i-1], given
> state[i].
> 
> For example, consider an OFB or CTR mode generator.  The state is a key
> and an IV, and you encrypt the IV with the key to produce output, then
> either replace the IV with the output, or increment it.  Either way,
> since you still have the key, you can invert the transformation and
> recover the previous IV.
> 
> The standard way around this is to use the Davies-Meyer construction:
> 
> IV[i] = IV[i-1] + E(IV[i-1], key)
> 
> This is the standard way to make a non-invertible random function
> out of an invertible random permutation.
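
As a toy illustration of the difference between the two update rules
(encrypt64() below stands in for an arbitrary 64-bit block cipher and is an
assumption for illustration, not a real API):

#include <stdint.h>

extern uint64_t encrypt64(uint64_t key, uint64_t block);

/* OFB/CTR-style update: invertible.  Anyone holding the key and the current
 * IV can decrypt and walk backwards to every earlier IV. */
static uint64_t update_invertible(uint64_t key, uint64_t iv)
{
        return encrypt64(key, iv);
}

/* Davies-Meyer update: feeding the input forward into the sum makes the step
 * one-way, so recovering iv[i-1] from iv[i] requires guessing it. */
static uint64_t update_davies_meyer(uint64_t key, uint64_t iv)
{
        return iv + encrypt64(key, iv);
}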
> 
> From the sum, there's no easy way to find the ciphertext *or* the
> plaintext that was encrypted.  Assuming the encryption is secure,
> the only way to reverse it is brute force: guess IV[i-1] and run the
> operation forward to see if the resultant IV[i] matches.
> 
> There are a variety of ways to organize this computation, since the
> guess gives you both IV[i-1] and E(IV[i-1], key) = IV[i] - IV[i-1], including
> running E forward, backward, or starting from both ends to see if you
> meet in the middle.
> 
> The way you add the encryption output to the IV is not very important.
> It can be addition, xor, or some more complex invertible transformation.
> In the case of SipHash, the "encryption" output is smaller than the
> input, so we have to get a bit more creative, but it's still basically
> the same thing.
> 
> The problem is that the output which is combined with the IV is too small.
> With only 64 bits, trying all possible values is practical.  (The world's
> Bitcoin miners are collectively computing SHA-256(SHA-256(input)) 1.7 * 2^64
> times per second.)
> 
> By basically doing two iterations at once and mixing in 128 bits of
> output, the guessing attack is rendered impractical.  The only downside
> is that you need to remember and store one result between when it's
> computed and last used.  This is part of the state, so an attacker can
> find output[i-1], but not anything farther back.
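
A minimal sketch of that phase-less, two-outputs-at-once mixing; siphash_like()
and the state layout below are stand-ins for illustration, not the proposed
patch:

/*
 * Keep the previous output around and mix both the previous and the current
 * output back into the state on every call, so undoing one step means
 * guessing 128 unknown bits.
 */
#include <stdint.h>

extern uint64_t siphash_like(uint64_t state[4]);        /* assumed primitive */

struct gen {
        uint64_t state[4];
        uint64_t prev;          /* previous output, part of the state */
};

static uint64_t gen_next(struct gen *g)
{
        uint64_t out = siphash_like(g->state);

        g->state[0] += g->prev;         /* 64 bits from the previous call */
        g->state[1] += out;             /* plus 64 bits from this one */
        g->prev = out;                  /* an attacker who captures the state
                                         * can still recover this last output */
        return out;
}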

Thanks a lot for the explanation!

> > ChaCha as a block cipher gives a "perfect" permutation from the output
> > of either the CRNG or the CPRNG, which actually itself has backtracking
> > protection.
> 
> I'm not quite understanding.  The /dev/random implementation uses some
> of the ChaCha output as a new ChaCha key (that's another way to mix output
> back into the state) to prevent backtracking.  But this slows it down, and
> again, if you want to be efficient, you're generating large batches
> of entropy and storing them in the RNG state.

I was actually referring to the anti-backtrack protection in
/dev/random and also /dev/urandom, from which we reseed every 300
seconds and whenever our batched entropy runs low with Ted's/Jason's
current patch for get_random_int.

As far as I can understand it, backtracking is not a problem in case of
a reseed event inside extract_crng.

When we hit the chacha20 without doing a reseed we only mutate the
state of chacha, but since that is an invertible function in its own
right, a proposal would be to mix parts of the chacha20 output back into
the state, which, as a result, would cause a slowdown because we couldn't
propagate the complete output of the cipher 

Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Daniel Borkmann

On 12/23/2016 11:59 AM, Hannes Frederic Sowa wrote:

On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:

On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:

On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:

[...]

The hashing is not a proper sha1 either, unfortunately. I think that
is why it will have a custom implementation in iproute2?


Still trying to catch up on this admittedly somewhat confusing thread. I
did run automated tests over a couple of days comparing the data I got
from fdinfo with the one from af_alg and found no mismatch on the test
cases, varying from the minimum to the maximum possible program size. In
the process of testing, as you might have seen on netdev, I found a couple
of other bugs in the bpf code along the way and fixed them up as well. So
my question: do you, Andy, or anyone else participating in this claim have
any concrete data or test cases that suggest something different? If yes,
I'm very curious to hear about it and willing to fix it up, of course.
When I'm back from PTO I'll prep and cook up my test suite to be
included in selftests/bpf/; I should have done this initially,
sorry about that. I'll also post something to expose the alg; that
sounds fine to me.


Looking into your code more closely, I noticed that you do indeed seem to do
the finalization of sha-1 by hand, by aligning and padding the buffer
accordingly and also patching in the necessary payload length.

Apologies from my side for claiming that this is not correct sha1
output; I was only looking at sha_transform and its implementation,
couldn't see the padding and finalization round that embeds the data
length, and hadn't thought of it being done manually.

Anyway, is it difficult to get the sha finalization into some common
code library? It is not very bpf specific and crypto code reviewers
won't find it there at all.


Yes, sure, I'll rework it that way (early next year when I'm back if
that's fine with you).

Thanks,
Daniel
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2 11/12] crypto: atmel-authenc: add support to authenc(hmac(shaX),Y(aes)) modes

2016-12-23 Thread Stephan Müller
On Thursday, 22 December 2016, 17:38:00 CET, Cyrille Pitchen wrote:

Hi Cyrille,

> This patch allows combining the AES and SHA hardware accelerators on
> some Atmel SoCs. Doing so, AES blocks are only written to/read from the
> AES hardware. Those blocks are also transferred from the AES to the SHA
> accelerator internally, without additional accesses to the system busses.
> 
> Hence, the AES and SHA accelerators work in parallel to process all the
> data blocks, instead of serializing the process by (de)crypting those
> blocks first and then authenticating them afterwards, as the generic
> crypto/authenc.c driver does.
> 
> Of course, both the AES and SHA hardware accelerators need to be available
> before we can start to process the data blocks. Hence we use their crypto
> request queue to synchronize both drivers.
> 
> Signed-off-by: Cyrille Pitchen 
> ---
>  drivers/crypto/Kconfig  |  12 +
>  drivers/crypto/atmel-aes-regs.h |  16 ++
>  drivers/crypto/atmel-aes.c  | 471 +++-
>  drivers/crypto/atmel-authenc.h  |  64 ++
>  drivers/crypto/atmel-sha-regs.h |  14 ++
>  drivers/crypto/atmel-sha.c  | 344 +++--
>  6 files changed, 906 insertions(+), 15 deletions(-)
>  create mode 100644 drivers/crypto/atmel-authenc.h
> 
> diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
> index 79564785ae30..719a868d8ea1 100644
> --- a/drivers/crypto/Kconfig
> +++ b/drivers/crypto/Kconfig
> @@ -415,6 +415,18 @@ config CRYPTO_DEV_BFIN_CRC
> Newer Blackfin processors have CRC hardware. Select this if you
> want to use the Blackfin CRC module.
> 
> +config CRYPTO_DEV_ATMEL_AUTHENC
> + tristate "Support for Atmel IPSEC/SSL hw accelerator"
> + depends on (ARCH_AT91 && HAS_DMA) || COMPILE_TEST
> + select CRYPTO_AUTHENC
> + select CRYPTO_DEV_ATMEL_AES
> + select CRYPTO_DEV_ATMEL_SHA
> + help
> +   Some Atmel processors can combine the AES and SHA hw accelerators
> +   to enhance support of IPSEC/SSL.
> +   Select this if you want to use the Atmel modules for
> +   authenc(hmac(shaX),Y(cbc)) algorithms.
> +
>  config CRYPTO_DEV_ATMEL_AES
>   tristate "Support for Atmel AES hw accelerator"
>   depends on HAS_DMA
> diff --git a/drivers/crypto/atmel-aes-regs.h
> b/drivers/crypto/atmel-aes-regs.h index 0ec04407b533..7694679802b3 100644
> --- a/drivers/crypto/atmel-aes-regs.h
> +++ b/drivers/crypto/atmel-aes-regs.h
> @@ -68,6 +68,22 @@
>  #define AES_CTRR 0x98
>  #define AES_GCMHR(x) (0x9c + ((x) * 0x04))
> 
> +#define AES_EMR  0xb0
> +#define AES_EMR_APEN BIT(0)  /* Auto Padding Enable */
> +#define AES_EMR_APM  BIT(1)  /* Auto Padding Mode */
> +#define AES_EMR_APM_IPSEC    0x0
> +#define AES_EMR_APM_SSL  BIT(1)
> +#define AES_EMR_PLIPEN   BIT(4)  /* PLIP Enable */
> +#define AES_EMR_PLIPD    BIT(5)  /* PLIP Decipher */
> +#define AES_EMR_PADLEN_MASK  (0xFu << 8)
> +#define AES_EMR_PADLEN_OFFSET    8
> +#define AES_EMR_PADLEN(padlen)   (((padlen) << AES_EMR_PADLEN_OFFSET) &\
> +  AES_EMR_PADLEN_MASK)
> +#define AES_EMR_NHEAD_MASK   (0xFu << 16)
> +#define AES_EMR_NHEAD_OFFSET 16
> +#define AES_EMR_NHEAD(nhead) (((nhead) << AES_EMR_NHEAD_OFFSET) &\
> +  AES_EMR_NHEAD_MASK)
> +
>  #define AES_TWR(x)   (0xc0 + ((x) * 0x04))
>  #define AES_ALPHAR(x)(0xd0 + ((x) * 0x04))
> 
> diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
> index 9fd2f63b8bc0..3c651e0c3113 100644
> --- a/drivers/crypto/atmel-aes.c
> +++ b/drivers/crypto/atmel-aes.c
> @@ -41,6 +41,7 @@
>  #include 
>  #include 
>  #include "atmel-aes-regs.h"
> +#include "atmel-authenc.h"
> 
>  #define ATMEL_AES_PRIORITY   300
> 
> @@ -78,6 +79,7 @@
>  #define AES_FLAGS_INIT   BIT(2)
>  #define AES_FLAGS_BUSY   BIT(3)
>  #define AES_FLAGS_DUMP_REG   BIT(4)
> +#define AES_FLAGS_OWN_SHA    BIT(5)
> 
>  #define AES_FLAGS_PERSISTENT (AES_FLAGS_INIT | AES_FLAGS_BUSY)
> 
> @@ -92,6 +94,7 @@ struct atmel_aes_caps {
>   bool    has_ctr32;
>   bool    has_gcm;
>   bool    has_xts;
> + bool    has_authenc;
>   u32 max_burst_size;
>  };
> 
> @@ -144,10 +147,31 @@ struct atmel_aes_xts_ctx {
>   u32 key2[AES_KEYSIZE_256 / sizeof(u32)];
>  };
> 
> +#ifdef CONFIG_CRYPTO_DEV_ATMEL_AUTHENC
> +struct atmel_aes_authenc_ctx {
> + struct atmel_aes_base_ctx   base;
> + struct atmel_sha_authenc_ctx    *auth;
> +};
> +#endif
> +
>  struct atmel_aes_reqctx {
>   unsigned long   mode;
>  };
> 
> +#ifdef CONFIG_CRYPTO_DEV_ATMEL_AUTHENC
> +struct atmel_aes_authenc_reqctx {
> + struct atmel_aes_reqctx base;
> +
> + struct scatterlist  src[2];
> + struct scatterlist  dst[2];
> + size_t

Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Hannes Frederic Sowa
On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:
> On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:
> > On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
> > > On Thu, Dec 22, 2016 at 7:51 AM, Hannes Frederic Sowa
> > >  wrote:
> > > > On Thu, 2016-12-22 at 16:41 +0100, Jason A. Donenfeld wrote:
> > > > > On Thu, Dec 22, 2016 at 4:33 PM, Hannes Frederic Sowa
> > > > >  wrote:
> > > > > > IPv6 you cannot touch anymore. The hashing algorithm is part of 
> > > > > > uAPI.
> > > > > > You don't want to give people new IPv6 addresses with the same 
> > > > > > stable
> > > > > > secret (across reboots) after a kernel upgrade. Maybe they lose
> > > > > > connectivity then and it is extra work?
> > > > > 
> > > > > Ahh, too bad. So it goes.
> > > > 
> > > > If no other users survive we can put it into the ipv6 module.
> > > > 
> > > > > > The bpf hash stuff can be changed during this merge window, as it is
> > > > > > not yet in a released kernel. Although I would probably have preferred
> > > > > > something like sha256 here, which can be easily replicated by user
> > > > > > space tools (minus the problem of patching out references to
> > > > > > non-hashable data, which must be zeroed).
> > > > > 
> > > > > Oh, interesting, so time is of the essence then. Do you want to handle
> > > > > changing the new eBPF code to something not-SHA1 before it's too late,
> > > > > as part of a new patchset that can fast track itself to David? And
> > > > > then I can preserve my large series for the next merge window.
> > > > 
> > > > This algorithm should be a non-seeded algorithm, because the hashes
> > > > should be stable and verifiable by user space tooling. Thus this would
> > > > need a hashing algorithm that is hardened against pre-image and
> > > > collision attacks, which siphash is not. I would prefer some
> > > > higher-order SHA algorithm for that, actually.
> > > > 
> > > 
> > > You mean:
> > > 
> > > commit 7bd509e311f408f7a5132fcdde2069af65fa05ae
> > > Author: Daniel Borkmann 
> > > Date:   Sun Dec 4 23:19:41 2016 +0100
> > > 
> > >  bpf: add prog_digest and expose it via fdinfo/netlink
> > > 
> > > 
> > > Yes, please!  This actually matters for security -- imagine a
> > > malicious program brute-forcing a collision so that it gets loaded
> > > wrong.  And this is IMO a use case for SHA-256 or SHA-512/256
> > > (preferably the latter).  Speed basically doesn't matter here and
> > > Blake2 is both less stable (didn't they slightly change it recently?)
> > > and much less well studied.
> > 
> > We don't prevent ebpf programs from being loaded based on the digest; it
> > is only used to uniquely identify loaded programs from user space and
> > match them up with their source.
> >
> > There have been talks about signing bpf programs; that would probably
> > need another digest algorithm in addition to this one, probably wasting
> > instructions. Going in the direction of PKCS#7 might be the thing to do
> > (which leads to the problem of making PKCS#7 attackable by ordinary
> > unprivileged users, hmpf).
> > 
> > > My inclination would have been to seed them with something that isn't
> > > exposed to userspace for the precise reason that it would prevent user
> > > code from making assumptions about what's in the hash.  But if there's
> > > a use case for why user code needs to be able to calculate the hash on
> > > its own, then that's fine.  But perhaps the actual fdinfo string
> > > should be "sha256:abcd1234..." to give some flexibility down the road.
> > > 
> > > Also:
> > > 
> > > +   result = (__force __be32 *)fp->digest;
> > > +   for (i = 0; i < SHA_DIGEST_WORDS; i++)
> > > +   result[i] = cpu_to_be32(fp->digest[i]);
> > > 
> > > Everyone, please, please, please don't open-code crypto primitives.
> > > Is this and the code above it even correct?  It might be but on a very
> > > brief glance it looks wrong to me.  If you're doing this to avoid
> > > depending on crypto, then fix crypto so you can pull in the algorithm
> > > without pulling in the whole crypto core.
> > 
> > The hashing is not a proper sha1 either, unfortunately. I think that
> > is why it will have a custom implementation in iproute2?
> 
> Still trying to catch up on this admittedly somewhat confusing thread. I
> did run automated tests over a couple of days comparing the data I got
> from fdinfo with the one from af_alg and found no mismatch on the test
> cases, varying from the minimum to the maximum possible program size. In
> the process of testing, as you might have seen on netdev, I found a couple
> of other bugs in the bpf code along the way and fixed them up as well. So
> my question: do you, Andy, or anyone else participating in this claim have
> any concrete data or test cases that suggest something different? If yes,
> I'm very curious to hear about it and willing to fix it up, of course.
> When I'm back from PTO I'll prep 

Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Daniel Borkmann

On 12/22/2016 06:25 PM, Andy Lutomirski wrote:

On Thu, Dec 22, 2016 at 8:59 AM, Hannes Frederic Sowa

[...]

I wondered if bpf program loading should have used the module loading
infrastructure from the beginning...


That would be way too complicated and would be nasty for the unprivileged cases.


Also, there are users, be they privileged or not, that don't require a full
obj loader from user space, but are fine with just hard-coding parts or all of
the insns in their application. Back then we settled on using fds based on
Andy's suggestion; it has both ups and downs, as we saw along the way, but has
worked okay thus far.
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BPF hash algo (Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5)

2016-12-23 Thread Daniel Borkmann

On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:

On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:

On Thu, Dec 22, 2016 at 7:51 AM, Hannes Frederic Sowa
 wrote:

On Thu, 2016-12-22 at 16:41 +0100, Jason A. Donenfeld wrote:

On Thu, Dec 22, 2016 at 4:33 PM, Hannes Frederic Sowa
 wrote:

IPv6 you cannot touch anymore. The hashing algorithm is part of uAPI.
You don't want to give people new IPv6 addresses with the same stable
secret (across reboots) after a kernel upgrade. Maybe they lose
connectivity then and it is extra work?


Ahh, too bad. So it goes.


If no other users survive we can put it into the ipv6 module.


The bpf hash stuff can be changed during this merge window, as it is
not yet in a released kernel. Although I would probably have preferred
something like sha256 here, which can be easily replicated by user
space tools (minus the problem of patching out references to
non-hashable data, which must be zeroed).
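
For illustration, the canonicalization that user space would have to replicate
before hashing might look something like the sketch below; the struct mirrors
the layout of the kernel's struct bpf_insn, but the helper name and the exact
set of fields zeroed here are assumptions for illustration rather than the
kernel's code:

#include <stdint.h>
#include <string.h>

/* Mirrors the layout of the kernel's struct bpf_insn. */
struct bpf_insn {
        uint8_t code;
        uint8_t dst_reg:4;
        uint8_t src_reg:4;
        int16_t off;
        int32_t imm;
};

#define BPF_LD_IMM_DW           0x18    /* BPF_LD | BPF_IMM | BPF_DW */
#define BPF_PSEUDO_MAP_FD       1

/* Copy the program and zero the fields that are not stable across loads
 * (map file descriptors embedded in ld_imm64 instructions), so the digest
 * computed over the copy is reproducible by user space tools. */
static void canonicalize_prog(struct bpf_insn *dst, const struct bpf_insn *src,
                              unsigned int len)
{
        unsigned int i;

        memcpy(dst, src, len * sizeof(*dst));
        for (i = 0; i < len; i++) {
                if (dst[i].code == BPF_LD_IMM_DW &&
                    dst[i].src_reg == BPF_PSEUDO_MAP_FD) {
                        dst[i].imm = 0;                 /* the fd itself */
                        if (i + 1 < len)
                                dst[i + 1].imm = 0;     /* upper half of imm64 */
                }
        }
}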


Oh, interesting, so time is of the essence then. Do you want to handle
changing the new eBPF code to something not-SHA1 before it's too late,
as part of a new patchset that can fast track itself to David? And
then I can preserve my large series for the next merge window.


This algorithm should be a non-seeded algorithm, because the hashes
should be stable and verifiable by user space tooling. Thus this would
need a hashing algorithm that is hardened against pre-image and
collision attacks, which siphash is not. I would prefer some
higher-order SHA algorithm for that, actually.



You mean:

commit 7bd509e311f408f7a5132fcdde2069af65fa05ae
Author: Daniel Borkmann 
Date:   Sun Dec 4 23:19:41 2016 +0100

 bpf: add prog_digest and expose it via fdinfo/netlink


Yes, please!  This actually matters for security -- imagine a
malicious program brute-forcing a collision so that it gets loaded
wrong.  And this is IMO a use case for SHA-256 or SHA-512/256
(preferably the latter).  Speed basically doesn't matter here and
Blake2 is both less stable (didn't they slightly change it recently?)
and much less well studied.


We don't prevent ebpf programs from being loaded based on the digest; it
is only used to uniquely identify loaded programs from user space and
match them up with their source.

There have been talks about signing bpf programs; that would probably
need another digest algorithm in addition to this one, probably wasting
instructions. Going in the direction of PKCS#7 might be the thing to do
(which leads to the problem of making PKCS#7 attackable by ordinary
unprivileged users, hmpf).


My inclination would have been to seed them with something that isn't
exposed to userspace for the precise reason that it would prevent user
code from making assumptions about what's in the hash.  But if there's
a use case for why user code needs to be able to calculate the hash on
its own, then that's fine.  But perhaps the actual fdinfo string
should be "sha256:abcd1234..." to give some flexibility down the road.

Also:

+   result = (__force __be32 *)fp->digest;
+   for (i = 0; i < SHA_DIGEST_WORDS; i++)
+   result[i] = cpu_to_be32(fp->digest[i]);

Everyone, please, please, please don't open-code crypto primitives.
Is this and the code above it even correct?  It might be but on a very
brief glance it looks wrong to me.  If you're doing this to avoid
depending on crypto, then fix crypto so you can pull in the algorithm
without pulling in the whole crypto core.


The hashing is not a proper sha1 either, unfortunately. I think that
is why it will have a custom implementation in iproute2?


Still trying to catch up on this admittedly somewhat confusing thread. I
did run automated tests over a couple of days comparing the data I got
from fdinfo with the one from af_alg and found no mismatch on the test
cases, varying from the minimum to the maximum possible program size. In
the process of testing, as you might have seen on netdev, I found a couple
of other bugs in the bpf code along the way and fixed them up as well. So
my question: do you, Andy, or anyone else participating in this claim have
any concrete data or test cases that suggest something different? If yes,
I'm very curious to hear about it and willing to fix it up, of course.
When I'm back from PTO I'll prep and cook up my test suite to be
included in selftests/bpf/; I should have done this initially,
sorry about that. I'll also post something to expose the alg; that
sounds fine to me.


I wondered if bpf program loading should have used the module loading
infrastructure from the beginning...


At the very least, there should be a separate function that calculates
the hash of a buffer and that function should explicitly run itself
against test vectors of various lengths to make sure that it's
calculating what it claims to be calculating.  And it doesn't look
like the userspace code has landed, so, if this thing isn't
calculating SHA1 correctly, it's