[ANNOUNCE] Linux Security Summit North America 2018 - CFP

2018-04-09 Thread James Morris
==
   ANNOUNCEMENT AND CALL FOR PARTICIPATION

   LINUX SECURITY SUMMIT NORTH AMERICA 2018
 
 27-28 August
   VANCOUVER, CANADA
==


DESCRIPTION

  The Linux Security Summit (LSS) is a technical forum for collaboration
  between Linux developers, researchers, and end users. Its primary aim is to
  foster community efforts in analyzing and solving Linux security challenges.

  LSS will be held this year as two separate events, one in North America
  (LSS-NA), and one in Europe (LSS-EU), to facilitate broader participation in
  Linux Security development. Note that this CFP is for LSS-NA; a separate CFP
  will be announced for LSS-EU in May. We encourage everyone to attend both
  events.

  The program committee currently seeks proposals for:

* Refereed Presentations:
  45 minutes in length.

* Panel Discussion Topics:
  45 minutes in length.

* Short Topics:
  30 minutes in total, including at least 10 minutes discussion.

* BoF Sessions.

  Topic areas include, but are not limited to:

* Kernel self-protection
* Access control
* Cryptography and key management
* Integrity control
* Hardware security
* IoT and embedded security
* Virtualization and containers
* System-specific hardening
* Case studies
* Security tools
* Security UX
* Emerging technologies, threats & techniques 

  Proposals should be submitted via:

https://events.linuxfoundation.org/events/linux-security-summit-north-america-2018/program/cfp/


DATES

  * CFP Close: June 3, 2018
  * CFP Notifications: June 11, 2018
  * Schedule Announced: June 25, 2018
  * Event: August 27-28, 2018


WHO SHOULD ATTEND

  We're seeking a diverse range of attendees, and welcome participation by
  people involved in Linux security development, operations, and research.

  The LSS is a unique global event which provides the opportunity to present
  and discuss your work or research with key Linux security community members
  and maintainers. It’s also useful for those who wish to keep up with the
  latest in Linux security development, and to provide input to the
  development process.


WEB SITE

  
https://events.linuxfoundation.org/events/linux-security-summit-north-america-2018/


TWITTER

  For event updates and announcements, follow:

https://twitter.com/LinuxSecSummit
  

PROGRAM COMMITTEE

  The program committee for LSS 2018 is:

* James Morris, Microsoft
* Serge Hallyn, Cisco
* Paul Moore, Red Hat
* Stephen Smalley, NSA
* Elena Reshetova, Intel
* John Johansen, Canonical
* Kees Cook, Google
* Casey Schaufler, Intel
* Mimi Zohar, IBM
* David A. Wheeler, Institute for Defense Analyses

  The program committee may be contacted as a group via email:
lss-pc () lists.linuxfoundation.org


Re: [RFC PATCH] crypto: pcrypt - forbid recursive instantiation

2018-04-09 Thread Eric Biggers
On Mon, Apr 09, 2018 at 10:58:08AM +0200, Steffen Klassert wrote:
> On Sun, Apr 08, 2018 at 03:55:28PM -0700, Eric Biggers wrote:
> > On Fri, Mar 23, 2018 at 08:21:52AM +0800, Herbert Xu wrote:
> > > On Sat, Mar 10, 2018 at 03:22:31PM -0800, Eric Biggers wrote:
> > > > From: Eric Biggers 
> > > > 
> > > > If the pcrypt template is used multiple times in an algorithm, then a
> > > > deadlock occurs because all pcrypt instances share the same
> > > > padata_instance, which completes requests in the order submitted.  That
> > > > is, the inner pcrypt request waits for the outer pcrypt request while
> > > > the outer request is already waiting for the inner.
> > > > 
> > > > Fix this by making pcrypt forbid instantiation if pcrypt appears in the
> > > > underlying ->cra_driver_name.  This is somewhat of a hack, but it's a
> > > > simple fix that should be sufficient to prevent the deadlock.
> > > 
> > > I'm a bit uneasy with this fix.  What if pcrypt is used in the
> > > underlying algorithm as a fallback? Wouldn't it still dead-lock
> > > without triggering this check?
> > > 
> > 
> > Yep, I think you're right.
> > 
> > Steffen, how feasible do you think it would be to make each pcrypt
> > instance use its own padata instance, so different pcrypt instances
> > aren't ordered with respect to each other?
> 
> It should be possible at least, but it requires some work.
> I already had patches to create multiple independent pcrypt
> instances to better support pcrypt for different workloads
> and NUMA machines. I never finalized this because of the
> lack of NUMA hardware to test on, but the code is still
> available; maybe it can help:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/klassert/linux-stk.git/log/?h=net-next-pcrypt
> 
> > Or do you think there is a better solution?
> 
> Not really.
> 
> Btw, is there a valid use case where a certain template
> is used twice in one algorithm instance?
> 

None that I can think of right now.  The problem is that it's hard to prevent
template recursion when users can specify arbitrary algorithms using AF_ALG, and
some algorithms even use fallbacks which may use templates not explicitly listed
in the driver name.

Remember, a kernel deadlock is a bug even if it's "not a valid use case"...
In this case it can even be triggered by an unprivileged user via AF_ALG.
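
A minimal sketch of the check described in the original patch
(hypothetical helper name, assuming the usual struct crypto_alg fields;
the real patch may structure it differently):

	/* Refuse to instantiate pcrypt on top of an algorithm whose driver
	 * name already contains "pcrypt": all pcrypt instances share one
	 * padata instance, so nesting them deadlocks. */
	static int pcrypt_check_recursion(struct crypto_alg *alg)
	{
		if (strstr(alg->cra_driver_name, "pcrypt"))
			return -EINVAL;
		return 0;
	}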

Eric


Re: CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Martin Townsend
Hi Mike,

On Mon, Apr 9, 2018 at 5:53 PM, Mike Rapoport  wrote:
> (added CAAM maintainers)
>
> On Mon, Apr 09, 2018 at 03:10:11PM +0100, Martin Townsend wrote:
>> Hi Mimi,
>>
>> On Mon, Apr 9, 2018 at 1:46 PM, Mimi Zohar  wrote:
>> > On Mon, 2018-04-09 at 09:41 +0100, Martin Townsend wrote:
>> >> Hi,
>> >>
>> >> I'm trying to get to the bottom of an issue I'm seeing when enabling
>> >> the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
>> >> NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.
>> >>
>
> [ snip ]
>
>> I think the integrity error message is a side effect of the previous
>> error, i.e. we are getting this error message because the CAAM is
>> failing to verify the certificate's signature, and hence IMA fails to
>> load the certificate onto the keyring.  If I disable CAAM then
>> everything works as expected.  What I'm trying to get to the bottom of
>> is why CAAM is failing to verify the signature.  Further below in the
>> email I have determined that the signature is 257 bytes; do you think
>> this is correct?  I've read a post here:
>> https://crypto.stackexchange.com/questions/3505/what-is-the-length-of-an-rsa-signature
>>
>> That says that for PKCS#1 the signature should always be the size of
>> the modulus, but then it goes on to say: "In some protocols, there can
>> be some wrapping around the signature, e.g. to indicate which
>> algorithm was used."  I'm wondering if that's what I'm seeing: an
>> extra byte in the signature indicating the algorithm used, which is
>> also passed to the CAAM and causes it to fail, since the signature is
>> then larger than the modulus.  But I don't know what I can do about
>> this; I'm not even sure which protocol is generating this extra byte.
>> Any suggestions on how to find this out would be appreciated.
>
> I don't really understand crypto and I may be talking complete nonsense, but
> judging by the diffstat between imx_4.9.11_1.0.0_ga and mainline, the CAAM
> driver has had a lot of additions since then :)
>
> The caam_rsa_dec function gained form 2 and form 3 RSA decoding.
> That might explain your issue...
>

Thanks, very interesting and definitely worth investigating; hopefully
the changes will backport fairly easily.

I'm not seeing any caam_rsa_dec, but I do see caam_rsa_enc; is this
expected when verifying signatures?
public_key_verify_signature -> pkcs1pad_verify -> caam_rsa_enc
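
(For context: raw RSA signature verification computes m = s^e mod n with
the public exponent, the same modular exponentiation as raw encryption,
so the verify path ending up in the driver's encrypt primitive is
mathematically expected.)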

>> >
>> >
>> >> I put a dump_stack in the error handling routine in CAAM and in the
>> >> caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
>> >> dump_stacks during initialisation to highlight the one of interest)
>
> Does any of the other caam_jr_enqueue stacks include a
> load_system_certificate_list() call?

Not that I've seen; the only stack traces I've seen stem from
integrity_load_x509. The IMA x509 certificate is the first thing that
gets processed by CAAM.

> If you have an x509 certificate built into the kernel, I presume it will
> also pass through caam_rsa_dec.
>
> You can also try using the built-in certificate as the IMA x509
> certificate and see if it changes anything.

I'm compiling the certificate into the kernel as it's required so
early in the boot for IMA.


>
>> >> caam 214.caam: ERA source: CCBVID.
>> >> caam 214.caam: Entropy delay = 3200
>> >> caam 214.caam: Instantiated RNG4 SH0
>> >> caam 214.caam: Instantiated RNG4 SH1
>> >> caam 214.caam: device ID = 0x0a160300 (Era 8)
>> >> caam 214.caam: job rings = 3, qi = 0
>> >> caam algorithms registered in /proc/crypto
>> >> caam_jr_enqueue
>> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> >> caam_jr_enqueue
>> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> >> caam_jr 2141000.jr0: registering rng-caam
>> >> caam 214.caam: caam pkc algorithms registered in /proc/crypto
>> >> caam_jr_enqueue
>> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> >> caam_rsa_set_pub_key
>> >> caam_rsa_max_size
>> >> caam_rsa_enc
>> >> caam_jr_enqueue
>> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> >> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
>> >> [<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
>> >> [<80608f74>] (caam_jr_enqueue) from [<8061fb84>] 
>> >> (caam_rsa_enc+0x210/0x3ac)
>> >> [<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
>> >> [<803ed910>] (pkcs1pad_verify) from [<804134e4>]
>> >> (public_key_verify_signature+0x17c/0x248)
>> >> [<804134e4>] (public_key_verify_signature) from [<804132a0>]
>> >> (restrict_link_by_signature+0xa8/0xd4)
>> >> [<804132a0>] (restrict_link_by_signature) from 

Re: CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Mike Rapoport
(added CAAM maintainers)

On Mon, Apr 09, 2018 at 03:10:11PM +0100, Martin Townsend wrote:
> Hi Mimi,
> 
> On Mon, Apr 9, 2018 at 1:46 PM, Mimi Zohar  wrote:
> > On Mon, 2018-04-09 at 09:41 +0100, Martin Townsend wrote:
> >> Hi,
> >>
> >> I'm trying to get to the bottom of an issue I'm seeing when enabling
> >> the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
> >> NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.
> >>

[ snip ]

> I think the integrity error message is a side effect of the previous
> error, i.e. we are getting this error message because the CAAM is
> failing to verify the certificate's signature, and hence IMA fails to
> load the certificate onto the keyring.  If I disable CAAM then
> everything works as expected.  What I'm trying to get to the bottom of
> is why CAAM is failing to verify the signature.  Further below in the
> email I have determined that the signature is 257 bytes; do you think
> this is correct?  I've read a post here:
> https://crypto.stackexchange.com/questions/3505/what-is-the-length-of-an-rsa-signature
> 
> That says that for PKCS#1 the signature should always be the size of
> the modulus, but then it goes on to say: "In some protocols, there can
> be some wrapping around the signature, e.g. to indicate which
> algorithm was used."  I'm wondering if that's what I'm seeing: an
> extra byte in the signature indicating the algorithm used, which is
> also passed to the CAAM and causes it to fail, since the signature is
> then larger than the modulus.  But I don't know what I can do about
> this; I'm not even sure which protocol is generating this extra byte.
> Any suggestions on how to find this out would be appreciated.

I don't really understand crypto and I may be talking complete nonsense, but
judging by the diffstat between imx_4.9.11_1.0.0_ga and mainline, the CAAM
driver has had a lot of additions since then :)

The caam_rsa_dec function gained form 2 and form 3 RSA decoding.
That might explain your issue...
 
> >
> >
> >> I put a dump_stack in the error handling routine in CAAM and in the
> >> caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
> >> dump_stacks during initialisation to highlight the one of interest)

Does any of the other caam_jr_enqueue stacks include a
load_system_certificate_list() call?
If you have an x509 certificate built into the kernel, I presume it will
also pass through caam_rsa_dec.

You can also try using the built-in certificate as the IMA x509 certificate
and see if it changes anything.

> >> caam 214.caam: ERA source: CCBVID.
> >> caam 214.caam: Entropy delay = 3200
> >> caam 214.caam: Instantiated RNG4 SH0
> >> caam 214.caam: Instantiated RNG4 SH1
> >> caam 214.caam: device ID = 0x0a160300 (Era 8)
> >> caam 214.caam: job rings = 3, qi = 0
> >> caam algorithms registered in /proc/crypto
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_jr 2141000.jr0: registering rng-caam
> >> caam 214.caam: caam pkc algorithms registered in /proc/crypto
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_rsa_set_pub_key
> >> caam_rsa_max_size
> >> caam_rsa_enc
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
> >> [<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
> >> [<80608f74>] (caam_jr_enqueue) from [<8061fb84>] (caam_rsa_enc+0x210/0x3ac)
> >> [<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
> >> [<803ed910>] (pkcs1pad_verify) from [<804134e4>]
> >> (public_key_verify_signature+0x17c/0x248)
> >> [<804134e4>] (public_key_verify_signature) from [<804132a0>]
> >> (restrict_link_by_signature+0xa8/0xd4)
> >> [<804132a0>] (restrict_link_by_signature) from [<803cadd4>]
> >> (key_create_or_update+0x12c/0x370)
> >> [<803cadd4>] (key_create_or_update) from [<80c253a0>]
> >> (integrity_load_x509+0x70/0xc4)
> >> [<80c253a0>] (integrity_load_x509) from [<80c256bc>] 
> >> (ima_load_x509+0x28/0x3c)
> >> [<80c256bc>] (ima_load_x509) from [<80c2510c>] 
> >> (integrity_load_keys+0x8/0x10)
> >> [<80c2510c>] (integrity_load_keys) from [<80c00de0>]
> >> (kernel_init_freeable+0x1bc/0x1cc)
> >> [<80c00de0>] (kernel_init_freeable) from [<80844ccc>] 
> >> (kernel_init+0x8/0x114)
> >> [<80844ccc>] (kernel_init) from [<801078d8>] (ret_from_fork+0x14/0x3c)
> >> CPU: 0 PID: 112 Comm: irq/214-2142000 Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device 

Re: [PATCH v2 0/2] crypto: removing various VLAs

2018-04-09 Thread Salvatore Mesoraca
2018-04-09 16:35 GMT+02:00 David Laight :
> From: Salvatore Mesoraca
>> Sent: 09 April 2018 14:55
>>
>> v2:
>>   As suggested by Herbert Xu, the blocksize and alignmask checks
>>   have been moved to crypto_check_alg.
>>   So, now, all the other separate checks are not necessary.
>>   Also, the defines have been moved to include/crypto/algapi.h.
>>
>> v1:
>>   As suggested by Laura Abbott[1], I'm resending my patch with
>>   MAX_BLOCKSIZE and MAX_ALIGNMASK defined in a header, so they
>>   can be used in other places.
>>   I took this opportunity to deal with some other VLAs not
>>   handled in the old patch.
>
> If the constants are visible they need better names.
> Maybe CRYPTO_MAX_xxx.

You are right; in fact I renamed them, but forgot to mention it in the
change log. The new names look like MAX_CIPHER_*.

> You can also do much better than allocating MAX_BLOCKSIZE + MAX_ALIGNMASK
> bytes by requesting 'long' aligned on-stack memory.
> The easiest way is to define a union like:
>
> union crypto_tmp {
> u8 buf[CRYPTO_MAX_TMP_BUF];
> long buf_align;
> };
>
> Then in each function:
>
> union crypto_tmp tmp;
> u8 *keystream = PTR_ALIGN(tmp.buf, alignmask + 1);
>
> I think CRYPTO_MAX_TMP_BUF needs to be MAX_BLOCKSIZE + MAX_ALIGNMASK - sizeof(long).

Yeah, that would be nice; it might save us 4-8 bytes on the stack.
But I was thinking: wouldn't it be even better to do something like:

u8 buf[CRYPTO_MAX_TMP_BUF] __aligned(__alignof__(long));
u8 *keystream = PTR_ALIGN(buf, alignmask + 1);

In this case __aligned should work, if I'm not missing some other
subtle GCC caveat.
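
For illustration, a minimal stand-alone version of that idea (user-space
sketch with PTR_ALIGN expanded by hand; MAX_BLOCKSIZE = 16 and
MAX_ALIGNMASK = 15 are assumed, as elsewhere in this series):

	#include <stdint.h>
	#include <stdio.h>

	/* Round p up to the next multiple of a (a power of two), like the
	 * kernel's PTR_ALIGN. */
	#define PTR_ALIGN(p, a) \
		((uint8_t *)(((uintptr_t)(p) + ((a) - 1)) & ~((uintptr_t)(a) - 1)))

	int main(void)
	{
		unsigned long alignmask = 15;	/* example cipher alignmask */
		uint8_t buf[16 + 15] __attribute__((aligned(__alignof__(long))));
		uint8_t *keystream = PTR_ALIGN(buf, alignmask + 1);

		/* keystream is 16-byte aligned with >= 16 usable bytes left */
		printf("buf=%p keystream=%p\n", (void *)buf, (void *)keystream);
		return 0;
	}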

Thank you,

Salvatore


[PATCH v2 1/5] crypto: thunderx_zip: Fix fallout from CONFIG_VMAP_STACK

2018-04-09 Thread Jan Glauber
Enabling virtually mapped kernel stacks breaks the thunderx_zip
driver. On compression or decompression the executing CPU hangs
in an endless loop. The reason for this is the driver's use of
__pa, which no longer works for an address that is not part of
the 1:1 mapping.

The zip driver allocates a result struct on the stack and needs
to tell the hardware the physical address within this struct
that is used to signal the completion of the request.

As the hardware gets the wrong address after the broken __pa
conversion it writes to an arbitrary address. The zip driver then
waits forever for the completion byte to contain a non-zero value.

Allocating the result struct from 1:1 mapped memory resolves this
bug.
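
For illustration only (not part of the patch): with CONFIG_VMAP_STACK
the on-stack struct lives in the vmalloc area, where the linear-map
conversion done by __pa is invalid. A conceptual sketch of the
distinction, using existing kernel helpers:

	/* __pa() is only valid for addresses in the kernel's 1:1 (linear)
	 * mapping; a stack variable under CONFIG_VMAP_STACK is not, so its
	 * physical address must come from the page tables instead. */
	static phys_addr_t addr_to_phys(void *p)
	{
		if (is_vmalloc_addr(p))
			return PFN_PHYS(vmalloc_to_pfn(p)) + offset_in_page(p);
		return __pa(p);
	}

The patch takes the simpler and safer route of moving the struct off the
stack entirely, so the plain __pa conversion stays valid.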

Signed-off-by: Jan Glauber 
Reviewed-by: Robert Richter 
Cc: stable  # 4.14
---
 drivers/crypto/cavium/zip/zip_crypto.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/cavium/zip/zip_crypto.c 
b/drivers/crypto/cavium/zip/zip_crypto.c
index 8df4d26cf9d4..b92b6e7e100f 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.c
+++ b/drivers/crypto/cavium/zip/zip_crypto.c
@@ -124,7 +124,7 @@ int zip_compress(const u8 *src, unsigned int slen,
 struct zip_kernel_ctx *zip_ctx)
 {
struct zip_operation  *zip_ops   = NULL;
-   struct zip_state  zip_state;
+   struct zip_state  *zip_state;
struct zip_device *zip = NULL;
int ret;
 
@@ -135,20 +135,23 @@ int zip_compress(const u8 *src, unsigned int slen,
if (!zip)
return -ENODEV;
 
-   memset(&zip_state, 0, sizeof(struct zip_state));
+   zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
+   if (!zip_state)
+   return -ENOMEM;
+
zip_ops = &zip_ctx->zip_comp;
 
zip_ops->input_len  = slen;
zip_ops->output_len = *dlen;
memcpy(zip_ops->input, src, slen);
 
-   ret = zip_deflate(zip_ops, &zip_state, zip);
+   ret = zip_deflate(zip_ops, zip_state, zip);
 
if (!ret) {
*dlen = zip_ops->output_len;
memcpy(dst, zip_ops->output, *dlen);
}
-
+   kfree(zip_state);
return ret;
 }
 
@@ -157,7 +160,7 @@ int zip_decompress(const u8 *src, unsigned int slen,
   struct zip_kernel_ctx *zip_ctx)
 {
struct zip_operation  *zip_ops   = NULL;
-   struct zip_state  zip_state;
+   struct zip_state  *zip_state;
struct zip_device *zip = NULL;
int ret;
 
@@ -168,7 +171,10 @@ int zip_decompress(const u8 *src, unsigned int slen,
if (!zip)
return -ENODEV;
 
-   memset(&zip_state, 0, sizeof(struct zip_state));
+   zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
+   if (!zip_state)
+   return -ENOMEM;
+
zip_ops = &zip_ctx->zip_decomp;
memcpy(zip_ops->input, src, slen);
 
@@ -179,13 +185,13 @@ int zip_decompress(const u8 *src, unsigned int slen,
zip_ops->input_len  = slen;
zip_ops->output_len = *dlen;
 
-   ret = zip_inflate(zip_ops, &zip_state, zip);
+   ret = zip_inflate(zip_ops, zip_state, zip);
 
if (!ret) {
*dlen = zip_ops->output_len;
memcpy(dst, zip_ops->output, *dlen);
}
-
+   kfree(zip_state);
return ret;
 }
 
-- 
2.16.2



[PATCH v2 0/5] ThunderX ZIP driver bug fixes

2018-04-09 Thread Jan Glauber
Some bug fixes for this driver after it stopped working with virtually
mapped stacks. I think the first two patches qualify for stable.

Jan Glauber (5):
  crypto: thunderx_zip: Fix fallout from CONFIG_VMAP_STACK
  crypto: thunderx_zip: Limit result reading attempts
  crypto: thunderx_zip: Prevent division by zero
  crypto: thunderx_zip: Fix statistics pending request value
  crypto: thunderx_zip: Fix smp_processor_id() warnings

 drivers/crypto/cavium/zip/common.h  | 21 +
 drivers/crypto/cavium/zip/zip_crypto.c  | 22 ++
 drivers/crypto/cavium/zip/zip_deflate.c |  4 ++--
 drivers/crypto/cavium/zip/zip_device.c  |  4 ++--
 drivers/crypto/cavium/zip/zip_inflate.c |  4 ++--
 drivers/crypto/cavium/zip/zip_main.c| 24 +++-
 drivers/crypto/cavium/zip/zip_main.h|  1 -
 7 files changed, 52 insertions(+), 28 deletions(-)

-- 
2.16.2



[PATCH v2 5/5] crypto: thunderx_zip: Fix smp_processor_id() warnings

2018-04-09 Thread Jan Glauber
Switch to raw_smp_processor_id() to prevent a number of
warnings from kernel debugging. We do not care about
preemption here, as the CPU number is only used as a
poor man's load balancing or device selection. If preemption
happens during a compress/decompress operation, a small
performance hit will occur, but everything will continue to
work, so just ignore it.

Signed-off-by: Jan Glauber 
Reviewed-by: Robert Richter 
---
 drivers/crypto/cavium/zip/zip_device.c | 4 ++--
 drivers/crypto/cavium/zip/zip_main.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/cavium/zip/zip_device.c 
b/drivers/crypto/cavium/zip/zip_device.c
index ccf21fb91513..f174ec29ed69 100644
--- a/drivers/crypto/cavium/zip/zip_device.c
+++ b/drivers/crypto/cavium/zip/zip_device.c
@@ -87,12 +87,12 @@ u32 zip_load_instr(union zip_inst_s *instr,
 * Distribute the instructions between the enabled queues based on
 * the CPU id.
 */
-   if (smp_processor_id() % 2 == 0)
+   if (raw_smp_processor_id() % 2 == 0)
queue = 0;
else
queue = 1;
 
-   zip_dbg("CPU Core: %d Queue number:%d", smp_processor_id(), queue);
+   zip_dbg("CPU Core: %d Queue number:%d", raw_smp_processor_id(), queue);
 
/* Take cmd buffer lock */
spin_lock(&zip_dev->iq[queue].lock);
diff --git a/drivers/crypto/cavium/zip/zip_main.c 
b/drivers/crypto/cavium/zip/zip_main.c
index ae5b20c695ca..be055b9547f6 100644
--- a/drivers/crypto/cavium/zip/zip_main.c
+++ b/drivers/crypto/cavium/zip/zip_main.c
@@ -113,7 +113,7 @@ struct zip_device *zip_get_device(int node)
  */
 int zip_get_node_id(void)
 {
-   return cpu_to_node(smp_processor_id());
+   return cpu_to_node(raw_smp_processor_id());
 }
 
 /* Initializes the ZIP h/w sub-system */
-- 
2.16.2



[PATCH v2 4/5] crypto: thunderx_zip: Fix statistics pending request value

2018-04-09 Thread Jan Glauber
The pending request counter was read from the wrong register. While
at it, there is no need to use an atomic for it, as it is only read
locally in a loop.

Signed-off-by: Jan Glauber 
Reviewed-by: Robert Richter 
---
 drivers/crypto/cavium/zip/zip_main.c | 13 +
 drivers/crypto/cavium/zip/zip_main.h |  1 -
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/cavium/zip/zip_main.c 
b/drivers/crypto/cavium/zip/zip_main.c
index 79b449e0f955..ae5b20c695ca 100644
--- a/drivers/crypto/cavium/zip/zip_main.c
+++ b/drivers/crypto/cavium/zip/zip_main.c
@@ -469,6 +469,8 @@ static int zip_show_stats(struct seq_file *s, void *unused)
struct zip_stats  *st;
 
for (index = 0; index < MAX_ZIP_DEVICES; index++) {
+   u64 pending = 0;
+
if (zip_dev[index]) {
zip = zip_dev[index];
st  = &zip->stats;
@@ -476,10 +478,8 @@ static int zip_show_stats(struct seq_file *s, void *unused)
/* Get all the pending requests */
for (q = 0; q < ZIP_NUM_QUEUES; q++) {
val = zip_reg_read((zip->reg_base +
-   ZIP_DBG_COREX_STA(q)));
-   val = (val >> 32);
-   val = val & 0xff;
-   atomic64_add(val, &st->pending_req);
+   ZIP_DBG_QUEX_STA(q)));
+   pending += val >> 32 & 0xff;
}
 
val = atomic64_read(&st->comp_req_complete);
@@ -514,10 +514,7 @@ static int zip_show_stats(struct seq_file *s, void *unused)
   (u64)atomic64_read(&st->decomp_in_bytes),
   (u64)atomic64_read(&st->decomp_out_bytes),
   (u64)atomic64_read(&st->decomp_bad_reqs),
-  (u64)atomic64_read(&st->pending_req));
-
-   /* Reset pending requests count */
-   atomic64_set(&st->pending_req, 0);
+  pending);
}
}
return 0;
diff --git a/drivers/crypto/cavium/zip/zip_main.h 
b/drivers/crypto/cavium/zip/zip_main.h
index 64e051f60784..e1e4fa92ce80 100644
--- a/drivers/crypto/cavium/zip/zip_main.h
+++ b/drivers/crypto/cavium/zip/zip_main.h
@@ -74,7 +74,6 @@ struct zip_stats {
atomic64_t comp_req_complete;
atomic64_t decomp_req_submit;
atomic64_t decomp_req_complete;
-   atomic64_t pending_req;
atomic64_t comp_in_bytes;
atomic64_t comp_out_bytes;
atomic64_t decomp_in_bytes;
-- 
2.16.2



[PATCH v2 3/5] crypto: thunderx_zip: Prevent division by zero

2018-04-09 Thread Jan Glauber
Avoid two potential divisions by zero when calculating average
values for the zip statistics.

Signed-off-by: Jan Glauber 
Reviewed-by: Robert Richter 
---
 drivers/crypto/cavium/zip/zip_main.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/cavium/zip/zip_main.c 
b/drivers/crypto/cavium/zip/zip_main.c
index 1cd8aa488185..79b449e0f955 100644
--- a/drivers/crypto/cavium/zip/zip_main.c
+++ b/drivers/crypto/cavium/zip/zip_main.c
@@ -482,10 +482,11 @@ static int zip_show_stats(struct seq_file *s, void 
*unused)
atomic64_add(val, &st->pending_req);
}
 
-   avg_chunk = (atomic64_read(&st->comp_in_bytes) /
-atomic64_read(&st->comp_req_complete));
-   avg_cr = (atomic64_read(&st->comp_in_bytes) /
- atomic64_read(&st->comp_out_bytes));
+   val = atomic64_read(&st->comp_req_complete);
+   avg_chunk = (val) ? atomic64_read(&st->comp_in_bytes) / val : 0;
+
+   val = atomic64_read(&st->comp_out_bytes);
+   avg_cr = (val) ? atomic64_read(&st->comp_in_bytes) / val : 0;
seq_printf(s, "ZIP Device %d Stats\n"
  "---\n"
  "Comp Req Submitted: \t%lld\n"
-- 
2.16.2



[PATCH v2 2/5] crypto: thunderx_zip: Limit result reading attempts

2018-04-09 Thread Jan Glauber
After issuing a request, an endless loop was used to read the
completion state from memory, which is asynchronously updated
by the ZIP coprocessor.

Add an upper bound to the retry attempts to prevent a CPU getting stuck
forever in case of an error. Additionally, add a read memory barrier
and a small delay between the reading attempts.

Signed-off-by: Jan Glauber 
Reviewed-by: Robert Richter 
Cc: stable  # 4.14
---
 drivers/crypto/cavium/zip/common.h  | 21 +
 drivers/crypto/cavium/zip/zip_deflate.c |  4 ++--
 drivers/crypto/cavium/zip/zip_inflate.c |  4 ++--
 3 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/cavium/zip/common.h 
b/drivers/crypto/cavium/zip/common.h
index dc451e0a43c5..58fb3ed6e644 100644
--- a/drivers/crypto/cavium/zip/common.h
+++ b/drivers/crypto/cavium/zip/common.h
@@ -46,8 +46,10 @@
 #ifndef __COMMON_H__
 #define __COMMON_H__
 
+#include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -149,6 +151,25 @@ struct zip_operation {
u32   sizeofzops;
 };
 
+static inline int zip_poll_result(union zip_zres_s *result)
+{
+   int retries = 1000;
+
+   while (!result->s.compcode) {
+   if (!--retries) {
+   pr_err("ZIP ERR: request timed out");
+   return -ETIMEDOUT;
+   }
+   udelay(10);
+   /*
+* Force re-reading of compcode which is updated
+* by the ZIP coprocessor.
+*/
+   rmb();
+   }
+   return 0;
+}
+
 /* error messages */
 #define zip_err(fmt, args...) pr_err("ZIP ERR:%s():%d: " \
  fmt "\n", __func__, __LINE__, ## args)
diff --git a/drivers/crypto/cavium/zip/zip_deflate.c 
b/drivers/crypto/cavium/zip/zip_deflate.c
index 9a944b8c1e29..d7133f857d67 100644
--- a/drivers/crypto/cavium/zip/zip_deflate.c
+++ b/drivers/crypto/cavium/zip/zip_deflate.c
@@ -129,8 +129,8 @@ int zip_deflate(struct zip_operation *zip_ops, struct 
zip_state *s,
/* Stats update for compression requests submitted */
atomic64_inc(&zip_dev->stats.comp_req_submit);
 
-   while (!result_ptr->s.compcode)
-   continue;
+   /* Wait for completion or error */
+   zip_poll_result(result_ptr);
 
/* Stats update for compression requests completed */
atomic64_inc(&zip_dev->stats.comp_req_complete);
diff --git a/drivers/crypto/cavium/zip/zip_inflate.c 
b/drivers/crypto/cavium/zip/zip_inflate.c
index 50cbdd83dbf2..7e0d73e2f89e 100644
--- a/drivers/crypto/cavium/zip/zip_inflate.c
+++ b/drivers/crypto/cavium/zip/zip_inflate.c
@@ -143,8 +143,8 @@ int zip_inflate(struct zip_operation *zip_ops, struct 
zip_state *s,
/* Decompression requests submitted stats update */
atomic64_inc(&zip_dev->stats.decomp_req_submit);
 
-   while (!result_ptr->s.compcode)
-   continue;
+   /* Wait for completion or error */
+   zip_poll_result(result_ptr);
 
/* Decompression requests completed stats update */
atomic64_inc(&zip_dev->stats.decomp_req_complete);
-- 
2.16.2



Re: CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Mimi Zohar
On Mon, 2018-04-09 at 15:10 +0100, Martin Townsend wrote:
> Hi Mimi,
> 
> On Mon, Apr 9, 2018 at 1:46 PM, Mimi Zohar  wrote:
> > On Mon, 2018-04-09 at 09:41 +0100, Martin Townsend wrote:
> >> Hi,
> >>
> >> I'm trying to get to the bottom of an issue I'm seeing when enabling
> >> the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
> >> NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.
> >>
> >> Here's the error message I'm getting.
> >>
> >> caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
> >> A protocol has seen an error in size. When running RSA, pdb size N <
> >> (size of F) when no formatting is used; or pdb size N < (F + 11) when
> >> formatting is used.
> >> integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der
> >
> > This error message indicates that the kernel is trying to load a key
> > onto the IMA keyring, but any key added to the trusted IMA keyring
> > (.ima) must be signed by a key on the builtin (or secondary) keyring.
> >
> > Please verify that the public key used to sign the ima-x509.der is on
> > the builtin keyring (eg. "keyctl show %keyring:.builtin_trusted_keys)
> > or the secondary keyring.  Depending on how the kernel was configured,
> > loading the public CA onto the builtin (or secondary) keyring might be
> > possible.
> >
> > Some distros are carrying patches which load the UEFI keys onto the
> > secondary keyring.  The other (preferred) method would be to insert
> > the CA certificate into the kernel and resign the kernel.  This
> > requires reserving memory for the key when the kernel is built.
> >
> > CONFIG_SYSTEM_EXTRA_CERTIFICATE=y
> > CONFIG_SYSTEM_EXTRA_CERTIFICATE_SIZE=4096
> >
> > Mimi
> 
> I think the integrity error message is a side effect of the previous
> error, i.e. we are getting this error message because the CAAM is
> failing to verify the certificate's signature, and hence IMA fails to
> load the certificate onto the keyring.  If I disable CAAM then
> everything works as expected.

Thanks!

> What I'm trying to get to the bottom of
> is why CAAM is failing to verify the signature.  Further below in the
> email I have determined that the signature is 257 bytes; do you think
> this is correct?

Sorry, I'm not going to be of much help here.  Remember, everything
that CAAM uses (e.g. kernel crypto modules) must be built into the
kernel until a key is loaded onto the IMA keyring.  Perhaps
enabling pr_devel/pr_debug in crypto/asymmetric_keys/ might provide
some additional information.

Mimi


> I've read a post here:
> https://crypto.stackexchange.com/questions/3505/what-is-the-length-of-an-rsa-signature

> That says that for PKCS#1 the signature should always be the size of
> the modulus, but then it goes on to say: "In some protocols, there can
> be some wrapping around the signature, e.g. to indicate which
> algorithm was used."  I'm wondering if that's what I'm seeing: an
> extra byte in the signature indicating the algorithm used, which is
> also passed to the CAAM and causes it to fail, since the signature is
> then larger than the modulus.  But I don't know what I can do about
> this; I'm not even sure which protocol is generating this extra byte.
> Any suggestions on how to find this out would be appreciated.
> 
> >
> >
> >> I put a dump_stack in the error handling routine in CAAM and in the
> >> caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
> >> dump_stacks during initialisation to highlight the one of interest)
> >>
> >> caam 214.caam: ERA source: CCBVID.
> >> caam 214.caam: Entropy delay = 3200
> >> caam 214.caam: Instantiated RNG4 SH0
> >> caam 214.caam: Instantiated RNG4 SH1
> >> caam 214.caam: device ID = 0x0a160300 (Era 8)
> >> caam 214.caam: job rings = 3, qi = 0
> >> caam algorithms registered in /proc/crypto
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_jr 2141000.jr0: registering rng-caam
> >> caam 214.caam: caam pkc algorithms registered in /proc/crypto
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> caam_rsa_set_pub_key
> >> caam_rsa_max_size
> >> caam_rsa_enc
> >> caam_jr_enqueue
> >> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> >> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> >> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
> >> [<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
> >> [<80608f74>] (caam_jr_enqueue) from [<8061fb84>] (caam_rsa_enc+0x210/0x3ac)
> >> [<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
> >> 

RE: [PATCH v2 0/2] crypto: removing various VLAs

2018-04-09 Thread David Laight
From: Salvatore Mesoraca
> Sent: 09 April 2018 14:55
> 
> v2:
>   As suggested by Herbert Xu, the blocksize and alignmask checks
>   have been moved to crypto_check_alg.
>   So, now, all the other separate checks are not necessary.
>   Also, the defines have been moved to include/crypto/algapi.h.
> 
> v1:
>   As suggested by Laura Abbott[1], I'm resending my patch with
>   MAX_BLOCKSIZE and MAX_ALIGNMASK defined in a header, so they
>   can be used in other places.
>   I took this opportunity to deal with some other VLAs not
>   handled in the old patch.

If the constants are visible they need better names.
Maybe CRYPTO_MAX_xxx.

You can also do much better than allocating MAX_BLOCKSIZE + MAX_ALIGNMASK
bytes by requesting 'long' aligned on-stack memory.
The easiest way is to define a union like:

union crypto_tmp {
u8 buf[CRYPTO_MAX_TMP_BUF];
long buf_align;
};

Then in each function:

union crypto_tmp tmp;
u8 *keystream = PTR_ALIGN(tmp.buf, alignmask + 1);

I think CRYPTO_MAX_TMP_BUF needs to be MAX_BLOCKSIZE + MAX_ALIGNMASK - sizeof(long).

David


Re: CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Martin Townsend
Hi Mimi,

On Mon, Apr 9, 2018 at 1:46 PM, Mimi Zohar  wrote:
> On Mon, 2018-04-09 at 09:41 +0100, Martin Townsend wrote:
>> Hi,
>>
>> I'm trying to get to the bottom of an issue I'm seeing when enabling
>> the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
>> NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.
>>
>> Here's the error message I'm getting.
>>
>> caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
>> A protocol has seen an error in size. When running RSA, pdb size N <
>> (size of F) when no formatting is used; or pdb size N < (F + 11) when
>> formatting is used.
>> integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der
>
> This error message indicates that the kernel is trying to load a key
> onto the IMA keyring, but any key added to the trusted IMA keyring
> (.ima) must be signed by a key on the builtin (or secondary) keyring.
>
> Please verify that the public key used to sign the ima-x509.der is on
> the builtin keyring (eg. "keyctl show %keyring:.builtin_trusted_keys)
> or the secondary keyring.  Depending on how the kernel was configured,
> loading the public CA onto the builtin (or secondary) keyring might be
> possible.
>
> Some distros are carrying patches which load the UEFI keys onto the
> secondary keyring.  The other (preferred) method would be to insert
> the CA certificate into the kernel and resign the kernel.  This
> requires reserving memory for the key when the kernel is built.
>
> CONFIG_SYSTEM_EXTRA_CERTIFICATE=y
> CONFIG_SYSTEM_EXTRA_CERTIFICATE_SIZE=4096
>
> Mimi

I think the integrity error message is a side effect of the previous
error, i.e. we are getting this error message because the CAAM is
failing to verify the certificate's signature, and hence IMA fails to
load the certificate onto the keyring.  If I disable CAAM then
everything works as expected.  What I'm trying to get to the bottom of
is why CAAM is failing to verify the signature.  Further below in the
email I have determined that the signature is 257 bytes; do you think
this is correct?  I've read a post here:
https://crypto.stackexchange.com/questions/3505/what-is-the-length-of-an-rsa-signature

That says that for PKCS#1 the signature should always be the size of
the modulus, but then it goes on to say: "In some protocols, there can
be some wrapping around the signature, e.g. to indicate which
algorithm was used."  I'm wondering if that's what I'm seeing: an
extra byte in the signature indicating the algorithm used, which is
also passed to the CAAM and causes it to fail, since the signature is
then larger than the modulus.  But I don't know what I can do about
this; I'm not even sure which protocol is generating this extra byte.
Any suggestions on how to find this out would be appreciated.
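
If the extra byte does turn out to be a leading 0x00 (e.g. from an ASN.1
INTEGER encoding of the signature), one conceivable workaround, shown
here purely as a sketch with hypothetical names and not as existing
driver code, would be to trim it before handing the signature to the
hardware:

	/* Hypothetical fixup: drop a single leading zero byte so that the
	 * signature length matches the modulus length expected by the RSA
	 * protocol descriptor. */
	static const u8 *trim_leading_zero(const u8 *sig, size_t *sig_len,
					   size_t mod_len)
	{
		if (*sig_len == mod_len + 1 && sig[0] == 0x00) {
			sig++;
			(*sig_len)--;
		}
		return sig;
	}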

>
>
>> I put a dump_stack in the error handling routine in CAAM and in the
>> caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
>> dump_stacks during initialisation to highlight the one of interest)
>>
>> caam 214.caam: ERA source: CCBVID.
>> caam 214.caam: Entropy delay = 3200
>> caam 214.caam: Instantiated RNG4 SH0
>> caam 214.caam: Instantiated RNG4 SH1
>> caam 214.caam: device ID = 0x0a160300 (Era 8)
>> caam 214.caam: job rings = 3, qi = 0
>> caam algorithms registered in /proc/crypto
>> caam_jr_enqueue
>> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> caam_jr_enqueue
>> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> caam_jr 2141000.jr0: registering rng-caam
>> caam 214.caam: caam pkc algorithms registered in /proc/crypto
>> caam_jr_enqueue
>> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> caam_rsa_set_pub_key
>> caam_rsa_max_size
>> caam_rsa_enc
>> caam_jr_enqueue
>> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
>> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
>> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
>> [<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
>> [<80608f74>] (caam_jr_enqueue) from [<8061fb84>] (caam_rsa_enc+0x210/0x3ac)
>> [<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
>> [<803ed910>] (pkcs1pad_verify) from [<804134e4>]
>> (public_key_verify_signature+0x17c/0x248)
>> [<804134e4>] (public_key_verify_signature) from [<804132a0>]
>> (restrict_link_by_signature+0xa8/0xd4)
>> [<804132a0>] (restrict_link_by_signature) from [<803cadd4>]
>> (key_create_or_update+0x12c/0x370)
>> [<803cadd4>] (key_create_or_update) from [<80c253a0>]
>> (integrity_load_x509+0x70/0xc4)
>> [<80c253a0>] (integrity_load_x509) from [<80c256bc>] 
>> (ima_load_x509+0x28/0x3c)
>> [<80c256bc>] (ima_load_x509) from [<80c2510c>] (integrity_load_keys+0x8/0x10)

Re: [PATCH 0/6] Remove several VLAs in the crypto subsystem

2018-04-09 Thread Salvatore Mesoraca
Please ignore this thread; I sent the old patch set again by mistake :(
I'm sorry for the noise.

Salvatore


[PATCH 1/6] crypto: api - laying macros for statically allocated buffers

2018-04-09 Thread Salvatore Mesoraca
Creating 2 new compile-time constants for internal use,
in preparation for the removal of VLAs[1] from crypto code.
All ciphers implemented in Linux have a block size less than or
equal to 16 bytes, and the most demanding hardware requires
16-byte alignment for the block buffer.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/internal.h | 8 
 1 file changed, 8 insertions(+)

diff --git a/crypto/internal.h b/crypto/internal.h
index 9a3f399..89ae41e 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -26,6 +26,14 @@
 #include 
 #include 
 
+/*
+ * Maximum values for blocksize and alignmask, used to allocate
+ * static buffers that are big enough for any combination of
+ * ciphers and architectures.
+ */
+#define MAX_BLOCKSIZE  16
+#define MAX_ALIGNMASK  15
+
 /* Crypto notification events. */
 enum {
CRYPTO_MSG_ALG_REQUEST,
-- 
1.9.1



[PATCH 6/6] crypto: cfb - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
We avoid 3 VLAs[1] by always allocating MAX_BLOCKSIZE bytes or,
when needed for alignment, MAX_BLOCKSIZE + MAX_ALIGNMASK bytes.
We also check the selected cipher at instance creation time; if
it doesn't comply with these limits, the creation will fail.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/cfb.c | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/crypto/cfb.c b/crypto/cfb.c
index 94ee39b..f500816 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include "internal.h"
 
 struct crypto_cfb_ctx {
struct crypto_cipher *child;
@@ -53,9 +54,8 @@ static void crypto_cfb_encrypt_one(struct crypto_skcipher 
*tfm,
 static void crypto_cfb_final(struct skcipher_walk *walk,
 struct crypto_skcipher *tfm)
 {
-   const unsigned int bsize = crypto_cfb_bsize(tfm);
const unsigned long alignmask = crypto_skcipher_alignmask(tfm);
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
u8 *stream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
@@ -94,7 +94,7 @@ static int crypto_cfb_encrypt_inplace(struct skcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmp[bsize];
+   u8 tmp[MAX_BLOCKSIZE];
 
do {
crypto_cfb_encrypt_one(tfm, iv, tmp);
@@ -164,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmp[bsize];
+   u8 tmp[MAX_BLOCKSIZE];
 
do {
crypto_cfb_encrypt_one(tfm, iv, tmp);
@@ -295,6 +295,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, 
struct rtattr **tb)
if (err)
goto err_drop_spawn;
 
+   err = -EINVAL;
+   if (alg->cra_blocksize > MAX_BLOCKSIZE)
+   goto err_drop_spawn;
+   if (alg->cra_alignmask > MAX_ALIGNMASK)
+   goto err_drop_spawn;
+
inst->alg.base.cra_priority = alg->cra_priority;
/* we're a stream cipher independend of the crypto cra_blocksize */
inst->alg.base.cra_blocksize = 1;
-- 
1.9.1



[PATCH 4/6] crypto: pcbc - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
We avoid 2 VLAs[1] by always allocating MAX_BLOCKSIZE bytes.
We also check the selected cipher at instance creation time; if
it doesn't comply with these limits, the creation will fail.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/pcbc.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/crypto/pcbc.c b/crypto/pcbc.c
index d9e45a9..797da2f 100644
--- a/crypto/pcbc.c
+++ b/crypto/pcbc.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include "internal.h"
 
 struct crypto_pcbc_ctx {
struct crypto_cipher *child;
@@ -72,7 +73,7 @@ static int crypto_pcbc_encrypt_inplace(struct 
skcipher_request *req,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmpbuf[bsize];
+   u8 tmpbuf[MAX_BLOCKSIZE];
 
do {
memcpy(tmpbuf, src, bsize);
@@ -144,7 +145,7 @@ static int crypto_pcbc_decrypt_inplace(struct 
skcipher_request *req,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmpbuf[bsize] __aligned(__alignof__(u32));
+   u8 tmpbuf[MAX_BLOCKSIZE] __aligned(__alignof__(u32));
 
do {
memcpy(tmpbuf, src, bsize);
@@ -251,6 +252,10 @@ static int crypto_pcbc_create(struct crypto_template 
*tmpl, struct rtattr **tb)
if (err)
goto err_drop_spawn;
 
+   err = -EINVAL;
+   if (alg->cra_blocksize > MAX_BLOCKSIZE)
+   goto err_drop_spawn;
+
inst->alg.base.cra_flags = alg->cra_flags & CRYPTO_ALG_INTERNAL;
inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize;
-- 
1.9.1



[PATCH 5/6] crypto: cts - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
We avoid 2 VLAs[1] by always allocating MAX_BLOCKSIZE*2 bytes.
We also check the selected cipher at instance creation time; if
it doesn't comply with these limits, the creation will fail.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/cts.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/crypto/cts.c b/crypto/cts.c
index 4773c18..12e6bd3 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -50,6 +50,7 @@
 #include 
 #include 
 #include 
+#include "internal.h"
 
 struct crypto_cts_ctx {
struct crypto_skcipher *child;
@@ -104,7 +105,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_request *subreq = &rctx->subreq;
int bsize = crypto_skcipher_blocksize(tfm);
-   u8 d[bsize * 2] __aligned(__alignof__(u32));
+   u8 d[MAX_BLOCKSIZE * 2] __aligned(__alignof__(u32));
struct scatterlist *sg;
unsigned int offset;
int lastn;
@@ -183,7 +184,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_request *subreq = &rctx->subreq;
int bsize = crypto_skcipher_blocksize(tfm);
-   u8 d[bsize * 2] __aligned(__alignof__(u32));
+   u8 d[MAX_BLOCKSIZE * 2] __aligned(__alignof__(u32));
struct scatterlist *sg;
unsigned int offset;
u8 *space;
@@ -359,6 +360,9 @@ static int crypto_cts_create(struct crypto_template *tmpl, 
struct rtattr **tb)
if (crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
goto err_drop_spawn;
 
+   if (alg->base.cra_blocksize > MAX_BLOCKSIZE)
+   goto err_drop_spawn;
+
if (strncmp(alg->base.cra_name, "cbc(", 4))
goto err_drop_spawn;
 
-- 
1.9.1



[PATCH 3/6] crypto: api - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
We avoid a VLA[1] by always allocating MAX_BLOCKSIZE +
MAX_ALIGNMASK bytes.
We also check the selected cipher at initialization time; if
it doesn't comply with these limits, the initialization will
fail.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/cipher.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/crypto/cipher.c b/crypto/cipher.c
index 94fa355..9cedf23 100644
--- a/crypto/cipher.c
+++ b/crypto/cipher.c
@@ -67,7 +67,7 @@ static void cipher_crypt_unaligned(void (*fn)(struct 
crypto_tfm *, u8 *,
 {
unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
unsigned int size = crypto_tfm_alg_blocksize(tfm);
-   u8 buffer[size + alignmask];
+   u8 buffer[MAX_BLOCKSIZE + MAX_ALIGNMASK];
u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 
memcpy(tmp, src, size);
@@ -105,9 +105,14 @@ static void cipher_decrypt_unaligned(struct crypto_tfm 
*tfm,
 
 int crypto_init_cipher_ops(struct crypto_tfm *tfm)
 {
+   const unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+   const unsigned int size = crypto_tfm_alg_blocksize(tfm);
struct cipher_tfm *ops = &tfm->crt_cipher;
struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
 
+   if (size > MAX_BLOCKSIZE || alignmask > MAX_ALIGNMASK)
+   return -EINVAL;
+
ops->cit_setkey = setkey;
ops->cit_encrypt_one = crypto_tfm_alg_alignmask(tfm) ?
cipher_encrypt_unaligned : cipher->cia_encrypt;
-- 
1.9.1



[PATCH v2 1/2] crypto: api - laying defines and checks for statically allocated buffers

2018-04-09 Thread Salvatore Mesoraca
In preparation for the removal of VLAs[1] from crypto code,
we create 2 new compile-time constants: all ciphers implemented
in Linux have a block size less than or equal to 16 bytes, and
the most demanding hardware requires 16-byte alignment for the
block buffer.
We also enforce these limits in crypto_check_alg when a new
cipher is registered.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/algapi.c | 10 ++
 include/crypto/algapi.h |  8 
 2 files changed, 18 insertions(+)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 2a0271b..c0755cf 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -10,6 +10,7 @@
  *
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -59,6 +60,15 @@ static int crypto_check_alg(struct crypto_alg *alg)
if (alg->cra_blocksize > PAGE_SIZE / 8)
return -EINVAL;
 
+   if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+  CRYPTO_ALG_TYPE_CIPHER) {
+   if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
+   return -EINVAL;
+
+   if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
+   return -EINVAL;
+   }
+
if (alg->cra_priority < 0)
return -EINVAL;
 
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 1aba888..bd5e8cc 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -17,6 +17,14 @@
 #include 
 #include 
 
+/*
+ * Maximum values for blocksize and alignmask, used to allocate
+ * static buffers that are big enough for any combination of
+ * ciphers and architectures.
+ */
+#define MAX_CIPHER_BLOCKSIZE   16
+#define MAX_CIPHER_ALIGNMASK   15
+
 struct crypto_aead;
 struct crypto_instance;
 struct module;
-- 
1.9.1



[PATCH v2 0/2] crypto: removing various VLAs

2018-04-09 Thread Salvatore Mesoraca
v2:
As suggested by Herbert Xu, the blocksize and alignmask checks
have been moved to crypto_check_alg.
So, now, all the other separate checks are not necessary.
Also, the defines have been moved to include/crypto/algapi.h.

v1:
As suggested by Laura Abbott[1], I'm resending my patch with
MAX_BLOCKSIZE and MAX_ALIGNMASK defined in a header, so they
can be used in other places.
I took this opportunity to deal with some other VLAs not
handled in the old patch.

[1] http://lkml.kernel.org/r/4e536889-439a-49e6-dd95-2d4286913...@redhat.com

Salvatore Mesoraca (2):
  crypto: api - laying defines and checks for statically allocated
buffers
  crypto: remove several VLAs

 crypto/algapi.c | 10 ++
 crypto/cfb.c|  7 +++
 crypto/cipher.c |  3 ++-
 crypto/ctr.c|  4 ++--
 crypto/cts.c|  5 +++--
 crypto/pcbc.c   |  5 +++--
 include/crypto/algapi.h |  8 
 7 files changed, 31 insertions(+), 11 deletions(-)

-- 
1.9.1



[PATCH v2 2/2] crypto: remove several VLAs

2018-04-09 Thread Salvatore Mesoraca
We avoid various VLAs[1] by using constant expressions for block size
and alignment mask.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/cfb.c| 7 +++
 crypto/cipher.c | 3 ++-
 crypto/ctr.c| 4 ++--
 crypto/cts.c| 5 +++--
 crypto/pcbc.c   | 5 +++--
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/crypto/cfb.c b/crypto/cfb.c
index 94ee39b..a0d68c0 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -53,9 +53,8 @@ static void crypto_cfb_encrypt_one(struct crypto_skcipher 
*tfm,
 static void crypto_cfb_final(struct skcipher_walk *walk,
 struct crypto_skcipher *tfm)
 {
-   const unsigned int bsize = crypto_cfb_bsize(tfm);
const unsigned long alignmask = crypto_skcipher_alignmask(tfm);
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *stream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
@@ -94,7 +93,7 @@ static int crypto_cfb_encrypt_inplace(struct skcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmp[bsize];
+   u8 tmp[MAX_CIPHER_BLOCKSIZE];
 
do {
crypto_cfb_encrypt_one(tfm, iv, tmp);
@@ -164,7 +163,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmp[bsize];
+   u8 tmp[MAX_CIPHER_BLOCKSIZE];
 
do {
crypto_cfb_encrypt_one(tfm, iv, tmp);
diff --git a/crypto/cipher.c b/crypto/cipher.c
index 94fa355..57836c3 100644
--- a/crypto/cipher.c
+++ b/crypto/cipher.c
@@ -13,6 +13,7 @@
  *
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -67,7 +68,7 @@ static void cipher_crypt_unaligned(void (*fn)(struct 
crypto_tfm *, u8 *,
 {
unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
unsigned int size = crypto_tfm_alg_blocksize(tfm);
-   u8 buffer[size + alignmask];
+   u8 buffer[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 
memcpy(tmp, src, size);
diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..435b75b 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -58,7 +58,7 @@ static void crypto_ctr_crypt_final(struct blkcipher_walk 
*walk,
unsigned int bsize = crypto_cipher_blocksize(tfm);
unsigned long alignmask = crypto_cipher_alignmask(tfm);
u8 *ctrblk = walk->iv;
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
@@ -106,7 +106,7 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *ctrblk = walk->iv;
u8 *src = walk->src.virt.addr;
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 
do {
diff --git a/crypto/cts.c b/crypto/cts.c
index 4773c18..4e28d83 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -40,6 +40,7 @@
  * rfc3962 includes errata information in its Appendix A.
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -104,7 +105,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_request *subreq = &rctx->subreq;
int bsize = crypto_skcipher_blocksize(tfm);
-   u8 d[bsize * 2] __aligned(__alignof__(u32));
+   u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
struct scatterlist *sg;
unsigned int offset;
int lastn;
@@ -183,7 +184,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_request *subreq = &rctx->subreq;
int bsize = crypto_skcipher_blocksize(tfm);
-   u8 d[bsize * 2] __aligned(__alignof__(u32));
+   u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
struct scatterlist *sg;
unsigned int offset;
u8 *space;
diff --git a/crypto/pcbc.c b/crypto/pcbc.c
index d9e45a9..ef802f6 100644
--- a/crypto/pcbc.c
+++ b/crypto/pcbc.c
@@ -14,6 +14,7 @@
  *
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -72,7 +73,7 @@ static int crypto_pcbc_encrypt_inplace(struct 
skcipher_request *req,
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *iv = walk->iv;
-   u8 tmpbuf[bsize];
+   u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
 
do {
memcpy(tmpbuf, src, bsize);
@@ -144,7 +145,7 @@ static 

[PATCH 2/6] crypto: ctr - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
We avoid 2 VLAs[1] by always allocating MAX_BLOCKSIZE +
MAX_ALIGNMASK bytes.
We also check the selected cipher at instance creation time; if
it doesn't comply with these limits, the creation will fail.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com

Signed-off-by: Salvatore Mesoraca 
---
 crypto/ctr.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..ce62552 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include "internal.h"
 
 struct crypto_ctr_ctx {
struct crypto_cipher *child;
@@ -58,7 +59,7 @@ static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
unsigned int bsize = crypto_cipher_blocksize(tfm);
unsigned long alignmask = crypto_cipher_alignmask(tfm);
u8 *ctrblk = walk->iv;
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
@@ -106,7 +107,7 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
unsigned int nbytes = walk->nbytes;
u8 *ctrblk = walk->iv;
u8 *src = walk->src.virt.addr;
-   u8 tmp[bsize + alignmask];
+   u8 tmp[MAX_BLOCKSIZE + MAX_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 
do {
@@ -206,6 +207,14 @@ static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
if (alg->cra_blocksize < 4)
goto out_put_alg;
 
+   /* Block size must be <= MAX_BLOCKSIZE. */
+   if (alg->cra_blocksize > MAX_BLOCKSIZE)
+   goto out_put_alg;
+
+   /* Alignmask must be <= MAX_ALIGNMASK. */
+   if (alg->cra_alignmask > MAX_ALIGNMASK)
+   goto out_put_alg;
+
/* If this is false we'd fail the alignment of crypto_inc. */
if (alg->cra_blocksize % 4)
goto out_put_alg;
-- 
1.9.1
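
A note on the buffer math above: PTR_ALIGN() may advance a pointer by at
most alignmask bytes, so reserving MAX_BLOCKSIZE + MAX_ALIGNMASK bytes
always leaves a full block behind the aligned pointer. A minimal sketch,
assuming the limits this series works with (16-byte blocks, alignmask 15):

    /* Sketch only: why tmp[] in the hunks above cannot overflow.
     * With alignmask = 15, PTR_ALIGN() lands somewhere in tmp[0..15],
     * so keystream + 16 <= tmp + 31 == tmp + sizeof(tmp). */
    u8 tmp[16 + 15];                        /* MAX_BLOCKSIZE + MAX_ALIGNMASK */
    u8 *keystream = PTR_ALIGN(tmp, 15 + 1); /* aligned to alignmask + 1 */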



[PATCH 0/6] Remove several VLAs in the crypto subsystem

2018-04-09 Thread Salvatore Mesoraca
As suggested by Laura Abbott[2], I'm resending my patch with
MAX_BLOCKSIZE and MAX_ALIGNMASK defined in a header, so they
can be used in other places.
I take this opportunity to deal with some other VLAs not
handled in the old patch.

[1] 
http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qpxydaacu1rq...@mail.gmail.com
[2] http://lkml.kernel.org/r/4e536889-439a-49e6-dd95-2d4286913...@redhat.com

Salvatore Mesoraca (6):
  crypto: api - laying macros for statically allocated buffers
  crypto: ctr - avoid VLA use
  crypto: api - avoid VLA use
  crypto: pcbc - avoid VLA use
  crypto: cts - avoid VLA use
  crypto: cfb - avoid VLA use

 crypto/cfb.c  | 14 ++
 crypto/cipher.c   |  7 ++-
 crypto/ctr.c  | 13 +++--
 crypto/cts.c  |  8 ++--
 crypto/internal.h |  8 
 crypto/pcbc.c |  9 +++--
 6 files changed, 48 insertions(+), 11 deletions(-)

-- 
1.9.1
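
Patch 1/6 itself is not reproduced in this digest. Judging from the
'+#include "internal.h"' hunk in patch 2/6 and the crypto/internal.h entry
in the diffstat above, its additions would amount to roughly the sketch
below; the values are assumptions based on the 16-byte maximum cipher
block size discussed in the 3/6 thread:

    /* Sketch of the patch 1/6 macros; exact comments and placement in
     * crypto/internal.h are assumptions. */
    #define MAX_BLOCKSIZE   16  /* largest cipher block size in the tree */
    #define MAX_ALIGNMASK   15  /* an alignmask is at most blocksize - 1 */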



Re: [PATCH 3/6] crypto: api - avoid VLA use

2018-04-09 Thread Salvatore Mesoraca
2018-04-08 11:22 GMT+02:00 Herbert Xu :
> On Sun, Apr 08, 2018 at 11:07:12AM +0200, Salvatore Mesoraca wrote:
>>
>> > This check should be done when the algorithm is registered.  Perhaps
>> > crypto_check_alg.
>>
>> Please correct me if I'm wrong:
>> isn't crypto_check_alg also invoked during hashing algorithm registration?
>> In this patch-set I'm dealing only with ciphers, because the maximum
>> block size (16) is relatively small and it's also the most common block
>> size for ciphers (maybe I should have explicitly referenced ciphers in
>> the macro names, my bad).
>> I don't think it would be OK to use a similar approach for hashes too,
>> because some of them have block sizes >= 1024 bytes.
>
> Yes we want to make it for ciphers only even if we move it to
> crypto_check_alg.
>
> For a legacy type like cipher you can do it by
>
> if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
>   CRYPTO_ALG_TYPE_CIPHER)
> do_cipher_specific_check();
>

Thank you very much for your help.
I'm sending the new version.

Salvatore
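
For readers following along, Herbert's suggestion would land in
crypto_check_alg() roughly as sketched below, assuming the MAX_* macros
from patch 1/6 are visible there; this is a sketch, not the v2 patch:

    /* Hypothetical registration-time check in crypto_check_alg():
     * only legacy ciphers (cra_type == NULL, type flag CIPHER) are
     * subject to the MAX_BLOCKSIZE/MAX_ALIGNMASK limits. */
    if (!alg->cra_type &&
        (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
            if (alg->cra_blocksize > MAX_BLOCKSIZE)
                    return -EINVAL;
            if (alg->cra_alignmask > MAX_ALIGNMASK)
                    return -EINVAL;
    }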


Re: CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Mimi Zohar
On Mon, 2018-04-09 at 09:41 +0100, Martin Townsend wrote:
> Hi,
> 
> I'm trying to get to the bottom of an issue I'm seeing when enabling
> the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
> NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.
> 
> Here's the error message I'm getting.
> 
> caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
> A protocol has seen an error in size. When running RSA, pdb size N <
> (size of F) when no formatting is used; or pdb size N < (F + 11) when
> formatting is used.
> integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der

This error message indicates that the kernel is trying to load a key
onto the IMA keyring, but any key added to the trusted IMA keyring
(.ima) must be signed by a key on the builtin (or secondary) keyring.

Please verify that the public key used to sign the ima-x509.der is on
the builtin keyring (e.g. "keyctl show %keyring:.builtin_trusted_keys")
or the secondary keyring.  Depending on how the kernel was configured,
loading the public CA onto the builtin (or secondary) keyring might be
possible.

Some distros are carrying patches which load the UEFI keys onto the
secondary keyring.  The other (preferred) method would be to insert
the CA certificate into the kernel and resign the kernel.  This
requires reserving memory for the key when the kernel is built.

CONFIG_SYSTEM_EXTRA_CERTIFICATE=y
CONFIG_SYSTEM_EXTRA_CERTIFICATE_SIZE=4096

Mimi


> I put a dump_stack in the error handling routine in CAAM and in the
> caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
> dump_stacks during initialisation to highlight the one of interest)
> 
> caam 214.caam: ERA source: CCBVID.
> caam 214.caam: Entropy delay = 3200
> caam 214.caam: Instantiated RNG4 SH0
> caam 214.caam: Instantiated RNG4 SH1
> caam 214.caam: device ID = 0x0a160300 (Era 8)
> caam 214.caam: job rings = 3, qi = 0
> caam algorithms registered in /proc/crypto
> caam_jr_enqueue
> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> caam_jr_enqueue
> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> caam_jr 2141000.jr0: registering rng-caam
> caam 214.caam: caam pkc algorithms registered in /proc/crypto
> caam_jr_enqueue
> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> caam_rsa_set_pub_key
> caam_rsa_max_size
> caam_rsa_enc
> caam_jr_enqueue
> CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
> [<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
> [<80608f74>] (caam_jr_enqueue) from [<8061fb84>] (caam_rsa_enc+0x210/0x3ac)
> [<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
> [<803ed910>] (pkcs1pad_verify) from [<804134e4>]
> (public_key_verify_signature+0x17c/0x248)
> [<804134e4>] (public_key_verify_signature) from [<804132a0>]
> (restrict_link_by_signature+0xa8/0xd4)
> [<804132a0>] (restrict_link_by_signature) from [<803cadd4>]
> (key_create_or_update+0x12c/0x370)
> [<803cadd4>] (key_create_or_update) from [<80c253a0>]
> (integrity_load_x509+0x70/0xc4)
> [<80c253a0>] (integrity_load_x509) from [<80c256bc>] (ima_load_x509+0x28/0x3c)
> [<80c256bc>] (ima_load_x509) from [<80c2510c>] (integrity_load_keys+0x8/0x10)
> [<80c2510c>] (integrity_load_keys) from [<80c00de0>]
> (kernel_init_freeable+0x1bc/0x1cc)
> [<80c00de0>] (kernel_init_freeable) from [<80844ccc>] (kernel_init+0x8/0x114)
> [<80844ccc>] (kernel_init) from [<801078d8>] (ret_from_fork+0x14/0x3c)
> CPU: 0 PID: 112 Comm: irq/214-2142000 Not tainted 4.9.11-1.0.0+gc27010d #4
> Hardware name: Freescale i.MX6 UltraLite (Device Tree)
> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
> [<8010b8cc>] (show_stack) from [<8060a0e8>] (report_deco_status+0x30/0xf8)
> [<8060a0e8>] (report_deco_status) from [<8061f3c8>] (rsa_pub_done+0x20/0x60)
> [<8061f3c8>] (rsa_pub_done) from [<80608d94>] (caam_jr_threadirq+0x1dc/0x29c)
> [<80608d94>] (caam_jr_threadirq) from [<80165dcc>] (irq_thread_fn+0x1c/0x54)
> [<80165dcc>] (irq_thread_fn) from [<80166024>] (irq_thread+0x114/0x1d4)
> [<80166024>] (irq_thread) from [<801415b4>] (kthread+0xe4/0xec)
> [<801415b4>] (kthread) from [<801078d8>] (ret_from_fork+0x14/0x3c)
> caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
> A protocol has seen an error in size. When running RSA, pdb size N <
> (size of F) when no formatting is used; or pdb size N < (F + 11) when
> formatting is used.
> integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der
> [<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
> [<8010b8cc>] (show_stack) from [<8060a0d0>] 

Re: [PATCH 2/2] crypto: ccree: enable support for hardware keys

2018-04-09 Thread Harald Freudenberger
On 04/03/2018 12:19 PM, Herbert Xu wrote:
> On Sat, Mar 31, 2018 at 08:30:46PM +0300, Gilad Ben-Yossef wrote:
>> However, as it uses the exact same mechanism of the regular xts-aes-ccree
>> but takes the key from another source, I've marked it with a test of
>> alg_test_null() on the premise that if the xts-aes-ccree works, so must this.
> Well it appears to be stubbed out as cc_is_hw_key always returns
> false.
>
>>> Are there other crypto drivers doing this?
>> I thought the exact same thing until I ran into a presentation about the s390
>> secure keys implementation. I basically imitated their use (or abuse?)
>> of the Crypto API
>> assuming it is the way to go.
>>
>> Take a look at arch/s390/crypto/paes_s390.c
> It's always nice to discover code that was never reviewed.
>
> The general approach seems sane though.
>
>> Anyway, if this is not the way to go I'd be more than happy to do whatever
>> work is needed to create the right interface.
>>
>> PS. I'd be out of the office and away from email access for the coming
>> week, so kindly excuse any delay in response.
> I think it's fine.  However, I don't really like the idea of everyone
> coming up with their own "paes", i.e., your patch's use of "haes".
> It should be OK to just use paes for everyone, no?
>
> As to your patch specifically, there is one issue where you're
> directly dereferencing the key as a struct.  This is a no-no because
> the key may have come from user-space.  You must treat it as a
> binary blob.  The s390 code seems to do this correctly.
>
> Cheers,
I would be happy to have a generic kernel interface for some kind
of key token as a binary blob. I am also open to using the kernel
keyring for future extensions. But please understand we needed a way
to support our hardware keys, and I think the chosen design isn't
so bad.

regards Harald Freudenberger



Re: [RFC PATCH] crypto: pcrypt - forbid recursive instantiation

2018-04-09 Thread Steffen Klassert
On Sun, Apr 08, 2018 at 03:55:28PM -0700, Eric Biggers wrote:
> On Fri, Mar 23, 2018 at 08:21:52AM +0800, Herbert Xu wrote:
> > On Sat, Mar 10, 2018 at 03:22:31PM -0800, Eric Biggers wrote:
> > > From: Eric Biggers 
> > > 
> > > If the pcrypt template is used multiple times in an algorithm, then a
> > > deadlock occurs because all pcrypt instances share the same
> > > padata_instance, which completes requests in the order submitted.  That
> > > is, the inner pcrypt request waits for the outer pcrypt request while
> > > the outer request is already waiting for the inner.
> > > 
> > > Fix this by making pcrypt forbid instantiation if pcrypt appears in the
> > > underlying ->cra_driver_name.  This is somewhat of a hack, but it's a
> > > simple fix that should be sufficient to prevent the deadlock.
> > 
> > I'm a bit uneasy with this fix.  What if pcrypt is used in the
> > underlying algorithm as a fallback? Wouldn't it still dead-lock
> > without triggering this check?
> > 
> 
> Yep, I think you're right.
> 
> Steffen, how feasible do you think it would be to make each pcrypt
> instance use its own padata instance, so that different pcrypt instances
> aren't ordered with respect to each other?

It should be possible at least, but it requires some work.
I already had patches to get multiple independent pcrypt
instances, to better support pcrypt for different workloads
and NUMA machines. I never finalized this because I lacked
NUMA hardware to test on, but the code is still available;
maybe it can help:

https://git.kernel.org/pub/scm/linux/kernel/git/klassert/linux-stk.git/log/?h=net-next-pcrypt

> Or do you think there is a better solution?

Not really.

Btw. is there a valid use case where a certain template
is used twice in one algorithm instance?
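
For reference, the check described in Eric's RFC boils down to something
like the sketch below, reconstructed from the patch description quoted
above; the error-handling label is illustrative:

    /* Sketch: refuse to instantiate pcrypt on top of an algorithm whose
     * driver name already contains "pcrypt", since both layers would
     * share the single padata instance and deadlock on request order. */
    if (strstr(alg->cra_driver_name, "pcrypt")) {
            err = -EINVAL;
            goto out_put_alg;
    }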



Re: [PATCH 2/2] crypto: ccree: enable support for hardware keys

2018-04-09 Thread Gilad Ben-Yossef
On Tue, Apr 3, 2018 at 3:22 PM, Milan Broz  wrote:
> On 03/31/2018 07:30 PM, Gilad Ben-Yossef wrote:
> ...
>>> Are there other crypto drivers doing this?
>>
>> I thought the exact same thing until I ran into a presentation about the s390
>> secure keys implementation. I basically imitated their use (or abuse?)
>> of the Crypto API
>> assuming it is the way to go.
>>
>> Take a look at arch/s390/crypto/paes_s390.c
>>
>> The slide for the presentation describing this is here:
>> http://schd.ws/hosted_files/ossna2017/89/LC2017SecKeyDmCryptV5.pdf
>>
>> And they seem to even have support for it in the DM-Crypt tools, which at
>> the time they claimed to be in the process of getting it up-streamed.
>
> It is "in the process", but definitely not accepted.
>
> We are just discussing how to integrate paes wrapped keys in cryptsetup and
> it will definitely not be the way presented in the slides above.
>
> If you plan more such ciphers, I would welcome a unified way in the
> crypto API to handle these HSM key flavors.

That would be good. Note however the fine difference: the s390 usage
is a wrapped key, while ours is a token for a key (a slot number,
really). It probably makes no practical difference, but I thought it
worth mentioning.

>
> For kernel dm-crypt, there is no change needed (dmcrypt just treats it as a 
> normal cipher key).
> (I would say that it is not the best idea either, IMHO it would be better to 
> use
> kernel keyring reference instead and somehow handle hw keys through keyring.)
>

I am all for the keyring approach. In fact, that was the way I wanted
to go to introduce this feature for cryptocell, when I discovered there
was already upstream code using a different approach.

Any suggestion how this would work vis-a-vis the crypto API usage?

e.g. have a parallel setkey variant added to the crypto API that takes
a kernel keyring object rather than the actual key?
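
One possible shape for such a variant, purely as a sketch; no interface
like this exists in the crypto API today, and the name is made up:

    #include <linux/key.h>
    #include <crypto/skcipher.h>

    /* Hypothetical sibling of crypto_skcipher_setkey(): the driver
     * resolves the keyring entry to its wrapped key or hardware slot. */
    int crypto_skcipher_setkey_from_key(struct crypto_skcipher *tfm,
                                        struct key *key);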


Thanks,
Gilad





-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


Re: [PATCH 2/2] crypto: ccree: enable support for hardware keys

2018-04-09 Thread Gilad Ben-Yossef
On Tue, Apr 3, 2018 at 1:19 PM, Herbert Xu  wrote:
> On Sat, Mar 31, 2018 at 08:30:46PM +0300, Gilad Ben-Yossef wrote:
>>
>> However, as it uses the exact same mechanism of the regular xts-aes-ccree
>> but takes the key from another source, I've marked it with a test of
>> alg_test_null() on the premise that if the xts-aes-ccree works, so must this.
>
> Well it appears to be stubbed out as cc_is_hw_key always returns
> false.

Please look again. The stub version of cc_is_hw_key() doing that is being
replaced in this patch.

>> > Are there other crypto drivers doing this?
>>
>> I thought the exact same thing until I ran into a presentation about the s390
>> secure keys implementation. I basically imitated their use (or abuse?)
>> of the Crypto API
>> assuming it is the way to go.
>>
>> Take a look at arch/s390/crypto/paes_s390.c
>
> It's always nice to discover code that was never reviewed.

 :-)

>
> The general approach seems sane though.
>
>> Anyway, if this is not the way to go I'd be more than happy to do whatever
>> work is needed to create the right interface.
>>
>> PS. I'd be out of the office and away from email access for the coming
>> week, so kindly excuse any delay in response.
>
> I think it's fine.  However, I don't really like the idea of everyone
> coming up with their own "paes", i.e., your patch's use of "haes".
> It should be OK to just use paes for everyone, no?

The s390 key and the cryptocell keys are not the same:

Theirs is, I believe, an AES key encrypted by some internal key/algorithm.

The cryptocell "key" is a token, internally comprising one or two
indexes referencing slots in the hardware's internal memory, plus a
field describing the actual key size.

I thought it would be confusing to use "paes" to describe both, since
they are not interchangeable.
You would not be able to feed a paes key that works with the s390
version to cryptocell, or vice versa, and get it to work.

Having said that, if you prefer to have "paes" simply designate
"implementation-specific token for an AES key", I'm perfectly fine with
that.

>
> As to your patch specifically, there is one issue where you're
> directly dereferencing the key as a struct.  This is a no-no because
> the key may have come from user-space.  You must treat it as a
> binary blob.  The s390 code seems to do this correctly.
>

As noted above, the haes "key" is really a token encoding 3 different
pieces of information:
1. The index of the first key slot to use
2. Potentially, the index of a 2nd key slot to use (for XTS)
3. A key size, denoting the actual key size used, not the slot number

I then have to parse the information encoded in the token and feed it
to the HW (via 3 different registers).

Therefore, I do not see how I can avoid parsing the token.
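
A purely illustrative layout for such a token, matching the three pieces
of information listed above; the real ccree format is not spelled out in
this thread, and per Herbert's point the bytes would still have to be
validated as an untrusted binary blob:

    /* Illustrative only: field names, widths and ordering are guesses. */
    struct cc_hw_key_token {
            u8  slot1;      /* index of the first key slot */
            u8  slot2;      /* optional second slot, used for XTS */
            u16 key_size;   /* actual key size in bytes, not a slot number */
    };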

I hope I've managed to explain things better now.

Thanks,
Gilad
-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


CAAM and IMA/EVM : caam_rsa_enc: DECO: desc idx 7: Protocol Size Error

2018-04-09 Thread Martin Townsend
Hi,

I'm trying to get to the bottom of an issue I'm seeing when enabling
the CAAM in the kernel with IMA/EVM enabled.  I'm using the official
NXP (imx_4.9.11_1.0.0_ga) vendor Kernel.

Here's the error message I'm getting.

caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
A protocol has seen an error in size. When running RSA, pdb size N <
(size of F) when no formatting is used; or pdb size N < (F + 11) when
formatting is used.
integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der


I put a dump_stack in the error handling routine in CAAM and in the
caam_jr_enqueue to get (I removed some of the caam_jr_enqueue
dump_stacks during initialisation to highlight the one of interest)

caam 214.caam: ERA source: CCBVID.
caam 214.caam: Entropy delay = 3200
caam 214.caam: Instantiated RNG4 SH0
caam 214.caam: Instantiated RNG4 SH1
caam 214.caam: device ID = 0x0a160300 (Era 8)
caam 214.caam: job rings = 3, qi = 0
caam algorithms registered in /proc/crypto
caam_jr_enqueue
CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
Hardware name: Freescale i.MX6 UltraLite (Device Tree)
caam_jr_enqueue
CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
Hardware name: Freescale i.MX6 UltraLite (Device Tree)
caam_jr 2141000.jr0: registering rng-caam
caam 214.caam: caam pkc algorithms registered in /proc/crypto
caam_jr_enqueue
CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
Hardware name: Freescale i.MX6 UltraLite (Device Tree)
caam_rsa_set_pub_key
caam_rsa_max_size
caam_rsa_enc
caam_jr_enqueue
CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.11-1.0.0+gc27010d #4
Hardware name: Freescale i.MX6 UltraLite (Device Tree)
[<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
[<8010b8cc>] (show_stack) from [<80608f74>] (caam_jr_enqueue+0x3c/0x2bc)
[<80608f74>] (caam_jr_enqueue) from [<8061fb84>] (caam_rsa_enc+0x210/0x3ac)
[<8061fb84>] (caam_rsa_enc) from [<803ed910>] (pkcs1pad_verify+0xa8/0xe4)
[<803ed910>] (pkcs1pad_verify) from [<804134e4>]
(public_key_verify_signature+0x17c/0x248)
[<804134e4>] (public_key_verify_signature) from [<804132a0>]
(restrict_link_by_signature+0xa8/0xd4)
[<804132a0>] (restrict_link_by_signature) from [<803cadd4>]
(key_create_or_update+0x12c/0x370)
[<803cadd4>] (key_create_or_update) from [<80c253a0>]
(integrity_load_x509+0x70/0xc4)
[<80c253a0>] (integrity_load_x509) from [<80c256bc>] (ima_load_x509+0x28/0x3c)
[<80c256bc>] (ima_load_x509) from [<80c2510c>] (integrity_load_keys+0x8/0x10)
[<80c2510c>] (integrity_load_keys) from [<80c00de0>]
(kernel_init_freeable+0x1bc/0x1cc)
[<80c00de0>] (kernel_init_freeable) from [<80844ccc>] (kernel_init+0x8/0x114)
[<80844ccc>] (kernel_init) from [<801078d8>] (ret_from_fork+0x14/0x3c)
CPU: 0 PID: 112 Comm: irq/214-2142000 Not tainted 4.9.11-1.0.0+gc27010d #4
Hardware name: Freescale i.MX6 UltraLite (Device Tree)
[<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
[<8010b8cc>] (show_stack) from [<8060a0e8>] (report_deco_status+0x30/0xf8)
[<8060a0e8>] (report_deco_status) from [<8061f3c8>] (rsa_pub_done+0x20/0x60)
[<8061f3c8>] (rsa_pub_done) from [<80608d94>] (caam_jr_threadirq+0x1dc/0x29c)
[<80608d94>] (caam_jr_threadirq) from [<80165dcc>] (irq_thread_fn+0x1c/0x54)
[<80165dcc>] (irq_thread_fn) from [<80166024>] (irq_thread+0x114/0x1d4)
[<80166024>] (irq_thread) from [<801415b4>] (kthread+0xe4/0xec)
[<801415b4>] (kthread) from [<801078d8>] (ret_from_fork+0x14/0x3c)
caam_jr 2142000.jr1: 4789: DECO: desc idx 7: Protocol Size Error -
A protocol has seen an error in size. When running RSA, pdb size N <
(size of F) when no formatting is used; or pdb size N < (F + 11) when
formatting is used.
integrity: Problem loading X.509 certificate (-129): /etc/keys/ima-x509.der
[<8010dd8c>] (unwind_backtrace) from [<8010b8cc>] (show_stack+0x10/0x14)
[<8010b8cc>] (show_stack) from [<8060a0d0>] (report_deco_status+0x30/0xf8)
[<8060a0d0>] (report_deco_status) from [<8061db94>] (rsa_pub_done+0x20/0x60)
[<8061db94>] (rsa_pub_done) from [<80608d94>] (caam_jr_threadirq+0x1dc/0x29c)
[<80608d94>] (caam_jr_threadirq) from [<80165dcc>] (irq_thread_fn+0x1c/0x54)
[<80165dcc>] (irq_thread_fn) from [<80166024>] (irq_thread+0x114/0x1d4)
[<80166024>] (irq_thread) from [<801415b4>] (kthread+0xe4/0xec)
[<801415b4>] (kthread) from [<801078d8>] (ret_from_fork+0x14/0x3c)

So it's failing to verify the signature on my x509 certificate because
pdb size N < (size of F) ...

On the NXP mailing lists there is someone with a similar issue, and the
response was:
“The error appears to be saying that the public RSA modulus is shorter
than the signature being verified.

RSA uses modular arithmetic, and all valid values are smaller than the
public RSA modulus. However, the signature (F in the error message
below), is too large.

The customer should try to determine why their public RSA Modulus (N)
is shorter than the signature (F).”

I have to admit I'm not really sure 
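
To make the quoted explanation concrete, the failing comparison can be
pictured as below; the sizes are invented for illustration:

    /* Illustration only: if the signature F is longer than the public
     * modulus N in the protocol data block, the DECO raises the
     * Protocol Size Error quoted above. */
    size_t n_len = 2048 / 8;  /* e.g. a 2048-bit modulus: 256 bytes */
    size_t f_len = 4096 / 8;  /* e.g. a 4096-bit signature: 512 bytes */
    if (f_len > n_len)
            pr_err("caam: Protocol Size Error (F > N)\n");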

Re: [PATCH] crypto: DRBG - guard uninstantion by lock

2018-04-09 Thread Dmitry Vyukov
On Mon, Apr 9, 2018 at 7:40 AM, Stephan Mueller  wrote:
> Am Montag, 9. April 2018, 00:46:03 CEST schrieb Theodore Y. Ts'o:
>
> Hi Theodore,
>>
>> So the syzbot will run while the patch goes through the normal e-mail
>> review process, which is kind of neat.  :-)
>
> Thank you very much for the hint. That is a neat feature indeed.
>
> As I came late to the party and I missed the original mails, I am wondering
> about which GIT repo was used and which branch of it. With that, I would be
> happy to resubmit with the test line.

All syzbot reported bugs are available here:
https://groups.google.com/forum/#!searchin/syzkaller-bugs/"WARNING$20in$20kmem_cache_free;
and here:
https://syzkaller.appspot.com/

But unfortunately testing won't work in this case, because I manually
extracted a reproducer and syzbot does not know about it. This bug
seems to lead to assorted silent heap corruptions with different
manifestations each time, so it's difficult for syzbot to attribute a
reproducer to the bug. When we debug it, it would be nice to
understand why the heap corruption is silent and is not detected by
KASAN or anything else, to prevent such unpleasant cases in the future.

I've tested it manually, but unfortunately the kernel still crashed
within a minute:

$ git status
HEAD detached at f2d285669aae
Changes not staged for commit:
  (use "git add ..." to update what will be committed)
  (use "git checkout -- ..." to discard changes in working directory)

modified:   crypto/drbg.c

$ git diff
diff --git a/crypto/drbg.c b/crypto/drbg.c
index 4faa2781c964..68c1949a253f 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1510,8 +1510,8 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers,
return ret;

 free_everything:
-   mutex_unlock(&drbg->drbg_mutex);
drbg_uninstantiate(drbg);
+   mutex_unlock(&drbg->drbg_mutex);
return ret;
 }

# ./a.out
...
[  183.647874] FAULT_INJECTION: forcing a failure.
[  183.647874] name failslab, interval 1, probability 0, space 0, times 0
[  183.648287] Call Trace:
[  183.648297]  dump_stack+0x1b9/0x29f
[  183.648306]  ? arch_local_irq_restore+0x52/0x52
[  183.648318]  ? __save_stack_trace+0x7e/0xd0
[  183.651848]  should_fail.cold.4+0xa/0x1a
[  183.652411]  ? fault_create_debugfs_attr+0x1f0/0x1f0
[  183.653138]  ? kasan_kmalloc+0xc4/0xe0
[  183.653694]  ? __kmalloc+0x14e/0x760
[  183.654206]  ? drbg_kcapi_seed+0x776/0x12e0
[  183.654798]  ? crypto_rng_reset+0x7c/0x130
[  183.655379]  ? rng_setkey+0x25/0x30
[  183.655882]  ? alg_setsockopt+0x306/0x3b0
[  183.656450]  ? graph_lock+0x170/0x170
[  183.656975]  ? entry_SYSENTER_compat+0x70/0x7f
[  183.657606]  ? find_held_lock+0x36/0x1c0
[  183.658164]  ? __lock_is_held+0xb5/0x140
[  183.658728]  ? check_same_owner+0x320/0x320
[  183.659321]  ? rcu_note_context_switch+0x710/0x710
[  183.66]  should_failslab+0x124/0x180
[  183.660561]  __kmalloc+0x2c8/0x760
[  183.661046]  ? graph_lock+0x170/0x170
[  183.661569]  ? drbg_kcapi_seed+0x882/0x12e0
[  183.662161]  drbg_kcapi_seed+0x882/0x12e0
[  183.662731]  ? drbg_seed+0x10a0/0x10a0
[  183.663267]  ? lock_downgrade+0x8e0/0x8e0
[  183.663833]  ? lock_acquire+0x1dc/0x520
[  183.664385]  ? lock_release+0xa10/0xa10
[  183.664934]  ? check_same_owner+0x320/0x320
[  183.665530]  ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[  183.666292]  ? __check_object_size+0x95/0x5d9
[  183.666904]  ? sock_kmalloc+0x14e/0x1d0
[  183.667444]  ? mark_held_locks+0xc9/0x160
[  183.668020]  ? __might_sleep+0x95/0x190
[  183.668567]  crypto_rng_reset+0x7c/0x130
[  183.669124]  rng_setkey+0x25/0x30
[  183.669598]  ? rng_sock_destruct+0x90/0x90
[  183.670176]  alg_setsockopt+0x306/0x3b0
[  183.670724]  __compat_sys_setsockopt+0x315/0x7c0
[  183.671375]  ? __compat_sys_getsockopt+0x7f0/0x7f0
[  183.672057]  ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[  183.672813]  ? ksys_write+0x1a6/0x250
[  183.67]  ? SyS_read+0x30/0x30
[  183.673811]  compat_SyS_setsockopt+0x34/0x50
[  183.674416]  ? scm_detach_fds_compat+0x440/0x440
[  183.675079]  do_fast_syscall_32+0x41f/0x10dc
[  183.675725]  ? do_page_fault+0xee/0x8a7
[  183.676284]  ? do_int80_syscall_32+0xa70/0xa70
[  183.676925]  ? trace_hardirqs_on_thunk+0x1a/0x1c
[  183.677590]  ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[  183.678348]  ? syscall_return_slowpath+0x30f/0x5c0
[  183.679026]  ? sysret32_from_system_call+0x5/0x3c
[  183.679694]  ? trace_hardirqs_off_thunk+0x1a/0x1c
[  183.680380]  entry_SYSENTER_compat+0x70/0x7f
[  183.681000] RIP: 0023:0xf7f0ecb9
[  183.681488] RSP: 002b:ffeb1e9c EFLAGS: 0296 ORIG_RAX:
016e
[  183.682606] RAX: ffda RBX: 0003 RCX: 0117
[  183.683620] RDX: 0001 RSI: 205b1fd0 RDI: 
[  183.684602] RBP: 2040 R08:  R09: 
[  183.685622] R10:  R11:  R12: 
[  183.686642] R13:  R14: 

Re: [PATCH] AF_ALG: register completely initialized request in list

2018-04-09 Thread Stephan Mueller
Am Montag, 9. April 2018, 09:51:13 CEST schrieb Dmitry Vyukov:

Hi Dmitry,

> You can ask syzbot to test by replying to its report email with a test
> command, see:
> https://github.com/google/syzkaller/blob/master/docs/syzbot.md#communication
> -with-syzbot
> 
> Note that all testing of KMSAN bugs needs to go to KMSAN tree, for details
> see:
> https://github.com/google/syzkaller/blob/master/docs/syzbot.md#kmsan-bugs

Thank you. I will resend the patch later today with the proper tags.

Ciao
Stephan




Re: [PATCH] AF_ALG: register completely initialized request in list

2018-04-09 Thread Dmitry Vyukov
On Sun, Apr 8, 2018 at 7:57 PM, Stephan Müller  wrote:
> Hi,
>
> May I ask to check whether this patch fixes the issue? I cannot re-create
> the issue with the reproducer. Yet, as far as I understand, you try to
> induce errors which shall validate whether the error code paths are correct.

You can ask syzbot to test by replying to its report email with a test
command, see:
https://github.com/google/syzkaller/blob/master/docs/syzbot.md#communication-with-syzbot

Note that all testing of KMSAN bugs needs to go to KMSAN tree, for details see:
https://github.com/google/syzkaller/blob/master/docs/syzbot.md#kmsan-bugs




> The fix below should ensure this now.
>
> Thanks a lot.
>
> ---8<---
>
> From 8f083e7b0684a9f91c186d7b46eec34e439689c3 Mon Sep 17 00:00:00 2001
> From: Stephan Mueller 
> Date: Sun, 8 Apr 2018 19:53:59 +0200
> Subject: [PATCH] AF_ALG: Initialize sg_num_bytes in error code path
>
> The RX SGL in processing is already registered with the RX SGL tracking
> list to support proper cleanup. The cleanup code path uses the
> sg_num_bytes variable which must therefore be always initialized, even
> in the error code path.
>
> Signed-off-by: Stephan Mueller 
> Reported-by: syzbot+9c251bdd09f83b92b...@syzkaller.appspotmail.com
> ---
>  crypto/af_alg.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/crypto/af_alg.c b/crypto/af_alg.c
> index c49766b03165..0d555c072669 100644
> --- a/crypto/af_alg.c
> +++ b/crypto/af_alg.c
> @@ -1156,8 +1156,10 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
>
> /* make one iovec available as scatterlist */
> err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
> -   if (err < 0)
> +   if (err < 0) {
> +   rsgl->sg_num_bytes = 0;
> return err;
> +   }
>
> /* chain the new scatterlist with previous one */
> if (areq->last_rsgl)
> --
> 2.14.3
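
To see why the initialization matters, the unwind path walks every RX
SGL that was ever registered on the list, roughly as sketched below; the
loop shape is approximated from the patch description, not quoted from
af_alg.c:

    /* Approximate shape of the cleanup that consumes sg_num_bytes:
     * each listed rsgl feeds the rcvused accounting before being
     * freed, so an entry that was registered before af_alg_make_sg()
     * failed must carry a valid (zero) byte count. */
    list_for_each_entry_safe(rsgl, tmp, &areq->rsgl_list, list) {
            atomic_sub(rsgl->sg_num_bytes, &ctx->rcvused);
            af_alg_free_sg(&rsgl->sgl);
    }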