Re: [PATCH 00/47] arch-removal: device drivers

2018-03-14 Thread Boris Brezillon
Hi Arnd,

On Wed, 14 Mar 2018 16:35:13 +0100
Arnd Bergmann  wrote:

> Hi driver maintainers,
> 
> I just posted one series with the removal of eight architectures,
> see https://lkml.org/lkml/2018/3/14/505 for details, or
> https://lwn.net/Articles/748074/ for more background.
> 
> These are the device drivers that go along with them. I have already
> picked up the drivers for arch/metag/ into my tree, they were reviewed
> earlier.
> 
> Please let me know if you have any concerns with the patch, or if you
> prefer to pick up the patches in your respective trees.  I created
> the patches with 'git format-patch -D', so they will not apply without
> manually removing those files.
> 
> For anything else, I'd keep the removal patches in my asm-generic tree
> and will send a pull request for 4.17 along with the actual arch removal.
> 
>Arnd
> 
> Arnd Bergmann
>   edac: remove tile driver
>   net: tile: remove ethernet drivers
>   net: adi: remove blackfin ethernet drivers
>   net: 8390: remove m32r specific bits
>   net: remove cris etrax ethernet driver
>   net: smsc: remove m32r specific smc91x configuration
>   raid: remove tile specific raid6 implementation
>   rtc: remove tile driver
>   rtc: remove bfin driver
>   char: remove obsolete ds1302 rtc driver
>   char: remove tile-srom.c
>   char: remove blackfin OTP driver
>   pcmcia: remove m32r drivers
>   pcmcia: remove blackfin driver
>   ASoC: remove blackfin drivers
>   video/logo: remove obsolete logo files
>   fbdev: remove blackfin drivers
>   fbdev: s1d13xxxfb: remove m32r specific hacks
>   crypto: remove blackfin CRC driver
>   media: platform: remove blackfin capture driver
>   media: platform: remove m32r specific arv driver
>   cpufreq: remove blackfin driver
>   cpufreq: remove cris specific drivers
>   gpio: remove etraxfs driver
>   pinctrl: remove adi2/blackfin drivers
>   ata: remove bf54x driver
>   input: keyboard: remove bf54x driver
>   input: misc: remove blackfin rotary driver
>   mmc: remove bfin_sdh driver
>   can: remove bfin_can driver
>   watchdog: remove bfin_wdt driver
>   mtd: maps: remove bfin-async-flash driver
>   mtd: nand: remove bf5xx_nand driver

If you don't mind, I'd like to take the mtd patches through the MTD
tree. As you've probably noticed, the NAND code has been moved around,
and it's easier for me to carry those two simple changes in my tree
than to create an immutable branch.

Let me know if this is a problem.

Regards,

Boris

-- 
Boris Brezillon, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com


[PATCH] crypto: doc - Document remaining members in struct crypto_alg

2018-03-14 Thread Gary R Hook
Add missing comments for union members ablkcipher, blkcipher,
cipher, and compress. This silences complaints when building
the htmldocs.

Fixes: 0d7f488f0305a ("crypto: doc - cipher data structures")
Signed-off-by: Gary R Hook 
---
 include/linux/crypto.h |8 
 1 file changed, 8 insertions(+)

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 7e6e84cf6383..6eb06101089f 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -435,6 +435,14 @@ struct compress_alg {
  * @cra_exit: Deinitialize the cryptographic transformation object. This is a
  *   counterpart to @cra_init, used to remove various changes set in
  *   @cra_init.
+ * @cra_u.ablkcipher: Union member which contains an asynchronous block cipher
+ *   definition. See @struct @ablkcipher_alg.
+ * @cra_u.blkcipher: Union member which contains a synchronous block cipher
+ *  definition. See @struct @blkcipher_alg.
+ * @cra_u.cipher: Union member which contains a single-block symmetric cipher
+ *   definition. See @struct @cipher_alg.
+ * @cra_u.compress: Union member which contains a (de)compression algorithm.
+ * See @struct @compress_alg.
  * @cra_module: Owner of this transformation implementation. Set to THIS_MODULE
  * @cra_list: internally used
  * @cra_users: internally used
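
For reference, the cra_u union these kernel-doc lines describe is declared
in struct crypto_alg roughly as below (a trimmed sketch with a made-up type
name, not part of this patch):

#include <linux/crypto.h>

/* Trimmed sketch of struct crypto_alg showing only the cra_u union that
 * the kernel-doc additions above document; all other members are elided. */
struct crypto_alg_sketch {
	/* ... cra_flags, cra_blocksize, cra_name, ... */
	union {
		struct ablkcipher_alg ablkcipher;	/* async block cipher definition */
		struct blkcipher_alg blkcipher;		/* sync block cipher definition */
		struct cipher_alg cipher;		/* single-block cipher definition */
		struct compress_alg compress;		/* (de)compression definition */
	} cra_u;
	/* ... cra_init, cra_exit, cra_module, ... */
};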



Re: [PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Salvatore Mesoraca
2018-03-14 19:31 GMT+01:00 Eric Biggers :
> On Wed, Mar 14, 2018 at 02:17:30PM +0100, Salvatore Mesoraca wrote:
>> All ciphers implemented in Linux have a block size less than or
>> equal to 16 bytes, and the most demanding hardware requires 16-byte
>> alignment for the block buffer.
>> We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
>> alignment, unless the architecture supports efficient unaligned
>> accesses.
>> We also check, at runtime, that our assumptions still stand,
>> possibly allocating a new buffer dynamically, just in case
>> something changes in the future.
>>
>> [1] https://lkml.org/lkml/2018/3/7/621
>>
>> Signed-off-by: Salvatore Mesoraca 
>> ---
>>
>> Notes:
>> Can we maybe skip the runtime check?
>>
>>  crypto/ctr.c | 50 ++
>>  1 file changed, 42 insertions(+), 8 deletions(-)
>>
>> diff --git a/crypto/ctr.c b/crypto/ctr.c
>> index 854d924..f37adf0 100644
>> --- a/crypto/ctr.c
>> +++ b/crypto/ctr.c
>> @@ -35,6 +35,16 @@ struct crypto_rfc3686_req_ctx {
>>   struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
>>  };
>>
>> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>> +#define DECLARE_CIPHER_BUFFER(name) u8 name[16]
>> +#else
>> +#define DECLARE_CIPHER_BUFFER(name) u8 __aligned(16) name[16]
>> +#endif
>> +
>> +#define CHECK_CIPHER_BUFFER(name, size, align)   \
>> + likely(size <= sizeof(name) &&  \
>> +name == PTR_ALIGN(((u8 *) name), align + 1))
>> +
>>  static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
>>unsigned int keylen)
>>  {
>> @@ -52,22 +62,35 @@ static int crypto_ctr_setkey(struct crypto_tfm *parent, 
>> const u8 *key,
>>   return err;
>>  }
>>
>> -static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
>> -struct crypto_cipher *tfm)
>> +static int crypto_ctr_crypt_final(struct blkcipher_walk *walk,
>> +   struct crypto_cipher *tfm)
>>  {
>>   unsigned int bsize = crypto_cipher_blocksize(tfm);
>>   unsigned long alignmask = crypto_cipher_alignmask(tfm);
>>   u8 *ctrblk = walk->iv;
>> - u8 tmp[bsize + alignmask];
>> - u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
>>   u8 *src = walk->src.virt.addr;
>>   u8 *dst = walk->dst.virt.addr;
>>   unsigned int nbytes = walk->nbytes;
>> + DECLARE_CIPHER_BUFFER(tmp);
>> + u8 *keystream, *tmp2;
>> +
>> + if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
>> + keystream = tmp;
>> + else {
>> + tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
>> + if (!tmp2)
>> + return -ENOMEM;
>> + keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
>> + }
>>
>>   crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
>>   crypto_xor_cpy(dst, keystream, src, nbytes);
>>
>>   crypto_inc(ctrblk, bsize);
>> +
>> + if (unlikely(keystream != tmp))
>> + kfree(tmp2);
>> + return 0;
>>  }
>
> This seems silly; isn't the !CHECK_CIPHER_BUFFER() case unreachable?  Did you
> even test it? If there's going to be limits, the crypto API ought to enforce
> them when registering an algorithm.

Yes, as I wrote in the commit log, I put that code there just in case
something changes (e.g. someone adds a cipher with a bigger block size),
so that it won't fail but will just keep working as is. I didn't really
like it, though, hence the note.

> A better alternative may be to move the keystream buffer into the request
> context, which is allowed to be variable length.  It looks like that would
> require converting the ctr template over to the skcipher API, since the
> blkcipher API doesn't have a request context.  But my understanding is that 
> that
> will need to be done eventually anyway, since the blkcipher (and ablkcipher) 
> API
> is going away.  I converted a bunch of algorithms recently and I can look at 
> the
> remaining ones in crypto/*.c if no one else gets to it first, but it may be a
> little while until I have time.

This seems much better. I don't think removing these VLAs is urgent;
after all, their sizes are limited and not under user control, so we can
just wait. I might help with porting some of crypto/*.c to the skcipher API.

> Also, I recall there being a long discussion a while back about how
> __aligned(16) doesn't work on local variables because the kernel's stack 
> pointer
> isn't guaranteed to maintain the alignment assumed by the compiler (see commit
> b8fbe71f7535)...

Oh... didn't know this! Interesting...

Thank you for your time,

Salvatore


Re: [PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Eric Biggers
On Wed, Mar 14, 2018 at 02:17:30PM +0100, Salvatore Mesoraca wrote:
> All ciphers implemented in Linux have a block size less than or
> equal to 16 bytes, and the most demanding hardware requires 16-byte
> alignment for the block buffer.
> We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
> alignment, unless the architecture supports efficient unaligned
> accesses.
> We also check, at runtime, that our assumptions still stand,
> possibly allocating a new buffer dynamically, just in case
> something changes in the future.
> 
> [1] https://lkml.org/lkml/2018/3/7/621
> 
> Signed-off-by: Salvatore Mesoraca 
> ---
> 
> Notes:
> Can we maybe skip the runtime check?
> 
>  crypto/ctr.c | 50 ++
>  1 file changed, 42 insertions(+), 8 deletions(-)
> 
> diff --git a/crypto/ctr.c b/crypto/ctr.c
> index 854d924..f37adf0 100644
> --- a/crypto/ctr.c
> +++ b/crypto/ctr.c
> @@ -35,6 +35,16 @@ struct crypto_rfc3686_req_ctx {
>   struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
>  };
>  
> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +#define DECLARE_CIPHER_BUFFER(name) u8 name[16]
> +#else
> +#define DECLARE_CIPHER_BUFFER(name) u8 __aligned(16) name[16]
> +#endif
> +
> +#define CHECK_CIPHER_BUFFER(name, size, align)   \
> + likely(size <= sizeof(name) &&  \
> +name == PTR_ALIGN(((u8 *) name), align + 1))
> +
>  static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
>unsigned int keylen)
>  {
> @@ -52,22 +62,35 @@ static int crypto_ctr_setkey(struct crypto_tfm *parent, 
> const u8 *key,
>   return err;
>  }
>  
> -static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
> -struct crypto_cipher *tfm)
> +static int crypto_ctr_crypt_final(struct blkcipher_walk *walk,
> +   struct crypto_cipher *tfm)
>  {
>   unsigned int bsize = crypto_cipher_blocksize(tfm);
>   unsigned long alignmask = crypto_cipher_alignmask(tfm);
>   u8 *ctrblk = walk->iv;
> - u8 tmp[bsize + alignmask];
> - u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
>   u8 *src = walk->src.virt.addr;
>   u8 *dst = walk->dst.virt.addr;
>   unsigned int nbytes = walk->nbytes;
> + DECLARE_CIPHER_BUFFER(tmp);
> + u8 *keystream, *tmp2;
> +
> + if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
> + keystream = tmp;
> + else {
> + tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
> + if (!tmp2)
> + return -ENOMEM;
> + keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
> + }
>  
>   crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
>   crypto_xor_cpy(dst, keystream, src, nbytes);
>  
>   crypto_inc(ctrblk, bsize);
> +
> + if (unlikely(keystream != tmp))
> + kfree(tmp2);
> + return 0;
>  }

This seems silly; isn't the !CHECK_CIPHER_BUFFER() case unreachable?  Did you
even test it?  If there's going to be limits, the crypto API ought to enforce
them when registering an algorithm.
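
Roughly what I mean, as a sketch only (the constants and function name here
are made up; this is not an existing check):

#include <linux/crypto.h>
#include <linux/errno.h>

/* Hypothetical sketch: enforce the bounds ctr.c relies on once, when the
 * algorithm is registered, instead of re-checking them on every request. */
#define MAX_CIPHER_BLOCKSIZE	16
#define MAX_CIPHER_ALIGNMASK	15

static int check_cipher_bounds(const struct crypto_alg *alg)
{
	if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
		return -EINVAL;
	if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
		return -EINVAL;
	return 0;
}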

A better alternative may be to move the keystream buffer into the request
context, which is allowed to be variable length.  It looks like that would
require converting the ctr template over to the skcipher API, since the
blkcipher API doesn't have a request context.  But my understanding is that that
will need to be done eventually anyway, since the blkcipher (and ablkcipher) API
is going away.  I converted a bunch of algorithms recently and I can look at the
remaining ones in crypto/*.c if no one else gets to it first, but it may be a
little while until I have time.
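
A rough sketch of that direction (hypothetical names, assuming the template
has already been converted to the skcipher API; this is not the actual
conversion):

#include <crypto/internal/skcipher.h>

/* Sketch only: reserve room for the keystream in the per-request context
 * so no on-stack VLA is needed.  Function names are invented. */
static int ctr_init_tfm_sketch(struct crypto_skcipher *tfm)
{
	/* request context is variable length, sized per transform */
	crypto_skcipher_set_reqsize(tfm,
				    crypto_skcipher_blocksize(tfm) +
				    crypto_skcipher_alignmask(tfm));
	return 0;
}

static u8 *ctr_keystream_sketch(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

	/* align the keystream buffer inside the request context by hand */
	return PTR_ALIGN((u8 *)skcipher_request_ctx(req),
			 crypto_skcipher_alignmask(tfm) + 1);
}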

Also, I recall there being a long discussion a while back about how
__aligned(16) doesn't work on local variables because the kernel's stack pointer
isn't guaranteed to maintain the alignment assumed by the compiler (see commit
b8fbe71f7535)...
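
(The usual workaround in that situation, sketched below with illustrative
sizes, is to over-allocate a plain stack buffer and align the pointer by
hand rather than relying on __aligned() for a local:)

#include <linux/kernel.h>
#include <linux/types.h>

/* Sketch only: get a 16-byte-aligned one-block buffer on the stack without
 * a VLA and without trusting __aligned(16) on a local variable. */
static void keystream_on_stack_sketch(void)
{
	u8 raw[16 + 15];			/* worst case needs 15 bytes of slack */
	u8 *keystream = PTR_ALIGN(raw, 16);	/* guaranteed 16-byte aligned */

	(void)keystream;			/* encrypt one block into keystream here */
}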

Eric


[PATCH 19/47] crypto: remove blackfin CRC driver

2018-03-14 Thread Arnd Bergmann
The blackfin architecture is getting removed, so this
driver won't be used any more.

Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/Kconfig|   7 -
 drivers/crypto/Makefile   |   1 -
 drivers/crypto/bfin_crc.c | 753 --
 drivers/crypto/bfin_crc.h | 124 
 4 files changed, 885 deletions(-)
 delete mode 100644 drivers/crypto/bfin_crc.c
 delete mode 100644 drivers/crypto/bfin_crc.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 5c4106979cc9..d1ea1a07cecb 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,13 +464,6 @@ if CRYPTO_DEV_UX500
source "drivers/crypto/ux500/Kconfig"
 endif # if CRYPTO_DEV_UX500
 
-config CRYPTO_DEV_BFIN_CRC
-   tristate "Support for Blackfin CRC hardware"
-   depends on BF60x
-   help
- Newer Blackfin processors have CRC hardware. Select this if you
- want to use the Blackfin CRC module.
-
 config CRYPTO_DEV_ATMEL_AUTHENC
tristate "Support for Atmel IPSEC/SSL hw accelerator"
depends on HAS_DMA
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index ee5ec5c94383..7ae87b4f6c8d 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,7 +3,6 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_AES) += atmel-aes.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_ECC) += atmel-ecc.o
-obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CAVIUM_ZIP) += cavium/
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
 obj-$(CONFIG_CRYPTO_DEV_CCREE) += ccree/
diff --git a/drivers/crypto/bfin_crc.c b/drivers/crypto/bfin_crc.c
deleted file mode 100644
index cfc4d682d55a..
diff --git a/drivers/crypto/bfin_crc.h b/drivers/crypto/bfin_crc.h
deleted file mode 100644
index 786ef746d109..
-- 
2.9.0



[PATCH 00/47] arch-removal: device drivers

2018-03-14 Thread Arnd Bergmann
Hi driver maintainers,

I just posted one series with the removal of eight architectures,
see https://lkml.org/lkml/2018/3/14/505 for details, or
https://lwn.net/Articles/748074/ for more background.

These are the device drivers that go along with them. I have already
picked up the drivers for arch/metag/ into my tree, they were reviewed
earlier.

Please let me know if you have any concerns with the patch, or if you
prefer to pick up the patches in your respective trees.  I created
the patches with 'git format-patch -D', so they will not apply without
manually removing those files.

For anything else, I'd keep the removal patches in my asm-generic tree
and will send a pull request for 4.17 along with the actual arch removal.

   Arnd

Arnd Bergmann
  edac: remove tile driver
  net: tile: remove ethernet drivers
  net: adi: remove blackfin ethernet drivers
  net: 8390: remove m32r specific bits
  net: remove cris etrax ethernet driver
  net: smsc: remove m32r specific smc91x configuration
  raid: remove tile specific raid6 implementation
  rtc: remove tile driver
  rtc: remove bfin driver
  char: remove obsolete ds1302 rtc driver
  char: remove tile-srom.c
  char: remove blackfin OTP driver
  pcmcia: remove m32r drivers
  pcmcia: remove blackfin driver
  ASoC: remove blackfin drivers
  video/logo: remove obsolete logo files
  fbdev: remove blackfin drivers
  fbdev: s1d13xxxfb: remove m32r specific hacks
  crypto: remove blackfin CRC driver
  media: platform: remove blackfin capture driver
  media: platform: remove m32r specific arv driver
  cpufreq: remove blackfin driver
  cpufreq: remove cris specific drivers
  gpio: remove etraxfs driver
  pinctrl: remove adi2/blackfin drivers
  ata: remove bf54x driver
  input: keyboard: remove bf54x driver
  input: misc: remove blackfin rotary driver
  mmc: remove bfin_sdh driver
  can: remove bfin_can driver
  watchdog: remove bfin_wdt driver
  mtd: maps: remove bfin-async-flash driver
  mtd: nand: remove bf5xx_nand driver
  spi: remove blackfin related host drivers
  i2c: remove bfin-twi driver
  pwm: remove pwm-bfin driver
  usb: host: remove tilegx platform glue
  usb: musb: remove blackfin port
  usb: isp1362: remove blackfin arch glue
  serial: remove cris/etrax uart drivers
  serial: remove blackfin drivers
  serial: remove m32r_sio driver
  serial: remove tile uart driver
  tty: remove bfin_jtag_comm and hvc_bfin_jtag drivers
  tty: hvc: remove tile driver
  staging: irda: remove bfin_sir driver
  staging: iio: remove iio-trig-bfin-timer driver

 .../devicetree/bindings/gpio/gpio-etraxfs.txt  |   22 -
 .../bindings/serial/axis,etraxfs-uart.txt  |   22 -
 Documentation/watchdog/watchdog-parameters.txt |5 -
 MAINTAINERS|8 -
 drivers/ata/Kconfig|9 -
 drivers/ata/Makefile   |1 -
 drivers/ata/pata_bf54x.c   | 1703 
 drivers/char/Kconfig   |   48 -
 drivers/char/Makefile  |3 -
 drivers/char/bfin-otp.c|  237 --
 drivers/char/ds1302.c  |  357 --
 drivers/char/tile-srom.c   |  475 ---
 drivers/cpufreq/Makefile   |3 -
 drivers/cpufreq/blackfin-cpufreq.c |  217 -
 drivers/cpufreq/cris-artpec3-cpufreq.c |   93 -
 drivers/cpufreq/cris-etraxfs-cpufreq.c |   92 -
 drivers/crypto/Kconfig |7 -
 drivers/crypto/Makefile|1 -
 drivers/crypto/bfin_crc.c  |  753 
 drivers/crypto/bfin_crc.h  |  124 -
 drivers/edac/Kconfig   |8 -
 drivers/edac/Makefile  |2 -
 drivers/edac/tile_edac.c   |  265 --
 drivers/gpio/Kconfig   |9 -
 drivers/gpio/Makefile  |1 -
 drivers/gpio/gpio-etraxfs.c|  475 ---
 drivers/i2c/busses/Kconfig |   18 -
 drivers/i2c/busses/Makefile|1 -
 drivers/i2c/busses/i2c-bfin-twi.c  |  737 
 drivers/input/keyboard/Kconfig |9 -
 drivers/input/keyboard/Makefile|1 -
 drivers/input/keyboard/bf54x-keys.c|  396 --
 drivers/input/misc/Kconfig |9 -
 drivers/input/misc/Makefile|1 -
 drivers/input/misc/bfin_rotary.c   |  294 --
 drivers/media/platform/Kconfig |   22 -
 drivers/media/platform/Makefile|4 -
 drivers/media/platform/arv.c   |  884 
 drivers/media/platform/blackfin/Kconfig|   16 -
 drivers/media/platform/blackfin/Makefile   | 

Re: [PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Stephan Mueller
On Wednesday, 14 March 2018 at 14:46:29 CET, Salvatore Mesoraca wrote:

Hi Salvatore,

> 2018-03-14 14:31 GMT+01:00 Stephan Mueller :
> > On Wednesday, 14 March 2018 at 14:17:30 CET, Salvatore Mesoraca wrote:
> > 
> > Hi Salvatore,
> > 
> >>   if (walk.nbytes) {
> >> 
> >> - crypto_ctr_crypt_final(&walk, child);
> >> - err = blkcipher_walk_done(desc, &walk, 0);
> >> + err = crypto_ctr_crypt_final(&walk, child);
> >> + err = blkcipher_walk_done(desc, &walk, err);
> > 
> > I guess you either want to handle the error from crypto_ctr_crypt_final or
> > do an err |= blkcipher_walk_done.
> 
> I think that blkcipher_walk_done handles and returns the error for me.
> Am I wrong?

You are right: you want to finalize the crypto operation even if the
encryption fails.

Please disregard my comment.
> 
> Best regards,
> 
> Salvatore



Ciao
Stephan




Re: [PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Salvatore Mesoraca
2018-03-14 14:31 GMT+01:00 Stephan Mueller :
> On Wednesday, 14 March 2018 at 14:17:30 CET, Salvatore Mesoraca wrote:
>
> Hi Salvatore,
>
>>   if (walk.nbytes) {
>> - crypto_ctr_crypt_final(&walk, child);
>> - err = blkcipher_walk_done(desc, &walk, 0);
>> + err = crypto_ctr_crypt_final(&walk, child);
>> + err = blkcipher_walk_done(desc, &walk, err);
>
> I guess you either want to handle the error from crypto_ctr_crypt_final or do
> an err |= blkcipher_walk_done.

I think that blkcipher_walk_done handles and returns the error for me.
Am I wrong?

Best regards,

Salvatore


Re: [PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Stephan Mueller
On Wednesday, 14 March 2018 at 14:17:30 CET, Salvatore Mesoraca wrote:

Hi Salvatore,

>   if (walk.nbytes) {
> - crypto_ctr_crypt_final(&walk, child);
> - err = blkcipher_walk_done(desc, &walk, 0);
> + err = crypto_ctr_crypt_final(&walk, child);
> + err = blkcipher_walk_done(desc, &walk, err);

I guess you either want to handle the error from crypto_ctr_crypt_final or do 
an err |= blkcipher_walk_done.

>   }
> 
>   return err;



Ciao
Stephan




[PATCH] crypto: ctr: avoid VLA use

2018-03-14 Thread Salvatore Mesoraca
All ciphers implemented in Linux have a block size less than or
equal to 16 bytes, and the most demanding hardware requires 16-byte
alignment for the block buffer.
We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
alignment, unless the architecture supports efficient unaligned
accesses.
We also check, at runtime, that our assumptions still stand,
possibly allocating a new buffer dynamically, just in case
something changes in the future.

[1] https://lkml.org/lkml/2018/3/7/621

Signed-off-by: Salvatore Mesoraca 
---

Notes:
Can we maybe skip the runtime check?

 crypto/ctr.c | 50 ++
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..f37adf0 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -35,6 +35,16 @@ struct crypto_rfc3686_req_ctx {
struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
 };
 
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#define DECLARE_CIPHER_BUFFER(name) u8 name[16]
+#else
+#define DECLARE_CIPHER_BUFFER(name) u8 __aligned(16) name[16]
+#endif
+
+#define CHECK_CIPHER_BUFFER(name, size, align) \
+   likely(size <= sizeof(name) &&  \
+  name == PTR_ALIGN(((u8 *) name), align + 1))
+
 static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
 unsigned int keylen)
 {
@@ -52,22 +62,35 @@ static int crypto_ctr_setkey(struct crypto_tfm *parent, 
const u8 *key,
return err;
 }
 
-static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
-  struct crypto_cipher *tfm)
+static int crypto_ctr_crypt_final(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm)
 {
unsigned int bsize = crypto_cipher_blocksize(tfm);
unsigned long alignmask = crypto_cipher_alignmask(tfm);
u8 *ctrblk = walk->iv;
-   u8 tmp[bsize + alignmask];
-   u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;
+   DECLARE_CIPHER_BUFFER(tmp);
+   u8 *keystream, *tmp2;
+
+   if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
+   keystream = tmp;
+   else {
+   tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
+   if (!tmp2)
+   return -ENOMEM;
+   keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
+   }
 
crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
crypto_xor_cpy(dst, keystream, src, nbytes);
 
crypto_inc(ctrblk, bsize);
+
+   if (unlikely(keystream != tmp))
+   kfree(tmp2);
+   return 0;
 }
 
 static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
@@ -106,8 +129,17 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk 
*walk,
unsigned int nbytes = walk->nbytes;
u8 *ctrblk = walk->iv;
u8 *src = walk->src.virt.addr;
-   u8 tmp[bsize + alignmask];
-   u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
+   DECLARE_CIPHER_BUFFER(tmp);
+   u8 *keystream, *tmp2;
+
+   if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
+   keystream = tmp;
+   else {
+   tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
+   if (!tmp2)
+   return -ENOMEM;
+   keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
+   }
 
do {
/* create keystream */
@@ -120,6 +152,8 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk 
*walk,
src += bsize;
} while ((nbytes -= bsize) >= bsize);
 
+   if (unlikely(keystream != tmp))
+   kfree(tmp2);
return nbytes;
 }
 
@@ -147,8 +181,8 @@ static int crypto_ctr_crypt(struct blkcipher_desc *desc,
}
 
if (walk.nbytes) {
-   crypto_ctr_crypt_final(&walk, child);
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = crypto_ctr_crypt_final(&walk, child);
+   err = blkcipher_walk_done(desc, &walk, err);
}
 
return err;
-- 
1.9.1



Re: [PATCH] crypto: arm,arm64 - Fix random regeneration of S_shipped

2018-03-14 Thread Ard Biesheuvel
On 14 March 2018 at 02:31, Masahiro Yamada wrote:
> 2018-03-14 5:17 GMT+09:00 Leonard Crestez :
>> The decision to rebuild .S_shipped is made based on the relative
>> timestamps of .S_shipped and .pl files but git makes this essentially
>> random. This means that the perl script might run anyway (usually at
>> most once per checkout), defeating the whole purpose of _shipped.
>>
>> Fix by skipping the rule unless explicit make variables are provided:
>> REGENERATE_ARM_CRYPTO or REGENERATE_ARM64_CRYPTO.
>>
>> This can produce nasty occasional build failures downstream, for example
>> for toolchains with broken perl. The solution is minimally intrusive to
>> make it easier to push into stable.
>>
>> Another report on a similar issue here: https://lkml.org/lkml/2018/3/8/1379
>>
>> Signed-off-by: Leonard Crestez 
>> Cc: 
>> ---
>
>
>
> Reviewed-by: Masahiro Yamada 
>

Acked-by: Ard Biesheuvel 

>
>
>>  arch/arm/crypto/Makefile   | 2 ++
>>  arch/arm64/crypto/Makefile | 2 ++
>>  2 files changed, 4 insertions(+)
>>
>> Not clear if this needs to go through crypto or arm but all commits in these
>> directories start with "crypto:".
>>
>> My problems were only on arm64 because of a yocto toolchain which ships a 
>> version
>> of perl which fails on "use integer;".
>>
>> CC stable because this can cause trouble for downstream packagers.
>>
>> diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
>> index 30ef8e2..c9919c2 100644
>> --- a/arch/arm/crypto/Makefile
>> +++ b/arch/arm/crypto/Makefile
>> @@ -47,20 +47,22 @@ sha256-arm-y:= sha256-core.o sha256_glue.o 
>> $(sha256-arm-neon-y)
>>  sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
>>  sha512-arm-y   := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
>>  sha1-arm-ce-y  := sha1-ce-core.o sha1-ce-glue.o
>>  sha2-arm-ce-y  := sha2-ce-core.o sha2-ce-glue.o
>>  aes-arm-ce-y   := aes-ce-core.o aes-ce-glue.o
>>  ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
>>  crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
>>  crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
>>  chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
>>
>> +ifdef REGENERATE_ARM_CRYPTO
>>  quiet_cmd_perl = PERL    $@
>>cmd_perl = $(PERL) $(<) > $(@)
>>
>>  $(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl
>> $(call cmd,perl)
>>
>>  $(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl
>> $(call cmd,perl)
>> +endif
>>
>>  .PRECIOUS: $(obj)/sha256-core.S $(obj)/sha512-core.S
>> diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
>> index cee9b8d9..dfe651b 100644
>> --- a/arch/arm64/crypto/Makefile
>> +++ b/arch/arm64/crypto/Makefile
>> @@ -60,20 +60,22 @@ obj-$(CONFIG_CRYPTO_AES_ARM64_BS) += aes-neon-bs.o
>>  aes-neon-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
>>
>>  AFLAGS_aes-ce.o:= -DINTERLEAVE=4
>>  AFLAGS_aes-neon.o  := -DINTERLEAVE=4
>>
>>  CFLAGS_aes-glue-ce.o   := -DUSE_V8_CRYPTO_EXTENSIONS
>>
>>  $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
>> $(call if_changed_rule,cc_o_c)
>>
>> +ifdef REGENERATE_ARM64_CRYPTO
>>  quiet_cmd_perlasm = PERLASM $@
>>cmd_perlasm = $(PERL) $(<) void $(@)
>>
>>  $(src)/sha256-core.S_shipped: $(src)/sha512-armv8.pl
>> $(call cmd,perlasm)
>>
>>  $(src)/sha512-core.S_shipped: $(src)/sha512-armv8.pl
>> $(call cmd,perlasm)
>> +endif
>>
>>  .PRECIOUS: $(obj)/sha256-core.S $(obj)/sha512-core.S
>> --
>> 2.7.4
>>
>
>
>
> --
> Best Regards
> Masahiro Yamada