Re: [PATCH V7 0/7] crypto: AES CBC multibuffer implementation

2017-07-31 Thread Megha Dey
On Tue, 2017-07-25 at 19:09 -0700, Megha Dey wrote:
> In this patch series, we introduce AES CBC encryption that is parallelized on
> x86_64 CPUs with XMM registers. The multi-buffer technique encrypts 8 data
> streams in parallel with SIMD instructions. Decryption is handled as in the
> existing AES-NI Intel CBC implementation, which can already parallelize
> decryption even for a single data stream.
> 
> Please see the multi-buffer whitepaper for details of the technique:
> http://www.intel.com/content/www/us/en/communications/communications-ia-multi-buffer-paper.html
> 
> It is important that any driver use this algorithm only for scenarios
> where many data streams can keep the data lanes filled most of the time;
> it shouldn't be used when mostly a single data stream is expected.
> Otherwise we may incur extra delays when there are frequent gaps in the data
> lanes, as we have to wait for incoming data to fill the lanes before
> initiating encryption, and may have to wait for a flush operation when no new
> data comes in after some wait time. However, we keep this extra delay to a
> minimum by opportunistically flushing the unfinished jobs when the crypto
> daemon is the only active task running on a cpu.
> 
> By using this technique, we saw a throughput increase of up to 5.7x under
> optimal conditions when we have fully loaded encryption jobs filling up all
> the data lanes.
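The lane-filling and opportunistic-flush behavior described above can be modeled with a toy scheduler. This is a sketch only: the `mb_mgr` name, lane count, and counters loosely mirror the patchset but are illustrative, not its actual API.

```c
#include <assert.h>

/* Toy model of the multi-buffer scheduler described above: jobs queue
 * up until all 8 SIMD lanes are full and are then processed as one
 * batch; an explicit flush pushes out a partial batch so latency stays
 * bounded when no more data arrives.  Illustrative only, not the real
 * mb_mgr interface from the patchset. */
#define MB_LANES 8

struct mb_mgr {
	int queued;	/* jobs currently waiting in lanes */
	int batches;	/* number of full 8-lane submissions */
	int flushed;	/* jobs completed via a partial flush */
};

static void mb_submit(struct mb_mgr *m)
{
	if (++m->queued == MB_LANES) {	/* all lanes full: run in parallel */
		m->batches++;
		m->queued = 0;
	}
}

static void mb_flush(struct mb_mgr *m)
{
	m->flushed += m->queued;	/* finish the partial batch now */
	m->queued = 0;
}
```

Ten submissions followed by a flush complete one full batch of 8 plus 2 flushed jobs, mirroring the trade-off in the cover letter: full lanes give the throughput gain, flushes bound the latency of stragglers.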

Hi Herbert,

Are there any more issues with this patchset?
> 
> Change Log:
> 
> v7
> 1. Add the CRYPTO_ALG_ASYNC flag to the internal algorithm
> 2. Remove the irq_disabled check
> 
> v6
> 1. Move away from the compat naming scheme and update the names of the inner
>and outer algorithm
> 2. Move wrapper code around synchronous internal algorithm from simd.c
>to mcryptd.c
> 
> v5
> 1. Use an async implementation of the inner algorithm instead of sync and use
>the latest skcipher interface instead of the older blkcipher interface.
>(we have picked up this work after a while)
> 
> v4
> 1. Make the decrypt path also use ablkcipher walk.
> http://lkml.iu.edu/hypermail/linux/kernel/1512.0/01807.html
> 
> v3
> 1. Use ablkcipher_walk helpers to walk the scatter gather list,
> eliminating the need to modify blkcipher_walk for multi-buffer ciphers
> 
> v2
> 1. Update cpu feature check to make sure SSE is supported
> 2. Fix up unloading of aes-cbc-mb module to properly free memory
> 
> Megha Dey (7):
>   crypto: Multi-buffer encryption infrastructure support
>   crypto: AES CBC multi-buffer data structures
>   crypto: AES CBC multi-buffer scheduler
>   crypto: AES CBC by8 encryption
>   crypto: AES CBC multi-buffer glue code
>   crypto: AES vectors for AES CBC multibuffer testing
>   crypto: AES CBC multi-buffer tcrypt
> 
>  arch/x86/crypto/Makefile   |1 +
>  arch/x86/crypto/aes-cbc-mb/Makefile|   22 +
>  arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S|  775 ++
>  arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c|  720 ++
>  arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h|   97 ++
>  arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h|  132 ++
>  arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c   |  146 ++
>  arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S |  271 
>  arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S |  223 +++
>  arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S |  417 ++
>  arch/x86/crypto/aes-cbc-mb/reg_sizes.S |  126 ++
>  crypto/Kconfig |   15 +
>  crypto/mcryptd.c   |  475 +++
>  crypto/tcrypt.c|  259 +++-
>  crypto/testmgr.c   |  707 +
>  crypto/testmgr.h   | 1496 
> 
>  include/crypto/mcryptd.h   |   56 +
>  17 files changed, 5936 insertions(+), 2 deletions(-)
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/Makefile
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
>  create mode 100644 arch/x86/crypto/aes-cbc-mb/reg_sizes.S
> 




Re: [PATCH v3 net-next 1/4] tcp: ULP infrastructure

2017-07-31 Thread Dave Watson
On 07/29/17 01:12 PM, Tom Herbert wrote:
> On Wed, Jun 14, 2017 at 11:37 AM, Dave Watson  wrote:
> > Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
> > sockets. Based on a similar infrastructure in tcp_cong.  The idea is that any
> > ULP can add its own logic by changing the TCP proto_ops structure to its own
> > methods.
> >
> > Example usage:
> >
> > setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));
> >
> One question: is there a good reason why the ULP infrastructure should
> be just for TCP sockets? For example, I'd really like to be able to do
> something like:
> 
> setsockopt(sock, SOL_SOCKET, SO_ULP, &ulp_param, sizeof(ulp_param));
> 
> Where ulp_param is a structure containing the ULP name as well as some
> ULP specific parameters that are passed to init_ulp. ulp_init could
> determine whether the socket family is appropriate for the ULP being
> requested.

Using SOL_SOCKET instead seems reasonable to me.  I can see how
ulp_params could have some use, perhaps at a slight loss in clarity.
TLS needs its own setsockopts anyway though, for renegotiation for
example.
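Tom Herbert's proposal could be sketched as below. The struct layout and the SO_ULP option are hypothetical, taken only from his description; the interface that actually exists in the patch is the string-only setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")).

```c
#include <assert.h>
#include <string.h>

/* Hypothetical generic ULP parameter block: a name plus ULP-specific
 * parameters, passed to a family-agnostic setsockopt.  Not a kernel
 * API; field names are illustrative. */
struct ulp_param {
	char ulp_name[16];	/* ULP to attach, e.g. "tls" */
	unsigned int ulp_flags;	/* ULP-specific parameters would follow */
};

static void ulp_param_init(struct ulp_param *p, const char *name,
			   unsigned int flags)
{
	memset(p, 0, sizeof(*p));
	strncpy(p->ulp_name, name, sizeof(p->ulp_name) - 1);
	p->ulp_flags = flags;
}
```

With such a structure the call would become setsockopt(sock, SOL_SOCKET, SO_ULP, &p, sizeof(p)), and ulp_init() could then check that the socket family suits the requested ULP, as suggested in the quoted mail.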


[PATCH] crypto: ccp - select CONFIG_CRYPTO_RSA

2017-07-31 Thread Arnd Bergmann
Without the base RSA code, we run into a link error:

ERROR: "rsa_parse_pub_key" [drivers/crypto/ccp/ccp-crypto.ko] undefined!
ERROR: "rsa_parse_priv_key" [drivers/crypto/ccp/ccp-crypto.ko] undefined!

As with the other drivers implementing RSA in hardware, the error
can be avoided by always enabling the base RSA support when we build
CCP.

Fixes: ceeec0afd684 ("crypto: ccp - Add support for RSA on the CCP")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 15b63fd3d180..6d626606b9c5 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -27,6 +27,7 @@ config CRYPTO_DEV_CCP_CRYPTO
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_AUTHENC
+   select CRYPTO_RSA
help
  Support for using the cryptographic API with the AMD Cryptographic
  Coprocessor. This module supports offload of SHA and AES algorithms.
-- 
2.9.0



[PATCH] crypto: ccp - avoid uninitialized variable warning

2017-07-31 Thread Arnd Bergmann
The added support for version 5 CCPs introduced a false-positive
warning in the RSA implementation:

drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used 
uninitialized in this function [-Werror=maybe-uninitialized]

This changes the code in a way that should make it easier for
the compiler to track the state of the sb_count variable, and
avoid the warning.

Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5 CCPs")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/ccp/ccp-ops.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 40c062ad8726..a8bc207b099a 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1758,6 +1758,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
o_len = 32 * ((rsa->key_size + 255) / 256);
i_len = o_len * 2;
 
+   sb_count = 0;
if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
/* sb_count is the number of storage block slots required
 * for the modulus.
@@ -1852,7 +1853,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
ccp_dm_free();
 
 e_sb:
-   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0))
+   if (sb_count)
cmd_q->ccp->vdata->perform->sbfree(cmd_q, op.sb_key, sb_count);
 
return ret;
-- 
2.9.0
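The shape of the fix can be shown in plain C. This is a sketch: the version numbers and slot counts are made up, and the real code reserves storage-block slots through the CCP's sballoc/sbfree callbacks rather than a counter.

```c
#include <assert.h>

/* Model of the warning and its fix: sb_count was assigned only on the
 * pre-v5 branch, so gating the cleanup on the version check forced the
 * compiler to prove the two conditions match.  Initializing the count
 * up front and gating the free on the count itself is easier to track.
 * Version numbers and counts here are illustrative. */
static int run_rsa_cmd(int version, int *slots_freed)
{
	int sb_count = 0;	/* always initialized, silencing the warning */

	if (version < 5)
		sb_count = 2;	/* storage blocks needed only on old CCPs */

	/* ... command setup and execution would go here ... */

	if (sb_count)		/* free only what was actually reserved */
		*slots_freed = sb_count;
	else
		*slots_freed = 0;
	return 0;
}
```

The condition `if (sb_count)` is also more robust than repeating the version check: the cleanup now depends directly on whether anything was reserved.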



[PATCH] crypto: serpent: improve __serpent_setkey with UBSAN

2017-07-31 Thread Arnd Bergmann
When UBSAN is enabled, we get a very large stack frame for
__serpent_setkey: the register allocator ends up needing more registers
than it has and spills temporary values to the stack. The code
was originally optimized for in-order x86-32 CPU implementations using
older compilers, but it now runs into a highly suboptimal case on all
CPU architectures, as seen by this warning:

crypto/serpent_generic.c: In function '__serpent_setkey':
crypto/serpent_generic.c:436:1: error: the frame size of 2720 bytes is larger 
than 2048 bytes [-Werror=frame-larger-than=]

Disabling -fsanitize=alignment would avoid that warning; presumably the
option turns off an optimization step that is required for getting the
register allocation right, but there is no easy way to do that on gcc-7
(gcc-8 introduces a function attribute for this).

I tried to figure out a way to modify the source code instead, and noticed
that the two stages of the setkey() function (keyiter and sbox) each are
fine by themselves, but not when combined into one function. Splitting
out the entire sbox into a separate function also happens to work fine
with all compilers I tried (arm, arm64 and x86).

The setkey function uses a strange way to handle offsets into the key
array, using both negative and positive index values, as well as adjusting
the array pointer back and forth. I have checked that this actually
makes no difference to modern compilers, but I left that untouched
to make the patch easier to review and to keep the code closer to
the reference implementation.

Link: https://patchwork.kernel.org/patch/9189575/
Signed-off-by: Arnd Bergmann 
---
 crypto/serpent_generic.c | 77 ++--
 1 file changed, 41 insertions(+), 36 deletions(-)

diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c
index 94970a794975..7c3382facc82 100644
--- a/crypto/serpent_generic.c
+++ b/crypto/serpent_generic.c
@@ -229,6 +229,46 @@
x4 ^= x2;   \
})
 
+static void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2, u32 r3, u32 r4, u32 
*k)
+{
+   k += 100;
+   S3(r3, r4, r0, r1, r2); store_and_load_keys(r1, r2, r4, r3, 28, 24);
+   S4(r1, r2, r4, r3, r0); store_and_load_keys(r2, r4, r3, r0, 24, 20);
+   S5(r2, r4, r3, r0, r1); store_and_load_keys(r1, r2, r4, r0, 20, 16);
+   S6(r1, r2, r4, r0, r3); store_and_load_keys(r4, r3, r2, r0, 16, 12);
+   S7(r4, r3, r2, r0, r1); store_and_load_keys(r1, r2, r0, r4, 12, 8);
+   S0(r1, r2, r0, r4, r3); store_and_load_keys(r0, r2, r4, r1, 8, 4);
+   S1(r0, r2, r4, r1, r3); store_and_load_keys(r3, r4, r1, r0, 4, 0);
+   S2(r3, r4, r1, r0, r2); store_and_load_keys(r2, r4, r3, r0, 0, -4);
+   S3(r2, r4, r3, r0, r1); store_and_load_keys(r0, r1, r4, r2, -4, -8);
+   S4(r0, r1, r4, r2, r3); store_and_load_keys(r1, r4, r2, r3, -8, -12);
+   S5(r1, r4, r2, r3, r0); store_and_load_keys(r0, r1, r4, r3, -12, -16);
+   S6(r0, r1, r4, r3, r2); store_and_load_keys(r4, r2, r1, r3, -16, -20);
+   S7(r4, r2, r1, r3, r0); store_and_load_keys(r0, r1, r3, r4, -20, -24);
+   S0(r0, r1, r3, r4, r2); store_and_load_keys(r3, r1, r4, r0, -24, -28);
+   k -= 50;
+   S1(r3, r1, r4, r0, r2); store_and_load_keys(r2, r4, r0, r3, 22, 18);
+   S2(r2, r4, r0, r3, r1); store_and_load_keys(r1, r4, r2, r3, 18, 14);
+   S3(r1, r4, r2, r3, r0); store_and_load_keys(r3, r0, r4, r1, 14, 10);
+   S4(r3, r0, r4, r1, r2); store_and_load_keys(r0, r4, r1, r2, 10, 6);
+   S5(r0, r4, r1, r2, r3); store_and_load_keys(r3, r0, r4, r2, 6, 2);
+   S6(r3, r0, r4, r2, r1); store_and_load_keys(r4, r1, r0, r2, 2, -2);
+   S7(r4, r1, r0, r2, r3); store_and_load_keys(r3, r0, r2, r4, -2, -6);
+   S0(r3, r0, r2, r4, r1); store_and_load_keys(r2, r0, r4, r3, -6, -10);
+   S1(r2, r0, r4, r3, r1); store_and_load_keys(r1, r4, r3, r2, -10, -14);
+   S2(r1, r4, r3, r2, r0); store_and_load_keys(r0, r4, r1, r2, -14, -18);
+   S3(r0, r4, r1, r2, r3); store_and_load_keys(r2, r3, r4, r0, -18, -22);
+   k -= 50;
+   S4(r2, r3, r4, r0, r1); store_and_load_keys(r3, r4, r0, r1, 28, 24);
+   S5(r3, r4, r0, r1, r2); store_and_load_keys(r2, r3, r4, r1, 24, 20);
+   S6(r2, r3, r4, r1, r0); store_and_load_keys(r4, r0, r3, r1, 20, 16);
+   S7(r4, r0, r3, r1, r2); store_and_load_keys(r2, r3, r1, r4, 16, 12);
+   S0(r2, r3, r1, r4, r0); store_and_load_keys(r1, r3, r4, r2, 12, 8);
+   S1(r1, r3, r4, r2, r0); store_and_load_keys(r0, r4, r2, r1, 8, 4);
+   S2(r0, r4, r2, r1, r3); store_and_load_keys(r3, r4, r0, r1, 4, 0);
+   S3(r3, r4, r0, r1, r2); storekeys(r1, r2, r4, r3, 0);
+}
+
 int __serpent_setkey(struct serpent_ctx *ctx, const u8 *key,
 unsigned int keylen)
 {
@@ -395,42 +435,7 @@ int __serpent_setkey(struct serpent_ctx *ctx, const u8 
*key,
keyiter(k[23], r1, r0, r3, 131, 31);
 
/* Apply S-boxes */
-
-   S3(r3, 

Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Andy Shevchenko
On Mon, Jul 31, 2017 at 9:04 PM, Logan Gunthorpe  wrote:
>
>
> On 31/07/17 12:03 PM, Andy Shevchenko wrote:
>>
>> Per commit 3a044178cccf, they were created exactly for this kind of case.
>>
>
> Sure, ok, and my patchset provides the same set of functions to satisfy
> such a use.

Okay, please, Cc me for next version, I think I need fresh view on it.
Thanks!

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Logan Gunthorpe


On 31/07/17 12:03 PM, Andy Shevchenko wrote:
> 
> Per commit 3a044178cccf, they were created exactly for this kind of case.
> 

Sure, ok, and my patchset provides the same set of functions to satisfy
such a use.

Logan


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Andy Shevchenko
On Mon, Jul 31, 2017 at 9:00 PM, Logan Gunthorpe  wrote:
> On 31/07/17 11:58 AM, Andy Shevchenko wrote:
>> On Mon, Jul 31, 2017 at 7:31 PM, Logan Gunthorpe  wrote:
>>> On 31/07/17 10:10 AM, Andy Shevchenko wrote:
 Some drivers (hardware) would like to have non-atomic MMIO accesses
 when readq() is defined
>>>
>>> Huh? But that's the whole point of the io64-nonatomic header. If a
>>> driver wants a specific non-atomic access they should just code two 32
>>> bit accesses.
>
>> You mean to call them directly as lo_hi_XXX() or hi_lo_XXX() ?
>> Yes it would work.
>
> I suppose you could do that too but I really meant just using two io32
> calls. That's the most explicit way to indicate you want a non-atomic
> access.

Per commit 3a044178cccf, they were created exactly for this kind of case.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Logan Gunthorpe


On 31/07/17 11:58 AM, Andy Shevchenko wrote:
> On Mon, Jul 31, 2017 at 7:31 PM, Logan Gunthorpe  wrote:
>> On 31/07/17 10:10 AM, Andy Shevchenko wrote:
>>> Some drivers (hardware) would like to have non-atomic MMIO accesses
>>> when readq() is defined
>>
>> Huh? But that's the whole point of the io64-nonatomic header. If a
>> driver wants a specific non-atomic access they should just code two 32
>> bit accesses.

> You mean to call them directly as lo_hi_XXX() or hi_lo_XXX() ?
> Yes it would work.

I suppose you could do that too but I really meant just using two io32
calls. That's the most explicit way to indicate you want a non-atomic
access.
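For reference, the non-atomic composition under discussion amounts to something like this. It is a userspace sketch: readl() is modeled as a plain 32-bit load from a buffer rather than an MMIO access.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the lo_hi variant from the patch title: a 64-bit register
 * read performed as two explicit 32-bit accesses, low word first.
 * On real hardware readl() would be an MMIO read, and the two halves
 * are not atomic with respect to the device. */
static uint32_t readl(const volatile uint32_t *addr)
{
	return *addr;
}

static uint64_t lo_hi_readq(const volatile uint32_t *addr)
{
	uint32_t lo = readl(addr);	/* low 32 bits at offset 0 */
	uint32_t hi = readl(addr + 1);	/* high 32 bits at offset 4 */

	return ((uint64_t)hi << 32) | lo;
}
```

The hi_lo variant simply reverses the order of the two reads; which order is correct depends on how the device latches the 64-bit value.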

Logan


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Andy Shevchenko
On Mon, Jul 31, 2017 at 7:31 PM, Logan Gunthorpe  wrote:
> On 31/07/17 10:10 AM, Andy Shevchenko wrote:
>> Some drivers (hardware) would like to have non-atomic MMIO accesses
>> when readq() is defined
>
> Huh? But that's the whole point of the io64-nonatomic header. If a
> driver wants a specific non-atomic access they should just code two 32
> bit accesses.

You mean to call them directly as lo_hi_XXX() or hi_lo_XXX() ?
Yes it would work.

>> In case of readq() / writeq() it's defined by the order of inclusion:
>>
>> 1)
>> include <...non-atomic...>
>> include 
>>
>> Always non-atomic will be used.
>
> I'm afraid you're wrong about this. The io-64-nonatomic-xx header
> includes linux/io.h. Thus the order of the includes doesn't matter and
> it will always auto switch. In any case, making an interface do
> different things depending on the order of include files is *completely*
> insane.

Yes, you are right. I was thinking about something unrelated.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Logan Gunthorpe


On 31/07/17 10:10 AM, Andy Shevchenko wrote:
> Some drivers (hardware) would like to have non-atomic MMIO accesses
> when readq() is defined

Huh? But that's the whole point of the io64-nonatomic header. If a
driver wants a specific non-atomic access they should just code two 32
bit accesses.

> In case of readq() / writeq() it's defined by the order of inclusion:
> 
> 1)
> include <...non-atomic...>
> include 
> 
> Always non-atomic will be used.

I'm afraid you're wrong about this. The io-64-nonatomic-xx header
includes linux/io.h. Thus the order of the includes doesn't matter and
it will always auto switch. In any case, making an interface do
different things depending on the order of include files is *completely*
insane.

> P.S. I have done a comparison table of the IO accessors in the Linux
> kernel and it looks far from consistent.

There are a few corner oddities but it's really not that bad. Most
things are done for a reason if you dig into them.

Logan


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Andy Shevchenko
On Mon, Jul 31, 2017 at 6:55 PM, Logan Gunthorpe  wrote:
> On 30/07/17 10:03 AM, Andy Shevchenko wrote:
>> On Thu, Jul 27, 2017 at 2:19 AM, Logan Gunthorpe  wrote:
>>> In order to provide non-atomic functions for io{read|write}64 that will
>>> use readq and writeq when appropriate. We define a number of variants
>>> of these functions in the generic iomap that will do non-atomic
>>> operations on pio but atomic operations on mmio.
>>>
>>> These functions are only defined if readq and writeq are defined. If
>>> they are not, then the wrappers that always use non-atomic operations
>>> from include/linux/io-64-nonatomic*.h will be used.
>>
>> Don't you see here a slight problem?
>>
>> In some cases we want to substitute atomic in favour of non-atomic
>> when both are defined.
>> So, please don't do this "smartly".
>
> I'm not sure what you mean here. The driver should use ioread64 and
> include an io-64-nonatomic header. Then there are three cases:
>
> 1) The arch has no atomic 64 bit io operations defined. In this case it
> uses the non-atomic inline function in the io-64-nonatomic header.

Okay

> 2) The arch uses CONFIG_GENERIC_IOMAP and has readq defined, but not
> ioread64 defined (likely because pio can't do atomic 64 bit operations
> but mmio can). In this case we need to use the ioread64_xx functions
> defined in iomap.c which do atomic mmio and non-atomic pio.

Not okay.

Some drivers (hardware) would like to have non-atomic MMIO accesses
when readq() is defined

> 3) The arch has ioread64 defined so the atomic operation is used.

Not okay. Same reason as above.

In case of readq() / writeq() it's defined by the order of inclusion:

1)
include <...non-atomic...>
include 

Always non-atomic will be used.

2)
include 
include <...non-atomic...>

Auto switch like you described.

I don't like the above solution, since it's fragile, but a few drivers depend on it.

If you wish to always do it like 2), perhaps we need to split the accessors
into ones for a fixed bus width and ones for the atomic/non-atomic cases.
OTOH, it would be done by introducing
memcpyXX_fromio()
memcpyXX_toio()
memsetXX_io()

Where XX = 64, 32, 16, 8.

Note, that ioreadXX_rep() is not the same as above.

P.S. I have done a comparison table of the IO accessors in the Linux
kernel and it looks far from consistent.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v5 3/6] iomap: introduce io{read|write}64_{lo_hi|hi_lo}

2017-07-31 Thread Logan Gunthorpe


On 30/07/17 10:03 AM, Andy Shevchenko wrote:
> On Thu, Jul 27, 2017 at 2:19 AM, Logan Gunthorpe  wrote:
>> In order to provide non-atomic functions for io{read|write}64 that will
>> use readq and writeq when appropriate. We define a number of variants
>> of these functions in the generic iomap that will do non-atomic
>> operations on pio but atomic operations on mmio.
>>
>> These functions are only defined if readq and writeq are defined. If
>> they are not, then the wrappers that always use non-atomic operations
>> from include/linux/io-64-nonatomic*.h will be used.
> 
> Don't you see here a slight problem?
> 
> In some cases we want to substitute atomic in favour of non-atomic
> when both are defined.
> So, please don't do this "smartly".

I'm not sure what you mean here. The driver should use ioread64 and
include an io-64-nonatomic header. Then there are three cases:

1) The arch has no atomic 64 bit io operations defined. In this case it
uses the non-atomic inline function in the io-64-nonatomic header.

2) The arch uses CONFIG_GENERIC_IOMAP and has readq defined, but not
ioread64 defined (likely because pio can't do atomic 64 bit operations
but mmio can). In this case we need to use the ioread64_xx functions
defined in iomap.c which do atomic mmio and non-atomic pio.

3) The arch has ioread64 defined so the atomic operation is used.

>> +u64 ioread64_lo_hi(void __iomem *addr)
>> +{
>> +   IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr));
>> +   return 0xLL;
>> +}
> 
> U missed u.

I'll fix this in the next revision.

Thanks,

Logan



[PATCH v2] crypto: caam: Remove unused dentry members

2017-07-31 Thread Fabio Estevam
Most of the dentry members from structure caam_drv_private
are never used at all, so it is safe to remove them.

Since debugfs_remove_recursive() is called, we don't need the
file entries.

Signed-off-by: Fabio Estevam 
---
Changes since v1:
- Remove all the unused dentry members (Horia)

 drivers/crypto/caam/ctrl.c   | 81 
 drivers/crypto/caam/intern.h |  8 -
 drivers/crypto/caam/qi.c |  6 ++--
 3 files changed, 32 insertions(+), 63 deletions(-)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 7338f15..dc65fed 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -734,59 +734,38 @@ static int caam_probe(struct platform_device *pdev)
 ctrlpriv->total_jobrs, ctrlpriv->qi_present);
 
 #ifdef CONFIG_DEBUG_FS
-
-   ctrlpriv->ctl_rq_dequeued =
-   debugfs_create_file("rq_dequeued",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >req_dequeued,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ob_enc_req =
-   debugfs_create_file("ob_rq_encrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ob_enc_req,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ib_dec_req =
-   debugfs_create_file("ib_rq_decrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ib_dec_req,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ob_enc_bytes =
-   debugfs_create_file("ob_bytes_encrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ob_enc_bytes,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ob_prot_bytes =
-   debugfs_create_file("ob_bytes_protected",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ob_prot_bytes,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ib_dec_bytes =
-   debugfs_create_file("ib_bytes_decrypted",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ib_dec_bytes,
-   _fops_u64_ro);
-   ctrlpriv->ctl_ib_valid_bytes =
-   debugfs_create_file("ib_bytes_validated",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >ib_valid_bytes,
-   _fops_u64_ro);
+   debugfs_create_file("rq_dequeued",S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >req_dequeued,
+   _fops_u64_ro);
+   debugfs_create_file("ob_rq_encrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ob_enc_req,
+   _fops_u64_ro);
+   debugfs_create_file("ib_rq_decrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ib_dec_req,
+   _fops_u64_ro);
+   debugfs_create_file("ob_bytes_encrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ob_enc_bytes,
+   _fops_u64_ro);
+   debugfs_create_file("ob_bytes_protected", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ob_prot_bytes,
+   _fops_u64_ro);
+   debugfs_create_file("ib_bytes_decrypted", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ib_dec_bytes,
+   _fops_u64_ro);
+   debugfs_create_file("ib_bytes_validated", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, >ib_valid_bytes,
+   _fops_u64_ro);
 
/* Controller level - global status values */
-   ctrlpriv->ctl_faultaddr =
-   debugfs_create_file("fault_addr",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >faultaddr,
-   _fops_u32_ro);
-   ctrlpriv->ctl_faultdetail =
-   debugfs_create_file("fault_detail",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >faultdetail,
-   _fops_u32_ro);
-   ctrlpriv->ctl_faultstatus =
-   debugfs_create_file("fault_status",
-   S_IRUSR | S_IRGRP | S_IROTH,
-   ctrlpriv->ctl, >status,
-   _fops_u32_ro);
+   debugfs_create_file("fault_addr", S_IRUSR | S_IRGRP | S_IROTH,
+   ctrlpriv->ctl, 

Re: [PATCH] crypto: caam/qi - Remove unused 'qi_congested' entry

2017-07-31 Thread Fabio Estevam
On Mon, Jul 31, 2017 at 4:22 AM, Horia Geantă  wrote:
> On 7/30/2017 1:55 AM, Fabio Estevam wrote:
>> From: Fabio Estevam 
>>
>> 'qi_congested' member from structure caam_drv_private
>> is never used at all, so it is safe to remove it.
>
> Agree, though I would remove all the other dentry members not currently
> used - since debugfs_remove_recursive() is called, we don't need the
> file entries.

Ok, it makes sense. Will do it in v2.

>> diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
>> index 6d5a010..b2f7a42 100644
>> --- a/drivers/crypto/caam/qi.c
>> +++ b/drivers/crypto/caam/qi.c
>> @@ -793,10 +793,8 @@ int caam_qi_init(struct platform_device *caam_pdev)
>>   /* Done with the CGRs; restore the cpus allowed mask */
>>   set_cpus_allowed_ptr(current, _cpumask);
>>  #ifdef CONFIG_DEBUG_FS
>> - ctrlpriv->qi_congested = debugfs_create_file("qi_congested", 0444,
>> -  ctrlpriv->ctl,
>> -  _congested,
>> -  _fops_u64_ro);
>> + debugfs_create_file("qi_congested", 0444, ctrlpriv->ctl,
>> + _congested, _fops_u64_ro);
>
> Either here or in a different patch the return value of debugfs_create_*
> functions should be checked, such that if IS_ERR_OR_NULL(ret) we could
> print a warning.

I will leave the error checking for a separate patch then.


[PATCH 11/16] crypto: AF_ALG - consolidate waiting for TX data

2017-07-31 Thread Stephan Müller
Consolidate aead_wait_for_data, skcipher_wait_for_data
==> af_alg_wait_for_data

The wakeup is triggered when either more data is received or the
indicator that more data is to be expected is released. The first is
triggered by user space, the second is triggered by the kernel upon
finishing the processing of data (i.e. the kernel is ready for more).

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 39 +++
 crypto/algif_aead.c | 32 +---
 crypto/algif_skcipher.c | 33 +
 include/crypto/if_alg.h |  1 +
 4 files changed, 42 insertions(+), 63 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 6645454bc445..c1fe7c5f1b2e 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -769,6 +769,45 @@ void af_alg_wmem_wakeup(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup);
 
+/**
+ * af_alg_wait_for_data - wait for availability of TX data
+ *
+ * @sk socket of connection to user space
+ * @flags If MSG_DONTWAIT is set, then only report if function would sleep
+ * @return 0 when writable memory is available, < 0 upon error
+ */
+int af_alg_wait_for_data(struct sock *sk, unsigned flags)
+{
+   DEFINE_WAIT_FUNC(wait, woken_wake_function);
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   long timeout;
+   int err = -ERESTARTSYS;
+
+   if (flags & MSG_DONTWAIT)
+   return -EAGAIN;
+
+   sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+
+   add_wait_queue(sk_sleep(sk), );
+   for (;;) {
+   if (signal_pending(current))
+   break;
+   timeout = MAX_SCHEDULE_TIMEOUT;
+   if (sk_wait_event(sk, , (ctx->used || !ctx->more),
+ )) {
+   err = 0;
+   break;
+   }
+   }
+   remove_wait_queue(sk_sleep(sk), );
+
+   sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+
+   return err;
+}
+EXPORT_SYMBOL_GPL(af_alg_wait_for_data);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 8c7fac1053f0..8db8c10401d6 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,36 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static int aead_wait_for_data(struct sock *sk, unsigned flags)
-{
-   DEFINE_WAIT_FUNC(wait, woken_wake_function);
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   long timeout;
-   int err = -ERESTARTSYS;
-
-   if (flags & MSG_DONTWAIT)
-   return -EAGAIN;
-
-   sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-
-   add_wait_queue(sk_sleep(sk), );
-   for (;;) {
-   if (signal_pending(current))
-   break;
-   timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, , !ctx->more, )) {
-   err = 0;
-   break;
-   }
-   }
-   remove_wait_queue(sk_sleep(sk), );
-
-   sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-
-   return err;
-}
-
 static void aead_data_wakeup(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -425,7 +395,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr 
*msg,
break;
 
if (!ctx->used) {
-   err = aead_wait_for_data(sk, flags);
+   err = af_alg_wait_for_data(sk, flags);
if (err)
goto free;
}
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 258a27d0f553..572a5a632ea1 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,37 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static int skcipher_wait_for_data(struct sock *sk, unsigned flags)
-{
-   DEFINE_WAIT_FUNC(wait, woken_wake_function);
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   long timeout;
-   int err = -ERESTARTSYS;
-
-   if (flags & MSG_DONTWAIT) {
-   return -EAGAIN;
-   }
-
-   sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-
-   add_wait_queue(sk_sleep(sk), );
-   for (;;) {
-   if (signal_pending(current))
-   break;
-   timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, , ctx->used, )) {
-   err = 0;
-   break;
-   }
-   }
-   remove_wait_queue(sk_sleep(sk), );
-
-   sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-
-   return err;
-}
-
 static void skcipher_data_wakeup(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -343,7 +312,7 @@ 

[PATCH 08/16] crypto: AF_ALG - consolidate freeing TX/RX SGLs

2017-07-31 Thread Stephan Müller
Consolidate aead_free_areq_sgls, skcipher_free_areq_sgls
==> af_alg_free_areq_sgls

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 35 +++
 crypto/algif_aead.c | 33 ++---
 crypto/algif_skcipher.c | 33 ++---
 include/crypto/if_alg.h |  1 +
 4 files changed, 40 insertions(+), 62 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 73d4434df380..07c0e965c336 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -676,6 +676,41 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct 
scatterlist *dst,
 }
 EXPORT_SYMBOL_GPL(af_alg_pull_tsgl);
 
+/**
+ * af_alg_free_areq_sgls - Release TX and RX SGLs of the request
+ *
+ * @areq Request holding the TX and RX SGL
+ */
+void af_alg_free_areq_sgls(struct af_alg_async_req *areq)
+{
+   struct sock *sk = areq->sk;
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_rsgl *rsgl, *tmp;
+   struct scatterlist *tsgl;
+   struct scatterlist *sg;
+   unsigned int i;
+
+   list_for_each_entry_safe(rsgl, tmp, &areq->rsgl_list, list) {
+   ctx->rcvused -= rsgl->sg_num_bytes;
+   af_alg_free_sg(&rsgl->sgl);
+   list_del(&rsgl->list);
+   if (rsgl != &areq->first_rsgl)
+   sock_kfree_s(sk, rsgl, sizeof(*rsgl));
+   }
+
+   tsgl = areq->tsgl;
+   for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
+   if (!sg_page(sg))
+   continue;
+   put_page(sg_page(sg));
+   }
+
+   if (areq->tsgl && areq->tsgl_entries)
+   sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl));
+}
+EXPORT_SYMBOL_GPL(af_alg_free_areq_sgls);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index b78acb3336d6..5ccac7f0047e 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,35 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static void aead_free_areq_sgls(struct af_alg_async_req *areq)
-{
-   struct sock *sk = areq->sk;
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_rsgl *rsgl, *tmp;
-   struct scatterlist *tsgl;
-   struct scatterlist *sg;
-   unsigned int i;
-
-   list_for_each_entry_safe(rsgl, tmp, &areq->rsgl_list, list) {
-   ctx->rcvused -= rsgl->sg_num_bytes;
-   af_alg_free_sg(&rsgl->sgl);
-   list_del(&rsgl->list);
-   if (rsgl != &areq->first_rsgl)
-   sock_kfree_s(sk, rsgl, sizeof(*rsgl));
-   }
-
-   tsgl = areq->tsgl;
-   for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
-   if (!sg_page(sg))
-   continue;
-   put_page(sg_page(sg));
-   }
-
-   if (areq->tsgl && areq->tsgl_entries)
-   sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl));
-}
-
 static int aead_wait_for_wmem(struct sock *sk, unsigned int flags)
 {
DEFINE_WAIT_FUNC(wait, woken_wake_function);
@@ -393,7 +364,7 @@ static void aead_async_cb(struct crypto_async_request 
*_req, int err)
/* Buffer size written by crypto operation. */
resultlen = areq->outlen;
 
-   aead_free_areq_sgls(areq);
+   af_alg_free_areq_sgls(areq);
sock_kfree_s(sk, areq, areq->areqlen);
__sock_put(sk);
 
@@ -671,7 +642,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr 
*msg,
}
 
 free:
-   aead_free_areq_sgls(areq);
+   af_alg_free_areq_sgls(areq);
if (areq)
sock_kfree_s(sk, areq, areqlen);
 
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index bc7bbd16f2eb..ea7cfe7c1971 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,35 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static void skcipher_free_areq_sgls(struct af_alg_async_req *areq)
-{
-   struct sock *sk = areq->sk;
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_rsgl *rsgl, *tmp;
-   struct scatterlist *tsgl;
-   struct scatterlist *sg;
-   unsigned int i;
-
-   list_for_each_entry_safe(rsgl, tmp, &areq->rsgl_list, list) {
-   ctx->rcvused -= rsgl->sg_num_bytes;
-   af_alg_free_sg(&rsgl->sgl);
-   list_del(&rsgl->list);
-   if (rsgl != &areq->first_rsgl)
-   sock_kfree_s(sk, rsgl, sizeof(*rsgl));
-   }
-
-   tsgl = areq->tsgl;
-   for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
-   if (!sg_page(sg))
-   continue;
-   put_page(sg_page(sg));
-   }
-
-   if (areq->tsgl && areq->tsgl_entries)
-   sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl));
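The consolidated af_alg_free_areq_sgls() above relies on an asymmetry in the RX SGL list: the first entry (first_rsgl) is embedded in the request structure itself, while every later entry comes from sock_kmalloc(), so the free loop must skip the embedded node. A minimal userspace sketch of that embedded-first-node pattern, with plain malloc/free and simplified stand-in struct names (not the kernel types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Sketch of the "first node embedded, rest heap-allocated" list pattern
 * used by af_alg_async_req: freeing must skip the embedded first_rsgl. */
struct rsgl { struct rsgl *next; size_t bytes; };
struct areq { struct rsgl first_rsgl; struct rsgl *head; size_t rcvused; };

static void free_rsgls(struct areq *a)
{
    struct rsgl *r = a->head, *tmp;

    while (r) {
        tmp = r->next;
        a->rcvused -= r->bytes;   /* return the bytes to the accounting */
        if (r != &a->first_rsgl)  /* embedded node must not be freed */
            free(r);
        r = tmp;
    }
    a->head = NULL;
}

/* Build a two-entry list (one embedded, one heap) and tear it down;
 * returns the leftover rcvused accounting, which should be zero. */
static size_t demo(void)
{
    struct areq a = { .rcvused = 0 };
    struct rsgl *extra = malloc(sizeof(*extra));

    a.first_rsgl.bytes = 10;
    a.rcvused += 10;
    extra->bytes = 20;
    extra->next = NULL;
    a.rcvused += 20;
    a.first_rsgl.next = extra;
    a.head = &a.first_rsgl;

    free_rsgls(&a);
    return a.rcvused;
}
```

The same shape is what lets the common case (a single RX SGL entry) avoid any extra allocation.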

[PATCH 10/16] crypto: AF_ALG - consolidate waking up on writable memory

2017-07-31 Thread Stephan Müller
Consolidate aead_wmem_wakeup, skcipher_wmem_wakeup
==> af_alg_wmem_wakeup

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 23 +++
 crypto/algif_aead.c | 19 +--
 crypto/algif_skcipher.c | 19 +--
 include/crypto/if_alg.h |  1 +
 4 files changed, 26 insertions(+), 36 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 518c9de4bb6e..6645454bc445 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -746,6 +746,29 @@ int af_alg_wait_for_wmem(struct sock *sk, unsigned int 
flags)
 }
 EXPORT_SYMBOL_GPL(af_alg_wait_for_wmem);
 
+/**
+ * af_alg_wmem_wakeup - wakeup caller when writable memory is available
+ *
+ * @sk socket of connection to user space
+ */
+void af_alg_wmem_wakeup(struct sock *sk)
+{
+   struct socket_wq *wq;
+
+   if (!af_alg_writable(sk))
+   return;
+
+   rcu_read_lock();
+   wq = rcu_dereference(sk->sk_wq);
+   if (skwq_has_sleeper(wq))
+   wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
+  POLLRDNORM |
+  POLLRDBAND);
+   sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
+   rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 5474fec42fe3..8c7fac1053f0 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,23 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static void aead_wmem_wakeup(struct sock *sk)
-{
-   struct socket_wq *wq;
-
-   if (!af_alg_writable(sk))
-   return;
-
-   rcu_read_lock();
-   wq = rcu_dereference(sk->sk_wq);
-   if (skwq_has_sleeper(wq))
-   wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
-  POLLRDNORM |
-  POLLRDBAND);
-   sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
-   rcu_read_unlock();
-}
-
 static int aead_wait_for_data(struct sock *sk, unsigned flags)
 {
DEFINE_WAIT_FUNC(wait, woken_wake_function);
@@ -651,7 +634,7 @@ static int aead_recvmsg(struct socket *sock, struct msghdr 
*msg,
}
 
 out:
-   aead_wmem_wakeup(sk);
+   af_alg_wmem_wakeup(sk);
release_sock(sk);
return ret;
 }
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 10d3c6f477ac..258a27d0f553 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,23 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static void skcipher_wmem_wakeup(struct sock *sk)
-{
-   struct socket_wq *wq;
-
-   if (!af_alg_writable(sk))
-   return;
-
-   rcu_read_lock();
-   wq = rcu_dereference(sk->sk_wq);
-   if (skwq_has_sleeper(wq))
-   wake_up_interruptible_sync_poll(&wq->wait, POLLIN |
-  POLLRDNORM |
-  POLLRDBAND);
-   sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
-   rcu_read_unlock();
-}
-
 static int skcipher_wait_for_data(struct sock *sk, unsigned flags)
 {
DEFINE_WAIT_FUNC(wait, woken_wake_function);
@@ -492,7 +475,7 @@ static int skcipher_recvmsg(struct socket *sock, struct 
msghdr *msg,
}
 
 out:
-   skcipher_wmem_wakeup(sk);
+   af_alg_wmem_wakeup(sk);
release_sock(sk);
return ret;
 }
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 1008fab90eab..88eda219b90f 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -246,5 +246,6 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct 
scatterlist *dst,
  size_t dst_offset);
 void af_alg_free_areq_sgls(struct af_alg_async_req *areq);
 int af_alg_wait_for_wmem(struct sock *sk, unsigned int flags);
+void af_alg_wmem_wakeup(struct sock *sk);
 
 #endif /* _CRYPTO_IF_ALG_H */
-- 
2.13.3




[PATCH 04/16] crypto: AF_ALG - consolidate RX buffer service functions

2017-07-31 Thread Stephan Müller
Consolidate the common functions verifying the RX buffers:

 * skcipher_rcvbuf, aead_rcvbuf ==> af_alg_rcvbuf

 * skcipher_readable, aead_readable ==> af_alg_readable

Signed-off-by: Stephan Mueller 
---
 crypto/algif_aead.c | 16 +---
 crypto/algif_skcipher.c | 16 +---
 include/crypto/if_alg.h | 26 ++
 3 files changed, 28 insertions(+), 30 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index c923ce29bfe3..cdcf186296bd 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -47,20 +47,6 @@ struct aead_tfm {
struct crypto_skcipher *null_tfm;
 };
 
-static inline int aead_rcvbuf(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-
-   return max_t(int, max_t(int, sk->sk_rcvbuf & PAGE_MASK, PAGE_SIZE) -
- ctx->rcvused, 0);
-}
-
-static inline bool aead_readable(struct sock *sk)
-{
-   return PAGE_SIZE <= aead_rcvbuf(sk);
-}
-
 static inline bool aead_sufficient_data(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -655,7 +641,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr 
*msg,
size_t seglen;
 
/* limit the amount of readable buffers */
-   if (!aead_readable(sk))
+   if (!af_alg_readable(sk))
break;
 
if (!ctx->used) {
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index a5c6643f2abe..081df927fb8b 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,20 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static inline int skcipher_rcvbuf(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-
-   return max_t(int, max_t(int, sk->sk_rcvbuf & PAGE_MASK, PAGE_SIZE) -
- ctx->rcvused, 0);
-}
-
-static inline bool skcipher_readable(struct sock *sk)
-{
-   return PAGE_SIZE <= skcipher_rcvbuf(sk);
-}
-
 static int skcipher_alloc_tsgl(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -532,7 +518,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct 
msghdr *msg,
size_t seglen;
 
/* limit the amount of readable buffers */
-   if (!skcipher_readable(sk))
+   if (!af_alg_readable(sk))
break;
 
if (!ctx->used) {
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 79d215f65acf..e1ac57c32d85 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -214,4 +214,30 @@ static inline bool af_alg_writable(struct sock *sk)
return PAGE_SIZE <= af_alg_sndbuf(sk);
 }
 
+/**
+ * Size of available buffer used by kernel for the RX user space operation.
+ *
+ * @sk socket of connection to user space
+ * @return number of bytes still available
+ */
+static inline int af_alg_rcvbuf(struct sock *sk)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+
+   return max_t(int, max_t(int, sk->sk_rcvbuf & PAGE_MASK, PAGE_SIZE) -
+ ctx->rcvused, 0);
+}
+
+/**
+ * Can the RX buffer still be written to?
+ *
+ * @sk socket of connection to user space
+ * @return true => writable, false => not writable
+ */
+static inline bool af_alg_readable(struct sock *sk)
+{
+   return PAGE_SIZE <= af_alg_rcvbuf(sk);
+}
+
 #endif /* _CRYPTO_IF_ALG_H */
-- 
2.13.3
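The RX buffer helpers consolidated in patch 04 are pure arithmetic: af_alg_rcvbuf() rounds the socket receive-buffer limit down to a page boundary (never below one page), subtracts the bytes already in flight, and clamps at zero; af_alg_readable() then demands room for at least one more full page. A userspace model of that arithmetic (PAGE_SIZE fixed at 4096 here, an assumption for illustration):

```c
#include <assert.h>

#define PAGE_SIZE 4096
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Model of af_alg_rcvbuf(): page-aligned rcvbuf limit (floor of one page)
 * minus bytes already queued, clamped at zero. */
static int rcvbuf_avail(int sk_rcvbuf, int rcvused)
{
    int limit = sk_rcvbuf & PAGE_MASK;

    if (limit < PAGE_SIZE)
        limit = PAGE_SIZE;
    if (limit - rcvused < 0)
        return 0;
    return limit - rcvused;
}

/* Model of af_alg_readable(): at least one full page must still fit. */
static int readable(int sk_rcvbuf, int rcvused)
{
    return PAGE_SIZE <= rcvbuf_avail(sk_rcvbuf, rcvused);
}
```

The one-page floor means a tiny sk_rcvbuf can never wedge the socket completely, while the page-granular threshold keeps the kernel from queuing sub-page leftovers.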




[PATCH 00/16] crypto: AF_ALG - consolidation

2017-07-31 Thread Stephan Müller
Hi,

with the update of the algif_aead and algif_skcipher memory management,
a lot of code duplication has been introduced deliberately.

This patch set cleans up the low-hanging fruits. The cleanup of the
recvmsg RX SGL copy will come separately as this is not a simple copy
and paste operation.

Each patch was tested individually with libkcapi's test suite.

The patch set goes on top of patches "crypto: AF_ALG - return error code when
no data was processed" and "crypto: algif_aead - copy AAD from src to dst".

Stephan Mueller (16):
  crypto: AF_ALG - consolidation of common data structures
  crypto: AF_ALG - consolidation of context data structure
  crypto: AF_ALG - consolidate send buffer service functions
  crypto: AF_ALG - consolidate RX buffer service functions
  crypto: AF_ALG - consolidate TX SGL allocation
  crypto: AF_ALG - consolidate counting TX SG entries
  crypto: AF_ALG - consolidate pulling TX SG entries
  crypto: AF_ALG - consolidate freeing TX/RX SGLs
  crypto: AF_ALG - consolidate waiting for wmem
  crypto: AF_ALG - consolidate waking up on writable memory
  crypto: AF_ALG - consolidate waiting for TX data
  crypto: AF_ALG - consolidate waking up caller for TX data
  crypto: AF_ALG - consolidate sendmsg implementation
  crypto: AF_ALG - consolidate sendpage implementation
  crypto: AF_ALG - consolidate AIO callback handler
  crypto: AF_ALG - consolidate poll syscall handler

 crypto/af_alg.c | 597 
 crypto/algif_aead.c | 641 +++-
 crypto/algif_skcipher.c | 585 +++
 include/crypto/if_alg.h | 163 
 4 files changed, 830 insertions(+), 1156 deletions(-)

-- 
2.13.3




[PATCH 09/16] crypto: AF_ALG - consolidate waiting for wmem

2017-07-31 Thread Stephan Müller
Consolidate aead_wait_for_wmem, skcipher_wait_for_wmem
==> af_alg_wait_for_wmem

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 35 +++
 crypto/algif_aead.c | 30 ++
 crypto/algif_skcipher.c | 30 ++
 include/crypto/if_alg.h |  1 +
 4 files changed, 40 insertions(+), 56 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 07c0e965c336..518c9de4bb6e 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 struct alg_type_list {
@@ -711,6 +712,40 @@ void af_alg_free_areq_sgls(struct af_alg_async_req *areq)
 }
 EXPORT_SYMBOL_GPL(af_alg_free_areq_sgls);
 
+/**
+ * af_alg_wait_for_wmem - wait for availability of writable memory
+ *
+ * @sk socket of connection to user space
+ * @flags If MSG_DONTWAIT is set, then only report if function would sleep
+ * @return 0 when writable memory is available, < 0 upon error
+ */
+int af_alg_wait_for_wmem(struct sock *sk, unsigned int flags)
+{
+   DEFINE_WAIT_FUNC(wait, woken_wake_function);
+   int err = -ERESTARTSYS;
+   long timeout;
+
+   if (flags & MSG_DONTWAIT)
+   return -EAGAIN;
+
+   sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+
+   add_wait_queue(sk_sleep(sk), &wait);
+   for (;;) {
+   if (signal_pending(current))
+   break;
+   timeout = MAX_SCHEDULE_TIMEOUT;
+   if (sk_wait_event(sk, &timeout, af_alg_writable(sk), &wait)) {
+   err = 0;
+   break;
+   }
+   }
+   remove_wait_queue(sk_sleep(sk), &wait);
+
+   return err;
+}
+EXPORT_SYMBOL_GPL(af_alg_wait_for_wmem);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 5ccac7f0047e..5474fec42fe3 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,32 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static int aead_wait_for_wmem(struct sock *sk, unsigned int flags)
-{
-   DEFINE_WAIT_FUNC(wait, woken_wake_function);
-   int err = -ERESTARTSYS;
-   long timeout;
-
-   if (flags & MSG_DONTWAIT)
-   return -EAGAIN;
-
-   sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
-
-   add_wait_queue(sk_sleep(sk), &wait);
-   for (;;) {
-   if (signal_pending(current))
-   break;
-   timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, &timeout, af_alg_writable(sk), &wait)) {
-   err = 0;
-   break;
-   }
-   }
-   remove_wait_queue(sk_sleep(sk), &wait);
-
-   return err;
-}
-
 static void aead_wmem_wakeup(struct sock *sk)
 {
struct socket_wq *wq;
@@ -237,7 +211,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr 
*msg, size_t size)
}
 
if (!af_alg_writable(sk)) {
-   err = aead_wait_for_wmem(sk, msg->msg_flags);
+   err = af_alg_wait_for_wmem(sk, msg->msg_flags);
if (err)
goto unlock;
}
@@ -319,7 +293,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct 
page *page,
goto done;
 
if (!af_alg_writable(sk)) {
-   err = aead_wait_for_wmem(sk, flags);
+   err = af_alg_wait_for_wmem(sk, flags);
if (err)
goto unlock;
}
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index ea7cfe7c1971..10d3c6f477ac 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,32 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static int skcipher_wait_for_wmem(struct sock *sk, unsigned flags)
-{
-   DEFINE_WAIT_FUNC(wait, woken_wake_function);
-   int err = -ERESTARTSYS;
-   long timeout;
-
-   if (flags & MSG_DONTWAIT)
-   return -EAGAIN;
-
-   sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
-
-   add_wait_queue(sk_sleep(sk), &wait);
-   for (;;) {
-   if (signal_pending(current))
-   break;
-   timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, &timeout, af_alg_writable(sk), &wait)) {
-   err = 0;
-   break;
-   }
-   }
-   remove_wait_queue(sk_sleep(sk), &wait);
-
-   return err;
-}
-
 static void skcipher_wmem_wakeup(struct sock *sk)
 {
struct socket_wq *wq;
@@ -218,7 +192,7 @@ static int skcipher_sendmsg(struct socket *sock, struct 
msghdr *msg,
}
 
if (!af_alg_writable(sk)) {
-   err = skcipher_wait_for_wmem(sk, msg->msg_flags);
+   err = af_alg_wait_for_wmem(sk, msg->msg_flags);

[PATCH 05/16] crypto: AF_ALG - consolidate TX SGL allocation

2017-07-31 Thread Stephan Müller
Consolidate aead_alloc_tsgl, skcipher_alloc_tsgl ==> af_alg_alloc_tsgl

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 37 +
 crypto/algif_aead.c | 34 ++
 crypto/algif_skcipher.c | 34 ++
 include/crypto/if_alg.h |  2 ++
 4 files changed, 43 insertions(+), 64 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 92a3d540d920..87138c4b5a0f 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -507,6 +507,43 @@ void af_alg_complete(struct crypto_async_request *req, int 
err)
 }
 EXPORT_SYMBOL_GPL(af_alg_complete);
 
+/**
+ * af_alg_alloc_tsgl - allocate the TX SGL
+ *
+ * @sk socket of connection to user space
+ * @return: 0 upon success, < 0 upon error
+ */
+int af_alg_alloc_tsgl(struct sock *sk)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl;
+   struct scatterlist *sg = NULL;
+
+   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
+   if (!list_empty(&ctx->tsgl_list))
+   sg = sgl->sg;
+
+   if (!sg || sgl->cur >= MAX_SGL_ENTS) {
+   sgl = sock_kmalloc(sk, sizeof(*sgl) +
+  sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1),
+  GFP_KERNEL);
+   if (!sgl)
+   return -ENOMEM;
+
+   sg_init_table(sgl->sg, MAX_SGL_ENTS + 1);
+   sgl->cur = 0;
+
+   if (sg)
+   sg_chain(sg, MAX_SGL_ENTS + 1, sgl->sg);
+
+   list_add_tail(&sgl->list, &ctx->tsgl_list);
+   }
+
+   return 0;
+}
+EXPORT_SYMBOL_GPL(af_alg_alloc_tsgl);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index cdcf186296bd..a722df95e55c 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,36 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static int aead_alloc_tsgl(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_tsgl *sgl;
-   struct scatterlist *sg = NULL;
-
-   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
-   if (!list_empty(&ctx->tsgl_list))
-   sg = sgl->sg;
-
-   if (!sg || sgl->cur >= MAX_SGL_ENTS) {
-   sgl = sock_kmalloc(sk, sizeof(*sgl) +
-  sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1),
-  GFP_KERNEL);
-   if (!sgl)
-   return -ENOMEM;
-
-   sg_init_table(sgl->sg, MAX_SGL_ENTS + 1);
-   sgl->cur = 0;
-
-   if (sg)
-   sg_chain(sg, MAX_SGL_ENTS + 1, sgl->sg);
-
-   list_add_tail(&sgl->list, &ctx->tsgl_list);
-   }
-
-   return 0;
-}
-
 /**
  * Count number of SG entries from the beginning of the SGL to @bytes. If
  * an offset is provided, the counting of the SG entries starts at the offset.
@@ -422,7 +392,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr 
*msg, size_t size)
/* allocate a new page */
len = min_t(unsigned long, size, af_alg_sndbuf(sk));
 
-   err = aead_alloc_tsgl(sk);
+   err = af_alg_alloc_tsgl(sk);
if (err)
goto unlock;
 
@@ -501,7 +471,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct 
page *page,
goto unlock;
}
 
-   err = aead_alloc_tsgl(sk);
+   err = af_alg_alloc_tsgl(sk);
if (err)
goto unlock;
 
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 081df927fb8b..d511f665a190 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,36 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static int skcipher_alloc_tsgl(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_tsgl *sgl;
-   struct scatterlist *sg = NULL;
-
-   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
-   if (!list_empty(&ctx->tsgl_list))
-   sg = sgl->sg;
-
-   if (!sg || sgl->cur >= MAX_SGL_ENTS) {
-   sgl = sock_kmalloc(sk, sizeof(*sgl) +
-  sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1),
-  GFP_KERNEL);
-   if (!sgl)
-   return -ENOMEM;
-
-   sg_init_table(sgl->sg, MAX_SGL_ENTS + 1);
-   sgl->cur = 0;
-
-   if (sg)
-   sg_chain(sg, MAX_SGL_ENTS + 1, sgl->sg);
-
-   list_add_tail(&sgl->list, &ctx->tsgl_list);
-   }
-
-   return 0;

[PATCH 01/16] crypto: AF_ALG - consolidation of common data structures

2017-07-31 Thread Stephan Müller
Consolidate following data structures:

- skcipher_async_req, aead_async_req -> af_alg_async_req

- skcipher_rsgl, aead_rsgl -> af_alg_rsgl

- skcipher_tsgl, aead_tsgl -> af_alg_tsgl

Signed-off-by: Stephan Mueller 
---
 crypto/algif_aead.c | 89 -
 crypto/algif_skcipher.c | 88 +---
 include/crypto/if_alg.h | 53 +
 3 files changed, 112 insertions(+), 118 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 1f0696dd64f4..42f69a4f87d5 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -41,34 +41,6 @@
 #include 
 #include 
 
-struct aead_tsgl {
-   struct list_head list;
-   unsigned int cur;   /* Last processed SG entry */
-   struct scatterlist sg[0];   /* Array of SGs forming the SGL */
-};
-
-struct aead_rsgl {
-   struct af_alg_sgl sgl;
-   struct list_head list;
-   size_t sg_num_bytes;/* Bytes of data in that SGL */
-};
-
-struct aead_async_req {
-   struct kiocb *iocb;
-   struct sock *sk;
-
-   struct aead_rsgl first_rsgl;/* First RX SG */
-   struct list_head rsgl_list; /* Track RX SGs */
-
-   struct scatterlist *tsgl;   /* priv. TX SGL of buffers to process */
-   unsigned int tsgl_entries;  /* number of entries in priv. TX SGL */
-
-   unsigned int outlen;/* Filled output buf length */
-
-   unsigned int areqlen;   /* Length of this data struct */
-   struct aead_request aead_req;   /* req ctx trails this struct */
-};
-
 struct aead_tfm {
struct crypto_aead *aead;
bool has_key;
@@ -93,9 +65,6 @@ struct aead_ctx {
unsigned int len;   /* Length of allocated memory for this struct */
 };
 
-#define MAX_SGL_ENTS ((4096 - sizeof(struct aead_tsgl)) / \
- sizeof(struct scatterlist) - 1)
-
 static inline int aead_sndbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -145,10 +114,10 @@ static int aead_alloc_tsgl(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
-   struct aead_tsgl *sgl;
+   struct af_alg_tsgl *sgl;
struct scatterlist *sg = NULL;
 
-   sgl = list_entry(ctx->tsgl_list.prev, struct aead_tsgl, list);
+   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
if (!list_empty(&ctx->tsgl_list))
sg = sgl->sg;
 
@@ -180,7 +149,7 @@ static unsigned int aead_count_tsgl(struct sock *sk, size_t 
bytes,
 {
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
-   struct aead_tsgl *sgl, *tmp;
+   struct af_alg_tsgl *sgl, *tmp;
unsigned int i;
unsigned int sgl_count = 0;
 
@@ -230,12 +199,12 @@ static void aead_pull_tsgl(struct sock *sk, size_t used,
 {
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
-   struct aead_tsgl *sgl;
+   struct af_alg_tsgl *sgl;
struct scatterlist *sg;
unsigned int i, j;
 
while (!list_empty(&ctx->tsgl_list)) {
-   sgl = list_first_entry(&ctx->tsgl_list, struct aead_tsgl,
+   sgl = list_first_entry(&ctx->tsgl_list, struct af_alg_tsgl,
   list);
sg = sgl->sg;
 
@@ -289,12 +258,12 @@ static void aead_pull_tsgl(struct sock *sk, size_t used,
ctx->merge = 0;
 }
 
-static void aead_free_areq_sgls(struct aead_async_req *areq)
+static void aead_free_areq_sgls(struct af_alg_async_req *areq)
 {
struct sock *sk = areq->sk;
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
-   struct aead_rsgl *rsgl, *tmp;
+   struct af_alg_rsgl *rsgl, *tmp;
struct scatterlist *tsgl;
struct scatterlist *sg;
unsigned int i;
@@ -420,7 +389,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr 
*msg, size_t size)
struct aead_tfm *aeadc = pask->private;
struct crypto_aead *tfm = aeadc->aead;
unsigned int ivsize = crypto_aead_ivsize(tfm);
-   struct aead_tsgl *sgl;
+   struct af_alg_tsgl *sgl;
struct af_alg_control con = {};
long copied = 0;
bool enc = 0;
@@ -470,7 +439,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr 
*msg, size_t size)
/* use the existing memory in an allocated page */
if (ctx->merge) {
sgl = list_entry(ctx->tsgl_list.prev,
-struct aead_tsgl, list);
+struct af_alg_tsgl, list);
sg = sgl->sg + sgl->cur - 1;
len = min_t(unsigned long, len,
PAGE_SIZE - sg->offset - sg->length);
@@ -503,7 +472,7 @@ static int aead_sendmsg(struct socket *sock, struct 

[PATCH 13/16] crypto: AF_ALG - consolidate sendmsg implementation

2017-07-31 Thread Stephan Müller
Consolidate aead_sendmsg, skcipher_sendmsg
==> af_alg_sendmsg

The following changes to the sendmsg implementation aead_sendmsg have
been applied:

* uses size_t in min_t calculation for obtaining len

* switch the use of the variable size to variable len to determine the
  sndbuf size (both variables have the same content here, consolidate
  implementation with skcipher_sendmsg)

* return code determination is changed to be identical to
  skcipher_sendmsg

The following changes to the sendmsg implementation skcipher_sendmsg
have been applied:

* use of size_t instead of unsigned long for calculating the available
  memory size in the page storing the user data

* scope of variable i is reduced to be compliant with aead_sendmsg

* type of variable is switched from int to unsigned int to be compliant
with aead_sendmsg

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 157 
 crypto/algif_aead.c | 132 +---
 crypto/algif_skcipher.c | 127 +--
 include/crypto/if_alg.h |   2 +
 4 files changed, 161 insertions(+), 257 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index d1e0c176495f..716a73ff5309 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -834,6 +834,163 @@ void af_alg_data_wakeup(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(af_alg_data_wakeup);
 
+/**
+ * af_alg_sendmsg - implementation of sendmsg system call handler
+ *
+ * The sendmsg system call handler obtains the user data and stores it
+ * in ctx->tsgl_list. This implies allocation of the required numbers of
+ * struct af_alg_tsgl.
+ *
+ * In addition, the ctx is filled with the information sent via CMSG.
+ *
+ * @sock socket of connection to user space
+ * @msg message from user space
+ * @size size of message from user space
+ * @ivsize the size of the IV for the cipher operation to verify that the
+ *user-space-provided IV has the right size
+ * @return the number of copied data upon success, < 0 upon error
+ */
+int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+  unsigned int ivsize)
+{
+   struct sock *sk = sock->sk;
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl;
+   struct af_alg_control con = {};
+   long copied = 0;
+   bool enc = 0;
+   bool init = 0;
+   int err = 0;
+
+   if (msg->msg_controllen) {
+   err = af_alg_cmsg_send(msg, &con);
+   if (err)
+   return err;
+
+   init = 1;
+   switch (con.op) {
+   case ALG_OP_ENCRYPT:
+   enc = 1;
+   break;
+   case ALG_OP_DECRYPT:
+   enc = 0;
+   break;
+   default:
+   return -EINVAL;
+   }
+
+   if (con.iv && con.iv->ivlen != ivsize)
+   return -EINVAL;
+   }
+
+   lock_sock(sk);
+   if (!ctx->more && ctx->used) {
+   err = -EINVAL;
+   goto unlock;
+   }
+
+   if (init) {
+   ctx->enc = enc;
+   if (con.iv)
+   memcpy(ctx->iv, con.iv->iv, ivsize);
+
+   ctx->aead_assoclen = con.aead_assoclen;
+   }
+
+   while (size) {
+   struct scatterlist *sg;
+   size_t len = size;
+   size_t plen;
+
+   /* use the existing memory in an allocated page */
+   if (ctx->merge) {
+   sgl = list_entry(ctx->tsgl_list.prev,
+struct af_alg_tsgl, list);
+   sg = sgl->sg + sgl->cur - 1;
+   len = min_t(size_t, len,
+   PAGE_SIZE - sg->offset - sg->length);
+
+   err = memcpy_from_msg(page_address(sg_page(sg)) +
+ sg->offset + sg->length,
+ msg, len);
+   if (err)
+   goto unlock;
+
+   sg->length += len;
+   ctx->merge = (sg->offset + sg->length) &
+(PAGE_SIZE - 1);
+
+   ctx->used += len;
+   copied += len;
+   size -= len;
+   continue;
+   }
+
+   if (!af_alg_writable(sk)) {
+   err = af_alg_wait_for_wmem(sk, msg->msg_flags);
+   if (err)
+   goto unlock;
+   }
+
+   /* allocate a new page */
+   len = min_t(unsigned long, len, af_alg_sndbuf(sk));
+
+   err = af_alg_alloc_tsgl(sk);
+   if (err)
+   goto unlock;
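The merge path in af_alg_sendmsg() above appends new user data into the last partially filled page, and `ctx->merge = (sg->offset + sg->length) & (PAGE_SIZE - 1)` records whether that page still has room: the flag stays nonzero until a write reaches a page boundary. A small model of that bookkeeping (PAGE_SIZE fixed at 4096 for the sketch):

```c
#include <assert.h>

#define PAGE_SIZE 4096

/* Model of the ctx->merge update in af_alg_sendmsg(): after copying @len
 * bytes into a page region starting at @offset with @length bytes already
 * present, merging remains possible only while the write has not reached
 * a page boundary (nonzero return == more room in the page). */
static int merge_after_copy(unsigned int offset, unsigned int length,
                            unsigned int len)
{
    return (offset + length + len) & (PAGE_SIZE - 1);
}
```

When the result is zero the next sendmsg() call allocates a fresh page instead of merging, which is exactly why full-page writes never leave a dangling merge target.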

[PATCH 06/16] crypto: AF_ALG - consolidate counting TX SG entries

2017-07-31 Thread Stephan Müller
Consolidate aead_count_tsgl, skcipher_count_tsgl ==> af_alg_count_tsgl

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 52 
 crypto/algif_aead.c | 53 -
 crypto/algif_skcipher.c | 30 ++--
 include/crypto/if_alg.h |  1 +
 4 files changed, 59 insertions(+), 77 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 87138c4b5a0f..861167fc12f7 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -544,6 +544,58 @@ int af_alg_alloc_tsgl(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(af_alg_alloc_tsgl);
 
+/**
+ * af_alg_count_tsgl - Count number of TX SG entries
+ *
+ * The counting starts from the beginning of the SGL to @bytes. If
+ * an offset is provided, the counting of the SG entries starts at the offset.
+ *
+ * @sk socket of connection to user space
+ * @bytes Count the number of SG entries holding given number of bytes.
+ * @offset Start the counting of SG entries from the given offset.
+ * @return Number of TX SG entries found given the constraints
+ */
+unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl, *tmp;
+   unsigned int i;
+   unsigned int sgl_count = 0;
+
+   if (!bytes)
+   return 0;
+
+   list_for_each_entry_safe(sgl, tmp, &ctx->tsgl_list, list) {
+   struct scatterlist *sg = sgl->sg;
+
+   for (i = 0; i < sgl->cur; i++) {
+   size_t bytes_count;
+
+   /* Skip offset */
+   if (offset >= sg[i].length) {
+   offset -= sg[i].length;
+   bytes -= sg[i].length;
+   continue;
+   }
+
+   bytes_count = sg[i].length - offset;
+
+   offset = 0;
+   sgl_count++;
+
+   /* If we have seen requested number of bytes, stop */
+   if (bytes_count >= bytes)
+   return sgl_count;
+
+   bytes -= bytes_count;
+   }
+   }
+
+   return sgl_count;
+}
+EXPORT_SYMBOL_GPL(af_alg_count_tsgl);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index a722df95e55c..78651b26aa77 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -65,58 +65,13 @@ static inline bool aead_sufficient_data(struct sock *sk)
 }
 
 /**
- * Count number of SG entries from the beginning of the SGL to @bytes. If
- * an offset is provided, the counting of the SG entries starts at the offset.
- */
-static unsigned int aead_count_tsgl(struct sock *sk, size_t bytes,
-   size_t offset)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_tsgl *sgl, *tmp;
-   unsigned int i;
-   unsigned int sgl_count = 0;
-
-   if (!bytes)
-   return 0;
-
-   list_for_each_entry_safe(sgl, tmp, &ctx->tsgl_list, list) {
-   struct scatterlist *sg = sgl->sg;
-
-   for (i = 0; i < sgl->cur; i++) {
-   size_t bytes_count;
-
-   /* Skip offset */
-   if (offset >= sg[i].length) {
-   offset -= sg[i].length;
-   bytes -= sg[i].length;
-   continue;
-   }
-
-   bytes_count = sg[i].length - offset;
-
-   offset = 0;
-   sgl_count++;
-
-   /* If we have seen requested number of bytes, stop */
-   if (bytes_count >= bytes)
-   return sgl_count;
-
-   bytes -= bytes_count;
-   }
-   }
-
-   return sgl_count;
-}
-
-/**
  * Release the specified buffers from TX SGL pointed to by ctx->tsgl_list for
  * @used bytes.
  *
  * If @dst is non-null, reassign the pages to dst. The caller must release
  * the pages. If @dst_offset is given only reassign the pages to @dst starting
  * at the @dst_offset (byte). The caller must ensure that @dst is large
- * enough (e.g. by using aead_count_tsgl with the same offset).
+ * enough (e.g. by using af_alg_count_tsgl with the same offset).
  */
 static void aead_pull_tsgl(struct sock *sk, size_t used,
   struct scatterlist *dst, size_t dst_offset)
@@ -140,7 +95,7 @@ static void aead_pull_tsgl(struct sock *sk, size_t used,
continue;
 
/*
-* Assumption: caller created aead_count_tsgl(len)
+ 
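The counting logic consolidated into af_alg_count_tsgl() walks the TX segments, skips segments fully consumed by @offset, and stops once @bytes (counted from the start of the SGL, including the offset region — note the `bytes -= sg[i].length` in the skip branch) are covered. A self-contained userspace model over a flat array of segment lengths, mirroring the kernel loop:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of af_alg_count_tsgl(): given @n segment lengths,
 * count how many segments are needed to cover @bytes, skipping @offset
 * bytes first.  Segments swallowed entirely by the offset are not
 * counted, and their length is also deducted from @bytes, exactly as in
 * the kernel loop. */
static unsigned int count_sgl(const size_t *len, unsigned int n,
                              size_t bytes, size_t offset)
{
    unsigned int i, count = 0;

    if (!bytes)
        return 0;

    for (i = 0; i < n; i++) {
        size_t avail;

        if (offset >= len[i]) {   /* segment fully inside the offset */
            offset -= len[i];
            bytes -= len[i];
            continue;
        }

        avail = len[i] - offset;  /* usable bytes in this segment */
        offset = 0;
        count++;

        if (avail >= bytes)       /* requested bytes are covered */
            return count;
        bytes -= avail;
    }
    return count;
}
```

Callers such as the recvmsg paths use this count to size the private TX SGL copy before af_alg_pull_tsgl() reassigns the pages.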

[PATCH 02/16] crypto: AF_ALG - consolidation of context data structure

2017-07-31 Thread Stephan Müller
Consolidate skcipher_ctx, aead_ctx ==> af_alg_ctx

Signed-off-by: Stephan Mueller 
---
 crypto/algif_aead.c | 48 +++-
 crypto/algif_skcipher.c | 45 ++---
 include/crypto/if_alg.h | 41 +
 3 files changed, 70 insertions(+), 64 deletions(-)
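For reference, the consolidated per-socket state can be sketched in plain userspace C. The field list below mirrors the aead_ctx members removed in the diff; the struct name, layout, and helper are illustrative stand-ins, not the exact if_alg.h definition:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mirror of the consolidated af_alg_ctx: the union of the
 * fields that aead_ctx and skcipher_ctx shared. */
struct af_alg_ctx_sketch {
	void *iv;              /* cipher IV supplied via CMSG */
	size_t aead_assoclen;  /* AEAD-only: associated data length */
	size_t used;           /* TX bytes queued for the kernel */
	size_t rcvused;        /* RX bytes pending processing */
	bool more;             /* more data expected (MSG_MORE) */
	bool merge;            /* merge new data into the last SG slot */
	bool enc;              /* encrypt (true) or decrypt (false) */
	unsigned int len;      /* allocated size of this struct */
};

/* With one shared type, both algif front ends can use common helpers. */
static size_t ctx_tx_backlog(const struct af_alg_ctx_sketch *ctx)
{
	return ctx->used;
}
```

The point of the consolidation is exactly this: every service function that only touches these common fields can move into af_alg.c once, instead of living twice.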

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 42f69a4f87d5..adbf87a0 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -47,28 +47,10 @@ struct aead_tfm {
struct crypto_skcipher *null_tfm;
 };
 
-struct aead_ctx {
-   struct list_head tsgl_list; /* Link to TX SGL */
-
-   void *iv;
-   size_t aead_assoclen;
-
-   struct af_alg_completion completion;/* sync work queue */
-
-   size_t used;/* TX bytes sent to kernel */
-   size_t rcvused; /* total RX bytes to be processed by kernel */
-
-   bool more;  /* More data to be expected? */
-   bool merge; /* Merge new data into existing SG */
-   bool enc;   /* Crypto operation: enc, dec */
-
-   unsigned int len;   /* Length of allocated memory for this struct */
-};
-
 static inline int aead_sndbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
 
return max_t(int, max_t(int, sk->sk_sndbuf & PAGE_MASK, PAGE_SIZE) -
  ctx->used, 0);
@@ -82,7 +64,7 @@ static inline bool aead_writable(struct sock *sk)
 static inline int aead_rcvbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
 
return max_t(int, max_t(int, sk->sk_rcvbuf & PAGE_MASK, PAGE_SIZE) -
  ctx->rcvused, 0);
@@ -98,7 +80,7 @@ static inline bool aead_sufficient_data(struct sock *sk)
struct alg_sock *ask = alg_sk(sk);
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct aead_tfm *aeadc = pask->private;
struct crypto_aead *tfm = aeadc->aead;
unsigned int as = crypto_aead_authsize(tfm);
@@ -113,7 +95,7 @@ static inline bool aead_sufficient_data(struct sock *sk)
 static int aead_alloc_tsgl(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct af_alg_tsgl *sgl;
struct scatterlist *sg = NULL;
 
@@ -148,7 +130,7 @@ static unsigned int aead_count_tsgl(struct sock *sk, size_t bytes,
size_t offset)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct af_alg_tsgl *sgl, *tmp;
unsigned int i;
unsigned int sgl_count = 0;
@@ -198,7 +180,7 @@ static void aead_pull_tsgl(struct sock *sk, size_t used,
   struct scatterlist *dst, size_t dst_offset)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct af_alg_tsgl *sgl;
struct scatterlist *sg;
unsigned int i, j;
@@ -262,7 +244,7 @@ static void aead_free_areq_sgls(struct af_alg_async_req *areq)
 {
struct sock *sk = areq->sk;
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct af_alg_rsgl *rsgl, *tmp;
struct scatterlist *tsgl;
struct scatterlist *sg;
@@ -334,7 +316,7 @@ static int aead_wait_for_data(struct sock *sk, unsigned flags)
 {
DEFINE_WAIT_FUNC(wait, woken_wake_function);
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
long timeout;
int err = -ERESTARTSYS;
 
@@ -363,7 +345,7 @@ static int aead_wait_for_data(struct sock *sk, unsigned flags)
 static void aead_data_wakeup(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct socket_wq *wq;
 
if (!ctx->used)
@@ -385,7 +367,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
struct alg_sock *ask = alg_sk(sk);
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
-   struct aead_ctx *ctx = ask->private;
+   struct af_alg_ctx *ctx = ask->private;
struct aead_tfm *aeadc = pask->private;
struct crypto_aead *tfm = aeadc->aead;
unsigned int ivsize = crypto_aead_ivsize(tfm);
@@ -527,7 +509,7 @@ static ssize_t 

[PATCH 07/16] crypto: AF_ALG - consolidate counting TX SG entries

2017-07-31 Thread Stephan Müller
Consolidate aead_pull_tsgl, skcipher_pull_tsgl ==> af_alg_pull_tsgl

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 80 +
 crypto/algif_aead.c | 79 ++--
 crypto/algif_skcipher.c | 55 ++
 include/crypto/if_alg.h |  2 ++
 4 files changed, 87 insertions(+), 129 deletions(-)
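The contract the two consolidated helpers establish — call af_alg_count_tsgl() first so the destination SGL can be sized, then af_alg_pull_tsgl() to reassign the pages — can be sketched in plain userspace C. The `seg`/`count_tsgl` names below are illustrative stand-ins for the kernel API, with a simplified counting rule (segments covering the byte window), not the exact kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

struct seg { size_t len; };	/* stand-in for one scatterlist entry */

/* Return how many segments a pull of @bytes starting at @offset will
 * touch, so the caller can allocate that many destination SG slots
 * before pulling. */
static unsigned int count_tsgl(const struct seg *sg, unsigned int n,
			       size_t bytes, size_t offset)
{
	unsigned int i, cnt = 0;

	for (i = 0; i < n && bytes; i++) {
		size_t avail;

		if (offset >= sg[i].len) {	/* segment fully skipped */
			offset -= sg[i].len;
			continue;
		}
		avail = sg[i].len - offset;
		offset = 0;
		cnt++;
		if (avail >= bytes)		/* window ends here */
			return cnt;
		bytes -= avail;
	}
	return cnt;
}
```

This is why the kerneldoc below insists the caller ensure @dst is large enough "e.g. by using af_alg_count_tsgl with the same offset" — the two functions must agree on how the offset is consumed.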

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 861167fc12f7..73d4434df380 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -596,6 +596,86 @@ unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset)
 }
 EXPORT_SYMBOL_GPL(af_alg_count_tsgl);
 
+/**
+ * af_alg_pull_tsgl - Release the specified buffers from TX SGL
+ *
+ * If @dst is non-null, reassign the pages to dst. The caller must release
+ * the pages. If @dst_offset is given only reassign the pages to @dst starting
+ * at the @dst_offset (byte). The caller must ensure that @dst is large
+ * enough (e.g. by using af_alg_count_tsgl with the same offset).
+ *
+ * @sk socket of connection to user space
+ * @used Number of bytes to pull from TX SGL
+ * @dst If non-NULL, buffer is reassigned to dst SGL instead of releasing. The
+ * caller must release the buffers in dst.
+ * @dst_offset Reassign the TX SGL from given offset. All buffers before
+ *reaching the offset are released.
+ */
+void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+ size_t dst_offset)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl;
+   struct scatterlist *sg;
+   unsigned int i, j;
+
+   while (!list_empty(&ctx->tsgl_list)) {
+   sgl = list_first_entry(&ctx->tsgl_list, struct af_alg_tsgl,
+  list);
+   sg = sgl->sg;
+
+   for (i = 0, j = 0; i < sgl->cur; i++) {
+   size_t plen = min_t(size_t, used, sg[i].length);
+   struct page *page = sg_page(sg + i);
+
+   if (!page)
+   continue;
+
+   /*
+* Assumption: caller created af_alg_count_tsgl(len)
+* SG entries in dst.
+*/
+   if (dst) {
+   if (dst_offset >= plen) {
+   /* discard page before offset */
+   dst_offset -= plen;
+   put_page(page);
+   } else {
+   /* reassign page to dst after offset */
+   sg_set_page(dst + j, page,
+   plen - dst_offset,
+   sg[i].offset + dst_offset);
+   dst_offset = 0;
+   j++;
+   }
+   }
+
+   sg[i].length -= plen;
+   sg[i].offset += plen;
+
+   used -= plen;
+   ctx->used -= plen;
+
+   if (sg[i].length)
+   return;
+
+   if (!dst)
+   put_page(page);
+
+   sg_assign_page(sg + i, NULL);
+   }
+
+   list_del(&sgl->list);
+   sock_kfree_s(sk, sgl, sizeof(*sgl) + sizeof(sgl->sg[0]) *
+(MAX_SGL_ENTS + 1));
+   }
+
+   if (!ctx->used)
+   ctx->merge = 0;
+}
+EXPORT_SYMBOL_GPL(af_alg_pull_tsgl);
+
 static int __init af_alg_init(void)
 {
	int err = proto_register(&alg_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 78651b26aa77..b78acb3336d6 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -64,79 +64,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-/**
- * Release the specified buffers from TX SGL pointed to by ctx->tsgl_list for
- * @used bytes.
- *
- * If @dst is non-null, reassign the pages to dst. The caller must release
- * the pages. If @dst_offset is given only reassign the pages to @dst starting
- * at the @dst_offset (byte). The caller must ensure that @dst is large
- * enough (e.g. by using af_alg_count_tsgl with the same offset).
- */
-static void aead_pull_tsgl(struct sock *sk, size_t used,
-  struct scatterlist *dst, size_t dst_offset)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_tsgl *sgl;
-   struct scatterlist *sg;
-   unsigned int i, j;
-
-   while 

[PATCH 15/16] crypto: AF_ALG - consolidate AIO callback handler

2017-07-31 Thread Stephan Müller
Consolidate aead_async_cb, skcipher_async_cb ==> af_alg_async_cb

algif_skcipher has been changed to store the number of output bytes in
areq->outlen before the AIO callback is triggered.

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 31 +++
 crypto/algif_aead.c | 23 +--
 crypto/algif_skcipher.c | 27 +--
 include/crypto/if_alg.h |  1 +
 4 files changed, 38 insertions(+), 44 deletions(-)
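The callback's completion rule — read areq->outlen before tearing down the request, then report either the error or that length to the iocb, i.e. `err ? err : resultlen` — can be sketched in userspace C. The struct and function names here are illustrative mocks, not the kernel API:

```c
#include <assert.h>

/* Mock of the af_alg_async_cb contract: the request owner stores the
 * produced byte count in outlen *before* the crypto layer can complete,
 * and the callback hands either the error or that length to the iocb. */
struct fake_areq {
	long outlen;		/* bytes produced by the crypto operation */
	long completed_with;	/* what ki_complete() would receive */
};

static void fake_async_cb(struct fake_areq *areq, int err)
{
	long resultlen = areq->outlen;	/* read before freeing areq state */

	/* ...the real code frees SGLs and the request here... */
	areq->completed_with = err ? err : resultlen;
}
```

This is also why the commit message notes that algif_skcipher now stores the output byte count in areq->outlen before the callback can fire: the shared handler has no other way to learn it.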

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 5a33d6629a67..ef37fc3a9015 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1049,6 +1049,37 @@ ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
 }
 EXPORT_SYMBOL_GPL(af_alg_sendpage);
 
+/**
+ * af_alg_async_cb - AIO callback handler
+ *
+ * This handler cleans up the struct af_alg_async_req upon completion of the
+ * AIO operation.
+ *
+ * The number of bytes to be generated with the AIO operation must be set
+ * in areq->outlen before the AIO callback handler is invoked.
+ */
+void af_alg_async_cb(struct crypto_async_request *_req, int err)
+{
+   struct af_alg_async_req *areq = _req->data;
+   struct sock *sk = areq->sk;
+   struct kiocb *iocb = areq->iocb;
+   unsigned int resultlen;
+
+   lock_sock(sk);
+
+   /* Buffer size written by crypto operation. */
+   resultlen = areq->outlen;
+
+   af_alg_free_areq_sgls(areq);
+   sock_kfree_s(sk, areq, areq->areqlen);
+   __sock_put(sk);
+
+   iocb->ki_complete(iocb, err ? err : resultlen, 0);
+
+   release_sock(sk);
+}
+EXPORT_SYMBOL_GPL(af_alg_async_cb);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index f655fadb9075..8add641cc667 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -76,27 +76,6 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
-static void aead_async_cb(struct crypto_async_request *_req, int err)
-{
-   struct af_alg_async_req *areq = _req->data;
-   struct sock *sk = areq->sk;
-   struct kiocb *iocb = areq->iocb;
-   unsigned int resultlen;
-
-   lock_sock(sk);
-
-   /* Buffer size written by crypto operation. */
-   resultlen = areq->outlen;
-
-   af_alg_free_areq_sgls(areq);
-   sock_kfree_s(sk, areq, areq->areqlen);
-   __sock_put(sk);
-
-   iocb->ki_complete(iocb, err ? err : resultlen, 0);
-
-   release_sock(sk);
-}
-
 static int crypto_aead_copy_sgl(struct crypto_skcipher *null_tfm,
struct scatterlist *src,
struct scatterlist *dst, unsigned int len)
@@ -341,7 +320,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
areq->iocb = msg->msg_iocb;
	aead_request_set_callback(&areq->cra_u.aead_req,
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- aead_async_cb, areq);
+ af_alg_async_cb, areq);
	err = ctx->enc ? crypto_aead_encrypt(&areq->cra_u.aead_req) :
			 crypto_aead_decrypt(&areq->cra_u.aead_req);
} else {
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index c5779fd8..5134df529833 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -57,27 +57,6 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
-static void skcipher_async_cb(struct crypto_async_request *req, int err)
-{
-   struct af_alg_async_req *areq = req->data;
-   struct sock *sk = areq->sk;
-   struct kiocb *iocb = areq->iocb;
-   unsigned int resultlen;
-
-   lock_sock(sk);
-
-   /* Buffer size written by crypto operation. */
-   resultlen = areq->cra_u.skcipher_req.cryptlen;
-
-   af_alg_free_areq_sgls(areq);
-   sock_kfree_s(sk, areq, areq->areqlen);
-   __sock_put(sk);
-
-   iocb->ki_complete(iocb, err ? err : resultlen, 0);
-
-   release_sock(sk);
-}
-
 static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 size_t ignored, int flags)
 {
@@ -189,7 +168,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
areq->iocb = msg->msg_iocb;
	skcipher_request_set_callback(&areq->cra_u.skcipher_req,
  CRYPTO_TFM_REQ_MAY_SLEEP,
- skcipher_async_cb, areq);
+ af_alg_async_cb, areq);
err = ctx->enc ?
		crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
		crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
@@ -209,6 +188,10 @@ static int 

[PATCH 14/16] crypto: AF_ALG - consolidate sendpage implementation

2017-07-31 Thread Stephan Müller
Consolidate aead_sendpage, skcipher_sendpage
==> af_alg_sendpage

The following changes to aead_sendpage have been applied:

* remove superfluous err = 0

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 58 +
 crypto/algif_aead.c | 57 ++--
 crypto/algif_skcipher.c | 55 ++
 include/crypto/if_alg.h |  2 ++
 4 files changed, 64 insertions(+), 108 deletions(-)
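One detail worth calling out is the flag normalization at the top of af_alg_sendpage(): a page sent with MSG_SENDPAGE_NOTLAST is by definition not the final chunk, so it must behave as if MSG_MORE were set, keeping ctx->more true and deferring the crypto operation. A standalone sketch (the flag values match Linux's, but the helper name is illustrative):

```c
#include <assert.h>

#ifndef MSG_MORE
#define MSG_MORE 0x8000
#endif
#ifndef MSG_SENDPAGE_NOTLAST
#define MSG_SENDPAGE_NOTLAST 0x20000
#endif

/* Sendpage chunks marked "not last" imply more data will follow. */
static int normalize_sendpage_flags(int flags)
{
	if (flags & MSG_SENDPAGE_NOTLAST)
		flags |= MSG_MORE;
	return flags;
}
```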

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 716a73ff5309..5a33d6629a67 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -991,6 +991,64 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 }
 EXPORT_SYMBOL_GPL(af_alg_sendmsg);
 
+/**
+ * af_alg_sendpage - sendpage system call handler
+ *
+ * This is a generic implementation of sendpage to fill ctx->tsgl_list.
+ */
+ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
+   int offset, size_t size, int flags)
+{
+   struct sock *sk = sock->sk;
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct af_alg_tsgl *sgl;
+   int err = -EINVAL;
+
+   if (flags & MSG_SENDPAGE_NOTLAST)
+   flags |= MSG_MORE;
+
+   lock_sock(sk);
+   if (!ctx->more && ctx->used)
+   goto unlock;
+
+   if (!size)
+   goto done;
+
+   if (!af_alg_writable(sk)) {
+   err = af_alg_wait_for_wmem(sk, flags);
+   if (err)
+   goto unlock;
+   }
+
+   err = af_alg_alloc_tsgl(sk);
+   if (err)
+   goto unlock;
+
+   ctx->merge = 0;
+   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
+
+   if (sgl->cur)
+   sg_unmark_end(sgl->sg + sgl->cur - 1);
+
+   sg_mark_end(sgl->sg + sgl->cur);
+
+   get_page(page);
+   sg_set_page(sgl->sg + sgl->cur, page, size, offset);
+   sgl->cur++;
+   ctx->used += size;
+
+done:
+   ctx->more = flags & MSG_MORE;
+
+unlock:
+   af_alg_data_wakeup(sk);
+   release_sock(sk);
+
+   return err ?: size;
+}
+EXPORT_SYMBOL_GPL(af_alg_sendpage);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 9cb934b3175a..f655fadb9075 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -76,59 +76,6 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
-static ssize_t aead_sendpage(struct socket *sock, struct page *page,
-int offset, size_t size, int flags)
-{
-   struct sock *sk = sock->sk;
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct af_alg_tsgl *sgl;
-   int err = -EINVAL;
-
-   if (flags & MSG_SENDPAGE_NOTLAST)
-   flags |= MSG_MORE;
-
-   lock_sock(sk);
-   if (!ctx->more && ctx->used)
-   goto unlock;
-
-   if (!size)
-   goto done;
-
-   if (!af_alg_writable(sk)) {
-   err = af_alg_wait_for_wmem(sk, flags);
-   if (err)
-   goto unlock;
-   }
-
-   err = af_alg_alloc_tsgl(sk);
-   if (err)
-   goto unlock;
-
-   ctx->merge = 0;
-   sgl = list_entry(ctx->tsgl_list.prev, struct af_alg_tsgl, list);
-
-   if (sgl->cur)
-   sg_unmark_end(sgl->sg + sgl->cur - 1);
-
-   sg_mark_end(sgl->sg + sgl->cur);
-
-   get_page(page);
-   sg_set_page(sgl->sg + sgl->cur, page, size, offset);
-   sgl->cur++;
-   ctx->used += size;
-
-   err = 0;
-
-done:
-   ctx->more = flags & MSG_MORE;
-unlock:
-   af_alg_data_wakeup(sk);
-   release_sock(sk);
-
-   return err ?: size;
-}
-
 static void aead_async_cb(struct crypto_async_request *_req, int err)
 {
struct af_alg_async_req *areq = _req->data;
@@ -496,7 +443,7 @@ static struct proto_ops algif_aead_ops = {
 
.release=   af_alg_release,
.sendmsg=   aead_sendmsg,
-   .sendpage   =   aead_sendpage,
+   .sendpage   =   af_alg_sendpage,
.recvmsg=   aead_recvmsg,
.poll   =   aead_poll,
 };
@@ -560,7 +507,7 @@ static ssize_t aead_sendpage_nokey(struct socket *sock, struct page *page,
if (err)
return err;
 
-   return aead_sendpage(sock, page, offset, size, flags);
+   return af_alg_sendpage(sock, page, offset, size, flags);
 }
 
 static int aead_recvmsg_nokey(struct socket *sock, struct msghdr *msg,
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index d5bc5de7a8a1..c5779fd8 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -57,57 +57,6 @@ static int 

[PATCH 16/16] crypto: AF_ALG - consolidate poll syscall handler

2017-07-31 Thread Stephan Müller
Consolidate aead_poll, skcipher_poll ==> af_alg_poll

POLLIN / POLLRDNORM is now set when either no more data is expected or
the kernel has been supplied with data. This is consistent with the wakeup
from sleep when the kernel waits for data.

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 24 
 crypto/algif_aead.c | 24 ++--
 crypto/algif_skcipher.c | 23 ++-
 include/crypto/if_alg.h |  2 ++
 4 files changed, 30 insertions(+), 43 deletions(-)
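The consolidated readiness rule is the union of the two rules the separate handlers used: the old aead_poll checked only `!ctx->more`, the old skcipher_poll only `ctx->used`; af_alg_poll sets POLLIN when either holds. A userspace sketch (poll flag values match Linux's; the writable check is stubbed as a boolean for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#ifndef POLLIN
#define POLLIN     0x0001
#define POLLOUT    0x0004
#define POLLRDNORM 0x0040
#define POLLWRNORM 0x0100
#endif

/* Readable once the sender signalled "no more data" (more cleared) OR
 * some TX data is already queued (used != 0). */
static unsigned int alg_poll_mask(bool more, size_t used, bool writable)
{
	unsigned int mask = 0;

	if (!more || used)
		mask |= POLLIN | POLLRDNORM;
	if (writable)
		mask |= POLLOUT | POLLWRNORM;
	return mask;
}
```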

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index ef37fc3a9015..ae0e93103c76 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1080,6 +1080,30 @@ void af_alg_async_cb(struct crypto_async_request *_req, int err)
 }
 EXPORT_SYMBOL_GPL(af_alg_async_cb);
 
+/**
+ * af_alg_poll - poll system call handler
+ */
+unsigned int af_alg_poll(struct file *file, struct socket *sock,
+poll_table *wait)
+{
+   struct sock *sk = sock->sk;
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   unsigned int mask;
+
+   sock_poll_wait(file, sk_sleep(sk), wait);
+   mask = 0;
+
+   if (!ctx->more || ctx->used)
+   mask |= POLLIN | POLLRDNORM;
+
+   if (af_alg_writable(sk))
+   mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
+
+   return mask;
+}
+EXPORT_SYMBOL_GPL(af_alg_poll);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 8add641cc667..478bacf30079 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -385,26 +385,6 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg,
return ret;
 }
 
-static unsigned int aead_poll(struct file *file, struct socket *sock,
- poll_table *wait)
-{
-   struct sock *sk = sock->sk;
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   unsigned int mask;
-
-   sock_poll_wait(file, sk_sleep(sk), wait);
-   mask = 0;
-
-   if (!ctx->more)
-   mask |= POLLIN | POLLRDNORM;
-
-   if (af_alg_writable(sk))
-   mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
-
-   return mask;
-}
-
 static struct proto_ops algif_aead_ops = {
.family =   PF_ALG,
 
@@ -424,7 +404,7 @@ static struct proto_ops algif_aead_ops = {
.sendmsg=   aead_sendmsg,
.sendpage   =   af_alg_sendpage,
.recvmsg=   aead_recvmsg,
-   .poll   =   aead_poll,
+   .poll   =   af_alg_poll,
 };
 
 static int aead_check_key(struct socket *sock)
@@ -520,7 +500,7 @@ static struct proto_ops algif_aead_ops_nokey = {
.sendmsg=   aead_sendmsg_nokey,
.sendpage   =   aead_sendpage_nokey,
.recvmsg=   aead_recvmsg_nokey,
-   .poll   =   aead_poll,
+   .poll   =   af_alg_poll,
 };
 
 static void *aead_bind(const char *name, u32 type, u32 mask)
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 5134df529833..5bd85c1dd188 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -236,25 +236,6 @@ static int skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
return ret;
 }
 
-static unsigned int skcipher_poll(struct file *file, struct socket *sock,
- poll_table *wait)
-{
-   struct sock *sk = sock->sk;
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   unsigned int mask;
-
-   sock_poll_wait(file, sk_sleep(sk), wait);
-   mask = 0;
-
-   if (ctx->used)
-   mask |= POLLIN | POLLRDNORM;
-
-   if (af_alg_writable(sk))
-   mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
-
-   return mask;
-}
 
 static struct proto_ops algif_skcipher_ops = {
.family =   PF_ALG,
@@ -275,7 +256,7 @@ static struct proto_ops algif_skcipher_ops = {
.sendmsg=   skcipher_sendmsg,
.sendpage   =   af_alg_sendpage,
.recvmsg=   skcipher_recvmsg,
-   .poll   =   skcipher_poll,
+   .poll   =   af_alg_poll,
 };
 
 static int skcipher_check_key(struct socket *sock)
@@ -371,7 +352,7 @@ static struct proto_ops algif_skcipher_ops_nokey = {
.sendmsg=   skcipher_sendmsg_nokey,
.sendpage   =   skcipher_sendpage_nokey,
.recvmsg=   skcipher_recvmsg_nokey,
-   .poll   =   skcipher_poll,
+   .poll   =   af_alg_poll,
 };
 
 static void *skcipher_bind(const char *name, u32 type, u32 mask)
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index ef08fb4a599e..7bac3fee6061 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -254,5 +254,7 @@ 

[PATCH 03/16] crypto: AF_ALG - consolidate send buffer service functions

2017-07-31 Thread Stephan Müller
Consolidate the common functions verifying the send buffers:

 * skcipher_sndbuf, aead_sndbuf ==> af_alg_sndbuf

 * skcipher_writable, aead_writable ==> af_alg_writable

Signed-off-by: Stephan Mueller 
---
 crypto/algif_aead.c | 26 ++
 crypto/algif_skcipher.c | 26 ++
 include/crypto/if_alg.h | 26 ++
 3 files changed, 38 insertions(+), 40 deletions(-)
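The shared accounting that af_alg_sndbuf/af_alg_writable implement is worth spelling out: the page-aligned socket send buffer (but at least one page) minus the bytes already queued, floored at zero; the socket counts as writable while a full page still fits. A userspace sketch, assuming a 4096-byte page for illustration:

```c
#include <assert.h>

#ifndef PAGE_SIZE
#define PAGE_SIZE 4096
#endif
#define PAGE_MASK (~((long)PAGE_SIZE - 1))

/* max_t(int, max_t(int, sk_sndbuf & PAGE_MASK, PAGE_SIZE) - used, 0) */
static int alg_sndbuf(int sk_sndbuf, int used)
{
	int cap = sk_sndbuf & PAGE_MASK;

	if (cap < PAGE_SIZE)
		cap = PAGE_SIZE;		/* never below one page */
	return cap - used > 0 ? cap - used : 0;	/* never negative */
}

static int alg_writable(int sk_sndbuf, int used)
{
	return PAGE_SIZE <= alg_sndbuf(sk_sndbuf, used);
}
```

The "writable means a whole page fits" rule matches the allocation pattern in sendmsg: new TX data is staged one page at a time.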

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index adbf87a0..c923ce29bfe3 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -47,20 +47,6 @@ struct aead_tfm {
struct crypto_skcipher *null_tfm;
 };
 
-static inline int aead_sndbuf(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-
-   return max_t(int, max_t(int, sk->sk_sndbuf & PAGE_MASK, PAGE_SIZE) -
- ctx->used, 0);
-}
-
-static inline bool aead_writable(struct sock *sk)
-{
-   return PAGE_SIZE <= aead_sndbuf(sk);
-}
-
 static inline int aead_rcvbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -285,7 +271,7 @@ static int aead_wait_for_wmem(struct sock *sk, unsigned int flags)
if (signal_pending(current))
break;
timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, &timeout, aead_writable(sk), &wait)) {
+   if (sk_wait_event(sk, &timeout, af_alg_writable(sk), &wait)) {
err = 0;
break;
}
@@ -299,7 +285,7 @@ static void aead_wmem_wakeup(struct sock *sk)
 {
struct socket_wq *wq;
 
-   if (!aead_writable(sk))
+   if (!af_alg_writable(sk))
return;
 
rcu_read_lock();
@@ -441,14 +427,14 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
continue;
}
 
-   if (!aead_writable(sk)) {
+   if (!af_alg_writable(sk)) {
err = aead_wait_for_wmem(sk, msg->msg_flags);
if (err)
goto unlock;
}
 
/* allocate a new page */
-   len = min_t(unsigned long, size, aead_sndbuf(sk));
+   len = min_t(unsigned long, size, af_alg_sndbuf(sk));
 
err = aead_alloc_tsgl(sk);
if (err)
@@ -523,7 +509,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct page *page,
if (!size)
goto done;
 
-   if (!aead_writable(sk)) {
+   if (!af_alg_writable(sk)) {
err = aead_wait_for_wmem(sk, flags);
if (err)
goto unlock;
@@ -901,7 +887,7 @@ static unsigned int aead_poll(struct file *file, struct socket *sock,
if (!ctx->more)
mask |= POLLIN | POLLRDNORM;
 
-   if (aead_writable(sk))
+   if (af_alg_writable(sk))
mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
 
return mask;
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 0256959b0925..a5c6643f2abe 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -44,20 +44,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static inline int skcipher_sndbuf(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-
-   return max_t(int, max_t(int, sk->sk_sndbuf & PAGE_MASK, PAGE_SIZE) -
- ctx->used, 0);
-}
-
-static inline bool skcipher_writable(struct sock *sk)
-{
-   return PAGE_SIZE <= skcipher_sndbuf(sk);
-}
-
 static inline int skcipher_rcvbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -224,7 +210,7 @@ static int skcipher_wait_for_wmem(struct sock *sk, unsigned flags)
if (signal_pending(current))
break;
timeout = MAX_SCHEDULE_TIMEOUT;
-   if (sk_wait_event(sk, &timeout, skcipher_writable(sk), &wait)) {
+   if (sk_wait_event(sk, &timeout, af_alg_writable(sk), &wait)) {
err = 0;
break;
}
@@ -238,7 +224,7 @@ static void skcipher_wmem_wakeup(struct sock *sk)
 {
struct socket_wq *wq;
 
-   if (!skcipher_writable(sk))
+   if (!af_alg_writable(sk))
return;
 
rcu_read_lock();
@@ -381,13 +367,13 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
continue;
}
 
-   if (!skcipher_writable(sk)) {
+   if (!af_alg_writable(sk)) {
err = skcipher_wait_for_wmem(sk, msg->msg_flags);
if (err)
goto unlock;
}
 
-   len = min_t(unsigned long, len, skcipher_sndbuf(sk));
+   len = min_t(unsigned long, len, af_alg_sndbuf(sk));
 

[PATCH 12/16] crypto: AF_ALG - consolidate waking up caller for TX data

2017-07-31 Thread Stephan Müller
Consolidate aead_data_wakeup, skcipher_data_wakeup
==> af_alg_data_wakeup

Signed-off-by: Stephan Mueller 
---
 crypto/af_alg.c | 26 ++
 crypto/algif_aead.c | 24 ++--
 crypto/algif_skcipher.c | 24 ++--
 include/crypto/if_alg.h |  1 +
 4 files changed, 31 insertions(+), 44 deletions(-)
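The one behavioral subtlety in this helper is its early-out gate: if nothing has been queued yet (ctx->used == 0) there is no point notifying pollers, so the wait queue is never touched. A userspace sketch of that gate; the function name and the wake-counter stand-in for wake_up_interruptible_sync_poll() are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Returns 1 if a wakeup was considered, 0 if gated out; increments
 * *wake_count only when a sleeper was actually present. */
static int alg_data_wakeup(size_t used, bool has_sleeper, int *wake_count)
{
	if (!used)
		return 0;		/* nothing queued: skip entirely */
	if (has_sleeper)
		(*wake_count)++;	/* stands in for the real wakeup */
	return 1;
}
```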

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index c1fe7c5f1b2e..d1e0c176495f 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -808,6 +808,32 @@ int af_alg_wait_for_data(struct sock *sk, unsigned flags)
 }
 EXPORT_SYMBOL_GPL(af_alg_wait_for_data);
 
+/**
+ * af_alg_data_wakeup - wakeup caller when new data can be sent to kernel
+ *
+ * @sk socket of connection to user space
+ */
+
+void af_alg_data_wakeup(struct sock *sk)
+{
+   struct alg_sock *ask = alg_sk(sk);
+   struct af_alg_ctx *ctx = ask->private;
+   struct socket_wq *wq;
+
+   if (!ctx->used)
+   return;
+
+   rcu_read_lock();
+   wq = rcu_dereference(sk->sk_wq);
+   if (skwq_has_sleeper(wq))
+   wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
+  POLLRDNORM |
+  POLLRDBAND);
+   sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
+   rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(af_alg_data_wakeup);
+
 static int __init af_alg_init(void)
 {
int err = proto_register(_proto, 0);
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 8db8c10401d6..7a3b81545363 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -35,7 +35,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -64,25 +63,6 @@ static inline bool aead_sufficient_data(struct sock *sk)
return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as);
 }
 
-static void aead_data_wakeup(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct socket_wq *wq;
-
-   if (!ctx->used)
-   return;
-
-   rcu_read_lock();
-   wq = rcu_dereference(sk->sk_wq);
-   if (skwq_has_sleeper(wq))
-   wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
-  POLLRDNORM |
-  POLLRDBAND);
-   sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
-   rcu_read_unlock();
-}
-
 static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 {
struct sock *sk = sock->sk;
@@ -220,7 +200,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
ctx->more = msg->msg_flags & MSG_MORE;
 
 unlock:
-   aead_data_wakeup(sk);
+   af_alg_data_wakeup(sk);
release_sock(sk);
 
return err ?: copied;
@@ -273,7 +253,7 @@ static ssize_t aead_sendpage(struct socket *sock, struct page *page,
 done:
ctx->more = flags & MSG_MORE;
 unlock:
-   aead_data_wakeup(sk);
+   af_alg_data_wakeup(sk);
release_sock(sk);
 
return err ?: size;
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 572a5a632ea1..1b16fb1161b9 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -33,7 +33,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -44,25 +43,6 @@ struct skcipher_tfm {
bool has_key;
 };
 
-static void skcipher_data_wakeup(struct sock *sk)
-{
-   struct alg_sock *ask = alg_sk(sk);
-   struct af_alg_ctx *ctx = ask->private;
-   struct socket_wq *wq;
-
-   if (!ctx->used)
-   return;
-
-   rcu_read_lock();
-   wq = rcu_dereference(sk->sk_wq);
-   if (skwq_has_sleeper(wq))
-   wake_up_interruptible_sync_poll(&wq->wait, POLLOUT |
-  POLLRDNORM |
-  POLLRDBAND);
-   sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
-   rcu_read_unlock();
-}
-
 static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
size_t size)
 {
@@ -196,7 +176,7 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
ctx->more = msg->msg_flags & MSG_MORE;
 
 unlock:
-   skcipher_data_wakeup(sk);
+   af_alg_data_wakeup(sk);
release_sock(sk);
 
return copied ?: err;
@@ -247,7 +227,7 @@ static ssize_t skcipher_sendpage(struct socket *sock, struct page *page,
ctx->more = flags & MSG_MORE;
 
 unlock:
-   skcipher_data_wakeup(sk);
+   af_alg_data_wakeup(sk);
release_sock(sk);
 
return err ?: size;
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index c3b02276f38f..dae18aec9792 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -248,5 +248,6 @@ void af_alg_free_areq_sgls(struct 

[PATCH V2] staging: ccree: Fix format/argument mismatches

2017-07-31 Thread Joe Perches
Debug logging is disabled by default because CC_DEBUG is not defined.

Convert SSI_LOG_DEBUG to use no_printk instead of an empty define
to validate formats and arguments.

Fix fallout.

Miscellanea:

o One of the conversions now uses %pR instead of multiple uses of %pad

Signed-off-by: Joe Perches <j...@perches.com>
---

On top of next-20170731

 drivers/staging/ccree/ssi_aead.c|  8 
 drivers/staging/ccree/ssi_buffer_mgr.c  | 29 +---
 drivers/staging/ccree/ssi_cipher.c  | 10 +-
 drivers/staging/ccree/ssi_driver.c  |  5 ++---
 drivers/staging/ccree/ssi_driver.h  |  2 +-
 drivers/staging/ccree/ssi_hash.c| 34 -
 drivers/staging/ccree/ssi_request_mgr.c |  6 +++---
 7 files changed, 45 insertions(+), 49 deletions(-)
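The no_printk idiom this patch adopts can be demonstrated in a standalone userspace sketch: instead of defining the debug macro away to nothing (which hides format/argument mismatches until CC_DEBUG is enabled), the macro routes through a do-nothing function carrying the printf format attribute, so the compiler type-checks every call site even in non-debug builds. The local no_printk below is a simplified stand-in for the kernel's:

```c
#include <assert.h>

/* Do-nothing sink that still gets printf-style format checking. */
static inline __attribute__((format(printf, 1, 2)))
int no_printk(const char *fmt, ...)
{
	return 0;	/* arguments are type-checked, never printed */
}

/* Non-debug definition: call sites stay validated, emit nothing. */
#define SSI_LOG_DEBUG(fmt, ...) no_printk(fmt, ##__VA_ARGS__)
```

With the empty-define approach, a `%pad` passed a plain dma_addr_t (as in the hunks below) compiles silently until someone turns debugging on; with no_printk, gcc flags it immediately.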

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index f5ca0e35c5d3..6664ade43b70 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -103,7 +103,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
if (ctx->enckey) {
		dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, ctx->enckey_dma_addr);
SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=%pad\n",
- ctx->enckey_dma_addr);
+ &ctx->enckey_dma_addr);
ctx->enckey_dma_addr = 0;
ctx->enckey = NULL;
}
@@ -117,7 +117,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
  xcbc->xcbc_keys_dma_addr);
}
		SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=%pad\n",
- xcbc->xcbc_keys_dma_addr);
+ &xcbc->xcbc_keys_dma_addr);
xcbc->xcbc_keys_dma_addr = 0;
xcbc->xcbc_keys = NULL;
} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
@@ -128,7 +128,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
  hmac->ipad_opad,
  hmac->ipad_opad_dma_addr);
		SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=%pad\n",
- hmac->ipad_opad_dma_addr);
+ &hmac->ipad_opad_dma_addr);
hmac->ipad_opad_dma_addr = 0;
hmac->ipad_opad = NULL;
}
@@ -137,7 +137,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
  hmac->padded_authkey,
  hmac->padded_authkey_dma_addr);
		SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=%pad\n",
- hmac->padded_authkey_dma_addr);
+ &hmac->padded_authkey_dma_addr);
hmac->padded_authkey_dma_addr = 0;
hmac->padded_authkey = NULL;
}
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 63936091d524..88b36477ce6d 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -14,6 +14,7 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -33,14 +34,10 @@
 #include "ssi_hash.h"
 #include "ssi_aead.h"
 
-#ifdef CC_DEBUG
 #define GET_DMA_BUFFER_TYPE(buff_type) ( \
((buff_type) == SSI_DMA_BUF_NULL) ? "BUF_NULL" : \
((buff_type) == SSI_DMA_BUF_DLLI) ? "BUF_DLLI" : \
((buff_type) == SSI_DMA_BUF_MLLI) ? "BUF_MLLI" : "BUF_INVALID")
-#else
-#define GET_DMA_BUFFER_TYPE(buff_type)
-#endif
 
 enum dma_buffer_type {
DMA_NULL_TYPE = -1,
@@ -262,7 +259,7 @@ static int ssi_buffer_mgr_generate_mlli(
SSI_LOG_DEBUG("MLLI params: "
 "virt_addr=%pK dma_addr=%pad mlli_len=0x%X\n",
   mlli_params->mlli_virt_addr,
-  mlli_params->mlli_dma_addr,
+  &mlli_params->mlli_dma_addr,
   mlli_params->mlli_len);
 
 build_mlli_exit:
@@ -278,7 +275,7 @@ static inline void ssi_buffer_mgr_add_buffer_entry(
 
SSI_LOG_DEBUG("index=%u single_buff=%pad "
 "buffer_len=0x%08X is_last=%d\n",
-index, buffer_dma, buffer_len, is_last_entry);
+index, &buffer_dma, buffer_len, is_last_entry);
sgl_data->nents[index] = 1;
sgl_data->entry[index].buffer_dma = buffer_dma;
sgl_data->offset[index] = 0;
@@ -362,7 +359,7 @@ stati

Re: [PATCH v5 0/6] make io{read|write}64 globally usable

2017-07-31 Thread Horia Geantă
On 7/27/2017 2:19 AM, Logan Gunthorpe wrote:
> Changes since v4:
> - Add functions so the powerpc implementation of iomap.c compiles. (As
> noticed by Horia)

Tested-by: Horia Geantă 

more exactly: crypto self-tests pass on CAAM crypto engine
on NXP platforms LS1046A (ARMv8 A53), T1040 (PPC64 e5500), P4080 (PPC
e500mc).

> 
> Changes since v3:
> 
> - I noticed powerpc didn't use the appropriate functions seeing
> readq/writeq were not defined when iomap.h was included. Thus I've
> included a patch to adjust this
> - Fixed some mistakes with a couple of the defines in io-64-nonatomic*
> headers
> - Fixed a typo noticed by Horia.
> 
> (earlier versions were drastically different)
> 
> 
> Horia Geantă (1):
>   crypto: caam: cleanup CONFIG_64BIT ifdefs when using io{read|write}64
> 
> Logan Gunthorpe (5):
>   powerpc: io.h: move iomap.h include so that it can use readq/writeq
> defs
>   powerpc: iomap.c: introduce io{read|write}64_{lo_hi|hi_lo}
>   iomap: introduce io{read|write}64_{lo_hi|hi_lo}
>   io-64-nonatomic: add io{read|write}64[be]{_lo_hi|_hi_lo} macros
>   ntb: ntb_hw_intel: use io-64-nonatomic instead of in-driver hacks
> 
>  arch/powerpc/include/asm/io.h |   6 +-
>  arch/powerpc/kernel/iomap.c   |  40 +++
>  drivers/crypto/caam/regs.h|  35 ++---
>  drivers/ntb/hw/intel/ntb_hw_intel.c   |  30 +---
>  include/asm-generic/iomap.h   |  26 +--
>  include/linux/io-64-nonatomic-hi-lo.h |  60 
>  include/linux/io-64-nonatomic-lo-hi.h |  60 
>  lib/iomap.c   | 132 ++
>  8 files changed, 322 insertions(+), 67 deletions(-)
> 
> --
> 2.11.0
> 


Re: [PATCH] staging: ccree: Fix format/argument mismatches

2017-07-31 Thread Joe Perches
On Mon, 2017-07-31 at 09:39 +0300, Gilad Ben-Yossef wrote:
> On Sun, Jul 30, 2017 at 7:45 PM, Joe Perches  wrote:
> > By default, debug logging is disabled by CC_DEBUG not being defined.
> > 
> > Convert SSI_LOG_DEBUG to use no_printk instead of an empty define
> > to validate formats and arguments.
> > 
> > Fix fallout.
> > 
> > Miscellanea:
> > 
> > o One of the conversions now uses %pR instead of multiple uses of %pad
> 
> This looks great (I didn't know about no_printk) but does not seem to
> apply on top of
> staging-next.

Applies to next-20170728

b2cf822e075f7a7e7ced8c50af600f9edf5ccc31



[PATCH] staging/ccree: Declare compiled-out functions static inline

2017-07-31 Thread RishabhHardas
From: RishabhHardas 

Sparse warned that the symbols 'cc_set_ree_fips_status' and 'fips_handler'
were not declared and should be made static. This patch makes both symbols
static inline, removing the warnings.

Signed-off-by: RishabhHardas 
---
 drivers/staging/ccree/ssi_fips.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
index 369ddf9..63bcca7 100644
--- a/drivers/staging/ccree/ssi_fips.h
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -40,1 +40,1 @@ static inline int ssi_fips_init(struct ssi_drvdata *p_drvdata)
 }

 static inline void ssi_fips_fini(struct ssi_drvdata *drvdata) {}
-void cc_set_ree_fips_status(struct ssi_drvdata *drvdata, bool ok) {}
-void fips_handler(struct ssi_drvdata *drvdata) {}
+static inline void cc_set_ree_fips_status(struct ssi_drvdata *drvdata, bool ok) {}
+static inline void fips_handler(struct ssi_drvdata *drvdata) {}

 #endif /* CONFIG_CRYPTO_FIPS */

--
1.9.1



Re: [PATCH 2/2] crypto: stm32 - Support for STM32 HASH module

2017-07-31 Thread Kamil Konieczny


On 13.07.2017 15:32, Lionel Debieve wrote:
> This module registers a HASH module that supports multiple
> algorithms: MD5, SHA1, SHA224, SHA256. [...]

> +static irqreturn_t stm32_hash_irq_thread(int irq, void *dev_id)
> +{
> + struct stm32_hash_dev *hdev = dev_id;
> + int err;

The 'err' variable is used without being initialized.

> +
> + if (HASH_FLAGS_CPU & hdev->flags) {
> + if (HASH_FLAGS_OUTPUT_READY & hdev->flags) {
> + hdev->flags &= ~HASH_FLAGS_OUTPUT_READY;
> + goto finish;
> + }
> + } else if (HASH_FLAGS_DMA_READY & hdev->flags) {
> + if (HASH_FLAGS_DMA_ACTIVE & hdev->flags) {
> + hdev->flags &= ~HASH_FLAGS_DMA_ACTIVE;
> + goto finish;
> + }
> + }
> +
> + return IRQ_HANDLED;
> +
> +finish:
> + /*Finish current request */
> + stm32_hash_finish_req(hdev->req, err);
> +
> + return IRQ_HANDLED;
> +}
> +
and here is the beginning of stm32_hash_finish_req():

+static void stm32_hash_finish_req(struct ahash_request *req, int err)
+{
+   struct stm32_hash_request_ctx *rctx = ahash_request_ctx(req);
+   struct stm32_hash_dev *hdev = rctx->hdev;
+
+   if (!err && (HASH_FLAGS_FINAL & hdev->flags)) {

-- 
Best regards,
Kamil Konieczny
Samsung R&D Institute Poland



Re: [PATCH] crypto: caam/qi - Remove unused 'qi_congested' entry

2017-07-31 Thread Horia Geantă
On 7/30/2017 1:55 AM, Fabio Estevam wrote:
> From: Fabio Estevam 
> 
> The 'qi_congested' member of struct caam_drv_private
> is never used, so it is safe to remove it.

Agree, though I would remove all the other dentry members not currently
used - since debugfs_remove_recursive() is called, we don't need the
file entries.

> 
> Signed-off-by: Fabio Estevam 
> ---
>  drivers/crypto/caam/intern.h | 3 ---
>  drivers/crypto/caam/qi.c | 6 ++
>  2 files changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
> index 9e3f3e0..9625b2d 100644
> --- a/drivers/crypto/caam/intern.h
> +++ b/drivers/crypto/caam/intern.h
> @@ -109,9 +109,6 @@ struct caam_drv_private {
>  
>   struct debugfs_blob_wrapper ctl_kek_wrap, ctl_tkek_wrap, ctl_tdsk_wrap;
>   struct dentry *ctl_kek, *ctl_tkek, *ctl_tdsk;
> -#ifdef CONFIG_CAAM_QI
> - struct dentry *qi_congested;
> -#endif
>  #endif
>  };
>  
> diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
> index 6d5a010..b2f7a42 100644
> --- a/drivers/crypto/caam/qi.c
> +++ b/drivers/crypto/caam/qi.c
> @@ -793,10 +793,8 @@ int caam_qi_init(struct platform_device *caam_pdev)
>   /* Done with the CGRs; restore the cpus allowed mask */
>   set_cpus_allowed_ptr(current, &old_cpumask);
>  #ifdef CONFIG_DEBUG_FS
> - ctrlpriv->qi_congested = debugfs_create_file("qi_congested", 0444,
> -  ctrlpriv->ctl,
> -  &times_congested,
> -  &caam_fops_u64_ro);
> + debugfs_create_file("qi_congested", 0444, ctrlpriv->ctl,
> + &times_congested, &caam_fops_u64_ro);

Either here or in a different patch the return value of debugfs_create_*
functions should be checked, such that if IS_ERR_OR_NULL(ret) we could
print a warning.

Thanks,
Horia


Re: [PATCH] staging: ccree: Fix format/argument mismatches

2017-07-31 Thread Gilad Ben-Yossef
On Sun, Jul 30, 2017 at 7:45 PM, Joe Perches  wrote:
> By default, debug logging is disabled by CC_DEBUG not being defined.
>
> Convert SSI_LOG_DEBUG to use no_printk instead of an empty define
> to validate formats and arguments.
>
> Fix fallout.
>
> Miscellanea:
>
> o One of the conversions now uses %pR instead of multiple uses of %pad

This looks great (I didn't know about no_printk) but does not seem to
apply on top of
staging-next.


Thanks,
Gilad

>
> Signed-off-by: Joe Perches 
> ---
>  drivers/staging/ccree/ssi_aead.c|  8 
>  drivers/staging/ccree/ssi_buffer_mgr.c  | 29 +
>  drivers/staging/ccree/ssi_cipher.c  | 10 +-
>  drivers/staging/ccree/ssi_driver.c  |  5 ++---
>  drivers/staging/ccree/ssi_driver.h  |  2 +-
>  drivers/staging/ccree/ssi_hash.c| 32 
>  drivers/staging/ccree/ssi_request_mgr.c |  6 +++---
>  7 files changed, 44 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
> index ea29b8a1a71d..9376bf8b8c61 100644
> --- a/drivers/staging/ccree/ssi_aead.c
> +++ b/drivers/staging/ccree/ssi_aead.c
> @@ -103,7 +103,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
> if (ctx->enckey) {
> dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, 
> ctx->enckey_dma_addr);
> SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=%pad\n",
> - ctx->enckey_dma_addr);
> + &ctx->enckey_dma_addr);
> ctx->enckey_dma_addr = 0;
> ctx->enckey = NULL;
> }
> @@ -117,7 +117,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
>   xcbc->xcbc_keys_dma_addr);
> }
> SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=%pad\n",
> - xcbc->xcbc_keys_dma_addr);
> + &xcbc->xcbc_keys_dma_addr);
> xcbc->xcbc_keys_dma_addr = 0;
> xcbc->xcbc_keys = NULL;
> } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
> @@ -128,7 +128,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
>   hmac->ipad_opad,
>   hmac->ipad_opad_dma_addr);
> SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=%pad\n",
> - hmac->ipad_opad_dma_addr);
> + &hmac->ipad_opad_dma_addr);
> hmac->ipad_opad_dma_addr = 0;
> hmac->ipad_opad = NULL;
> }
> @@ -137,7 +137,7 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
>   hmac->padded_authkey,
>   hmac->padded_authkey_dma_addr);
> SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=%pad\n",
> - hmac->padded_authkey_dma_addr);
> + &hmac->padded_authkey_dma_addr);
> hmac->padded_authkey_dma_addr = 0;
> hmac->padded_authkey = NULL;
> }
> diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
> index 6579a54f9dc4..e13184d1d165 100644
> --- a/drivers/staging/ccree/ssi_buffer_mgr.c
> +++ b/drivers/staging/ccree/ssi_buffer_mgr.c
> @@ -14,6 +14,7 @@
>   * along with this program; if not, see <http://www.gnu.org/licenses/>.
>   */
>
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -33,14 +34,10 @@
>  #include "ssi_hash.h"
>  #include "ssi_aead.h"
>
> -#ifdef CC_DEBUG
>  #define GET_DMA_BUFFER_TYPE(buff_type) ( \
> ((buff_type) == SSI_DMA_BUF_NULL) ? "BUF_NULL" : \
> ((buff_type) == SSI_DMA_BUF_DLLI) ? "BUF_DLLI" : \
> ((buff_type) == SSI_DMA_BUF_MLLI) ? "BUF_MLLI" : "BUF_INVALID")
> -#else
> -#define GET_DMA_BUFFER_TYPE(buff_type)
> -#endif
>
>  enum dma_buffer_type {
> DMA_NULL_TYPE = -1,
> @@ -262,7 +259,7 @@ static int ssi_buffer_mgr_generate_mlli(
> SSI_LOG_DEBUG("MLLI params: "
>  "virt_addr=%pK dma_addr=%pad mlli_len=0x%X\n",
>mlli_params->mlli_virt_addr,
> -  mlli_params->mlli_dma_addr,
> +  &mlli_params->mlli_dma_addr,
>mlli_params->mlli_len);
>
>  build_mlli_exit:
> @@ -278,7 +275,7 @@ static inline void ssi_buffer_mgr_add_buffer_entry(
>
> SSI_LOG_DEBUG("index=%u single_buff=%pad "
>  "buffer_len=0x%08X is_last=%d\n",
> -index, buffer_dma, buffer_len, is_last_entry);
> +index, &buffer_dma, buffer_len, is_last_entry);
> sgl_data->nents[index] = 1;
>