Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-08 Thread Mat Martineau


On Wed, 8 Jun 2016, Stephan Mueller wrote:


On Tuesday, 7 June 2016, at 17:28:07, Mat Martineau wrote:

Hi Mat,


+   used = ctx->used;
+
+   /* convert iovecs of output buffers into scatterlists */
+   while (iov_iter_count(&msg->msg_iter)) {
+   /* make one iovec available as scatterlist */
+   err = af_alg_make_sg(&ctx->rsgl[cnt], &msg->msg_iter,
+iov_iter_count(&msg->msg_iter));
+   if (err < 0)
+   goto unlock;
+   usedpages += err;
+   /* chain the new scatterlist with previous one */
+   if (cnt)
+   af_alg_link_sg(&ctx->rsgl[cnt - 1], &ctx->rsgl[cnt]);
+
+   iov_iter_advance(&msg->msg_iter, err);
+   cnt++;
+   }
+
+   /* ensure output buffer is sufficiently large */
+   if (usedpages < akcipher_calcsize(ctx)) {
+   err = -EMSGSIZE;
+   goto unlock;
+   }


Why is the size of the output buffer enforced here instead of depending on
the algorithm implementation?


akcipher_calcsize calls crypto_akcipher_maxsize to get the maximum size the
algorithm generates as output during its operation.

The code ensures that the caller provided at least that amount of memory for
the kernel to store its data in. This check is therefore present to ensure
that the kernel does not overstep memory boundaries in user space.


Yes, it's understood that the userspace buffer length must not be 
exceeded. But dst_len is part of the akcipher_request struct, so why does 
it need to be checked *here* when it is also checked later?



What is your concern?


Userspace must allocate larger buffers than it knows are necessary for 
expected results.


It looks like the software rsa implementation handles shorter output 
buffers ok (mpi_write_to_sgl will return -EOVERFLOW if the buffer is 
too small); however, I see at least one hardware rsa driver that requires 
the output buffer to be the maximum size. But this inconsistency might be 
best addressed within the software cipher or drivers rather than in 
recvmsg.
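
For reference, a minimal sketch of the software-rsa behaviour described
above, assuming the current mpi_write_to_sgl() signature (the actual call
site in rsa.c may differ):

	ret = mpi_write_to_sgl(m, req->dst, &req->dst_len, &sign);
	if (ret == -EOVERFLOW)
		return ret;	/* user buffer smaller than the result */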


--
Mat Martineau
Intel OTC
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC PATCH 14/15] hwrng: exynos - fixup IO accessors

2016-06-08 Thread Matthew Leach
From: Ben Dooks 

The __raw IO functions are not endian safe, so use the readl_relaxed
and writel_relaxed versions of these.

Signed-off-by: Ben Dooks 
---
CC: Matt Mackall 
CC: Krzysztof Kozlowski 
CC: linux-crypto@vger.kernel.org
CC: linux-arm-ker...@lists.infradead.org
CC: linux-samsung-...@vger.kernel.org
---
 drivers/char/hw_random/exynos-rng.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/char/hw_random/exynos-rng.c 
b/drivers/char/hw_random/exynos-rng.c
index ed44561..23d3585 100644
--- a/drivers/char/hw_random/exynos-rng.c
+++ b/drivers/char/hw_random/exynos-rng.c
@@ -45,12 +45,12 @@ struct exynos_rng {
 
 static u32 exynos_rng_readl(struct exynos_rng *rng, u32 offset)
 {
-   return  __raw_readl(rng->mem + offset);
+   return  readl_relaxed(rng->mem + offset);
 }
 
 static void exynos_rng_writel(struct exynos_rng *rng, u32 val, u32 offset)
 {
-   __raw_writel(val, rng->mem + offset);
+   writel_relaxed(val, rng->mem + offset);
 }
 
 static int exynos_rng_configure(struct exynos_rng *exynos_rng)
-- 
2.8.3
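
For context, a rough sketch of why readl_relaxed() is endian-safe where
__raw_readl() is not (assuming a typical ARM-style definition, with tracing
hooks omitted):

static inline u32 demo_readl_relaxed(const void __iomem *addr)
{
	/* byte-swap on big-endian kernels, no-op on little-endian */
	return le32_to_cpu((__force __le32)__raw_readl(addr));
}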

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [sparc] niagara2 cpu, opcodes not available message?

2016-06-08 Thread David Miller
From: Anatoly Pugachev 
Date: Wed, 8 Jun 2016 20:30:40 +0300

> Can someone please tell, why do we get a bunch of the following
> messages on niagara2 cpu hardware (SPARC Enterprise T5120, T5220,
> T5140, and T5240 servers)
> 
> Asking, because I see the following lines on kernel boot (removing
> first field boot time stamp in cut):
> 
> mator@nvg5120:~/linux-sparc-boot-logs/t5120$ grep opcode
> dmesg-4.7.0-rc2+.log  | cut -f2- -d' ' | sort | uniq -c
>   4 aes_sparc64: sparc64 aes opcodes not available.
>   7 camellia_sparc64: sparc64 camellia opcodes not available.
>  37 crc32c_sparc64: sparc64 crc32c opcode not available.
>   5 des_sparc64: sparc64 des opcodes not available.
>   4 md5_sparc64: sparc64 md5 opcode not available.
>   1 sha1_sparc64: sparc64 sha1 opcode not available.
>   2 sha256_sparc64: sparc64 sha256 opcode not available.
>   3 sha512_sparc64: sparc64 sha512 opcode not available.

Because the drivers unconditionally try to load, check the CPU capabilities,
and emit the log message if the cpu caps aren't present.

I don't see what the problem is; everything is working as designed.
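
For reference, the capability check each driver performs looks roughly like
this (paraphrased from arch/sparc/crypto/aes_glue.c; the CFR bit tested
differs per algorithm):

static bool __init sparc64_has_aes_opcode(void)
{
	unsigned long cfr;

	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
		return false;

	/* read the Configuration Feature Register */
	__asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr));

	return !!(cfr & CFR_AES);
}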
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [sparc] niagara2 cpu, opcodes not available message?

2016-06-08 Thread Anatoly Pugachev
On Wed, Jun 8, 2016 at 8:30 PM, Anatoly Pugachev  wrote:
> Hello!
>
> Can someone please tell, why do we get a bunch of the following
> messages on niagara2 cpu hardware (SPARC Enterprise T5120, T5220,
> T5140, and T5240 servers)
>
> Asking, because I see the following lines on kernel boot (removing
> first field boot time stamp in cut):
>
> mator@nvg5120:~/linux-sparc-boot-logs/t5120$ grep opcode
> dmesg-4.7.0-rc2+.log  | cut -f2- -d' ' | sort | uniq -c
>   4 aes_sparc64: sparc64 aes opcodes not available.
>   7 camellia_sparc64: sparc64 camellia opcodes not available.
>  37 crc32c_sparc64: sparc64 crc32c opcode not available.
>   5 des_sparc64: sparc64 des opcodes not available.
>   4 md5_sparc64: sparc64 md5 opcode not available.
>   1 sha1_sparc64: sparc64 sha1 opcode not available.
>   2 sha256_sparc64: sparc64 sha256 opcode not available.
>   3 sha512_sparc64: sparc64 sha512 opcode not available.
>
> Can we probably remove this functionality/messages from niagara2 cpus,
> if it does not support it anyway?

I wasn't clear at all. What I mean is: can we please change pr_info to
pr_debug in the xx_sparc64_mod_init() functions in arch/sparc/crypto/?
Thanks.
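
A sketch of what the suggested change could look like in the aes module
init (the other modules follow the same pattern; illustrative only):

static int __init aes_sparc64_mod_init(void)
{
	if (sparc64_has_aes_opcode()) {
		pr_info("Using sparc64 aes opcodes optimized AES implementation\n");
		return crypto_register_algs(algs, ARRAY_SIZE(algs));
	}
	pr_debug("sparc64 aes opcodes not available.\n");	/* was pr_info() */
	return -ENODEV;
}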
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[sparc] niagara2 cpu, opcodes not available message?

2016-06-08 Thread Anatoly Pugachev
Hello!

Can someone please tell, why do we get a bunch of the following
messages on niagara2 cpu hardware (SPARC Enterprise T5120, T5220,
T5140, and T5240 servers)

Asking, because I see the following lines on kernel boot (removing
first field boot time stamp in cut):

mator@nvg5120:~/linux-sparc-boot-logs/t5120$ grep opcode
dmesg-4.7.0-rc2+.log  | cut -f2- -d' ' | sort | uniq -c
  4 aes_sparc64: sparc64 aes opcodes not available.
  7 camellia_sparc64: sparc64 camellia opcodes not available.
 37 crc32c_sparc64: sparc64 crc32c opcode not available.
  5 des_sparc64: sparc64 des opcodes not available.
  4 md5_sparc64: sparc64 md5 opcode not available.
  1 sha1_sparc64: sparc64 sha1 opcode not available.
  2 sha256_sparc64: sparc64 sha256 opcode not available.
  3 sha512_sparc64: sparc64 sha512 opcode not available.

But linux kernel sources ( linux-2.6/arch/sparc/kernel/setup_64.c )
define crypto_hwcaps only for CPUs with the following capabilities:

static const char *crypto_hwcaps[] = {
"aes", "des", "kasumi", "camellia", "md5", "sha1", "sha256",
"sha512", "mpmul", "montmul", "montsqr", "crc32c",
};

and we don't have them in niagara2 cpu CAPS:

mator@nvg5120:~/linux-sparc-boot-logs/t5120$ grep CAPS dmesg-4.7.0-rc2+.log
[0.00] CPU CAPS: [flush,stbar,swap,muldiv,v9,blkinit,n2,mul32]
[0.00] CPU CAPS: [div32,v8plus,popc,vis,vis2,ASIBlkInit]

mator@nvg5120:~/linux-sparc-boot-logs/t5120$ egrep '^cpu|pmu' /proc/cpuinfo
cpu : UltraSparc T2 (Niagara2)
pmu : niagara2
cpucaps :
flush,stbar,swap,muldiv,v9,blkinit,n2,mul32,div32,v8plus,popc,vis,vis2,ASIBlkInit


compare, for example, with sparc CPU which support crypto (T5 cpu,
landau is machine name):

mator@landau:~$ grep CAPS dmesg-4.6.1.txt
[0.00] CPU CAPS: [flush,stbar,swap,muldiv,v9,blkinit,n2,mul32]
[0.00] CPU CAPS: [div32,v8plus,popc,vis,vis2,ASIBlkInit,fmaf,vis3]
[0.00] CPU CAPS: [hpc,ima,pause,cbcond,aes,des,kasumi,camellia]
[0.00] CPU CAPS: [md5,sha1,sha256,sha512,mpmul,montmul,montsqr,crc32c]

mator@landau:~$ egrep '^cpu|pmu' /proc/cpuinfo
cpu : UltraSparc T5 (Niagara5)
pmu : niagara5
cpucaps :
flush,stbar,swap,muldiv,v9,blkinit,n2,mul32,div32,v8plus,popc,vis,vis2,ASIBlkInit,fmaf,vis3,hpc,ima,pause,cbcond,aes,des,kasumi,camellia,md5,sha1,sha256,sha512,mpmul,montmul,montsqr,crc32c

mator@landau:~$ grep opcode dmesg-4.6.1.txt
[8537574.887049] aes_sparc64: Using sparc64 aes opcodes optimized AES
implementation
[8537574.887611] crc32c_sparc64: Using sparc64 crc32c opcode optimized
CRC32C implementation
[8537576.577455] sha1_sparc64: Using sparc64 sha1 opcode optimized
SHA-1 implementation
[8537576.578928] sha256_sparc64: Using sparc64 sha256 opcode optimized
SHA-256/SHA-224 implementation
[8537576.580908] sha512_sparc64: Using sparc64 sha512 opcode optimized
SHA-512/SHA-384 implementation
[8537576.582964] md5_sparc64: Using sparc64 md5 opcode optimized MD5
implementation
[8537576.596984] des_sparc64: Using sparc64 des opcodes optimized DES
implementation
[8537576.600503] camellia_sparc64: Using sparc64 camellia opcodes
optimized CAMELLIA implementation


I don't understand why the niagara2 cpu is getting the HWCAP_SPARC_CRYPTO
flag if it does not support these opcodes.
Can we probably remove this functionality/messages from niagara2 cpus,
if it does not support it anyway?

mator@nvg5120:~$ lsmod | grep -c sparc64
0

mator@landau:~$ lsmod | grep -c sparc64
9




Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] crypto: ux500: memmove the right size

2016-06-08 Thread Linus Walleij
The hash buffer is really HASH_BLOCK_SIZE bytes; someone must have
mistakenly thought that memmove() takes a count of u32 words.
Tests work as well/badly as before after this patch.

Cc: Joakim Bech 
Reported-by: David Binderman 
Signed-off-by: Linus Walleij 
---
 drivers/crypto/ux500/hash/hash_core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ux500/hash/hash_core.c 
b/drivers/crypto/ux500/hash/hash_core.c
index 574e87c7f2b8..9acccad26928 100644
--- a/drivers/crypto/ux500/hash/hash_core.c
+++ b/drivers/crypto/ux500/hash/hash_core.c
@@ -781,7 +781,7 @@ static int hash_process_data(struct hash_device_data 
*device_data,
&device_data->state);
memmove(req_ctx->state.buffer,
device_data->state.buffer,
-   HASH_BLOCK_SIZE / sizeof(u32));
+   HASH_BLOCK_SIZE);
if (ret) {
dev_err(device_data->dev,
"%s: hash_resume_state() failed!\n",
@@ -832,7 +832,7 @@ static int hash_process_data(struct hash_device_data 
*device_data,
 
memmove(device_data->state.buffer,
req_ctx->state.buffer,
-   HASH_BLOCK_SIZE / sizeof(u32));
+   HASH_BLOCK_SIZE);
if (ret) {
dev_err(device_data->dev, "%s: hash_save_state() failed!\n",
__func__);
-- 
2.4.11

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: linux-4.6/drivers/crypto/ux500/hash/hash_core.c: 2 * possible bad size ?

2016-06-08 Thread Linus Walleij
On Wed, May 18, 2016 at 9:46 AM, Herbert Xu  wrote:
> On Mon, May 16, 2016 at 07:13:12PM +0100, David Binderman wrote:
>> Hello there,
>>
>> 1.
>>
>> linux-4.6/drivers/crypto/ux500/hash/hash_core.c:784]: (warning) Division by
>> result of sizeof(). memmove() expects a size in bytes, did you intend to
>> multiply instead?
>>
>> Source code is
>>
>> memmove(req_ctx->state.buffer,
>> device_data->state.buffer,
>> HASH_BLOCK_SIZE / sizeof(u32));
>>
>> Maybe better code
>>
>> memmove(req_ctx->state.buffer,
>> device_data->state.buffer,
>> HASH_BLOCK_SIZE);

Yeah obviously the latter as in hash_alg.h:

struct hash_state {
(...)
u32 buffer[HASH_BLOCK_SIZE / sizeof(u32)];

That could just as well be a u8 array of HASH_BLOCK_SIZE bytes.
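
Something like this, keeping the same storage size:

struct hash_state {
	(...)
	u8 buffer[HASH_BLOCK_SIZE];	/* same bytes, clearer unit */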

Sending a patch for this.

>> linux-4.6/drivers/crypto/ux500/hash/hash_core.c:835]: (warning) Division by
>> result of sizeof(). memmove() expects a size in bytes, did you intend to
>> multiply instead?
>>
>> Duplicate.
>
> Thanks for noticing these bugs.  This driver hasn't been maintained
> since 2012, so unless someone steps up I'm going to just delete it.

I'm trying to take a look at it because I'm using this platform
for tests and it's nice to have all features enabled.

And it has some problems (I added prints to also print successful tests):

[3.864746] alg: hash: Test 1 SUCCEEDED for sha1-ux500
[3.870147] alg: hash: Test 2 SUCCEEDED for sha1-ux500
[3.875610] alg: hash: Test 3 SUCCEEDED for sha1-ux500
[3.881408] alg: hash: Test 4 SUCCEEDED for sha1-ux500
[3.886596] alg: hash: Chunking test 1 SUCCEEDED for sha1-ux500
[3.892639] alg: hash: Chunking test 2 FAILED for sha1-ux500
[3.898284] result:
[3.900421] : 76 b4 ed 2f d7 11 1d c8 64 4c 38 b0 f8 27 19 89
[3.906860] 0010: 58 1e bb 3a
[3.915588] expected:
[3.917846] : 97 01 11 c4 e7 7b cc 88 cc 20 45 9c 02 b6 9b 4a
[3.928314] 0010: a8 f5 82 17
[3.937255] alg: hash: Test 1 SUCCEEDED for sha256-ux500
[3.948089] alg: hash: Test 2 SUCCEEDED for sha256-ux500
[3.961944] alg: hash: Test 3 SUCCEEDED for sha256-ux500
[3.967590] alg: hash: Test 4 SUCCEEDED for sha256-ux500
[3.973083] alg: hash: Chunking test 1 SUCCEEDED for sha256-ux500
[3.979248] alt: hash: Failed to export() for sha256-ux500
[3.984802] hash: partial update failed on test 1 for sha256-ux500: ret=38
[3.992004] alg: hash: Test 1 SUCCEEDED for hmac-sha1-ux500
[3.997650] alg: hash: Test 2 SUCCEEDED for hmac-sha1-ux500
[4.003356] alg: hash: Test 3 SUCCEEDED for hmac-sha1-ux500
[4.009002] alg: hash: Test 4 SUCCEEDED for hmac-sha1-ux500
[4.014678] alg: hash: Test 5 SUCCEEDED for hmac-sha1-ux500
[4.020385] alg: hash: Test 6 SUCCEEDED for hmac-sha1-ux500
[4.026062] alg: hash: Chunking test 1 SUCCEEDED for hmac-sha1-ux500
[4.032470] alt: hash: Failed to export() for hmac-sha1-ux500
[4.038208] hash: partial update failed on test 1 for hmac-sha1-ux500: ret=38
[4.045623] alg: hash: Test 1 SUCCEEDED for hmac-sha256-ux500
[4.051483] alg: hash: Test 2 SUCCEEDED for hmac-sha256-ux500
[4.057342] alg: hash: Test 3 SUCCEEDED for hmac-sha256-ux500
[4.063201] alg: hash: Test 4 SUCCEEDED for hmac-sha256-ux500
[4.069030] alg: hash: Test 5 SUCCEEDED for hmac-sha256-ux500
[4.074890] alg: hash: Test 6 SUCCEEDED for hmac-sha256-ux500
[4.080780] alg: hash: Test 7 SUCCEEDED for hmac-sha256-ux500
[4.086608] alg: hash: Test 8 SUCCEEDED for hmac-sha256-ux500
[4.092468] alg: hash: Test 9 SUCCEEDED for hmac-sha256-ux500
[4.098297] alg: hash: Chunking test 1 SUCCEEDED for hmac-sha256-ux500
[4.104888] alt: hash: Failed to export() for hmac-sha256-ux500
[4.110809] hash: partial update failed on test 1 for
hmac-sha256-ux500: ret=38
[4.118164] hash1 hash1: successfully registered
[4.123687] alg: No test for aes (aes-ux500)
[4.132354] alg: No test for des (des-ux500)
[4.136749] alg: No test for des3_ede (des3_ede-ux500)
[4.151306] alg: skcipher: Test 1 failed (invalid result) on
encryption for cbc-des-ux500
[4.159484] : 03 91 6b cc 4a f6 3a 53 9c 4d 2e 2b 91 83 44 f6
[4.165954] 0010: aa 6a 15 6a dc b5 e0 3d
[4.170501] cryp1 cryp1: successfully registered

The simple tests always work; it's the stressful ones that create
problems.

Joakim: did you have a memory of this code working? Should
I check the vendor tree for fixes?

Yours,
Linus Walleij
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: AES-NI: slower than aes-generic?

2016-06-08 Thread Stephan Mueller
On Thursday, 26 May 2016, at 22:14:29, Theodore Ts'o wrote:

Hi Theodore,

> On Thu, May 26, 2016 at 08:49:39PM +0200, Stephan Mueller wrote:
> > Using the kernel crypto API one can relieve the CPU of the crypto work, if
> > a hardware or assembler implementation is available. This may be of
> > particular interest for smaller systems. So, for smaller systems (where
> > kernel bloat is bad, but where now these days more and more hardware
> > crypto support is added), we must weigh the kernel bloat (of 3 or 4
> > additional C files for the basic kernel crypto API logic) against
> > relieving the CPU of work.
> 
> There are a number of caveats with using hardware acceleration; one is
> that many hardware accelerators are optimized for bulk data
> encryption, and so key scheduling, or switching between key schedules,
> can have a higher overhead that a pure software implementation.

As a followup: I tweaked the DRBG side a bit to use the full speed of the
AES-NI implementation. With that tweak, the initial finding no longer
applies.

Depending on the request size, I now get more than 800 MB/s (an increase of
more than 450% compared to the initial implementation) from the AES-NI
implementation. Hence, the frequent key schedule update does not seem to be
too much of an issue.

Ciao
Stephan
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v5 1/3] crypto: Key-agreement Protocol Primitives API (KPP)

2016-06-08 Thread Salvatore Benedetto
On Wed, Jun 08, 2016 at 10:54:51AM +0800, Herbert Xu wrote:
> On Thu, Jun 02, 2016 at 12:06:48PM +, Benedetto, Salvatore wrote:
> >
> > Off the top of my head, with ECDH when the user gets a EGAIN, he wants
> > to reset the secret key only, not the params.
> 
> I don't see any performance benefit in changing one and not the
> other.  Besides, you could always check the params in the algo
> and only update if necessary.
>

I'm OK with merging set_params and set_key and allowing the user to
pass either both, or only the key, in which case the previously set
params are reused, although I don't see any particular benefit in this.

> > > >  * generate_public_key() - It generates the public key to be sent to
> > > >the other counterpart involved in the key-agreement session. The
> > > >function has to be called after set_params() and set_secret()
> > > >  * generate_secret() - It generates the shared secret for the session
> > > 
> > > Ditto, we only need one operation and that is multiplication by the 
> > > secret.
> > 
> > Sorry, but I don't understand your point.
> > We do always need one math operation with different params.
> 
> Look at your actual implementations of DH and ECDH, they are the
> same except for the multiplicand, which is fixed to G for the
> public key.
> 
> Now you could argue that having to reparse G every time could be
> bad for performance, but that's easily fixed by making the case
> of a zero-length input value an implicit request to use G.
> 
> Even better, just drop G from the params and you won't need to
> reparse it or do anything special.
> 
> The point of all this is to make the lives of future driver authors
> simpler, the less they have to do the less that could go wrong.

I really would like to keep the interface as it is because it's
very clear what each function does. I'm OK with remapping both
functions to the same one, and if src is zero, g/G will be used.

Keep in mind that while for DH g is always provided by the user,
for ECDH G is a public value which we already have, and I don't see
why the user should pass it.

Having said that, as far as the interface goes, are you OK with only
merging set_param and set_key and keeping the rest as it is? For the
implementations of DH and ECDH I'll merge the two operation functions
into one as you suggested. If so, I'll send a new version.
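
For illustration, a hedged sketch of the merged interface under discussion
(names anticipate a possible kpp API; the final signatures may differ):

/* params and secret packed into one buffer; params may be omitted on
 * later calls to reuse the previously set ones */
int crypto_kpp_set_secret(struct crypto_kpp *tfm,
			  const void *buffer, unsigned int len);

/* one math operation per direction: multiply by g/G for the public
 * key, by the peer's public key for the shared secret */
int (*generate_public_key)(struct kpp_request *req);
int (*compute_shared_secret)(struct kpp_request *req);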

Thanks for your inputs.

Regards,
Salvatore

> Cheers,
> -- 
> Email: Herbert Xu 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: [PATCH v3] crypto: rsa - return raw integers for the ASN.1 parser

2016-06-08 Thread Tudor-Dan Ambarus
Hi Stephan,

> >  int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
> >   const void *value, size_t vlen)
> >  {
> > struct rsa_key *key = context;
> > +   const u8 *ptr = value;
> > +   int ret;
> >
> > -   key->n = mpi_read_raw_data(value, vlen);
> > +   while (!*ptr && vlen) {
> > +   ptr++;
> > +   vlen--;
> > +   }
> 
> As you do this operation 3 times, isn't an inline better here?

Actually, this operation makes sense only for n, to determine its length.
I will remove the while loop from rsa_get_e/d.

> 
> >
> > -   if (!key->n)
> > -   return -ENOMEM;
> > +   /* invalid key provided */
> > +   if (!ptr)
> > +   return -EINVAL;
> 
> Hm, you check the pointer here, but you dereference it already above. So, I
> guess you want to move that check to the beginning of the function?

Sure. Thank you for reviewing!
ta
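
For reference, a sketch of the reordered validation discussed above
(illustrative only; the actual next revision may differ):

int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
	      const void *value, size_t vlen)
{
	struct rsa_key *key = context;
	const u8 *ptr = value;

	/* validate before dereferencing, per the review comment */
	if (!value || !vlen)
		return -EINVAL;

	/* strip leading zero bytes to determine the true length of n */
	while (!*ptr && vlen) {
		ptr++;
		vlen--;
	}

	/* ... store ptr/vlen as the raw integer ... */
	return 0;
}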
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Herbert Xu
On Wed, Jun 08, 2016 at 11:07:42AM +0200, Stephan Mueller wrote:
>
> No, it does not:
> 
> #ifdef CONFIG_X86_64

Well there is no fundamental reason why we can't do it on 32-bit.
Even if we just did the counter increment in C this would still
beat ctr(aes-aesni) by many orders of magnitude.
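
For scale, the increment in question is trivial even in plain C; a sketch
of a crypto_inc()-style big-endian 128-bit counter bump:

static void ctr128_inc(u8 counter[16])
{
	int i;

	/* bump the last byte, propagating the carry leftwards */
	for (i = 15; i >= 0; i--)
		if (++counter[i] != 0)
			break;
}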

So if you really care about the performance on x86-32 then perhaps
you should send a patch to implement ctr-aes-aesni for it.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Stephan Mueller
On Wednesday, 8 June 2016, at 17:03:54, Herbert Xu wrote:

Hi Herbert,

> On Wed, Jun 08, 2016 at 10:58:37AM +0200, Stephan Mueller wrote:
> > Indeed, I used ctr(aes-aesni) instead of ctr-aes-aesni according to the
> > refcnt in /proc/crypto. The reason was that I used the sync skcipher API
> > which naturally excludes the ctr-aes-aesni.
> > 
> > When changing it to really use ctr-aes-aesni, I get: between 450%
> > performance gains (for 4096 bytes -- 780 MB/s(!)) and 4% gain (for 16
> > bytes).
> > 
> > Though, on 32 bit with ctr(aes-aesni) -- which is a blkcipher -- I get a
> > performance degradation of 10% (4096 bytes) and 20% (16 bytes).
> > 
> > Any ideas on how to handle the blkcipher in a better way?
> 
> You should always use ctr-aes-aesni, ctr(aes-aesni) makes no sense.
> 
> So I don't quite understand why you need to handle blkcipher, does
> ctr-aes-aesni not work on 32-bit?

No, it does not:

#ifdef CONFIG_X86_64
}, {
.cra_name   = "__ctr-aes-aesni",
.cra_driver_name= "__driver-ctr-aes-aesni",
...
}, {
.cra_name   = "ctr(aes)",
.cra_driver_name= "ctr-aes-aesni",


==> ctr-aes-aesni is not available on 32 bit. Only aes-aesni is available on
32 bit, so the system defaults to ctr(aes-aesni).

Note, in my tests I use generic code that requests ctr(aes); the code is not
arch-specific. The discussion above is an analysis of the kernel crypto
API's decisions.
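
For completeness, a sketch of how the two variants are selected from the
API side (skcipher interface assumed; error handling omitted):

	/* generic ctr template wrapped around the aesni cipher primitive */
	tfm = crypto_alloc_skcipher("ctr(aes)", 0, 0);

	/* pin the fully optimised driver by its cra_driver_name;
	 * fails on 32 bit, where ctr-aes-aesni is not registered */
	tfm = crypto_alloc_skcipher("ctr-aes-aesni", 0, 0);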

Ciao
Stephan
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Herbert Xu
On Wed, Jun 08, 2016 at 10:58:37AM +0200, Stephan Mueller wrote:
>
> Indeed, I used ctr(aes-aesni) instead of ctr-aes-aesni according to the 
> refcnt 
> in /proc/crypto. The reason was that I used the sync skcipher API which 
> naturally excludes the ctr-aes-aesni.
> 
> When changing it to really use ctr-aes-aesni, I get: between 450% performance 
> gains (for 4096 bytes -- 780 MB/s(!)) and 4% gain (for 16 bytes).
> 
> Though, on 32 bit with ctr(aes-aesni) -- which is a blkcipher -- I get a 
> performance degradation of 10% (4096 bytes) and 20% (16 bytes).
> 
> Any ideas on how to handle the blkcipher in a better way?

You should always use ctr-aes-aesni, ctr(aes-aesni) makes no sense.

So I don't quite understand why you need to handle blkcipher, does
ctr-aes-aesni not work on 32-bit?

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Stephan Mueller
On Wednesday, 8 June 2016, at 16:00:55, Herbert Xu wrote:

Hi Herbert,

> On Wed, Jun 08, 2016 at 09:56:42AM +0200, Stephan Mueller wrote:
> > The performance with ctr-aes-aesni on 64 bit is as follows -- I used my
> > LRNG implementation for testing for which I already have performance
> > measurements:
> > 
> > - generating smaller lengths (I tested up to 128 bytes) of random numbers
> > (which is the vast majority of random numbers to be generated), the
> > performance is even worse by 10 to 15%
> > 
> > - generating larger lengths (tested with 4096 bytes) of random numbers,
> > the
> > performance increases by 3%
> > 
> > Using ctr(aes-aesni) on 32 bit, the numbers are generally worse by 5 to
> > 10%.
> ctr(aes-aesni) is not the same thing as ctr-aes-aesni, the former
> being just another way of doing what you were doing.  So did you
> actually test the real optimised version which is ctr-aes-aesni?

Indeed, I used ctr(aes-aesni) instead of ctr-aes-aesni according to the refcnt 
in /proc/crypto. The reason was that I used the sync skcipher API which 
naturally excludes the ctr-aes-aesni.

When changing it to really use ctr-aes-aesni, I get: between 450% performance 
gains (for 4096 bytes -- 780 MB/s(!)) and 4% gain (for 16 bytes).

Though, on 32 bit with ctr(aes-aesni) -- which is a blkcipher -- I get a 
performance degradation of 10% (4096 bytes) and 20% (16 bytes).

Any ideas on how to handle the blkcipher in a better way?

Ciao
Stephan
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2 0/7] crypto: talitos - implementation of AEAD for SEC1

2016-06-08 Thread Herbert Xu
On Mon, Jun 06, 2016 at 01:20:31PM +0200, Christophe Leroy wrote:
> This set of patches provides the implementation of AEAD for
> talitos SEC1.

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 5/8] crypto: acomp - add support for lz4hc via scomp

2016-06-08 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lz4hc compression algorithm.
This way, lz4hc is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4hc.c |   92 +--
 2 files changed, 83 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 114d43b..59570da 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1539,6 +1539,7 @@ config CRYPTO_LZ4
 config CRYPTO_LZ4HC
tristate "LZ4HC compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4HC_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index a1d3b5b..75ffc4a 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -22,37 +22,53 @@
 #include 
 #include 
 #include 
+#include <crypto/internal/scompress.h>
 
 struct lz4hc_ctx {
void *lz4hc_comp_mem;
 };
 
+static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4HC_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4hc_init(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4hc_comp_mem = vmalloc(LZ4HC_MEM_COMPRESS);
-   if (!ctx->lz4hc_comp_mem)
+   ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4hc_exit(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   vfree(ctx->lz4hc_comp_mem);
+   lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
 }
 
-static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx->lz4hc_comp_mem);
+   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +77,25 @@ static int lz4hc_compress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return 0;
 }
 
-static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4hc_scompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lz4hc_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4hc_compress_crypto(src, slen, dst, dlen,
+   ctx->lz4hc_comp_mem);
+}
+
+static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +109,20 @@ static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return err;
 }
 
+static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst,
+  unsigned int *dlen)
+{
+   return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
 static struct crypto_alg alg_lz4hc = {
.cra_name   = "lz4hc",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +136,39 @@ static struct crypto_alg alg_lz4hc = {
.coa_decompress = lz4hc_decompress_crypto } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lz4hc_alloc_ctx,
+   .free_ctx   = lz4hc_free_ctx,
+   .compress   = lz4hc_scompress,
+   .decompress = lz4hc_sdecompress,
+   .base   = {
+   .cra_name   = "lz4hc",
+   .cra_driver_name = "lz4hc-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init lz4hc_mod_init(void)
 {
-   return crypto_register_alg(&alg_lz4hc);
+   int ret;
+
+   ret = crypto_register_alg(&alg_lz4hc);
+   if (ret)
+   return ret;
+
+   ret = crypto_register_scomp(&scomp);
+   if (ret) {
+   crypto_unregister_alg(&alg_lz4hc);
+   return ret;
+   }

[PATCH v6 4/8] crypto: acomp - add support for lz4 via scomp

2016-06-08 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lz4 compression algorithm.
This way, lz4 is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4.c   |   91 +--
 2 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 08075c1..114d43b 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1530,6 +1530,7 @@ config CRYPTO_842
 config CRYPTO_LZ4
tristate "LZ4 compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4.c b/crypto/lz4.c
index aefbcea..99c1b2c 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -23,36 +23,53 @@
 #include 
 #include 
 #include 
+#include <crypto/internal/scompress.h>
 
 struct lz4_ctx {
void *lz4_comp_mem;
 };
 
+static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4_init(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4_comp_mem = vmalloc(LZ4_MEM_COMPRESS);
-   if (!ctx->lz4_comp_mem)
+   ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4_exit(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
-   vfree(ctx->lz4_comp_mem);
+
+   lz4_free_ctx(NULL, ctx->lz4_comp_mem);
 }
 
-static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4_compress(src, slen, dst, &tmp_len, ctx->lz4_comp_mem);
+   err = lz4_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +78,23 @@ static int lz4_compress_crypto(struct crypto_tfm *tfm, const 
u8 *src,
return 0;
 }
 
-static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lz4_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4_compress_crypto(src, slen, dst, dlen, ctx->lz4_comp_mem);
+}
+
+static int __lz4_decompress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +108,20 @@ static int lz4_decompress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return err;
 }
 
+static int lz4_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
 static struct crypto_alg alg_lz4 = {
.cra_name   = "lz4",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +135,39 @@ static struct crypto_alg alg_lz4 = {
.coa_decompress = lz4_decompress_crypto } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lz4_alloc_ctx,
+   .free_ctx   = lz4_free_ctx,
+   .compress   = lz4_scompress,
+   .decompress = lz4_sdecompress,
+   .base   = {
+   .cra_name   = "lz4",
+   .cra_driver_name = "lz4-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init lz4_mod_init(void)
 {
-   return crypto_register_alg(&alg_lz4);
+   int ret;
+
+   ret = crypto_register_alg(&alg_lz4);
+   if (ret)
+   return ret;
+
+   ret = crypto_register_scomp(&scomp);
+   if (ret) {
+   crypto_unregister_alg(&alg_lz4);
+   return ret;
+   }
+
+   return ret;
 }
 
 

[PATCH v6 7/8] crypto: acomp - add support for deflate via scomp

2016-06-08 Thread Giovanni Cabiddu
This patch implements an scomp backend for the deflate compression
algorithm. This way, deflate is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig   |1 +
 crypto/deflate.c |  111 +-
 2 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 09c88ba..b617c5d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1502,6 +1502,7 @@ comment "Compression"
 config CRYPTO_DEFLATE
tristate "Deflate compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select ZLIB_INFLATE
select ZLIB_DEFLATE
help
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 95d8d37..f942cb3 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include <crypto/internal/scompress.h>
 
 #define DEFLATE_DEF_LEVEL  Z_DEFAULT_COMPRESSION
 #define DEFLATE_DEF_WINBITS11
@@ -101,9 +102,8 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
vfree(ctx->decomp_stream.workspace);
 }
 
-static int deflate_init(struct crypto_tfm *tfm)
+static int __deflate_init(void *ctx)
 {
-   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;
 
ret = deflate_comp_init(ctx);
@@ -116,19 +116,55 @@ out:
return ret;
 }
 
-static void deflate_exit(struct crypto_tfm *tfm)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+{
+   struct deflate_ctx *ctx;
+   int ret;
+
+   ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   ret = __deflate_init(ctx);
+   if (ret) {
+   kfree(ctx);
+   return ERR_PTR(ret);
+   }
+
+   return ctx;
+}
+
+static int deflate_init(struct crypto_tfm *tfm)
 {
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
 
+   return __deflate_init(ctx);
+}
+
+static void __deflate_exit(void *ctx)
+{
deflate_comp_exit(ctx);
deflate_decomp_exit(ctx);
 }
 
-static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   __deflate_exit(ctx);
+   kzfree(ctx);
+}
+
+static void deflate_exit(struct crypto_tfm *tfm)
+{
+   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   __deflate_exit(ctx);
+}
+
+static int __deflate_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->comp_stream;
 
ret = zlib_deflateReset(stream);
@@ -153,12 +189,27 @@ out:
return ret;
 }
 
-static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_compress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __deflate_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __deflate_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen, void *ctx)
 {
 
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->decomp_stream;
 
ret = zlib_inflateReset(stream);
@@ -194,6 +245,21 @@ out:
return ret;
 }
 
+static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_decompress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __deflate_decompress(src, slen, dst, dlen, ctx);
+}
+
 static struct crypto_alg alg = {
.cra_name   = "deflate",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -206,14 +272,39 @@ static struct crypto_alg alg = {
.coa_decompress = deflate_decompress } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = deflate_alloc_ctx,
+   .free_ctx   = deflate_free_ctx,
+   .compress   = deflate_scompress,
+   .decompress = deflate_sdecompress,
+   .base   = {
+   .cra_name   = "deflate",
+   .cra_driver_name = "deflate-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};

[PATCH v6 8/8] crypto: acomp - update testmgr with support for acomp

2016-06-08 Thread Giovanni Cabiddu
This patch adds tests to the test manager for algorithms exposed through
the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/testmgr.c |  158 +-
 1 files changed, 145 insertions(+), 13 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c727fb0..fc47716 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "internal.h"
 
@@ -1423,6 +1424,121 @@ out:
return ret;
 }
 
+static int test_acomp(struct crypto_acomp *tfm, struct comp_testvec *ctemplate,
+ struct comp_testvec *dtemplate, int ctcount, int dtcount)
+{
+   const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm));
+   unsigned int i;
+   char output[COMP_BUF_SIZE];
+   int ret;
+   struct scatterlist src, dst;
+   struct acomp_req *req;
+   struct tcrypt_result result;
+
+   for (i = 0; i < ctcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = ctemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, ctemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_compress(req));
+   if (ret) {
+   pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->dlen != ctemplate[i].outlen) {
+   pr_err("alg: acomp: Compression test %d failed for %s: output len = %d\n",
+  i + 1, algo, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, ctemplate[i].output, req->dlen)) {
+   pr_err("alg: acomp: Compression test %d failed for %s\n",
+  i + 1, algo);
+   hexdump(output, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_request_free(req);
+   }
+
+   for (i = 0; i < dtcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = dtemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, dtemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_decompress(req));
+   if (ret) {
+   pr_err("alg: acomp: decompression failed on test %d for %s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->dlen != dtemplate[i].outlen) {
+   pr_err("alg: acomp: Decompression test %d failed for %s: output len = %d\n",
+  i + 1, algo, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, dtemplate[i].output, req->dlen)) {
+   pr_err("alg: acomp: Decompression test %d failed for %s\n",
+  i + 1, algo);
+   hexdump(output, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_request_free(req);
+   }
+
+   ret = 0;
+
+out:
+   return ret;
+}
+
 static int test_cprng(struct crypto_rng 

[PATCH v6 6/8] crypto: acomp - add support for 842 via scomp

2016-06-08 Thread Giovanni Cabiddu
This patch implements an scomp backend for the 842 compression algorithm.
This way, 842 is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/842.c   |   82 +--
 crypto/Kconfig |1 +
 2 files changed, 80 insertions(+), 3 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index 98e387e..a954ed3 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -31,11 +31,46 @@
 #include 
 #include 
 #include 
+#include <crypto/internal/scompress.h>
 
 struct crypto842_ctx {
-   char wmem[SW842_MEM_COMPRESS];  /* working memory for compress */
+   void *wmem; /* working memory for compress */
 };
 
+static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
+static int crypto842_init(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   ctx->wmem = crypto842_alloc_ctx(NULL);
+   if (IS_ERR(ctx->wmem))
+   return -ENOMEM;
+
+   return 0;
+}
+
+static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kfree(ctx);
+}
+
+static void crypto842_exit(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   crypto842_free_ctx(NULL, ctx->wmem);
+}
+
 static int crypto842_compress(struct crypto_tfm *tfm,
  const u8 *src, unsigned int slen,
  u8 *dst, unsigned int *dlen)
@@ -45,6 +80,13 @@ static int crypto842_compress(struct crypto_tfm *tfm,
return sw842_compress(src, slen, dst, dlen, ctx->wmem);
 }
 
+static int crypto842_scompress(struct crypto_scomp *tfm,
+  const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
+{
+   return sw842_compress(src, slen, dst, dlen, ctx);
+}
+
 static int crypto842_decompress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
@@ -52,27 +94,61 @@ static int crypto842_decompress(struct crypto_tfm *tfm,
return sw842_decompress(src, slen, dst, dlen);
 }
 
+static int crypto842_sdecompress(struct crypto_scomp *tfm,
+const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
+{
+   return sw842_decompress(src, slen, dst, dlen);
+}
+
 static struct crypto_alg alg = {
.cra_name   = "842",
.cra_driver_name= "842-generic",
.cra_priority   = 100,
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
-   .cra_ctxsize= sizeof(struct crypto842_ctx),
.cra_module = THIS_MODULE,
+   .cra_init   = crypto842_init,
+   .cra_exit   = crypto842_exit,
.cra_u  = { .compress = {
.coa_compress   = crypto842_compress,
.coa_decompress = crypto842_decompress } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = crypto842_alloc_ctx,
+   .free_ctx   = crypto842_free_ctx,
+   .compress   = crypto842_scompress,
+   .decompress = crypto842_sdecompress,
+   .base   = {
+   .cra_name   = "842",
+   .cra_driver_name = "842-scomp",
+   .cra_priority= 100,
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init crypto842_mod_init(void)
 {
-   return crypto_register_alg(&alg);
+   int ret;
+
+   ret = crypto_register_alg(&alg);
+   if (ret)
+   return ret;
+
+   ret = crypto_register_scomp(&scomp);
+   if (ret) {
+   crypto_unregister_alg(&alg);
+   return ret;
+   }
+
+   return ret;
 }
 module_init(crypto842_mod_init);
 
 static void __exit crypto842_mod_exit(void)
 {
crypto_unregister_alg(&alg);
+   crypto_unregister_scomp(&scomp);
 }
 module_exit(crypto842_mod_exit);
 
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 59570da..09c88ba 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1522,6 +1522,7 @@ config CRYPTO_LZO
 config CRYPTO_842
tristate "842 compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select 842_COMPRESS
select 842_DECOMPRESS
help
-- 
1.7.4.1

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 2/8] crypto: add driver-side scomp interface

2016-06-08 Thread Giovanni Cabiddu
Add a synchronous back-end (scomp) to acomp. This makes it easy to expose
the compression algorithms already present in LKCF via acomp.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Makefile |1 +
 crypto/acompress.c  |   49 +++-
 crypto/scompress.c  |  252 +++
 include/crypto/acompress.h  |   32 ++---
 include/crypto/internal/acompress.h |   15 ++
 include/crypto/internal/scompress.h |  134 +++
 include/linux/crypto.h  |2 +
 7 files changed, 463 insertions(+), 22 deletions(-)
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/internal/scompress.h

diff --git a/crypto/Makefile b/crypto/Makefile
index e817b38..fc8fcfe 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
 obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += scompress.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
index f24fef3..a5e6cf1 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -22,8 +22,11 @@
 #include 
 #include 
 #include 
+#include <crypto/internal/scompress.h>
 #include "internal.h"
 
+static const struct crypto_type crypto_acomp_type;
+
 #ifdef CONFIG_NET
 static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
@@ -67,6 +70,13 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
struct acomp_alg *alg = crypto_acomp_alg(acomp);
 
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   return crypto_init_scomp_ops_async(tfm);
+
+   acomp->compress = alg->compress;
+   acomp->decompress = alg->decompress;
+   acomp->reqsize = alg->reqsize;
+
if (alg->exit)
acomp->base.exit = crypto_acomp_exit_tfm;
 
@@ -76,15 +86,25 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
+{
+   int extsize = crypto_alg_extsize(alg);
+
+   if (alg->cra_type != &crypto_acomp_type)
+   extsize += sizeof(struct crypto_scomp *);
+
+   return extsize;
+}
+
 static const struct crypto_type crypto_acomp_type = {
-   .extsize = crypto_alg_extsize,
+   .extsize = crypto_acomp_extsize,
.init_tfm = crypto_acomp_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_acomp_show,
 #endif
.report = crypto_acomp_report,
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
-   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
.type = CRYPTO_ALG_TYPE_ACOMPRESS,
.tfmsize = offsetof(struct crypto_acomp, base),
 };
@@ -96,6 +116,31 @@ struct crypto_acomp *crypto_alloc_acomp(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
 
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
+{
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+   struct acomp_req *req;
+
+   req = __acomp_request_alloc(acomp);
+   if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
+   return crypto_acomp_scomp_alloc_ctx(req);
+
+   return req;
+}
+EXPORT_SYMBOL_GPL(acomp_request_alloc);
+
+void acomp_request_free(struct acomp_req *req)
+{
+   struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   crypto_acomp_scomp_free_ctx(req);
+
+   __acomp_request_free(req);
+}
+EXPORT_SYMBOL_GPL(acomp_request_free);
+
 int crypto_register_acomp(struct acomp_alg *alg)
 {
struct crypto_alg *base = &alg->base;
diff --git a/crypto/scompress.c b/crypto/scompress.c
new file mode 100644
index 000..850b427
--- /dev/null
+++ b/crypto/scompress.c
@@ -0,0 +1,252 @@
+/*
+ * Synchronous Compression operations
+ *
+ * Copyright 2015 LG Electronics Inc.
+ * Copyright (c) 2016, Intel Corporation
+ * Author: Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+static const struct crypto_type crypto_scomp_type;
+
+#ifdef CONFIG_NET
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp rscomp;
+
+   strncpy(rscomp.type, "scomp", sizeof(rscomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+   sizeof(struct crypto_report_comp), &rscomp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}

[PATCH v6 1/8] crypto: add asynchronous compression api

2016-06-08 Thread Giovanni Cabiddu
This patch introduces acomp, an asynchronous compression api that uses
scatterlist buffers.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig  |   10 ++
 crypto/Makefile |2 +
 crypto/acompress.c  |  118 
 crypto/crypto_user.c|   21 +++
 include/crypto/acompress.h  |  261 +++
 include/crypto/internal/acompress.h |   66 +
 include/linux/crypto.h  |1 +
 7 files changed, 479 insertions(+), 0 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..24fef55 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -93,6 +93,15 @@ config CRYPTO_AKCIPHER
select CRYPTO_AKCIPHER2
select CRYPTO_ALGAPI
 
+config CRYPTO_ACOMP
+   tristate
+   select CRYPTO_ACOMP2
+   select CRYPTO_ALGAPI
+
+config CRYPTO_ACOMP2
+   tristate
+   select CRYPTO_ALGAPI2
+
 config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
@@ -115,6 +124,7 @@ config CRYPTO_MANAGER2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
select CRYPTO_AKCIPHER2
+   select CRYPTO_ACOMP2
 
 config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..e817b38 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -31,6 +31,8 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
+obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
 clean-files += rsapubkey-asn1.c rsapubkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
new file mode 100644
index 000..f24fef3
--- /dev/null
+++ b/crypto/acompress.c
@@ -0,0 +1,118 @@
+/*
+ * Asynchronous Compression operations
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Weigang Li 
+ *  Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp racomp;
+
+   strncpy(racomp.type, "acomp", sizeof(racomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+   sizeof(struct crypto_report_comp), &racomp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}
+#else
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   return -ENOSYS;
+}
+#endif
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+   __attribute__ ((unused));
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+   seq_puts(m, "type : acomp\n");
+}
+
+static void crypto_acomp_exit_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   alg->exit(acomp);
+}
+
+static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   if (alg->exit)
+   acomp->base.exit = crypto_acomp_exit_tfm;
+
+   if (alg->init)
+   return alg->init(acomp);
+
+   return 0;
+}
+
+static const struct crypto_type crypto_acomp_type = {
+   .extsize = crypto_alg_extsize,
+   .init_tfm = crypto_acomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+   .show = crypto_acomp_show,
+#endif
+   .report = crypto_acomp_report,
+   .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .type = CRYPTO_ALG_TYPE_ACOMPRESS,
+   .tfmsize = offsetof(struct crypto_acomp, base),
+};
+
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+   u32 mask)
+{
+   return crypto_alloc_tfm(alg_name, &crypto_acomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
+
+int crypto_register_acomp(struct acomp_alg *alg)
+{
struct crypto_alg *base = &alg->base;
+
+   base->cra_type = &crypto_acomp_type;
+   base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+   base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS;
+
+   return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_acomp);
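
For a driver, registration would then look roughly like the sketch below
(my_compress/my_decompress, the driver name and the priority are made-up
placeholders, not part of this patch):

static struct acomp_alg my_alg = {
	.compress	= my_compress,
	.decompress	= my_decompress,
	.base		= {
		.cra_name	 = "deflate",
		.cra_driver_name = "deflate-mydrv",
		.cra_priority	 = 300,
		.cra_module	 = THIS_MODULE,
	},
};

static int __init my_mod_init(void)
{
	/* makes the algorithm reachable via crypto_alloc_acomp("deflate", ...) */
	return crypto_register_acomp(&my_alg);
}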

[PATCH v6 0/8] crypto: asynchronous compression api

2016-06-08 Thread Giovanni Cabiddu
The following patch set introduces acomp, a generic asynchronous
(de)compression api with support for SG lists.
We propose a new crypto type called crypto_acomp_type, a new struct acomp_alg
and struct crypto_acomp, together with a number of helper functions to register
acomp type algorithms and allocate tfm instances.
This interface will allow the following operations:

int (*compress)(struct acomp_req *req);
int (*decompress)(struct acomp_req *req);

Together with acomp we propose a new driver-side interface, scomp, which
handles compression implementations that operate on linear buffers. We converted all
compression algorithms available in LKCF to use this interface so that those
algorithms will be accessible through the acomp api.
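
As a usage illustration, here is a minimal sketch of how a caller could
drive the interface (the helper names are the ones proposed in patch 1/8;
"lzo", the buffers and their lengths are placeholders, and error handling
is abbreviated):

#include <crypto/acompress.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

/* sketch only: src_buf/dst_buf and their lengths are assumed to exist */
static int acomp_example(void *src_buf, unsigned int src_len,
			 void *dst_buf, unsigned int dst_len)
{
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	struct scatterlist src, dst;
	int err;

	tfm = crypto_alloc_acomp("lzo", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);		/* GFP_KERNEL as of v6 */
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	sg_init_one(&src, src_buf, src_len);
	sg_init_one(&dst, dst_buf, dst_len);
	acomp_request_set_params(req, &src, &dst, src_len, dst_len);
	/* NULL completion callback: synchronous use, for simplicity */
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);

	err = crypto_acomp_compress(req);

	acomp_request_free(req);
	crypto_free_acomp(tfm);
	return err;
}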

Changes in v6:
- changed acomp_request_alloc prototype by removing gfp parameter,
  acomp_request_alloc will always use GFP_KERNEL
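
  The resulting helper prototype, as implied by this changelog entry, is
  assumed to be simply:

	struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);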

Changes in v5:
- removed qdecompress api, no longer needed
- removed produced and consumed counters in acomp_req
- added crypto_has_acomp function 

Changes in v4:
- added qdecompress api, a front-end for decompression algorithms which
  do not need additional vmalloc work space

Changes in v3:
- added driver-side scomp interface
- provided support for lzo, lz4, lz4hc, 842, deflate compression algorithms
  via the acomp api (through scomp)
- extended testmgr to support acomp
- removed extended acomp api for supporting deflate algorithm parameters
  (will be enhanced and re-proposed in future)
Note that (2) to (7) are a rework of Joonsoo Kim's scomp patches.

Changes in v2:
- added compression and decompression request sizes in acomp_alg
  in order to enable noctx support
- extended api with helpers to allocate compression and
  decompression requests

Changes from initial submit:
- added consumed and produced fields to acomp_req
- extended api to support configuration of deflate compressors

---
Giovanni Cabiddu (8):
  crypto: add asynchronous compression api
  crypto: add driver-side scomp interface
  crypto: acomp - add support for lzo via scomp
  crypto: acomp - add support for lz4 via scomp
  crypto: acomp - add support for lz4hc via scomp
  crypto: acomp - add support for 842 via scomp
  crypto: acomp - add support for deflate via scomp
  crypto: acomp - update testmgr with support for acomp

 crypto/842.c|   82 +++-
 crypto/Kconfig  |   15 ++
 crypto/Makefile |3 +
 crypto/acompress.c  |  163 ++
 crypto/crypto_user.c|   21 +++
 crypto/deflate.c|  111 ++--
 crypto/lz4.c|   91 +++--
 crypto/lz4hc.c  |   92 +++--
 crypto/lzo.c|   97 +++--
 crypto/scompress.c  |  252 ++
 crypto/testmgr.c|  158 --
 include/crypto/acompress.h  |  253 +++
 include/crypto/internal/acompress.h |   81 +++
 include/crypto/internal/scompress.h |  134 ++
 include/linux/crypto.h  |3 +
 15 files changed, 1495 insertions(+), 61 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h
 create mode 100644 include/crypto/internal/scompress.h

-- 
1.7.4.1



[PATCH v6 3/8] crypto: acomp - add support for lzo via scomp

2016-06-08 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lzo compression algorithm.
This way, lzo is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lzo.c   |   97 +++-
 2 files changed, 83 insertions(+), 15 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 24fef55..08075c1 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1513,6 +1513,7 @@ config CRYPTO_DEFLATE
 config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZO_COMPRESS
select LZO_DECOMPRESS
help
diff --git a/crypto/lzo.c b/crypto/lzo.c
index c3f3dd9..168df78 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -22,40 +22,55 @@
 #include <linux/vmalloc.h>
 #include <linux/mm.h>
 #include <linux/lzo.h>
+#include <crypto/internal/scompress.h>
 
 struct lzo_ctx {
void *lzo_comp_mem;
 };
 
+static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL | __GFP_NOWARN);
+   if (!ctx)
+   ctx = vmalloc(LZO1X_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lzo_init(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
-   GFP_KERNEL | __GFP_NOWARN);
-   if (!ctx->lzo_comp_mem)
-   ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
-   if (!ctx->lzo_comp_mem)
+   ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kvfree(ctx);
+}
+
 static void lzo_exit(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   kvfree(ctx->lzo_comp_mem);
+   lzo_free_ctx(NULL, ctx->lzo_comp_mem);
 }
 
-static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lzo_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;
 
-   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
+   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);
 
if (err != LZO_E_OK)
return -EINVAL;
@@ -64,8 +79,23 @@ static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
return 0;
 }
 
-static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
+}
+
+static int lzo_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lzo_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __lzo_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen)
 {
int err;
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
@@ -77,7 +107,19 @@ static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
 
*dlen = tmp_len;
return 0;
+}
 
+static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   return __lzo_decompress(src, slen, dst, dlen);
+}
+
+static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lzo_decompress(src, slen, dst, dlen);
 }
 
 static struct crypto_alg alg = {
@@ -88,18 +130,43 @@ static struct crypto_alg alg = {
.cra_init   = lzo_init,
.cra_exit   = lzo_exit,
.cra_u  = { .compress = {
-   .coa_compress   = lzo_compress,
-   .coa_decompress = lzo_decompress } }
+   .coa_compress   = lzo_compress,
+   .coa_decompress = lzo_decompress } }
+};
+
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lzo_alloc_ctx,
+   .free_ctx   = lzo_free_ctx,
+   .compress   = lzo_scompress,
+   .decompress = lzo_sdecompress,
+   .base   = {
+   .cra_name   = "lzo",
+   .cra_driver_name = "lzo-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};
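
The remainder follows the usual pattern; a sketch of the module init/exit
registering both the legacy crypto_alg and the new scomp_alg (assuming
crypto_register_scomp()/crypto_unregister_scomp() from the scomp patch):

static int __init lzo_mod_init(void)
{
	int ret;

	ret = crypto_register_alg(&alg);
	if (ret)
		return ret;

	ret = crypto_register_scomp(&scomp);
	if (ret) {
		/* keep registration all-or-nothing */
		crypto_unregister_alg(&alg);
		return ret;
	}

	return ret;
}

static void __exit lzo_mod_fini(void)
{
	crypto_unregister_alg(&alg);
	crypto_unregister_scomp(&scomp);
}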

Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Stephan Mueller
On Wednesday, 8 June 2016, 09:56:42, Stephan Mueller wrote:

Hi Stephan,

> On Wednesday, 8 June 2016, 10:41:40, Herbert Xu wrote:
> 
> Hi Herbert,
> 
> > On Thu, Jun 02, 2016 at 02:06:55PM +0200, Stephan Mueller wrote:
> > > I am working on it. During the analysis, I saw, however, that the DRBG
> > > increments the counter before the encryption whereas the CTR mode
> > > increments it after the encryption.
> > > 
> > > I could of course adjust the handling in the code, but this would be a
> > > real
> > > hack IMHO.
> > 
> > Changing the order of increment is equivalent to changing the IV, no?
> 
> I finally got it working. All I needed to do was add a crypto_inc(V) after
> a recalculation of V.

One addition: my NULL vector is 128 bytes in size.
> 
> The performance with ctr-aes-aesni on 64 bit is as follows -- I used my LRNG
> implementation for testing for which I already have performance
> measurements:
> 
> - generating smaller lengths (I tested up to 128 bytes) of random numbers
> (which is the vast majority of random numbers to be generated), the
> performance is actually worse, by 10 to 15%
> 
> - generating larger lengths (tested with 4096 bytes) of random numbers, the
> performance increases by 3%
> 
> Using ctr(aes-aesni) on 32 bit, the numbers are generally worse by 5 to 10%.
> 
> So, with these numbers, I would conclude that switching to the CTR mode is
> not worthwhile.
> 
> Ciao
> Stephan


Ciao
Stephan


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Herbert Xu
On Wed, Jun 08, 2016 at 09:56:42AM +0200, Stephan Mueller wrote:
>
> The performance with ctr-aes-aesni on 64 bit is as follows -- I used my LRNG 
> implementation for testing for which I already have performance measurements:
> 
> - generating smaller lengths (I tested up to 128 bytes) of random numbers 
> (which is the vast majority of random numbers to be generated), the 
> performance is even worse by 10 to 15%
> 
> - generating larger lengths (tested with 4096 bytes) of random numbers, the 
> performance increases by 3%
> 
> Using ctr(aes-aesni) on 32 bit, the numbers are generally worse by 5 to 10%.

ctr(aes-aesni) is not the same thing as ctr-aes-aesni, the former
being just another way of doing what you were doing.  So did you
actually test the real optimised version, which is ctr-aes-aesni?
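
One quick way to check which implementation a given name resolves to is to
allocate it and print the driver name -- a sketch, assuming a context where
the allocation may sleep (/proc/crypto lists the same information):

#include <crypto/skcipher.h>
#include <linux/kernel.h>
#include <linux/err.h>

static void ctr_driver_check(void)
{
	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ctr(aes)", 0, 0);

	if (IS_ERR(tfm))
		return;

	/* prints e.g. "ctr-aes-aesni" (optimised) or "ctr(aes-aesni)" (template) */
	pr_info("ctr(aes) resolved to %s\n",
		crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));

	crypto_free_skcipher(tfm);
}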

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC] DRBG: which shall be default?

2016-06-08 Thread Stephan Mueller
On Wednesday, 8 June 2016, 10:41:40, Herbert Xu wrote:

Hi Herbert,

> On Thu, Jun 02, 2016 at 02:06:55PM +0200, Stephan Mueller wrote:
> > I am working on it. During the analysis, I saw, however, that the DRBG
> > increments the counter before the encryption whereas the CTR mode
> > increments it after the encryption.
> > 
> > I could of course adjust the handling in the code, but this would be a
> > real
> > hack IMHO.
> 
> Changing the order of increment is equivalent to changing the IV, no?

I finally got it working. All I needed to do was add a crypto_inc(V) after a
recalculation of V.
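
For illustration, the difference between the two conventions for block i
(E() denoting AES under the DRBG key, V the counter block), and the fix --
a sketch only, with names as in crypto/drbg.c:

	/* SP800-90A CTR DRBG: increment first, then encrypt */
	V = V + 1; out[i] = E(K, V);

	/* kernel ctr template: encrypt first, then increment */
	out[i] = E(K, V); V = V + 1;

	/*
	 * hence, after recomputing V, one extra increment aligns the
	 * ctr template's keystream with what the DRBG expects:
	 */
	crypto_inc(drbg->V, drbg_blocklen(drbg));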

The performance with ctr-aes-aesni on 64 bit is as follows -- I used my LRNG 
implementation for testing for which I already have performance measurements:

- generating smaller lengths (I tested up to 128 bytes) of random numbers 
(which is the vast majority of random numbers to be generated), the 
performance is actually worse, by 10 to 15%

- generating larger lengths (tested with 4096 bytes) of random numbers, the 
performance increases by 3%

Using ctr(aes-aesni) on 32 bit, the numbers are generally worse by 5 to 10%.

So, with these numbers, I would conclude that switching to the CTR mode is not 
worthwhile.

Ciao
Stephan