[PATCH v2 2/2] Crypto: Add SHA-3 tests in tcrypt

2016-06-16 Thread Raveendra Padasalagi
Added support for SHA-3 algorithm tests
in the tcrypt module, along with the related test vectors.

Signed-off-by: Raveendra Padasalagi 
---
 crypto/tcrypt.c  |  53 ++-
 crypto/testmgr.c |  40 ++
 crypto/testmgr.h | 125 +++
 3 files changed, 217 insertions(+), 1 deletion(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 579dce0..4675459 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -72,7 +72,8 @@ static char *check[] = {
"cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
"khazad", "wp512", "wp384", "wp256", "tnepres", "xeta",  "fcrypt",
"camellia", "seed", "salsa20", "rmd128", "rmd160", "rmd256", "rmd320",
-   "lzo", "cts", "zlib", NULL
+   "lzo", "cts", "zlib", "sha3-224", "sha3-256", "sha3-384", "sha3-512",
+   NULL
 };
 
 struct tcrypt_result {
@@ -1284,6 +1285,22 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
ret += tcrypt_test("crct10dif");
break;
 
+   case 48:
+   ret += tcrypt_test("sha3-224");
+   break;
+
+   case 49:
+   ret += tcrypt_test("sha3-256");
+   break;
+
+   case 50:
+   ret += tcrypt_test("sha3-384");
+   break;
+
+   case 51:
+   ret += tcrypt_test("sha3-512");
+   break;
+
case 100:
ret += tcrypt_test("hmac(md5)");
break;
@@ -1691,6 +1708,22 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
test_hash_speed("poly1305", sec, poly1305_speed_template);
if (mode > 300 && mode < 400) break;
 
+   case 322:
+   test_hash_speed("sha3-224", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 323:
+   test_hash_speed("sha3-256", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 324:
+   test_hash_speed("sha3-384", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 325:
+   test_hash_speed("sha3-512", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
case 399:
break;
 
@@ -1770,6 +1803,24 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
test_ahash_speed("rmd320", sec, generic_hash_speed_template);
if (mode > 400 && mode < 500) break;
 
+   case 418:
+   test_ahash_speed("sha3-224", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+   case 419:
+   test_ahash_speed("sha3-256", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+   case 420:
+   test_ahash_speed("sha3-384", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+
+   case 421:
+   test_ahash_speed("sha3-512", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+
case 499:
break;
 
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c727fb0..b773a56 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3659,6 +3659,46 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+   .alg = "sha3-224",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_224_tv_template,
+   .count = SHA3_224_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-256",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_256_tv_template,
+   .count = SHA3_256_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-384",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_384_tv_template,
+   .count = SHA3_384_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-512",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_512_tv_template,
+   .count = SHA3_512_TEST_VECTORS
+   }
+   }
+   }, {
.alg = "sha384",
  

[PATCH v2 0/2] Add SHA-3 algorithm and test vectors.

2016-06-16 Thread Raveendra Padasalagi
This patchset adds a software implementation of the
SHA-3 algorithm. It is based on the original implementation
posted in https://lwn.net/Articles/518415/, with
additional changes to match the padding rules specified
in the SHA-3 specification.

This patchset also includes changes in the tcrypt module
to add tests for the SHA-3 algorithms, along with the
related test vectors for basic testing.

The Broadcom Secure Processing Unit-2 (SPU-2) engine supports
offloading SHA-3 operations to hardware. In order to
add SHA-3 support to the SPU-2 driver, we first needed the
software implementation and test framework in place.

The patchset is based on the v4.7-rc1 tag and has been tested
on the Broadcom NorthStar2 SoC.

The patch set can be fetched from the iproc-sha3-v2 branch
of https://github.com/Broadcom/arm64-linux.git

Changes since v1:
 - Renamed MODULE_ALIAS to MODULE_ALIAS_CRYPTO
 - Added aliases for below cra_driver_name's
   sha3-224-generic
   sha3-256-generic
   sha3-384-generic
   sha3-512-generic

Jeff Garzik (1):
  Crypto: Add SHA-3 hash algorithm

Raveendra Padasalagi (1):
  Crypto: Add SHA-3 tests in tcrypt

 crypto/Kconfig|  10 ++
 crypto/Makefile   |   1 +
 crypto/sha3_generic.c | 300 ++
 crypto/tcrypt.c   |  53 -
 crypto/testmgr.c  |  40 +++
 crypto/testmgr.h  | 125 +
 include/crypto/sha3.h |  29 +
 7 files changed, 557 insertions(+), 1 deletion(-)
 create mode 100644 crypto/sha3_generic.c
 create mode 100644 include/crypto/sha3.h

-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Andrew Zaborowski
Hi Stephan,

On 16 June 2016 at 17:38, Stephan Mueller  wrote:
>> This isn't an issue with AF_ALG, I should have changed the subject
>> line perhaps.  In this case it's an inconsistency between some
>> implementations and the documentation (header comment).  It affects
>> users accessing the cipher through AF_ALG but also directly.
>
> As I want to send a new version of the algif_akcipher shortly now (hoping for
> an inclusion into 4.8), is there anything you see that I should prepare for
> regarding this issue? I.e. do you foresee a potential fix that would change the
> API or ABI of algif_akcipher?

No, as far as I understand algif_akcipher will do the right thing now
if the algorithm does the right thing.  It's only the two RSA drivers
that would need to align with the software RSA in what buffer length
they accept.

Best regards


Re: [PATCH 1/2] Crypto: Add SHA-3 hash algorithm

2016-06-16 Thread Stephan Mueller
On Thursday, 16 June 2016, 21:39:17, Raveendra Padasalagi wrote:

Hi Raveendra,

> I need some clarification to address your comment
> 
> "Shouldn't there be a priority here?"
> 
> What I know regarding the priority value for an algorithm
> is that the higher the priority value, the more likely that
> implementation is selected for execution.
> 
> For example, if the software implementation of an algorithm
> specifies a priority of 100 and a hardware driver implementation
> of the same algorithm uses a priority of 300, the hardware
> implementation is selected.
> 
> I just had a look at the priority values specified for other hash
> algorithms, and none of the software implementations specify any
> value, so it defaults to 0.
> 
> I think it's okay not to specify any priority value for the
> software implementation, as a hardware implementation can use a
> nonzero value if it needs higher priority.
> 
> What's your opinion?

You are fully correct.

To be in line with the other hashes, maybe let us leave it at 0. I was
thinking about "backend" ciphers that should never ever be selected (like the
Intel AES-NI examples), which should have a lower prio than any other cipher.
But then, they have unique cra_names, so it does not really matter :-)

Ciao
Stephan


Re: [PATCH 1/2] Crypto: Add SHA-3 hash algorithm

2016-06-16 Thread Raveendra Padasalagi
Hi Stephan,

Yes, I was initially thinking of putting it as FIPS, but I looked at
the existing "crypto/Kconfig" entries for other algorithms and found
them using DFIPS, so I kept this one the same :)

I need some clarification to address your comment

"Shouldn't there be a priority here?"

What I know regarding the priority value for an algorithm
is that the higher the priority value, the more likely that
implementation is selected for execution.

For example, if the software implementation of an algorithm
specifies a priority of 100 and a hardware driver implementation
of the same algorithm uses a priority of 300, the hardware
implementation is selected.

I just had a look at the priority values specified for other hash
algorithms, and none of the software implementations specify any
value, so it defaults to 0.

I think it's okay not to specify any priority value for the
software implementation, as a hardware implementation can use a
nonzero value if it needs higher priority.

What's your opinion?


Regards,
Raveendra

On Thu, Jun 16, 2016 at 9:10 PM, Stephan Mueller  wrote:
> On Thursday, 16 June 2016, 14:44:57, Raveendra Padasalagi wrote:
>
> Hi Raveendra,
>
>> > Typo DFIPS?
>>
>> It's not a typo; DFIPS here means Draft FIPS 202.
>> Do you want me to put it another way?
>
> I have never seen DFIPS. Besides, most FIPS standards are drafts (including of
> FIPS 140-2 :-) ), because it would require a signature from some ministry big-
> wig in the US govt to "release" it. Hence, I expect that it would retain its
> draft state for a long time :-)
>
> But if DFIPS is what you think is right, leave it :-)
>
> Ciao
> Stephan


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Stephan Mueller
Am Donnerstag, 16. Juni 2016, 16:59:01 schrieb Andrew Zaborowski:

Hi Andrew,

> Hi Stephan,
> 
> On 16 June 2016 at 10:05, Stephan Mueller  wrote:
> > Am Dienstag, 14. Juni 2016, 09:42:34 schrieb Andrew Zaborowski:
> > 
> > Hi Andrew,
> > 
> >> > I think we have agreed on dropping the length enforcement at the
> >> > interface
> >> > level.
> >> 
> >> Separately from this there's a problem with the user being unable to
> >> know if the algorithm is going to fail because of destination buffer
> >> size != key size (including kernel users).  For RSA, the qat
> >> implementation will fail while the software implementation won't.  For
> >> pkcs1pad(...) there's currently just one implementation but the user
> >> can't assume that.
> > 
> > If I understand your issue correctly, my initial code requiring the caller
> > to provide sufficient memory would have covered the issue, right?
> 
> This isn't an issue with AF_ALG, I should have changed the subject
> line perhaps.  In this case it's an inconsistency between some
> implementations and the documentation (header comment).  It affects
> users accessing the cipher through AF_ALG but also directly.

As I want to send a new version of the algif_akcipher shortly now (hoping for 
an inclusion into 4.8), is there anything you see that I should prepare for 
regarding this issue? I.e. do you foresee a potential fix that would change the 
API or ABI of algif_akcipher?
> 
> > If so, we seem
> > to have implementations which can handle shorter buffer sizes and some
> > which do not. Should a caller really try to figure the right buffer size
> > out? Why not requiring a mandatory buffer size and be done with it? I.e.
> > what is the gain to allow shorter buffer sizes (as pointed out by Mat)?
> 
> It's that client code doesn't need an intermediate layer with an
> additional buffer and a memcpy to provide a sensible API.  If the code
> wants to decrypt a 32-byte Digest Info structure with a given key or a
> reference to a key it makes no sense, logically or in terms of
> performance, for it to provide a key-sized buffer.
> 
> In the case of the userspace interface I think it's also rare for a
> recv() or read() on Linux to require a buffer larger than it's going
> to use, correct me if I'm wrong.  (I.e. fail if given a 32-byte
> buffer, return 32 bytes of data anyway.)  Turning your question around:
> is there a gain from requiring larger buffers?

That is a good one :-)

I have that check removed.

Ciao
Stephan


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Andrew Zaborowski
Hi Stephan,

On 16 June 2016 at 10:05, Stephan Mueller  wrote:
> On Tuesday, 14 June 2016, 09:42:34, Andrew Zaborowski wrote:
>
> Hi Andrew,
>
>> >
>> > I think we have agreed on dropping the length enforcement at the interface
>> > level.
>>
>> Separately from this there's a problem with the user being unable to
>> know if the algorithm is going to fail because of destination buffer
>> size != key size (including kernel users).  For RSA, the qat
>> implementation will fail while the software implementation won't.  For
>> pkcs1pad(...) there's currently just one implementation but the user
>> can't assume that.
>
> If I understand your issue correctly, my initial code requiring the caller to
> provide sufficient memory would have covered the issue, right?

This isn't an issue with AF_ALG, I should have changed the subject
line perhaps.  In this case it's an inconsistency between some
implementations and the documentation (header comment).  It affects
users accessing the cipher through AF_ALG but also directly.

> If so, we seem
> to have implementations which can handle shorter buffer sizes and some which
> do not. Should a caller really try to figure the right buffer size out? Why
> not requiring a mandatory buffer size and be done with it? I.e. what is the
> gain to allow shorter buffer sizes (as pointed out by Mat)?

It's that client code doesn't need an intermediate layer with an
additional buffer and a memcpy to provide a sensible API.  If the code
wants to decrypt a 32-byte Digest Info structure with a given key or a
reference to a key it makes no sense, logically or in terms of
performance, for it to provide a key-sized buffer.

In the case of the userspace interface I think it's also rare for a
recv() or read() on Linux to require a buffer larger than it's going
to use, correct me if I'm wrong.  (I.e. fail if given a 32-byte
buffer, return 32 bytes of data anyway.)  Turning your question around:
is there a gain from requiring larger buffers?

Best regards


Re: [PATCH 6/7] crypto: marvell: Adding load balancing between engines

2016-06-16 Thread Romain Perier

Hello,

On 15/06/2016 23:13, Boris Brezillon wrote:

On Wed, 15 Jun 2016 21:15:33 +0200
Romain Perier  wrote:


This commits adds support for fine grained load balancing on
multi-engine IPs. The engine is pre-selected based on its current load
and on the weight of the crypto request that is about to be processed.
The global crypto queue is also moved to each engine. These changes are


to the mv_cesa_engine object.


useful for preparing the code to support TDMA chaining between crypto
requests, because each tdma chain will be handled per engine.


These changes are required to allow chaining crypto requests at the DMA
level.


ack


diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index fbaae2f..02aa38f 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -89,6 +89,9 @@ static void mv_cesa_ablkcipher_std_step(struct 
ablkcipher_request *req)
size_t  len = min_t(size_t, req->nbytes - sreq->offset,
CESA_SA_SRAM_PAYLOAD_SIZE);

+   mv_cesa_adjust_op(engine, &sreq->op);
+   memcpy_toio(engine->sram, &sreq->op, sizeof(sreq->op));
+
len = sg_pcopy_to_buffer(req->src, creq->src_nents,
 engine->sram + CESA_SA_DATA_SRAM_OFFSET,
 len, sreq->offset);
@@ -167,12 +170,9 @@ mv_cesa_ablkcipher_std_prepare(struct ablkcipher_request 
*req)
  {
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
-   struct mv_cesa_engine *engine = sreq->base.engine;

sreq->size = 0;
sreq->offset = 0;
-   mv_cesa_adjust_op(engine, &sreq->op);
-   memcpy_toio(engine->sram, &sreq->op, sizeof(sreq->op));


Are these changes really related to this load balancing support?
AFAICT, it's something that could have been done earlier, and is not
dependent on the changes your introducing here, but maybe I'm missing
something.


Yeah, indeed. I suggest a separate commit for doing it. What do you think?




  }


[...]


  static int mv_cesa_ecb_aes_encrypt(struct ablkcipher_request *req)
diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index f7f84cc..5946a69 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -162,6 +162,15 @@ static void mv_cesa_ahash_std_step(struct ahash_request 
*req)
unsigned int new_cache_ptr = 0;
u32 frag_mode;
size_t  len;
+   unsigned int digsize;
+   int i;
+
+   mv_cesa_adjust_op(engine, &creq->op_tmpl);
+   memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));
+
+   digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
+   for (i = 0; i < digsize / 4; i++)
+   writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i));

if (creq->cache_ptr)
memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET,
@@ -265,11 +274,8 @@ static void mv_cesa_ahash_std_prepare(struct ahash_request 
*req)
  {
struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
struct mv_cesa_ahash_std_req *sreq = &creq->req.std;
-   struct mv_cesa_engine *engine = sreq->base.engine;

sreq->offset = 0;
-   mv_cesa_adjust_op(engine, &creq->op_tmpl);
-   memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));


Same as above: it doesn't seem related to the load balancing stuff.


It might be moved into this "separated commit" described above.

Thanks,
Romain
--
Romain Perier, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


Re: [PATCH 5/7] crypto: marvell: Adding a complete operation for async requests

2016-06-16 Thread Romain Perier

Hello,

On 15/06/2016 22:55, Boris Brezillon wrote:

+


Nit: not sure you should mix this cosmetic change with the other
changes.


Ok



You already have ivsize initialized.


+   memcpy_fromio(ablkreq->info, basereq->chain.last->data, ivsize);


Use memcpy() here.


good catch, for both.

Thanks,
Romain
--
Romain Perier, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


[PATCH] crypto: caam - replace deprecated EXTRA_CFLAGS

2016-06-16 Thread Tudor Ambarus
EXTRA_CFLAGS is still supported but its usage is deprecated.

Signed-off-by: Tudor Ambarus 
---
 drivers/crypto/caam/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 399ad55..3e9d3e1 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -2,7 +2,7 @@
 # Makefile for the CAAM backend and dependent components
 #
 ifeq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_DEBUG), y)
-   EXTRA_CFLAGS := -DDEBUG
+   ccflags-y := -DDEBUG
 endif
 
 ccflags-y += -I$(srctree)/crypto
-- 
1.8.3.1



Re: [PATCH 4/7] crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req

2016-06-16 Thread Boris Brezillon
On Thu, 16 Jun 2016 14:02:42 +0200
Romain Perier  wrote:

> > Now that the dma specific fields are part of the base request there's no
> > reason to keep this union.
> >
> > You can just put struct mv_cesa_req base; directly under struct
> > mv_cesa_ablkcipher_req, and move mv_cesa_ablkcipher_std_req fields in
> > mv_cesa_ablkcipher_req.  
> 
> 
> Well, I think that I might keep the changes related to mv_cesa_tdma_req
> in this commit (+ put struct mv_cesa_req base; directly under struct
> mv_cesa_ablkcipher_req) and move the changes related to
> mv_cesa_ablkcipher_std_req into another commit. What do you think?

After re-reading the code, I'm not sure the last part (moving
mv_cesa_ablkcipher_std_req fields into mv_cesa_ablkcipher_req) is a
good idea anymore.

So let's just kill the union, and move mv_cesa_ablkcipher_std_req and
mv_cesa_req base in mv_cesa_ablkcipher_req (you'll also have to remove
the base field from the mv_cesa_ablkcipher_std_req struct).

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


Re: [PATCH 4/7] crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req

2016-06-16 Thread Boris Brezillon
On Thu, 16 Jun 2016 14:02:42 +0200
Romain Perier  wrote:

> > Now that the dma specific fields are part of the base request there's no
> > reason to keep this union.
> >
> > You can just put struct mv_cesa_req base; directly under struct
> > mv_cesa_ablkcipher_req, and move mv_cesa_ablkcipher_std_req fields in
> > mv_cesa_ablkcipher_req.  
> 
> 
> Well, I think that I might keep the changes related to mv_cesa_tdma_req
> in this commit (+ put struct mv_cesa_req base; directly under struct
> mv_cesa_ablkcipher_req) and move the changes related to
> mv_cesa_ablkcipher_std_req into another commit. What do you think?

Sounds good.

> >  
> >>struct mv_cesa_ahash_dma_iter iter;
> >>struct mv_cesa_op_ctx *op = NULL;
> >>unsigned int frag_len;
> >> @@ -662,11 +661,6 @@ static int mv_cesa_ahash_req_init(struct 
> >> ahash_request *req, bool *cached)
> >>struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
> >>int ret;
> >>
> >> -  if (cesa_dev->caps->has_tdma)
> >> -  creq->req.base.type = CESA_DMA_REQ;
> >> -  else
> >> -  creq->req.base.type = CESA_STD_REQ;
> >> -  
> >
> > Hm, where is it decided now? I mean, I don't see this test anywhere
> > else in your patch, which means you're now always using standard mode.  
> 
> It has been replaced by mv_cesa_req_get_type() + initializing 
> chain.first to NULL in std_init. So, that's the same thing, no?

And that's exactly my point :-). When these fields are NULL the request
is a STD request...

> 
> >  
> >>creq->src_nents = sg_nents_for_len(req->src, req->nbytes);
> >>if (creq->src_nents < 0) {
> >>dev_err(cesa_dev->dev, "Invalid number of src SG");
> >> @@ -680,7 +674,7 @@ static int mv_cesa_ahash_req_init(struct ahash_request 
> >> *req, bool *cached)
> >>if (*cached)
> >>return 0;
> >>
> >> -  if (creq->req.base.type == CESA_DMA_REQ)
> >> +  if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)

... and here you're testing if it's a DMA request, which will always be
false, since mv_cesa_ahash_dma_req_init() is the function supposed to
fill the ->first and ->last fields.

> >
> > Should be
> >
> > if (cesa_dev->caps->has_tdma)
> >  
> >>ret = mv_cesa_ahash_dma_req_init(req);  
> 
> Why ? mv_cesa_req_get_type() tests mv_cesa_req->chain and returns a code 
> depending on its value. This value is initialized according to what is 
> set in "has_tdma"...

As explained above, it's not ;).


-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


Re: [PATCH 4/7] crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req

2016-06-16 Thread Romain Perier

Hello,

On 15/06/2016 22:42, Boris Brezillon wrote:

On Wed, 15 Jun 2016 21:15:31 +0200
Romain Perier  wrote:


Actually the only way to access the tdma chain is to use the 'req' union


Currently, ...


ok


Now that the dma specific fields are part of the base request there's no
reason to keep this union.

You can just put struct mv_cesa_req base; directly under struct
mv_cesa_ablkcipher_req, and move mv_cesa_ablkcipher_std_req fields in
mv_cesa_ablkcipher_req.



Well, I think that I might keep the changes related to mv_cesa_tdma_req
in this commit (+ put struct mv_cesa_req base; directly under struct
mv_cesa_ablkcipher_req) and move the changes related to
mv_cesa_ablkcipher_std_req into another commit. What do you think?




Initialize basereq earlier and pass it as the first argument of
mv_cesa_dma_process().


ok




@@ -174,9 +174,9 @@ static inline void
  mv_cesa_ablkcipher_dma_prepare(struct ablkcipher_request *req)
  {
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
-   struct mv_cesa_tdma_req *dreq = &creq->req.dma;
+   struct mv_cesa_req *dreq = &creq->req.base;


You named it basereq in mv_cesa_ablkcipher_step(). Try to be
consistent, no matter the name.


ack





-   mv_cesa_dma_prepare(dreq, dreq->base.engine);
+   mv_cesa_dma_prepare(dreq, dreq->engine);
  }

  static inline void
@@ -199,7 +199,7 @@ static inline void mv_cesa_ablkcipher_prepare(struct 
crypto_async_request *req,
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
creq->req.base.engine = engine;

-   if (creq->req.base.type == CESA_DMA_REQ)
+   if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)
mv_cesa_ablkcipher_dma_prepare(ablkreq);
else
mv_cesa_ablkcipher_std_prepare(ablkreq);
@@ -302,14 +302,13 @@ static int mv_cesa_ablkcipher_dma_req_init(struct 
ablkcipher_request *req,
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
  GFP_KERNEL : GFP_ATOMIC;
-   struct mv_cesa_tdma_req *dreq = &creq->req.dma;
+   struct mv_cesa_req *dreq = &creq->req.base;


Ditto.


ack



@@ -256,9 +256,9 @@ static int mv_cesa_ahash_std_process(struct ahash_request 
*req, u32 status)
  static inline void mv_cesa_ahash_dma_prepare(struct ahash_request *req)
  {
struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
-   struct mv_cesa_tdma_req *dreq = &creq->req.dma.base;
+   struct mv_cesa_req *dreq = &creq->req.base;


Ditto.


ack


@@ -340,7 +340,7 @@ static void mv_cesa_ahash_prepare(struct 
crypto_async_request *req,

creq->req.base.engine = engine;

-   if (creq->req.base.type == CESA_DMA_REQ)
+   if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)
mv_cesa_ahash_dma_prepare(ahashreq);
else
mv_cesa_ahash_std_prepare(ahashreq);
@@ -555,8 +555,7 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request 
*req)
struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
  GFP_KERNEL : GFP_ATOMIC;
-   struct mv_cesa_ahash_dma_req *ahashdreq = &creq->req.dma;
-   struct mv_cesa_tdma_req *dreq = &ahashdreq->base;
+   struct mv_cesa_req *dreq = &creq->req.base;


Ditto.


ack




struct mv_cesa_ahash_dma_iter iter;
struct mv_cesa_op_ctx *op = NULL;
unsigned int frag_len;
@@ -662,11 +661,6 @@ static int mv_cesa_ahash_req_init(struct ahash_request 
*req, bool *cached)
struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
int ret;

-   if (cesa_dev->caps->has_tdma)
-   creq->req.base.type = CESA_DMA_REQ;
-   else
-   creq->req.base.type = CESA_STD_REQ;
-


Hm, where is it decided now? I mean, I don't see this test anywhere
else in your patch, which means you're now always using standard mode.


It has been replaced by mv_cesa_req_get_type() + initializing 
chain.first to NULL in std_init. So, that's the same thing, no?





creq->src_nents = sg_nents_for_len(req->src, req->nbytes);
if (creq->src_nents < 0) {
dev_err(cesa_dev->dev, "Invalid number of src SG");
@@ -680,7 +674,7 @@ static int mv_cesa_ahash_req_init(struct ahash_request 
*req, bool *cached)
if (*cached)
return 0;

-   if (creq->req.base.type == CESA_DMA_REQ)
+   if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)


Should be

if (cesa_dev->caps->has_tdma)


ret = mv_cesa_ahash_dma_req_init(req);


Why ? mv_cesa_req_get_type() tests mv_cesa_req->chain and returns a code 
depending on its value. This value is initialized according to what is 
set in "has_tdma"...



Thanks,
Regards,
Romain
--
Romain Perier, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

Re: [PATCH] crypto: caam: fix misspelled upper_32_bits

2016-06-16 Thread Horia Ioan Geanta Neag
On 6/16/2016 12:04 PM, Arnd Bergmann wrote:
> An endianness fix mistakenly used higher_32_bits() instead of
> upper_32_bits(), and that doesn't exist:
> 
> drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
> drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of 
> function 'higher_32_bits' [-Werror=implicit-function-declaration]
>   *offset = cpu_to_caam_dma(ptr);
> 
> Signed-off-by: Arnd Bergmann 
> Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam 
> endianness")

Oops... thanks Arnd!
Reviewed-by: Horia Geantă 

> ---
>  drivers/crypto/caam/regs.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
> index 8c766cf9202c..b3c5016f6458 100644
> --- a/drivers/crypto/caam/regs.h
> +++ b/drivers/crypto/caam/regs.h
> @@ -183,10 +183,10 @@ static inline u64 rd_reg64(void __iomem *reg)
>  #ifdef CONFIG_SOC_IMX7D
>  #define cpu_to_caam_dma(value) \
>   (((u64)cpu_to_caam32(lower_32_bits(value)) << 32) | \
> -  (u64)cpu_to_caam32(higher_32_bits(value)))
> +   (u64)cpu_to_caam32(upper_32_bits(value)))
>  #define caam_dma_to_cpu(value) \
>   (((u64)caam32_to_cpu(lower_32_bits(value)) << 32) | \
> -  (u64)caam32_to_cpu(higher_32_bits(value)))
> +   (u64)caam32_to_cpu(upper_32_bits(value)))
>  #else
>  #define cpu_to_caam_dma(value) cpu_to_caam64(value)
>  #define caam_dma_to_cpu(value) caam64_to_cpu(value)
> 



Re: [PATCH 1/2] Crypto: Add SHA-3 hash algorithm

2016-06-16 Thread Raveendra Padasalagi
Hi Stephan,

Thanks for the review comments. I will address them in the next patch.
Please look at my reply below against each comment.

Regards,
Raveendra

On Wed, Jun 15, 2016 at 5:12 PM, Stephan Mueller  wrote:
> On Wednesday, 15 June 2016, 15:11:58, Raveendra Padasalagi wrote:
>
> Hi Raveendra,
>
>> From: Jeff Garzik 
>>
>> This patch adds the implementation of SHA3 algorithm
>> in software and it's based on original implementation
>> pushed in patch https://lwn.net/Articles/518415/ with
>> additional changes to match the padding rules specified
>> in SHA-3 specification.
>>
>> Signed-off-by: Jeff Garzik 
>> Signed-off-by: Raveendra Padasalagi 
>> ---
>>  crypto/Kconfig|  10 ++
>>  crypto/Makefile   |   1 +
>>  crypto/sha3_generic.c | 296 ++
>>  include/crypto/sha3.h |  29 +
>>  4 files changed, 336 insertions(+)
>>  create mode 100644 crypto/sha3_generic.c
>>  create mode 100644 include/crypto/sha3.h
>>
>> diff --git a/crypto/Kconfig b/crypto/Kconfig
>> index 1d33beb..83ee8cb 100644
>> --- a/crypto/Kconfig
>> +++ b/crypto/Kconfig
>> @@ -750,6 +750,16 @@ config CRYPTO_SHA512_SPARC64
>> SHA-512 secure hash standard (DFIPS 180-2) implemented
>> using sparc64 crypto instructions, when available.
>>
>> +config CRYPTO_SHA3
>> + tristate "SHA3 digest algorithm"
>> + select CRYPTO_HASH
>> + help
>> +   SHA-3 secure hash standard (DFIPS 202). It's based on
>
> Typo DFIPS?

It's not a typo; DFIPS here means Draft FIPS 202.
Do you want me to put it another way?

>> +   cryptographic sponge function family called Keccak.
>> +
>> +   References:
>> +   http://keccak.noekeon.org/
>> +
>>  config CRYPTO_TGR192
>>   tristate "Tiger digest algorithms"
>>   select CRYPTO_HASH
>> diff --git a/crypto/Makefile b/crypto/Makefile
>> index 4f4ef7e..0b82c47 100644
>> --- a/crypto/Makefile
>> +++ b/crypto/Makefile
>> @@ -61,6 +61,7 @@ obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
>>  obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
>>  obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
>>  obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
>> +obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
>>  obj-$(CONFIG_CRYPTO_WP512) += wp512.o
>>  obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
>>  obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
>> diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
>> new file mode 100644
>> index 000..162dfc3
>> --- /dev/null
>> +++ b/crypto/sha3_generic.c
>> @@ -0,0 +1,296 @@
>> +/*
>> + * Cryptographic API.
>> + *
>> + * SHA-3, as specified in
>> + * http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
>> + *
>> + * SHA-3 code by Jeff Garzik 
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License as published by the
>> + * Free Software Foundation; either version 2 of the License, or (at your
>> + * option) any later version.
>> + *
>> + */
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +
>> +#define KECCAK_ROUNDS 24
>> +
>> +#define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y))))
>> +
>> +static const u64 keccakf_rndc[24] = {
>> + 0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
>> + 0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
>> + 0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
>> + 0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
>> + 0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
>> + 0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
>> + 0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
>> + 0x8000000000008080, 0x0000000080000001, 0x8000000080008008
>> +};
>> +
>> +static const int keccakf_rotc[24] = {
>> + 1,  3,  6,  10, 15, 21, 28, 36, 45, 55, 2,  14,
>> + 27, 41, 56, 8,  25, 43, 62, 18, 39, 61, 20, 44
>> +};
>> +
>> +static const int keccakf_piln[24] = {
>> + 10, 7,  11, 17, 18, 3, 5,  16, 8,  21, 24, 4,
>> + 15, 23, 19, 13, 12, 2, 20, 14, 22, 9,  6,  1
>> +};
>> +
>> +/* update the state with given number of rounds */
>> +
>> +static void keccakf(u64 st[25])
>> +{
>> + int i, j, round;
>> + u64 t, bc[5];
>> +
>> + for (round = 0; round < KECCAK_ROUNDS; round++) {
>> +
>> + /* Theta */
>> + for (i = 0; i < 5; i++)
>> + bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15]
>> + ^ st[i + 20];
>> +
>> + for (i = 0; i < 5; i++) {
>> + t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1);
>> + for (j = 0; j < 25; j += 5)
>> + st[j + i] ^= t;
>> + }
>> +
>> + /* Rho Pi */
>> + t = st[1];
>> + for (i = 0; i < 24; i++) {
>> +

[PATCH] crypto: caam: fix misspelled upper_32_bits

2016-06-16 Thread Arnd Bergmann
An endianness fix mistakenly used higher_32_bits() instead of
upper_32_bits(), which doesn't exist:

drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of function 'higher_32_bits' [-Werror=implicit-function-declaration]
  *offset = cpu_to_caam_dma(ptr);

Signed-off-by: Arnd Bergmann 
Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam endianness")
---
 drivers/crypto/caam/regs.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
index 8c766cf9202c..b3c5016f6458 100644
--- a/drivers/crypto/caam/regs.h
+++ b/drivers/crypto/caam/regs.h
@@ -183,10 +183,10 @@ static inline u64 rd_reg64(void __iomem *reg)
 #ifdef CONFIG_SOC_IMX7D
 #define cpu_to_caam_dma(value) \
(((u64)cpu_to_caam32(lower_32_bits(value)) << 32) | \
-(u64)cpu_to_caam32(higher_32_bits(value)))
+ (u64)cpu_to_caam32(upper_32_bits(value)))
 #define caam_dma_to_cpu(value) \
(((u64)caam32_to_cpu(lower_32_bits(value)) << 32) | \
-(u64)caam32_to_cpu(higher_32_bits(value)))
+ (u64)caam32_to_cpu(upper_32_bits(value)))
 #else
 #define cpu_to_caam_dma(value) cpu_to_caam64(value)
 #define caam_dma_to_cpu(value) caam64_to_cpu(value)
-- 
2.9.0



Re: [PATCH 3/7] crypto: marvell: Copy IV vectors by DMA transfers for acipher requests

2016-06-16 Thread Gregory CLEMENT
Hi Romain,
 
 On Thu., June 16 2016, Romain Perier  wrote:

>
>>> diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
>>> index 74071e4..74b84bd 100644
>>> --- a/drivers/crypto/marvell/cesa.h
>>> +++ b/drivers/crypto/marvell/cesa.h
>>> @@ -275,6 +275,7 @@ struct mv_cesa_op_ctx {
>>>   #define CESA_TDMA_DUMMY   0
>>>   #define CESA_TDMA_DATA1
>>>   #define CESA_TDMA_OP  2
>>> +#define CESA_TDMA_IV   4
>>
>> Should be 3 and not 4: TDMA_TYPE is an enum, not a bit field.
>
> Ok
>
>>
>> Sometimes it's better to offend the < 80 characters rule than doing
>> funky stuff ;).
>
> I just wanted to make checkpatch happy :D

In this case you can use a temporary variable.


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com


Re: [PATCH 3/7] crypto: marvell: Copy IV vectors by DMA transfers for acipher requests

2016-06-16 Thread Romain Perier

Hello,

On 15/06/2016 22:07, Boris Brezillon wrote:

On Wed, 15 Jun 2016 21:15:30 +0200
Romain Perier  wrote:


Adding a TDMA descriptor at the end of the request for copying the
output IV vector via a DMA transfer. This is required for processing
cipher requests asynchroniously in chained mode, otherwise the content


  asynchronously


of the IV vector will be overwriten for each new finished request.


BTW, Not sure the term 'asynchronously' is appropriate here. The
standard (AKA non-DMA) processing is also asynchronous. The real reason
here is that you want to chain the requests and offload as much
processing as possible to the DMA and crypto engine. And as you
explained, this is only possible if we retrieve the updated IV using
DMA.



What do you think of the following description ?
"
Adding a TDMA descriptor at the end of the request for copying the
output IV vector via a DMA transfer. This is a good way of offloading
as much processing as possible to the DMA and the crypto engine.
This is also required for processing multiple cipher requests
in chained mode, otherwise the content of the IV vector would be
overwritten by the last processed request.
"

This point holds if multiple chained requests are processed via TDMA:
the content of the "global" IV output vector would be overwritten
by the last request.



diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index 74071e4..74b84bd 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -275,6 +275,7 @@ struct mv_cesa_op_ctx {
  #define CESA_TDMA_DUMMY   0
  #define CESA_TDMA_DATA1
  #define CESA_TDMA_OP  2
+#define CESA_TDMA_IV   4


Should be 3 and not 4: TDMA_TYPE is an enum, not a bit field.


Ok



Sometimes it's better to offend the < 80 characters rule than doing
funky stuff ;).


I just wanted to make checkpatch happy :D
Yeah, that's ugly, I agree. I will fix this.




+   memcpy_fromio(ablkreq->info, dreq->chain.last->data, ivsize);
return ret;
-
-   memcpy_fromio(ablkreq->info,
- engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET,
- crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(ablkreq)));
-
-   return 0;
+   }


Missing blank line.


ack




+   return mv_cesa_ablkcipher_std_process(ablkreq, status);


This version is more readable IMHO:

struct mv_cesa_tdma_req *dreq;
unsigned int ivsize;
int ret;

if (creq->req.base.type == CESA_STD_REQ)
return mv_cesa_ablkcipher_std_process(ablkreq, status);

ret = mv_cesa_dma_process(&creq->req.dma, status);
if (ret)
return ret;

dreq = &creq->req.dma;
ivsize =
crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(ablkreq));
memcpy_fromio(ablkreq->info, dreq->chain.last->data, ivsize);

return 0;



  static void mv_cesa_ablkcipher_step(struct crypto_async_request *req)
@@ -302,6 +307,7 @@ static int mv_cesa_ablkcipher_dma_req_init(struct ablkcipher_request *req,
struct mv_cesa_tdma_chain chain;
bool skip_ctx = false;
int ret;
+   unsigned int ivsize;

dreq->base.type = CESA_DMA_REQ;
dreq->chain.first = NULL;
@@ -360,6 +366,14 @@ static int mv_cesa_ablkcipher_dma_req_init(struct ablkcipher_request *req,

} while (mv_cesa_ablkcipher_req_iter_next_op(&iter));

+   /* Add output data for IV */
+   ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));
+   ret = mv_cesa_dma_add_iv_op(&chain, CESA_SA_CRYPT_IV_SRAM_OFFSET,
+   ivsize, CESA_TDMA_SRC_IN_SRAM, flags);
+
+   if (ret)
+   goto err_free_tdma;
+
dreq->chain = chain;

return 0;
diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
index d493714..88c87be 100644
--- a/drivers/crypto/marvell/tdma.c
+++ b/drivers/crypto/marvell/tdma.c
@@ -68,6 +68,9 @@ void mv_cesa_dma_cleanup(struct mv_cesa_tdma_req *dreq)
if (tdma->flags & CESA_TDMA_OP)


I realize this test is wrong.

It should be
type = tdma->flags & CESA_TDMA_TYPE_MSK;
if (type == CESA_TDMA_OP)


dma_pool_free(cesa_dev->dma->op_pool, tdma->op,
  le32_to_cpu(tdma->src));
+   else if (tdma->flags & CESA_TDMA_IV)


and here


I propose a separated commit to fix this problem. What do you think ?


else if (type == CESA_TDMA_IV)


+   dma_pool_free(cesa_dev->dma->iv_pool, tdma->data,
+ le32_to_cpu(tdma->dst));

tdma = tdma->next;
dma_pool_free(cesa_dev->dma->tdma_desc_pool, old_tdma,
@@ -120,6 +123,32 @@ mv_cesa_dma_add_desc(struct 

Re: [PATCH 2/7] crypto: marvell: Check engine is not already running when enabling a req

2016-06-16 Thread Romain Perier

Hello,

On 15/06/2016 21:37, Boris Brezillon wrote:


"
Add a BUG_ON() call when the driver tries to launch a crypto request
while the engine is still processing the previous one. This replaces
a silent system hang by a verbose kernel panic with the associated
backtrace to let the user know that something went wrong in the CESA
driver.
"


thanks




---
  drivers/crypto/marvell/cipher.c | 2 ++
  drivers/crypto/marvell/hash.c   | 2 ++
  drivers/crypto/marvell/tdma.c   | 2 ++
  3 files changed, 6 insertions(+)

diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index dcf1fce..8d0fabb 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -106,6 +106,8 @@ static void mv_cesa_ablkcipher_std_step(struct ablkcipher_request *req)

mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);


Nit: please put the '&' operator at the end of the first line and
align CESA_SA_CMD_EN_CESA_SA_ACCL0 on the open parenthesis.


Arf, ok I will fix this.



BUG_ON(readl(engine->regs + CESA_SA_CMD) &
   CESA_SA_CMD_EN_CESA_SA_ACCL0);


writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
  }

diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index 7ca2e0f..0fae351 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -237,6 +237,8 @@ static void mv_cesa_ahash_std_step(struct ahash_request *req)

mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);


Ditto.


ack




writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
  }

diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
index 7642798..d493714 100644
--- a/drivers/crypto/marvell/tdma.c
+++ b/drivers/crypto/marvell/tdma.c
@@ -53,6 +53,8 @@ void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq)
   engine->regs + CESA_SA_CFG);
writel_relaxed(dreq->chain.first->cur_dma,
   engine->regs + CESA_TDMA_NEXT_ADDR);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);


Ditto.

ack

Regards,
Romain
--
Romain Perier, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Stephan Mueller
On Tuesday, June 14, 2016, at 09:42:34, Andrew Zaborowski wrote:

Hi Andrew,

> > 
> > I think we have agreed on dropping the length enforcement at the interface
> > level.
> 
> Separately from this there's a problem with the user being unable to
> know if the algorithm is going to fail because of destination buffer
> size != key size (including kernel users).  For RSA, the qat
> implementation will fail while the software implementation won't.  For
> pkcs1pad(...) there's currently just one implementation but the user
> can't assume that.

If I understand your issue correctly, my initial code requiring the caller to 
provide sufficient memory would have covered the issue, right? If so, we seem 
to have implementations which can handle shorter buffer sizes and some which 
cannot. Should a caller really have to figure out the right buffer size? Why 
not require a mandatory buffer size and be done with it? I.e. what is the 
gain in allowing shorter buffer sizes (as pointed out by Mat)? So, bottom 
line, I am wondering whether we should keep the algif_akcipher code requiring 
a minimum buffer size.

If there is really a good argument for allowing shorter buffers, then I guess 
we need an in-kernel API call (which should be reported to user space) that 
gives us the smallest usable buffer size. I guess that call would only be 
valid after a setkey operation, as the output size depends on the key size. 
Instead of inventing a completely new API call, shouldn't 
crypto_akcipher_maxsize() be converted for this purpose? I requested that API 
call, back when the akcipher API was developed, explicitly for getting the 
minimum buffer size the caller needs to provide.

Ciao
Stephan