Re: [PATCH v5 2/3] crypto: kpp - Add DH software implementation

2016-05-30 Thread Herbert Xu
On Mon, May 09, 2016 at 10:40:40PM +0100, Salvatore Benedetto wrote:
>
> +static int dh_set_params(struct crypto_kpp *tfm, void *buffer,
> +  unsigned int len)
> +{
> + struct dh_ctx *ctx = dh_get_ctx(tfm);
> + struct dh_params *params = (struct dh_params *)buffer;
> +
> + if (unlikely(!buffer || !len))
> + return -EINVAL;

What's the point of len? It's never checked anywhere apart from
this non-zero check, which is pointless.  Just get rid of it.

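For what it's worth, a minimal sketch of what the helper reduces to once
len is dropped (dh_get_ctx() and the struct names follow the quoted patch;
the body is only indicative):

static int dh_set_params(struct crypto_kpp *tfm, void *buffer)
{
	struct dh_ctx *ctx = dh_get_ctx(tfm);
	struct dh_params *params = buffer;

	if (!buffer)
		return -EINVAL;

	/* ... validate and copy p and g from *params into *ctx ... */
	return 0;
}
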
Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v5 1/3] crypto: Key-agreement Protocol Primitives API (KPP)

2016-05-30 Thread Herbert Xu
On Mon, May 09, 2016 at 10:40:39PM +0100, Salvatore Benedetto wrote:
> Add a key-agreement protocol primitives (kpp) API which allows
> implementing the primitives required by protocols such as DH and ECDH.
> The API is composed mainly of the following functions:
>  * set_params() - allows the user to set the parameters known to
>    both parties involved in the key-agreement session
>  * set_secret() - allows the user to set their secret, also
>    referred to as their private key

Why can't we just have one function, set_secret or better yet setkey?

>  * generate_public_key() - generates the public key to be sent to
>    the other party involved in the key-agreement session. The
>    function has to be called after set_params() and set_secret()
>  * generate_secret() - generates the shared secret for the session

Ditto, we only need one operation and that is multiplication by the
secret.

I'm OK with you keeping them separate for kpp users so that they
don't have to explicitly provide G but please ensure that drivers
only have to implement one of them.
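Concretely, both operations are the same modular exponentiation with the
private key as the exponent; a rough sketch of that single primitive using
the kernel MPI helpers (the dh_ctx field names here are assumed, not taken
from the posted patch):

#include <linux/mpi.h>

struct dh_ctx {
	MPI p;		/* prime modulus */
	MPI g;		/* generator */
	MPI xa;		/* private key */
};

/*
 * val = base^xa mod p: with base = g this yields the public key, with
 * base = the peer's public value it yields the shared secret.
 */
static int dh_compute_value(const struct dh_ctx *ctx, MPI base, MPI val)
{
	return mpi_powm(val, base, ctx->xa, ctx->p);
}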

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH stable 3.16+] crypto: s5p-sss - Fix missed interrupts when working with 8 kB blocks

2016-05-30 Thread Herbert Xu
On Mon, May 30, 2016 at 12:09:28PM +0200, Krzysztof Kozlowski wrote:
> commit 79152e8d085fd64484afd473ef6830b45518acba upstream.
> 
> The tcrypt testing module on Exynos5422-based Odroid XU3/4 board failed on
> testing 8 kB size blocks:
> 
>   $ sudo modprobe tcrypt sec=1 mode=500
>   testing speed of async ecb(aes) (ecb-aes-s5p) encryption
>   test 0 (128 bit key, 16 byte blocks): 21971 operations in 1 seconds (351536 bytes)
>   test 1 (128 bit key, 64 byte blocks): 21731 operations in 1 seconds (1390784 bytes)
>   test 2 (128 bit key, 256 byte blocks): 21932 operations in 1 seconds (5614592 bytes)
>   test 3 (128 bit key, 1024 byte blocks): 21685 operations in 1 seconds (22205440 bytes)
>   test 4 (128 bit key, 8192 byte blocks):
> 
> This was caused by a race that led to a missed BRDMA_DONE ("Block cipher
> Receiving DMA") interrupt. The device starts processing the data in DMA mode
> immediately after the length of the DMA block is set: receiving (FCBRDMAL) or
> transmitting (FCBTDMAL). The driver sets these lengths from the interrupt
> handler through the s5p_set_dma_indata() function (or xxx_setdata()).
> 
> However, the interrupt handler first dealt with the receive buffer
> (dma-unmap old, dma-map new, set the receive block length, which starts the
> operation), then with the transmit buffer, and only then cleared the pending
> interrupts (FCINTPEND). Because of the time window between setting the
> receive buffer length and clearing the pending interrupts, the operation on
> the receive buffer could already have finished and the driver would miss the
> new interrupt.
> 
> The user manual for the Exynos5422 confirms in its example code that setting
> the DMA block lengths should be the last operation.
> 
> The tcrypt hang could also be observed in the following blocked-task dmesg:
> 
> INFO: task modprobe:258 blocked for more than 120 seconds.
>   Not tainted 4.6.0-rc4-next-20160419-5-g9eac8b7b7753-dirty #42
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> modprobe        D c06b09d8     0   258    256 0x
> [] (__schedule) from [] (schedule+0x40/0xac)
> [] (schedule) from [] (schedule_timeout+0x124/0x178)
> [] (schedule_timeout) from [] (wait_for_common+0xb8/0x144)
> [] (wait_for_common) from [] 
> (test_acipher_speed+0x49c/0x740 [tcrypt])
> [] (test_acipher_speed [tcrypt]) from [] 
> (do_test+0x2240/0x30ec [tcrypt])
> [] (do_test [tcrypt]) from [] (tcrypt_mod_init+0x48/0xa4 
> [tcrypt])
> [] (tcrypt_mod_init [tcrypt]) from [] 
> (do_one_initcall+0x3c/0x16c)
> [] (do_one_initcall) from [] (do_init_module+0x5c/0x1ac)
> [] (do_init_module) from [] (load_module+0x1a30/0x1d08)
> [] (load_module) from [] (SyS_finit_module+0x8c/0x98)
> [] (SyS_finit_module) from [] (ret_fast_syscall+0x0/0x3c)
> 
> Fixes: a49e490c7a8a ("crypto: s5p-sss - add S5PV210 advanced crypto engine support")
> Cc:  # 3.16+
> Signed-off-by: Krzysztof Kozlowski 
> Tested-by: Marek Szyprowski 
> Signed-off-by: Herbert Xu 
> [k.kozlowski: Backport to v3.16]

Acked-by: Herbert Xu 
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH-v3 0/5] random: replace urandom pool with a CRNG

2016-05-30 Thread Theodore Ts'o
On Mon, May 30, 2016 at 10:53:22AM -0700, Andi Kleen wrote:
> 
> It should work the same on larger systems; the solution scales
> naturally to lots of sockets. It's not clear it'll help enough on systems
> with a lot more cores per socket, like a Xeon Phi. But for now it should
> be good enough.

One change which I'm currently making is to use kmalloc_node() instead
of kmalloc() for the per-NUMA-node state, and I suspect *that* is going
to make quite a lot of difference on those systems where the ratio of
remote to local memory access times is larger (as I assume it probably
would be on really big systems).

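Roughly, reusing crng_node_pool and struct crng_state from the patch
under discussion (a sketch of the allocation loop only), that means:

	for (i = 0; i < num_nodes; i++) {
		crng = kmalloc_node(sizeof(struct crng_state),
				    GFP_KERNEL | __GFP_NOFAIL, i);
		initialize_crng(crng);
		crng_node_pool[i] = crng;
	}
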
- Ted


Re: [PATCH-v3 0/5] random: replace urandom pool with a CRNG

2016-05-30 Thread Andi Kleen
> In addition, on NUMA systems we make the CRNG state per-NUMA socket, to
> address the NUMA locking contention problem which Andi Kleen has been
> complaining about.  I'm not entirely sure this will work well on the
> crazy big SGI systems, but they are rare.  Whether they are rarer than

It should work the same on larger systems; the solution scales
naturally to lots of sockets. It's not clear it'll help enough on systems
with a lot more cores per socket, like a Xeon Phi. But for now it should
be good enough.

-Andi


Re: [PATCH 2/5] random: make /dev/urandom scalable for silly userspace programs

2016-05-30 Thread Theodore Ts'o
On Mon, May 30, 2016 at 08:03:59AM +0200, Stephan Mueller wrote:
> >  static int rand_initialize(void)
> >  {
> > +#ifdef CONFIG_NUMA
> > +   int i;
> > +   int num_nodes = num_possible_nodes();
> > +   struct crng_state *crng;
> > +
> > +   crng_node_pool = kmalloc(num_nodes * sizeof(void *),
> > +GFP_KERNEL|__GFP_NOFAIL);
> > +
> > +   for (i=0; i < num_nodes; i++) {
> > +   crng = kmalloc(sizeof(struct crng_state),
> > +  GFP_KERNEL | __GFP_NOFAIL);
> > +   initialize_crng(crng);
> 
> Could you please help me understand the logic flow here: The NUMA secondary 
> DRNGs are initialized before the input/blocking pools and the primary DRNG.
> 
> The initialization call uses get_random_bytes() for the secondary DRNGs. But
> since the primary DRNG is not yet initialized, where does get_random_bytes()
> get its randomness from?

Yeah, I screwed up.  The hunk of code starting with "crng_node_pool =
kmalloc(..." and the for loop afterwards should be moved to after
_initialize_crng().  Serves me right for not testing CONFIG_NUMA
before sending out the patches.

This is *not* about adding entropy; as you've noted, this is done very
early in boot up, before there has been any chance for any kind of
entropy to be collected in any of the input pools.  It's more of an
insurance policy: just in case, on some platform, assuming a bit's worth
of entropy per interrupt turns out to be hopelessly over-optimistic, at
least the starting point will be different across different kernels (and
maybe different boot times, but on the sorts of platforms where I'm most
concerned, there may not be a real-time clock and almost certainly not an
architectural hwrng in the CPU).

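A sketch of the corrected ordering (helper names are those from the
patch; exact signatures are assumed):

static int rand_initialize(void)
{
	/* existing pool setup and the primary CRNG init come first ... */
	_initialize_crng();

#ifdef CONFIG_NUMA
	/* ... and only then are the per-node secondary states seeded,
	 * since initialize_crng() calls get_random_bytes(). */
	{
		int i, num_nodes = num_possible_nodes();
		struct crng_state *crng;

		crng_node_pool = kmalloc(num_nodes * sizeof(void *),
					 GFP_KERNEL | __GFP_NOFAIL);
		for (i = 0; i < num_nodes; i++) {
			crng = kmalloc(sizeof(*crng),
				       GFP_KERNEL | __GFP_NOFAIL);
			initialize_crng(crng);
			crng_node_pool[i] = crng;
		}
	}
#endif
	return 0;
}
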
   - Ted


Re: [PATCH 2/2] crypto: omap: convert to the new cryptoengine API

2016-05-30 Thread LABBE Corentin
On Mon, May 30, 2016 at 10:20:01AM +0800, Baolin Wang wrote:
> On 18 May 2016 at 17:21, LABBE Corentin  wrote:
> > Since the crypto engine has been converted to use crypto_async_request
> > instead of ablkcipher_request, minor changes are needed to use it.
> 
> > I think you missed the conversion for the omap-des driver, please rebase
> > your patch. Beyond that, I think you did a good job on the crypto engine;
> > it would be good if Herbert applied it.
> 

Thanks
I have just rebased it and added omap-des to the list of changes.

Regards



[PATCH v2 1/2] crypto: engine: permit enqueuing of ahash_request

2016-05-30 Thread LABBE Corentin
The current crypto engine allows only ablkcipher_request to be enqueued,
which prevents its use by hardware that also handles hash algorithms.

This patch converts all ablkcipher_request references to the more
general crypto_async_request.

Signed-off-by: LABBE Corentin 
---
 crypto/crypto_engine.c  | 17 +++--
 include/crypto/algapi.h | 14 +++---
 2 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index a55c82d..b658cb8 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -19,7 +19,7 @@
 #define CRYPTO_ENGINE_MAX_QLEN 10
 
 void crypto_finalize_request(struct crypto_engine *engine,
-struct ablkcipher_request *req, int err);
+struct crypto_async_request *req, int err);
 
 /**
  * crypto_pump_requests - dequeue one request from engine queue to process
@@ -34,7 +34,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 bool in_kthread)
 {
struct crypto_async_request *async_req, *backlog;
-   struct ablkcipher_request *req;
unsigned long flags;
bool was_busy = false;
int ret;
@@ -82,9 +81,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
if (!async_req)
goto out;
 
-   req = ablkcipher_request_cast(async_req);
-
-   engine->cur_req = req;
+   engine->cur_req = async_req;
if (backlog)
backlog->complete(backlog, -EINPROGRESS);
 
@@ -142,7 +139,7 @@ static void crypto_pump_work(struct kthread_work *work)
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_request(struct crypto_engine *engine,
-   struct ablkcipher_request *req, bool need_pump)
+   struct crypto_async_request *req, bool need_pump)
 {
unsigned long flags;
int ret;
@@ -154,7 +151,7 @@ int crypto_transfer_request(struct crypto_engine *engine,
return -ESHUTDOWN;
}
 
-   ret = ablkcipher_enqueue_request(&engine->queue, req);
+   ret = crypto_enqueue_request(&engine->queue, req);
 
if (!engine->busy && need_pump)
queue_kthread_work(&engine->kworker, &engine->pump_requests);
@@ -171,7 +168,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request);
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_request_to_engine(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ struct crypto_async_request *req)
 {
return crypto_transfer_request(engine, req, true);
 }
@@ -184,7 +181,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
  * @err: error number
  */
 void crypto_finalize_request(struct crypto_engine *engine,
-struct ablkcipher_request *req, int err)
+struct crypto_async_request *req, int err)
 {
unsigned long flags;
bool finalize_cur_req = false;
@@ -208,7 +205,7 @@ void crypto_finalize_request(struct crypto_engine *engine,
spin_unlock_irqrestore(&engine->queue_lock, flags);
}
 
-   req->base.complete(&req->base, err);
+   req->complete(req, err);
 
queue_kthread_work(&engine->kworker, &engine->pump_requests);
 }
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index eeafd21..d720a2a 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -173,26 +173,26 @@ struct crypto_engine {
int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
int (*prepare_request)(struct crypto_engine *engine,
-  struct ablkcipher_request *req);
+  struct crypto_async_request *req);
int (*unprepare_request)(struct crypto_engine *engine,
-struct ablkcipher_request *req);
+struct crypto_async_request *req);
int (*crypt_one_request)(struct crypto_engine *engine,
-struct ablkcipher_request *req);
+struct crypto_async_request *req);
 
struct kthread_worker   kworker;
struct task_struct  *kworker_task;
struct kthread_work pump_requests;
 
void*priv_data;
-   struct ablkcipher_request   *cur_req;
+   struct crypto_async_request *cur_req;
 };
 
 int crypto_transfer_request(struct crypto_engine *engine,
-   struct ablkcipher_request *req, bool need_pump);
+   struct crypto_async_request *req, bool need_pump);
 int crypto_transfer_request_to_engine(struct crypto_engine *engine,
- struct ablkcipher_request *req);
+ struct crypto_async_request *req);

[PATCH v2 2/2] crypto: omap: convert to the new cryptoengine API

2016-05-30 Thread LABBE Corentin
Since the crypto engine has been converted to use crypto_async_request
instead of ablkcipher_request, minor changes are needed to use it.

Signed-off-by: LABBE Corentin 
---
 drivers/crypto/omap-aes.c | 10 ++
 drivers/crypto/omap-des.c | 10 ++
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index ce174d3..7007f13 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -519,7 +519,7 @@ static void omap_aes_finish_req(struct omap_aes_dev *dd, 
int err)
 
pr_debug("err: %d\n", err);
 
-   crypto_finalize_request(dd->engine, req, err);
+   crypto_finalize_request(dd->engine, &req->base, err);
 }
 
 static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
@@ -592,14 +592,15 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
 struct ablkcipher_request *req)
 {
if (req)
-   return crypto_transfer_request_to_engine(dd->engine, req);
+   return crypto_transfer_request_to_engine(dd->engine, 
&req->base);
 
return 0;
 }
 
 static int omap_aes_prepare_req(struct crypto_engine *engine,
-   struct ablkcipher_request *req)
+   struct crypto_async_request *areq)
 {
+   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_aes_dev *dd = omap_aes_find_dev(ctx);
@@ -642,8 +643,9 @@ static int omap_aes_prepare_req(struct crypto_engine 
*engine,
 }
 
 static int omap_aes_crypt_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ struct crypto_async_request *areq)
 {
+   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_aes_dev *dd = omap_aes_find_dev(ctx);
diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index 3eedb03..0da5686 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -506,7 +506,7 @@ static void omap_des_finish_req(struct omap_des_dev *dd, 
int err)
pr_debug("err: %d\n", err);
 
pm_runtime_put(dd->dev);
-   crypto_finalize_request(dd->engine, req, err);
+   crypto_finalize_request(dd->engine, &req->base, err);
 }
 
 static int omap_des_crypt_dma_stop(struct omap_des_dev *dd)
@@ -572,14 +572,15 @@ static int omap_des_handle_queue(struct omap_des_dev *dd,
 struct ablkcipher_request *req)
 {
if (req)
-   return crypto_transfer_request_to_engine(dd->engine, req);
+   return crypto_transfer_request_to_engine(dd->engine, 
&req->base);
 
return 0;
 }
 
 static int omap_des_prepare_req(struct crypto_engine *engine,
-   struct ablkcipher_request *req)
+   struct crypto_async_request *areq)
 {
+   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -620,8 +621,9 @@ static int omap_des_prepare_req(struct crypto_engine 
*engine,
 }
 
 static int omap_des_crypt_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ struct crypto_async_request *areq)
 {
+   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_des_dev *dd = omap_des_find_dev(ctx);
-- 
2.7.3



[PATCH v2 0/2] crypto: engine: permit enqueuing of ahash_request

2016-05-30 Thread LABBE Corentin
Hello

I wanted to use the crypto engine for my Allwinner crypto driver, but something
prevented me from using it: it cannot enqueue hash requests.
The first patch converts the crypto engine to permit enqueuing of ahash_requests.
The second patch converts the only driver using the crypto engine.

The second patch was only compile-tested, but the crypto engine with
hash support was tested on two different out-of-tree drivers (sun4i-ss and sun8i-ce).

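For drivers, the change means the engine callbacks now receive a
crypto_async_request and cast it back to the concrete type. Roughly (a
sketch with a made-up driver callback name, not part of these patches):

#include <crypto/internal/hash.h>

static int my_hash_one_request(struct crypto_engine *engine,
			       struct crypto_async_request *areq)
{
	struct ahash_request *req = ahash_request_cast(areq);

	/* program the hardware from req->src / req->nbytes here */
	return 0;
}

/* hooked up via engine->crypt_one_request = my_hash_one_request; */
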
Regards

Changes since v1:
- rebased on cryptodev for handling omap-des

LABBE Corentin (2):
  crypto: engine: permit enqueuing of ahash_request
  crypto: omap: convert to the new cryptoengine API

 crypto/crypto_engine.c| 17 +++--
 drivers/crypto/omap-aes.c | 10 ++
 include/crypto/algapi.h   | 14 +++---
 3 files changed, 20 insertions(+), 21 deletions(-)

-- 
2.7.3



[PATCH stable 3.16+] crypto: s5p-sss - Fix missed interrupts when working with 8 kB blocks

2016-05-30 Thread Krzysztof Kozlowski
commit 79152e8d085fd64484afd473ef6830b45518acba upstream.

The tcrypt testing module on Exynos5422-based Odroid XU3/4 board failed on
testing 8 kB size blocks:

$ sudo modprobe tcrypt sec=1 mode=500
testing speed of async ecb(aes) (ecb-aes-s5p) encryption
test 0 (128 bit key, 16 byte blocks): 21971 operations in 1 seconds (351536 bytes)
test 1 (128 bit key, 64 byte blocks): 21731 operations in 1 seconds (1390784 bytes)
test 2 (128 bit key, 256 byte blocks): 21932 operations in 1 seconds (5614592 bytes)
test 3 (128 bit key, 1024 byte blocks): 21685 operations in 1 seconds (22205440 bytes)
test 4 (128 bit key, 8192 byte blocks):

This was caused by a race that led to a missed BRDMA_DONE ("Block cipher
Receiving DMA") interrupt. The device starts processing the data in DMA mode
immediately after the length of the DMA block is set: receiving (FCBRDMAL) or
transmitting (FCBTDMAL). The driver sets these lengths from the interrupt
handler through the s5p_set_dma_indata() function (or xxx_setdata()).

However, the interrupt handler first dealt with the receive buffer
(dma-unmap old, dma-map new, set the receive block length, which starts the
operation), then with the transmit buffer, and only then cleared the pending
interrupts (FCINTPEND). Because of the time window between setting the
receive buffer length and clearing the pending interrupts, the operation on
the receive buffer could already have finished and the driver would miss the
new interrupt.

The user manual for the Exynos5422 confirms in its example code that setting
the DMA block lengths should be the last operation.

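The reordering boils down to the following handler shape (a sketch only,
not the driver code; register and helper names are those used in
s5p-sss.c):

static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id)
{
	struct s5p_aes_dev *dev = dev_id;
	bool set_dma_rx = false, set_dma_tx = false;
	unsigned long flags;
	u32 status;

	spin_lock_irqsave(&dev->lock, flags);

	status = SSS_READ(dev, FCINTSTAT);
	if (status & SSS_FCINTSTAT_BRDMAINT)
		set_dma_rx = s5p_aes_rx(dev);	/* remap buffers only */
	if (status & SSS_FCINTSTAT_BTDMAINT)
		set_dma_tx = s5p_aes_tx(dev);

	SSS_WRITE(dev, FCINTPEND, status);	/* ack the interrupts first */

	if (set_dma_rx)				/* ... start new DMA last */
		s5p_set_dma_indata(dev, dev->sg_src);
	if (set_dma_tx)
		s5p_set_dma_outdata(dev, dev->sg_dst);

	spin_unlock_irqrestore(&dev->lock, flags);

	return IRQ_HANDLED;
}
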
The tcrypt hang could also be observed in the following blocked-task dmesg:

INFO: task modprobe:258 blocked for more than 120 seconds.
  Not tainted 4.6.0-rc4-next-20160419-5-g9eac8b7b7753-dirty #42
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
modprobe        D c06b09d8     0   258    256 0x
[] (__schedule) from [] (schedule+0x40/0xac)
[] (schedule) from [] (schedule_timeout+0x124/0x178)
[] (schedule_timeout) from [] (wait_for_common+0xb8/0x144)
[] (wait_for_common) from [] 
(test_acipher_speed+0x49c/0x740 [tcrypt])
[] (test_acipher_speed [tcrypt]) from [] 
(do_test+0x2240/0x30ec [tcrypt])
[] (do_test [tcrypt]) from [] (tcrypt_mod_init+0x48/0xa4 
[tcrypt])
[] (tcrypt_mod_init [tcrypt]) from [] 
(do_one_initcall+0x3c/0x16c)
[] (do_one_initcall) from [] (do_init_module+0x5c/0x1ac)
[] (do_init_module) from [] (load_module+0x1a30/0x1d08)
[] (load_module) from [] (SyS_finit_module+0x8c/0x98)
[] (SyS_finit_module) from [] (ret_fast_syscall+0x0/0x3c)

Fixes: a49e490c7a8a ("crypto: s5p-sss - add S5PV210 advanced crypto engine support")
Cc:  # 3.16+
Signed-off-by: Krzysztof Kozlowski 
Tested-by: Marek Szyprowski 
Signed-off-by: Herbert Xu 
[k.kozlowski: Backport to v3.16]

---

Backporting to earlier kernels does not make much sense, as the driver
differs and testing would probably not be possible.
---
 drivers/crypto/s5p-sss.c | 53 +++-
 1 file changed, 39 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index 4197ad9a711b..8a46de76b834 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -313,43 +313,55 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct 
scatterlist *sg)
return err;
 }
 
-static void s5p_aes_tx(struct s5p_aes_dev *dev)
+/*
+ * Returns true if new transmitting (output) data is ready and its
+ * address+length have to be written to device (by calling
+ * s5p_set_dma_outdata()). False otherwise.
+ */
+static bool s5p_aes_tx(struct s5p_aes_dev *dev)
 {
int err = 0;
+   bool ret = false;
 
s5p_unset_outdata(dev);
 
if (!sg_is_last(dev->sg_dst)) {
err = s5p_set_outdata(dev, sg_next(dev->sg_dst));
-   if (err) {
+   if (err)
s5p_aes_complete(dev, err);
-   return;
-   }
-
-   s5p_set_dma_outdata(dev, dev->sg_dst);
+   else
+   ret = true;
} else {
s5p_aes_complete(dev, err);
 
dev->busy = true;
tasklet_schedule(&dev->tasklet);
}
+
+   return ret;
 }
 
-static void s5p_aes_rx(struct s5p_aes_dev *dev)
+/*
+ * Returns true if new receiving (input) data is ready and its
+ * address+length have to be written to device (by calling
+ * s5p_set_dma_indata()). False otherwise.
+ */
+static bool s5p_aes_rx(struct s5p_aes_dev *dev)
 {
int err;
+   bool ret = false;
 
s5p_unset_indata(dev);
 
if (!sg_is_last(dev->sg_src)) {
err = s5p_set_indata(dev, sg_next(dev->sg_src));
-   if (err) {
+   if (err)
s5p_aes_complete(dev, err);
-   return;
-   }
-
-   s5p_set_dma_indata(dev, dev->sg_src);
+   else
+   ret = true;