On 15 June 2016 at 15:39, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>>
>> But that means we should divide the bulk request into 512-byte size
>> requests and break up the mapped sg table for each
On 15 June 2016 at 14:49, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Wed, Jun 15, 2016 at 02:27:04PM +0800, Baolin Wang wrote:
>>
>> After some investigation, I still think we should divide the bulk
>> request from dm-crypt into small request (each one is 512
Hi Herbert,
On 8 June 2016 at 10:00, Baolin Wang <baolin.w...@linaro.org> wrote:
> Hi Herbert,
>
> On 7 June 2016 at 22:16, Herbert Xu <herb...@gondor.apana.org.au> wrote:
>> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>>> Now some cipher ha
Hi Herbert,
On 7 June 2016 at 22:16, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause
In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
hardware engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio to a scatterlist.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
block/blk-merge.c
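The mapping described above can be sketched in user space as follows. The structs are simplified stand-ins for struct bio_vec and struct scatterlist, and the function name is chosen for the sketch; the real blk_bio_map_sg() walks bio segments with the kernel's iterators and honors queue limits, which this sketch omits:

```c
/*
 * User-space sketch of the blk_bio_map_sg() idea: walk the segments of
 * one bio and fill a scatterlist, merging physically contiguous
 * segments into a single entry. The structs are simplified stand-ins,
 * not the real kernel definitions.
 */
struct seg {                     /* stand-in for struct bio_vec */
    unsigned long phys;          /* physical address of the segment */
    unsigned int len;            /* length in bytes */
};

struct sg_entry {                /* stand-in for struct scatterlist */
    unsigned long phys;
    unsigned int len;
};

/* Map nsegs bio segments into sgl; return the number of sg entries used. */
int bio_map_sg(const struct seg *segs, int nsegs, struct sg_entry *sgl)
{
    int nsg = 0;
    int i;

    for (i = 0; i < nsegs; i++) {
        if (nsg > 0 &&
            sgl[nsg - 1].phys + sgl[nsg - 1].len == segs[i].phys) {
            /* Contiguous with the previous entry: extend it. */
            sgl[nsg - 1].len += segs[i].len;
        } else {
            sgl[nsg].phys = segs[i].phys;
            sgl[nsg].len = segs[i].len;
            nsg++;
        }
    }
    return nsg;
}
```

Merging contiguous segments is what lets a 64KB bio reach the cipher engine as a short scatterlist instead of 128 separate 512-byte pieces.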
always 512
bytes, and thus increase the hardware engine's processing speed.
So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher can support
bulk mode.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
 include/crypto/skcipher.h | 7 +++
 include/linux/crypto.h    | 6 +++
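A minimal sketch of how such a capability flag is tested. CRYPTO_ALG_ASYNC matches the kernel's existing value, but the CRYPTO_ALG_BULK bit below is a hypothetical placeholder, since the snippet does not show the value the patch chose:

```c
/*
 * Sketch of a CRYPTO_ALG_BULK capability check. The real patch defines
 * the flag in include/linux/crypto.h and a helper in
 * include/crypto/skcipher.h; the bit value here is illustrative only.
 */
#define CRYPTO_ALG_ASYNC 0x00000080  /* existing kernel flag, for context */
#define CRYPTO_ALG_BULK  0x00100000  /* hypothetical bit for this sketch */

/* Return nonzero when an algorithm advertises bulk-mode support. */
int alg_supports_bulk(unsigned int cra_flags)
{
    return (cra_flags & CRYPTO_ALG_BULK) != 0;
}
```

dm-crypt can then branch on this check: send one bulk request covering the whole bio when the flag is set, or fall back to sector-by-sector requests when it is not.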
On a BeagleBone Black board with the ecb(aes) cipher and dd testing, using
64KB I/Os on an eMMC storage device, I saw about a 127% improvement in
throughput for encrypted writes and about a 206% improvement for encrypted
reads.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
drivers/md/dm-crypt.c
Since the ecb(aes) cipher does not need to handle an IV for encryption or
decryption, it can support bulk blocks when handling data. Thus this patch
adds the CRYPTO_ALG_BULK flag for the ecb(aes) cipher to improve the hardware
AES engine's efficiency.
Signed-off-by: Baolin Wang
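The property being relied on can be demonstrated with a toy 16-byte "block cipher" (XOR with a key, standing in for AES purely to show the structure; it is not real cryptography): because ECB has no chaining and no IV, encrypting two sectors in one bulk request produces the same bytes as two per-sector requests.

```c
#include <stddef.h>
#include <stdint.h>

#define TOY_BLK 16   /* toy cipher block size, like AES's 16 bytes */

/*
 * Toy ECB "encryption": each 16-byte block is transformed independently
 * (here, XOR with the key), with no chaining between blocks. This is a
 * stand-in for AES-ECB used only to demonstrate the bulk property.
 */
void toy_ecb(const uint8_t *key, const uint8_t *in, uint8_t *out, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++)
        out[i] = in[i] ^ key[i % TOY_BLK];  /* depends on block offset only */
}
```

With CBC this equivalence breaks: each 512-byte sector needs its own IV and chains block to block, which is exactly why such modes cannot simply be flagged as bulk-capable.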
the blk_bio_map_sg() function to avoid duplicated code.
- Move the sg table allocation to the crypt_ctr_cipher() function to avoid
memory allocation in the IO path.
- Remove the crypt_sg_entry() function.
- Other optimizations.
Baolin Wang (4):
block: Introduce blk_bio_map_sg() to map one bio
crypto
On 3 June 2016 at 22:35, Jens Axboe <ax...@kernel.dk> wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> In dm-crypt, it need to map one bio to scatterlist for improving the
>> hardware engine encryption efficiency. Thus this patch introduces the
>> blk_bio
On 3 June 2016 at 22:38, Jens Axboe <ax...@kernel.dk> wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> +/*
>> + * Map a bio to scatterlist, return number of sg entries setup. Caller
>> must
>> + * make sure sg can hold bio segments entries
On 3 June 2016 at 18:09, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 05:23:59PM +0800, Baolin Wang wrote:
>>
>> Assuming one 64K size bio coming, we can map the whole bio with one sg
>> table in crypt_convert_bulk_block() function. But if w
On 3 June 2016 at 16:21, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 04:15:28PM +0800, Baolin Wang wrote:
>>
>> Suppose the cbc(aes) algorithm, which can not be handled through bulk
>> interface, it need to map the data sector by sector.
On 3 June 2016 at 15:54, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 03:10:31PM +0800, Baolin Wang wrote:
>> On 3 June 2016 at 14:51, Herbert Xu <herb...@gondor.apana.org.au> wrote:
>> > On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin
On 3 June 2016 at 14:51, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>>
>> If we move the IV generation into the crypto API, we also can not
>> handle every algorithm with the bulk interface. Cause we al
Hi Herbert,
On 2 June 2016 at 16:26, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Fri, May 27, 2016 at 07:11:23PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause
bio map or request map.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
 block/blk-merge.c      | 36 +++-
 include/linux/blkdev.h |  2 ++
 2 files changed, 33 insertions(+), 5 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2
table allocation to crypt_ctr_cipher() function to avoid memory
allocation in the IO path.
- Remove the crypt_sg_entry() function.
- Other optimization.
Baolin Wang (4):
block: Introduce blk_bio_map_sg() to map one bio
crypto: Introduce CRYPTO_ALG_BULK flag
md: dm-crypt: Introduce the bulk
On a BeagleBone Black board with dd testing, using 64KB I/Os on an eMMC
storage device, I saw about a 127% improvement in throughput for encrypted
writes and about a 206% improvement for encrypted reads.
But this does not fit other modes, which need a different IV for each sector.
Signed-off-by: Baolin Wang
crypto_async_request *req);
> void crypto_finalize_request(struct crypto_engine *engine,
> -struct ablkcipher_request *req, int err);
> +struct crypto_async_request *req, int err);
> int crypto_engine_start(struct crypto_engine *engine);
> int crypto_engine_stop(struct crypto_engine *engine);
> struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
> --
> 2.7.3
>
Reviewed-by: Baolin Wang <baolin.w...@linaro.org>
--
Baolin.wang
Best Regards
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
t *req)
> + struct crypto_async_request *areq)
> {
> + struct ablkcipher_request *req = ablkcipher_request_cast(areq);
> struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct omap_des_dev *dd = omap_des_find_dev(ctx);
> --
> 2.7.3
>
Reviewed-by: Baolin Wang <baolin.w...@linaro.org>
--
Baolin.wang
Best Regards
On 18 May 2016 at 17:21, LABBE Corentin wrote:
> Since the crypto engine has been converted to use crypto_async_request
> instead of ablkcipher_request, minor changes are needed to use it.
I think you missed the conversion for the omap des driver; please rebase
your patch.
the crypt_sg_entry() function.
- Other optimization.
Baolin Wang (3):
block: Introduce blk_bio_map_sg() to map one bio
crypto: Introduce CRYPTO_ALG_BULK flag
md: dm-crypt: Introduce the bulk mode method when sending request
block/blk-merge.c | 36 +--
drivers/md/dm-crypt.c | 145
On a BeagleBone Black board, using 64KB I/Os on an eMMC storage device, I saw
about a 60% improvement in throughput for encrypted writes and about a 100%
improvement for encrypted reads. But this does not fit other modes, which
need a different IV for each sector.
Signed-off-by: Baolin Wang <baoli
On 27 May 2016 at 15:53, Milan Broz <gmazyl...@gmail.com> wrote:
> On 05/27/2016 09:04 AM, Baolin Wang wrote:
>> Hi Milan,
>>
>> On 27 May 2016 at 14:31, Milan Broz <gmazyl...@gmail.com> wrote:
>>> On 05/25/2016 08:12 AM, Baolin Wang wrote:
>>>
Hi Milan,
On 27 May 2016 at 14:31, Milan Broz <gmazyl...@gmail.com> wrote:
> On 05/25/2016 08:12 AM, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause these ciph
On 25 May 2016 at 16:52, Ming Lei wrote:
>> /*
>> + * map a bio to scatterlist, return number of sg entries setup.
>> + */
>> +int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
>> + struct scatterlist *sglist,
>> + struct
This patchset will check whether the cipher can support bulk mode; dm-crypt
will then choose how to send requests to the crypto layer according to the
cipher mode.
Looking forward to any comments and suggestions. Thanks.
Baolin Wang (3):
block: Introduce blk_bio_map_sg() to map one bio
crypto
Hi Robert,
On 5 April 2016 at 15:10, Baolin Wang <baolin.w...@linaro.org> wrote:
> Hi Robert,
>
> Sorry for the late reply.
>
> On 2 April 2016 at 23:00, Robert Jarzmik <robert.jarz...@free.fr> wrote:
>> Baolin Wang <baolin.w...@linaro.org> writes:
>&
On 18 April 2016 at 16:41, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 04:40:36PM +0800, Baolin Wang wrote:
>>
>> Simply to say, now there are many different hardware engines for
>> different vendors, some engines can supp
On 18 April 2016 at 16:31, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 04:28:46PM +0800, Baolin Wang wrote:
>>
>> What I meaning is if the xts engine can support bulk block, then the
>> engine driver can select bulk mode to do encryption,
On 18 April 2016 at 16:04, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 03:58:59PM +0800, Baolin Wang wrote:
>>
>> That depends on the hardware engine. Some cipher hardware engines
>> (like xts(aes) engine) can handle the intermediate v
On 18 April 2016 at 15:24, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 03:21:16PM +0800, Baolin Wang wrote:
>>
>> I don't think so, the dm-crypt can not send maximal requests at some
>> situations. For example, the 'cbc(aes)' cipher, i
On 18 April 2016 at 15:04, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 02:02:51PM +0800, Baolin Wang wrote:
>>
>> If the crypto hardware engine can support bulk data
>> encryption/decryption, so the engine driver can select bulk mode to
>
On 18 April 2016 at 13:45, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Mon, Apr 18, 2016 at 01:31:09PM +0800, Baolin Wang wrote:
>>
>> We've tried to do this in dm-crypt, but it failed.
>> The dm-crypt maintainer explained to me that I should optimize the
Hi Herbert,
On 15 April 2016 at 21:48, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Tue, Mar 15, 2016 at 03:47:58PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block by merging
>> requests
>> to increase the b
Hi Robert,
Sorry for the late reply.
On 2 April 2016 at 23:00, Robert Jarzmik <robert.jarz...@free.fr> wrote:
> Baolin Wang <baolin.w...@linaro.org> writes:
>
>> +/**
>> + * sg_is_contiguous - Check if the scatterlists are contiguous
>> + *
If the crypto engine can support bulk mode, contiguous requests from one
block can be merged into one request to be handled by the crypto engine.
If so, the crypto engine needs the sector number of each request to do the
merging.
Signed-off-by: Baolin Wang <baolin.w...@linaro.
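A sketch of the merging test this implies, with a hypothetical simplified request struct (the real code would work on crypto/DM request structures): a request can be appended to the previous one only when it starts at the very next sector and its data buffer is contiguous with the previous one.

```c
/*
 * Hypothetical, simplified per-request descriptor for the merging
 * sketch: where the data lands on disk (sector), how much there is
 * (len), and where it lives in memory (buf).
 */
struct bulk_req {
    unsigned long long sector;   /* starting 512-byte sector */
    unsigned int len;            /* length in bytes */
    unsigned long buf;           /* data buffer address */
};

/* b can be merged onto a when both disk position and memory are contiguous. */
int can_merge(const struct bulk_req *a, const struct bulk_req *b)
{
    return b->sector == a->sector + a->len / 512 &&
           b->buf == a->buf + a->len;
}
```

This is why the sector number has to travel with each request down to the crypto engine: without it, the engine cannot tell a continuation from an unrelated request that merely arrived next.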
increase the hardware engine processing speed.
This patch introduces some helper functions to merge requests and improve
hardware engine efficiency.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
crypto/ablk_helper.c | 135 ++
i
(SECTOR_MODE) for
initializing omap aes engine.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
 crypto/Kconfig            |   1 +
 crypto/crypto_engine.c    | 122 +++--
 drivers/crypto/omap-aes.c |   2 +-
 include/crypto/algapi.h   |  23 ++
is empty.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
include/linux/scatterlist.h | 33 +
lib/scatterlist.c | 69 +++
2 files changed, 102 insertions(+)
diff --git a/include/linux/scatterlist.h b/include
the sg_is_contiguous() function.
Baolin Wang (4):
scatterlist: Introduce some helper functions
crypto: Introduce some helper functions to help to merge requests
crypto: Introduce the bulk mode for crypto engine framework
md: dm-crypt: Initialize the sector number for one request
crypto
On 10 March 2016 at 17:42, Robert Jarzmik wrote:
>>
>>
>> Ah, sorry, that's a mistake. It should check as below:
>> static inline bool sg_is_contiguous(struct scatterlist *sga, struct
>> scatterlist *sgb)
>> {
>>         return (unsigned int)sg_virt(sga) + sga->length ==
>>                (unsigned int)sg_virt(sgb);
>> }
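A self-contained user-space version of that check, with a simplified stand-in struct: two scatterlist entries are contiguous when the first ends exactly where the second begins. Note the sketch compares addresses via uintptr_t rather than the quoted (unsigned int) cast, since casting a pointer to unsigned int truncates on 64-bit systems:

```c
#include <stdint.h>

/* Simplified stand-in for struct scatterlist: address plus length. */
struct sg {
    uintptr_t addr;          /* start of the buffer this entry maps */
    unsigned int length;     /* length in bytes */
};

/* True when sga's buffer ends exactly where sgb's buffer begins. */
int sg_contiguous(const struct sg *sga, const struct sg *sgb)
{
    return sga->addr + sga->length == sgb->addr;
}
```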
Hi Robert,
On 4 March 2016 at 03:15, Robert Jarzmik <robert.jarz...@free.fr> wrote:
> Baolin Wang <baolin.w...@linaro.org> writes:
>
>> @@ -212,6 +212,37 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>> }
>>
>> /**
>> + * sg_is_conti
scatterlists are contiguous, the 'sg_alloc_empty_table()' function to
allocate one empty sg table, the 'sg_add_sg_to_table()' function to add
one scatterlist into an sg table, and the 'sg_table_is_empty()' function
to check whether the sg table is empty.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
include
On 1 February 2016 at 22:33, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Tue, Jan 26, 2016 at 08:25:37PM +0800, Baolin Wang wrote:
>> Now block cipher engines need to implement and maintain their own
>> queue/thread
>> for processing requests, moreover currentl
. And this framework is patterned
on the SPI code and has worked out well there.
(https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/
drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0)
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
crypto/K
remove the 'queue' and 'queue_task' things in
omap aes driver.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
 drivers/crypto/Kconfig    |  1 +
 drivers/crypto/omap-aes.c | 97 -
 2 files changed, 45 insertions(+), 53 deletions(-)
diff
This patch introduces the crypto_queue_len() helper function to get the
current length of the crypto queue list.
Signed-off-by: Baolin Wang <baolin.w...@linaro.org>
---
 include/crypto/algapi.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/include/crypto/algapi.h b/i