Re: [PATCH 0/2] crypto: talitos: Add AES-XTS mode
On 3/3/2015 12:09 AM, Martin Hicks wrote:
> On Mon, Mar 02, 2015 at 03:37:28PM +0100, Milan Broz wrote:
>> If the crypto API allows encrypting more sectors in one run (handling
>> the IV internally) dmcrypt can be modified of course.
>>
>> But do not forget we can use another IV (not only a sequential number),
>> e.g. ESSIV with XTS as well (even if it doesn't make much sense, some
>> people are using it).
>
> Interesting, I'd not considered using XTS with an IV other than plain/64.
> The talitos hardware would not support aes/xts in any mode other than
> plain/plain64, I don't think... although perhaps you could push in an
> 8-byte IV and the hardware would interpret it as the sector #.

For talitos, there are two cases:

1. request data size <= data unit / sector size
   talitos can handle any IV / tweak scheme

2. request data size > sector size
   since talitos internally generates the IV for the next sector by
   incrementing the previous IV, only IV schemes that assign consecutive
   IVs to consecutive sectors will function correctly.

Let's not forget what the XTS standard says about IVs / tweak values:
- each data unit (sector in this case) is assigned a non-negative tweak
  value
- tweak values are assigned *consecutively*, starting from an arbitrary
  non-negative value
- there's no requirement for tweak values to be unpredictable

Thus, in theory, ESSIV is not supposed to be used with XTS mode: the IVs
for consecutive sectors are not consecutive values. In practice, as Milan
said, the combination is sometimes used. It functions correctly in SW
(and also in talitos, as long as request data size <= sector size).

>> Maybe the next question would be whether the dmcrypt sector IV
>> algorithms should be moved into the crypto API as well.
>> (But because I misused the dmcrypt IV hooks for some additional
>> operations for loopAES and old Truecrypt CBC mode, it is not so
>> simple...)

Speaking again with talitos in mind, there would be no advantage for this
hardware.

> Although larger requests are possible, only a single IV can be provided
> per request, so for algorithms like AES-CBC and dm-crypt, 512-byte IOs
> are the only option (short of switching to a 4kB block size).

Right; as explained above, talitos does what the XTS mode standard
mandates, so it won't work properly in the case of cbc-aes:essiv with
request sizes larger than the sector size.

Still, in SW at least, XTS could be improved to process more sectors in
one shot, regardless of the IV scheme used - as long as there's an
IV.next() function and both data size and sector size are known.

Horia

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH v2 2/5] crypto: talitos: Remove MD5_BLOCK_SIZE
This is properly defined in the md5 header file.

Signed-off-by: Martin Hicks <m...@bork.org>
---
 drivers/crypto/talitos.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index c49d977..89cf4d5 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -637,8 +637,6 @@ static void talitos_unregister_rng(struct device *dev)
 #define TALITOS_MAX_KEY_SIZE		96
 #define TALITOS_MAX_IV_LENGTH		16 /* max of AES_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE */
 
-#define MD5_BLOCK_SIZE 64
-
 struct talitos_ctx {
 	struct device *dev;
 	int ch;
@@ -2195,7 +2193,7 @@ static struct talitos_alg_template driver_algs[] = {
 		.halg.base = {
 			.cra_name = "md5",
 			.cra_driver_name = "md5-talitos",
-			.cra_blocksize = MD5_BLOCK_SIZE,
+			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				     CRYPTO_ALG_ASYNC,
 		}
@@ -2285,7 +2283,7 @@ static struct talitos_alg_template driver_algs[] = {
 		.halg.base = {
 			.cra_name = "hmac(md5)",
 			.cra_driver_name = "hmac-md5-talitos",
-			.cra_blocksize = MD5_BLOCK_SIZE,
+			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				     CRYPTO_ALG_ASYNC,
 		}
--
1.7.10.4
[PATCH v2 0/5] crypto: talitos: Add crypto async queue handling
I was testing dm-crypt performance with a Freescale P1022 board with a
recent kernel and was getting IO errors while testing with LUKS.
Investigation showed that all hardware FIFO slots were filling and the
driver was returning EAGAIN to the block layer, which is not an expected
response for an async crypto implementation.

The following patch series adds a few small fixes, and reworks the
submission path to use the crypto_queue mechanism to handle the request
backlog.

Changes since v1:
- Ran checkpatch.pl
- Split the path for submitting new requests vs. issuing backlogged
  requests.
- Avoid enqueuing a submitted request to the crypto queue unnecessarily.
- Fix return paths where CRYPTO_TFM_REQ_MAY_BACKLOG is not set.

Martin Hicks (5):
  crypto: talitos: Simplify per-channel initialization
  crypto: talitos: Remove MD5_BLOCK_SIZE
  crypto: talitos: Fix off-by-one and use all hardware slots
  crypto: talitos: Reorganize request submission data structures
  crypto: talitos: Add software backlog queue handling

 drivers/crypto/talitos.c | 240 +++++++++++++++++++++++++++-------------
 drivers/crypto/talitos.h |  44 ++++++--
 2 files changed, 177 insertions(+), 107 deletions(-)

--
1.7.10.4
[PATCH v2 1/5] crypto: talitos: Simplify per-channel initialization
There were multiple loops in a row, one for each separate step of the
initialization of the channels. Simplify to a single loop.

Signed-off-by: Martin Hicks <m...@bork.org>
---
 drivers/crypto/talitos.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 067ec21..c49d977 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2706,20 +2706,16 @@ static int talitos_probe(struct platform_device *ofdev)
 		goto err_out;
 	}
 
+	priv->fifo_len = roundup_pow_of_two(priv->chfifo_len);
+
 	for (i = 0; i < priv->num_channels; i++) {
 		priv->chan[i].reg = priv->reg + TALITOS_CH_STRIDE * (i + 1);
 		if (!priv->irq[1] || !(i & 1))
 			priv->chan[i].reg += TALITOS_CH_BASE_OFFSET;
-	}
 
-	for (i = 0; i < priv->num_channels; i++) {
 		spin_lock_init(&priv->chan[i].head_lock);
 		spin_lock_init(&priv->chan[i].tail_lock);
-	}
 
-	priv->fifo_len = roundup_pow_of_two(priv->chfifo_len);
-
-	for (i = 0; i < priv->num_channels; i++) {
 		priv->chan[i].fifo = kzalloc(sizeof(struct talitos_request) *
 					     priv->fifo_len, GFP_KERNEL);
 		if (!priv->chan[i].fifo) {
@@ -2727,11 +2723,10 @@ static int talitos_probe(struct platform_device *ofdev)
 			err = -ENOMEM;
 			goto err_out;
 		}
-	}
 
-	for (i = 0; i < priv->num_channels; i++)
 		atomic_set(&priv->chan[i].submit_count,
 			   -(priv->chfifo_len - 1));
+	}
 
 	dma_set_mask(dev, DMA_BIT_MASK(36));
--
1.7.10.4
[PATCH v2 3/5] crypto: talitos: Fix off-by-one and use all hardware slots
The submission count was off by one.

Signed-off-by: Martin Hicks <m...@bork.org>
---
 drivers/crypto/talitos.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 89cf4d5..7709805 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2722,8 +2722,7 @@ static int talitos_probe(struct platform_device *ofdev)
 			goto err_out;
 		}
 
-		atomic_set(&priv->chan[i].submit_count,
-			   -(priv->chfifo_len - 1));
+		atomic_set(&priv->chan[i].submit_count, -priv->chfifo_len);
 	}
 
 	dma_set_mask(dev, DMA_BIT_MASK(36));
--
1.7.10.4
[PATCH v2 4/5] crypto: talitos: Reorganize request submission data structures
This is preparatory work for moving to using the crypto async queue
handling code. A talitos_request structure is buried into each
talitos_edesc so that when talitos_submit() is called, everything
required to defer the submission to the hardware is contained within
talitos_edesc.

Signed-off-by: Martin Hicks <m...@bork.org>
---
 drivers/crypto/talitos.c | 95 +++++++++++++++++++---------------------
 drivers/crypto/talitos.h | 41 ++++++++---------
 2 files changed, 66 insertions(+), 70 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 7709805..b0c85ce 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -186,22 +186,16 @@ static int init_device(struct device *dev)
  * talitos_submit - submits a descriptor to the device for processing
  * @dev:	the SEC device to be used
  * @ch:		the SEC device channel to be used
- * @desc:	the descriptor to be processed by the device
- * @callback:	whom to call when processing is complete
- * @context:	a handle for use by caller (optional)
+ * @edesc:	the descriptor to be processed by the device
  *
  * desc must contain valid dma-mapped (bus physical) address pointers.
  * callback must check err and feedback in descriptor header
  * for device processing status.
  */
-int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
-		   void (*callback)(struct device *dev,
-				    struct talitos_desc *desc,
-				    void *context, int error),
-		   void *context)
+int talitos_submit(struct device *dev, int ch, struct talitos_edesc *edesc)
 {
 	struct talitos_private *priv = dev_get_drvdata(dev);
-	struct talitos_request *request;
+	struct talitos_request *request = &edesc->req;
 	unsigned long flags;
 	int head;
 
@@ -214,19 +208,15 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
 	}
 
 	head = priv->chan[ch].head;
-	request = &priv->chan[ch].fifo[head];
-
-	/* map descriptor and save caller data */
-	request->dma_desc = dma_map_single(dev, desc, sizeof(*desc),
+	request->dma_desc = dma_map_single(dev, request->desc,
+					   sizeof(*request->desc),
 					   DMA_BIDIRECTIONAL);
-	request->callback = callback;
-	request->context = context;
 
 	/* increment fifo head */
 	priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
 	smp_wmb();
-	request->desc = desc;
+	priv->chan[ch].fifo[head] = request;
 
 	/* GO! */
 	wmb();
@@ -247,15 +237,16 @@ EXPORT_SYMBOL(talitos_submit);
 static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 {
 	struct talitos_private *priv = dev_get_drvdata(dev);
-	struct talitos_request *request, saved_req;
+	struct talitos_request *request;
 	unsigned long flags;
 	int tail, status;
 
 	spin_lock_irqsave(&priv->chan[ch].tail_lock, flags);
 
 	tail = priv->chan[ch].tail;
-	while (priv->chan[ch].fifo[tail].desc) {
-		request = &priv->chan[ch].fifo[tail];
+	while (priv->chan[ch].fifo[tail]) {
+		request = priv->chan[ch].fifo[tail];
+
 		status = 0;
 
 		/* descriptors with their done bits set don't get the error */
 		rmb();
@@ -271,14 +262,9 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 				 sizeof(struct talitos_desc),
 				 DMA_BIDIRECTIONAL);
 
-		/* copy entries so we can call callback outside lock */
-		saved_req.desc = request->desc;
-		saved_req.callback = request->callback;
-		saved_req.context = request->context;
-
 		/* release request entry in fifo */
 		smp_wmb();
-		request->desc = NULL;
+		priv->chan[ch].fifo[tail] = NULL;
 
 		/* increment fifo tail */
 		priv->chan[ch].tail = (tail + 1) & (priv->fifo_len - 1);
@@ -287,8 +273,8 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 
 		atomic_dec(&priv->chan[ch].submit_count);
 
-		saved_req.callback(dev, saved_req.desc, saved_req.context,
-				   status);
+		request->callback(dev, request->desc, request->context, status);
+
 		/* channel may resume processing in single desc error case */
 		if (error && !reset_ch && status == error)
 			return;
@@ -352,7 +338,8 @@ static u32 current_desc_hdr(struct device *dev, int ch)
 	tail = priv->chan[ch].tail;
 
 	iter = tail;
-	while (priv->chan[ch].fifo[iter].dma_desc != cur_desc) {
+	while (priv->chan[ch].fifo[iter] +
[PATCH v2 5/5] crypto: talitos: Add software backlog queue handling
I was running into situations where the hardware FIFO was filling up, and
the code was returning EAGAIN to dm-crypt and just dropping the submitted
crypto request.

This adds support in talitos for a software backlog queue. When requests
can't be queued to the hardware immediately, EBUSY is returned. The
queued requests are dispatched to the hardware in received order as
hardware FIFO slots become available.

Signed-off-by: Martin Hicks <m...@bork.org>
---
 drivers/crypto/talitos.c | 135 +++++++++++++++++++++++++++++++--------
 drivers/crypto/talitos.h |   3 ++
 2 files changed, 110 insertions(+), 28 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index b0c85ce..bb7fba0 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -182,55 +182,118 @@ static int init_device(struct device *dev)
 	return 0;
 }
 
-/**
- * talitos_submit - submits a descriptor to the device for processing
- * @dev:	the SEC device to be used
- * @ch:		the SEC device channel to be used
- * @edesc:	the descriptor to be processed by the device
- *
- * desc must contain valid dma-mapped (bus physical) address pointers.
- * callback must check err and feedback in descriptor header
- * for device processing status.
- */
-int talitos_submit(struct device *dev, int ch, struct talitos_edesc *edesc)
+/* Dispatch 'request' if provided, otherwise a backlogged request */
+static int __talitos_handle_queue(struct device *dev, int ch,
+				  struct talitos_edesc *edesc,
+				  unsigned long *irq_flags)
 {
 	struct talitos_private *priv = dev_get_drvdata(dev);
-	struct talitos_request *request = &edesc->req;
-	unsigned long flags;
+	struct talitos_request *request;
+	struct crypto_async_request *areq;
 	int head;
+	int ret = -EINPROGRESS;
 
-	spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
-
-	if (!atomic_inc_not_zero(&priv->chan[ch].submit_count)) {
+	if (!atomic_inc_not_zero(&priv->chan[ch].submit_count))
 		/* h/w fifo is full */
-		spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
-		return -EAGAIN;
+		return -EBUSY;
+
+	if (!edesc) {
+		/* Dequeue the oldest request */
+		areq = crypto_dequeue_request(&priv->chan[ch].queue);
+		request = container_of(areq, struct talitos_request, base);
+	} else {
+		request = &edesc->req;
 	}
 
-	head = priv->chan[ch].head;
 	request->dma_desc = dma_map_single(dev, request->desc,
 					   sizeof(*request->desc),
 					   DMA_BIDIRECTIONAL);
 
 	/* increment fifo head */
+	head = priv->chan[ch].head;
 	priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
-	smp_wmb();
-	priv->chan[ch].fifo[head] = request;
+	spin_unlock_irqrestore(&priv->chan[ch].head_lock, *irq_flags);
+
+	/*
+	 * Mark a backlogged request as in-progress.
+	 */
+	if (!edesc) {
+		areq = request->context;
+		areq->complete(areq, -EINPROGRESS);
+	}
+
+	spin_lock_irqsave(&priv->chan[ch].head_lock, *irq_flags);
 
 	/* GO! */
+	priv->chan[ch].fifo[head] = request;
 	wmb();
 	out_be32(priv->chan[ch].reg + TALITOS_FF,
 		 upper_32_bits(request->dma_desc));
 	out_be32(priv->chan[ch].reg + TALITOS_FF_LO,
 		 lower_32_bits(request->dma_desc));
+
+	return ret;
+}
+
+/**
+ * talitos_submit - performs submissions of a new descriptors
+ *
+ * @dev:	the SEC device to be used
+ * @ch:		the SEC device channel to be used
+ * @edesc:	the request to be processed by the device
+ *
+ * edesc->req must contain valid dma-mapped (bus physical) address pointers.
+ * callback must check err and feedback in descriptor header
+ * for device processing status upon completion.
+ */
+int talitos_submit(struct device *dev, int ch, struct talitos_edesc *edesc)
+{
+	struct talitos_private *priv = dev_get_drvdata(dev);
+	struct talitos_request *request = &edesc->req;
+	unsigned long flags;
+	int ret = -EINPROGRESS;
+
+	spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+	if (priv->chan[ch].queue.qlen) {
+		/*
+		 * There are backlogged requests.  Just queue this new request
+		 * and dispatch the oldest backlogged request to the hardware.
+		 */
+		crypto_enqueue_request(&priv->chan[ch].queue,
+				       &request->base);
+		__talitos_handle_queue(dev, ch, NULL, &flags);
+		ret = -EBUSY;
+	} else {
+		ret = __talitos_handle_queue(dev, ch, edesc, &flags);
+		if (ret == -EBUSY)
+			/* Hardware FIFO is full */
Re: [PATCH 0/3] fix some CAAM warnings.
On 3/3/2015 8:50 AM, yanjiang@windriver.com wrote:
> From: Yanjiang Jin <yanjiang@windriver.com>
>
> Hi,
>
> This patch series fixes some CAAM compile and runtime warnings. The
> patches 0001 and 0002 are the same as in v1.
>
> I have tested this on an fsl-p5020ds board using upstream 4.0.0-rc1+
> with the below configs:
>
> CONFIG_DMA_API_DEBUG=y
> # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
>
> Change log:
> v2:
> Abandon the v1 patch
> 0003-crypto-caamhash-add-two-missed-dma_mapping_error.patch.
> Replace the v1 patch
> 0004-crypto-caamhash-replace-kmalloc-with-kzalloc.patch with the new
> patch 0003-crypto-caamhash-fix-uninitialized-edesc-sec4_sg_byte.patch.
>
> Yanjiang Jin (3):
>   crypto: caam: fix some compile warnings
>   crypto: caam_rng: fix rng_unmap_ctx's DMA_UNMAP size problem
>   crypto: caamhash: fix uninitialized edesc->sec4_sg_bytes field
>
>  drivers/crypto/caam/caamhash.c   | 1 +
>  drivers/crypto/caam/caamrng.c    | 4 ++--
>  drivers/crypto/caam/sg_sw_sec4.h | 8 ++++----
>  3 files changed, 7 insertions(+), 6 deletions(-)

For the series:
Reviewed-by: Horia Geanta <horia.gea...@freescale.com>

Thanks!
Horia
Re: [PATCH 2/3] crypto: caam_rng: fix rng_unmap_ctx's DMA_UNMAP size problem
On Tue, 3 Mar 2015 14:50:52 +0800
yanjiang@windriver.com wrote:

> -	dma_unmap_single(jrdev, ctx->sh_desc_dma, DESC_RNG_LEN,
> -			 DMA_TO_DEVICE);
> +	dma_unmap_single(jrdev, ctx->sh_desc_dma,
> +			 desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);

alignment: the 'd' in desc_bytes should fall directly under the 'j' in
jrdev.

Also, DESC_RNG_LEN should be corrected to (4 * CAAM_CMD_SZ).

Kim
Re: [PATCH 1/3] crypto: caam: fix some compile warnings
On Tue, 3 Mar 2015 14:50:51 +0800
yanjiang@windriver.com wrote:

> This commit is to avoid the below warnings:
>
> drivers/crypto/caam/sg_sw_sec4.h:88:12: warning: 'dma_map_sg_chained'
> defined but not used [-Wunused-function]
>  static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>             ^
> drivers/crypto/caam/sg_sw_sec4.h:104:12: warning: 'dma_unmap_sg_chained'
> defined but not used [-Wunused-function]
>  static int dma_unmap_sg_chained(struct device *dev,
>             ^

I'm not seeing these warnings - both caamalg.c and caamhash.c use those
functions fine.

> -static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
> +static inline int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>  			      unsigned int nents, enum dma_data_direction dir,
>  			      bool chained)

not to mention this isn't how to fix a defined but not used warning:
marking the functions inline results in different compiler output.

NACK from me.

Kim
Re: [PATCH v2 5/5] crypto: talitos: Add software backlog queue handling
On Tue, 3 Mar 2015 08:21:37 -0500
Martin Hicks <m...@bork.org> wrote:

> @@ -1170,6 +1237,8 @@ static struct talitos_edesc *talitos_edesc_alloc(struct device *dev,
>  				     edesc->dma_len, DMA_BIDIRECTIONAL);
>  	edesc->req.desc = &edesc->desc;
> +	/* A copy of the crypto_async_request to use the crypto_queue backlog */
> +	memcpy(&edesc->req.base, areq, sizeof(struct crypto_async_request));

this seems backward, or, at least can be done more efficiently IMO:
talitos_cra_init should set the tfm's reqsize so the rest of the driver
can wholly embed its talitos_edesc (which should also wholly encapsulate
its talitos_request (i.e., not via a pointer)) into the crypto API's
request handle allocation.

This would absorb and eliminate the talitos_edesc kmallocs and frees,
the above memcpy, and replace the container_of after the
crypto_dequeue_request with an offset_of, right?

When scatter-gather buffers are needed, we can assume a slower path and
make them do their own allocations, since their sizes vary depending on
each request. Of course, a pointer to those allocations would need to be
retained somewhere in the request handle.

The only potential problem is getting the crypto API to set the GFP_DMA
flag in the allocation request, but presumably a CRYPTO_TFM_REQ_DMA
crt_flag can be made to handle that.

Kim
Re: [PATCH 1/3] crypto: caam: fix some compile warnings
On 2015年03月04日 10:32, yjin wrote:
> On 2015年03月04日 02:59, Kim Phillips wrote:
>> On Tue, 3 Mar 2015 14:50:51 +0800
>> yanjiang@windriver.com wrote:
>>
>>> This commit is to avoid the below warnings:
>>>
>>> drivers/crypto/caam/sg_sw_sec4.h:88:12: warning: 'dma_map_sg_chained'
>>> defined but not used [-Wunused-function]
>>>  static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>>>             ^
>>> drivers/crypto/caam/sg_sw_sec4.h:104:12: warning: 'dma_unmap_sg_chained'
>>> defined but not used [-Wunused-function]
>>>  static int dma_unmap_sg_chained(struct device *dev,
>>>             ^
>>
>> I'm not seeing these warnings - both caamalg.c and caamhash.c use
>> those functions fine.
>
> As you said, both caamalg.c and caamhash.c use those functions, so no
> warning is reported. But if a new file just wants to include
> sg_sw_sec4.h and doesn't want to use these functions, the above
> warnings will appear. We can find an example in Freescale SDK 1.6:
> caampkc.c includes pkc_desc.h, pkc_desc.h includes sg_sw_sec4.h, but
> caampkc.c doesn't call those functions.
>
> Without my patch, every file which includes sg_sw_sec4.h must call
> these two functions in the future; I don't think that is a good idea.
>
>> not to mention this isn't how to fix a defined but not used warning:
>> marking the functions inline results in different compiler output.
>>
>> NACK from me.

An alternative is moving the definitions to a .c file, but I don't think
it would be fundamentally different.

I know I am fixing a potential error which doesn't exist now, so it may
seem useless for the current upstream version, and we can abandon my
patch. But I still think the current implementation places unnecessary
restrictions on its users.

Thanks!
Yanjiang
Re: [PATCH 2/3] crypto: caam_rng: fix rng_unmap_ctx's DMA_UNMAP size problem
On 2015年03月04日 03:31, Kim Phillips wrote:
> On Tue, 3 Mar 2015 14:50:52 +0800
> yanjiang@windriver.com wrote:
>
>> -	dma_unmap_single(jrdev, ctx->sh_desc_dma, DESC_RNG_LEN,
>> -			 DMA_TO_DEVICE);
>> +	dma_unmap_single(jrdev, ctx->sh_desc_dma,
>> +			 desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);
>
> alignment: the 'd' in desc_bytes should fall directly under the 'j' in
> jrdev.
>
> Also, DESC_RNG_LEN should be corrected to (4 * CAAM_CMD_SZ).

Hi Kim,

I can't find an obvious limitation on the RNG descriptor length in the
Freescale documents; could you point it out?

Thanks!
Yanjiang
Re: [PATCH 1/3] crypto: caam: fix some compile warnings
On 2015年03月04日 02:59, Kim Phillips wrote:
> On Tue, 3 Mar 2015 14:50:51 +0800
> yanjiang@windriver.com wrote:
>
>> This commit is to avoid the below warnings:
>>
>> drivers/crypto/caam/sg_sw_sec4.h:88:12: warning: 'dma_map_sg_chained'
>> defined but not used [-Wunused-function]
>>  static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>>             ^
>> drivers/crypto/caam/sg_sw_sec4.h:104:12: warning: 'dma_unmap_sg_chained'
>> defined but not used [-Wunused-function]
>>  static int dma_unmap_sg_chained(struct device *dev,
>>             ^
>
> I'm not seeing these warnings - both caamalg.c and caamhash.c use those
> functions fine.

As you said, both caamalg.c and caamhash.c use those functions, so no
warning is reported. But if a new file just wants to include
sg_sw_sec4.h and doesn't want to use these functions, the above warnings
will appear. We can find an example in Freescale SDK 1.6: caampkc.c
includes pkc_desc.h, pkc_desc.h includes sg_sw_sec4.h, but caampkc.c
doesn't call those functions.

Without my patch, every file which includes sg_sw_sec4.h must call these
two functions in the future; I don't think that is a good idea.

Thanks!
Yanjiang

>> -static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>> +static inline int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
>>  			      unsigned int nents, enum dma_data_direction dir,
>>  			      bool chained)
>
> not to mention this isn't how to fix a defined but not used warning:
> marking the functions inline results in different compiler output.
>
> NACK from me.
>
> Kim
Re: [PATCH 0/2] crypto: talitos: Add AES-XTS mode
On Tue, Mar 3, 2015 at 10:44 AM, Horia Geantă
<horia.gea...@freescale.com> wrote:
> On 3/3/2015 12:09 AM, Martin Hicks wrote:
>> [...]
>
> For talitos, there are two cases:
>
> 1. request data size <= data unit / sector size
> talitos can handle any IV / tweak scheme
>
> 2. request data size > sector size
> since talitos internally generates the IV for the next sector by
> incrementing the previous IV, only IV schemes that assign consecutive
> IVs to consecutive sectors will function correctly.

It's not clear to me that #1 is right. I guess it could be, but the IV
length would be limited to 8 bytes.

This also points out that claiming the XTS IV size is 16 bytes, as my
current patch does, could be problematic. It's handy because the first
8 bytes should contain a plain64 sector #, and the second u64 can be
used to encode the sector size, but it would be a mistake for someone
to use the second 8 bytes for the rest of a 16-byte IV.

mh

--
Martin Hicks P.Eng.  | m...@bork.org
Bork Consulting Inc. | +1 (613) 266-2296