Crypto Fixes for 2.6.32

2009-10-20 Thread Herbert Xu
Hi Linus:

This push fixes a regression in the padlock-sha driver that causes
faults on 32-bit VIA processors.


Please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git

or

master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6.git


Herbert Xu (1):
  crypto: padlock-sha - Fix stack alignment

 drivers/crypto/padlock-sha.c |   14 ++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)

Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Crypto Fixes for 2.6.32

2009-10-20 Thread Herbert Xu
Hi Linus:

 This push fixes a regression in the padlock-sha driver that causes
 faults on 32-bit VIA processors.

I've just added another regression fix that's specific to the
Intel AESNI instruction where the FPU test was reversed.


Please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git

or

master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6.git


Herbert Xu (1):
  crypto: padlock-sha - Fix stack alignment

Huang Ying (1):
  crypto: aesni-intel - Fix irq_fpu_usable usage

 arch/x86/crypto/aesni-intel_glue.c |   10 +++++-----
 drivers/crypto/padlock-sha.c       |   14 ++++++++++--
 2 files changed, 17 insertions(+), 7 deletions(-)

Thanks,


Re: [BUGFIX] Fix irq_fpu_usable usage in aesni

2009-10-20 Thread Herbert Xu
On Mon, Oct 19, 2009 at 10:00:17AM +0800, Huang Ying wrote:
 When renaming kernel_fpu_using to irq_fpu_usable, the semantics of the
 function changed too, from measuring whether the kernel is using the
 FPU, that is, the FPU is NOT available, to measuring whether the FPU is
 usable, that is, the FPU is available.
 
 But the usage of irq_fpu_usable in aesni-intel_glue.c was not changed
 accordingly. This patch fixes this.
 
 Signed-off-by: Huang Ying ying.hu...@intel.com

Patch applied to crypto-2.6.  Thanks!


RE: [PATCH v0 1/2] DMA: fsldma: Disable DMA_INTERRUPT when Async_tx enabled

2009-10-20 Thread Suresh Vishnu-B05022
 -----Original Message-----
 From: Ira W. Snyder [mailto:i...@ovro.caltech.edu]
 Sent: Friday, October 16, 2009 9:04 PM
 To: Dan Williams
 Cc: Suresh Vishnu-B05022; herb...@gondor.apana.org.au;
 linux-ker...@vger.kernel.org; linux-r...@vger.kernel.org;
 linuxppc-...@ozlabs.org; linux-crypto@vger.kernel.org; Tabi Timur-B04825
 Subject: Re: [PATCH v0 1/2] DMA: fsldma: Disable DMA_INTERRUPT when Async_tx enabled
 
 On Thu, Oct 15, 2009 at 06:25:14PM -0700, Dan Williams wrote:
  [ added Leo and Timur to the Cc ]
  
  On Wed, Oct 14, 2009 at 11:41 PM, Vishnu Suresh vis...@freescale.com wrote:
   This patch disables the use of DMA_INTERRUPT capability with Async_tx.
  
   The fsldma produces a null transfer with DMA_INTERRUPT capability
   when used with Async_tx. When RAID devices queue a transaction via
   Async_tx, this results in a hang.
  
   Signed-off-by: Vishnu Suresh vis...@freescale.com
   ---
    drivers/dma/fsldma.c |    6 ++++++
    1 files changed, 6 insertions(+), 0 deletions(-)
  
   diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
   index 296f9e7..66d9b39 100644
   --- a/drivers/dma/fsldma.c
   +++ b/drivers/dma/fsldma.c
   @@ -1200,7 +1200,13 @@ static int __devinit of_fsl_dma_probe(struct of_device *dev,
                                                   - fdev->reg.start + 1);
   
           dma_cap_set(DMA_MEMCPY, fdev->common.cap_mask);
   +#ifndef CONFIG_ASYNC_CORE
   +       /*
   +        * The DMA_INTERRUPT async_tx is a NULL transfer,
   +        * which will trigger a PE interrupt.
   +        */
           dma_cap_set(DMA_INTERRUPT, fdev->common.cap_mask);
   +#endif
           dma_cap_set(DMA_SLAVE, fdev->common.cap_mask);
           fdev->common.device_alloc_chan_resources = fsl_dma_alloc_chan_resources;
           fdev->common.device_free_chan_resources = fsl_dma_free_chan_resources;
  
  You are basically saying that fsl_dma_prep_interrupt() is buggy.  Can
  that routine be fixed rather than this piecemeal solution?  If it
  cannot be fixed (i.e. hardware issue) then fsl_dma_prep_interrupt()
  should just be disabled/deleted altogether.
We are working to fix this issue.
  
 
 For what it's worth, I've used the following code in the recent past,
 without any issues. This was on an 83xx, within the last few kernel
 releases. I haven't tried it on the latest -rc.
This works fine as long as only DMA_MEMCPY is being used.
The async_tx_channel_switch does not occur and
device_prep_dma_interrupt is not called.
However, when a DMA_XOR capable device is exposed,
which is different from the DMA_MEMCPY/INTERRUPT device, this path is hit.

Is it proper to schedule a dma_interrupt from the channel switch call, 
even when the depend_tx and tx channels correspond to different devices?

 
 Using device_prep_dma_memcpy() can trigger a callback as well, so the
 interrupt feature isn't strictly needed. Just attach the callback to
 the last memcpy operation.
 
 static dma_cookie_t dma_async_interrupt(struct dma_chan *chan,
                                         dma_async_tx_callback callback,
                                         void *data)
 {
         struct dma_device *dev = chan->device;
         struct dma_async_tx_descriptor *tx;
 
         /* Set up the DMA */
         tx = dev->device_prep_dma_interrupt(chan, DMA_PREP_INTERRUPT);
         if (!tx)
                 return -ENOMEM;
 
         tx->callback = callback;
         tx->callback_param = data;
 
         return tx->tx_submit(tx);
 }
 
 Ira
 
 


[QUESTION] blkcipher_walk_phys and memory

2009-10-20 Thread Сергей Миронов
Dear all, I have a couple of questions about crypto internals; please
help me to understand some concepts. My driver implements a
CRYPTO_ALG_TYPE_BLKCIPHER algorithm and has to perform encryption via
DMA transfers.
I decided to use the blkcipher_walk_phys() and blkcipher_walk_done()
functions. These functions take a scatterlist and return one or more
struct page's and offsets. I then pass those to dma_map_page(), which
returns a dma_addr_t to send to my device's DMA engine.
Everything seems to work (on ARM), but I have several questions:

1. Is it generally correct to do this? (see the *_encrypt() handler below)
2. The article at http://linux-mm.org/DeviceDriverMmap
says: "You may be tempted to call virt_to_page(addr) to get a struct
page pointer for a kmalloced address, but this is a violation of the
abstraction: kmalloc does not return pages, it returns another type of
memory object."
And blkcipher_walk_init() internally calls virt_to_page() on a pointer
allocated from an unknown location (for example, in tcrypt.c there is a
statically allocated buffer, but that is just one case of many). Is this
correct, and how should I think about it?


static int mcrypto_3des_ecb_encrypt(struct blkcipher_desc *desc,
	struct scatterlist *dst, struct scatterlist *src, unsigned int nbytes)
{
	struct mcrypto_device *device = g_mcrypto_device;
	struct mcrypto_3des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
	struct blkcipher_walk walk;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_phys(desc, &walk);
	ctx->mode = mcrypto_encrypt;

	while ((nbytes = walk.nbytes)) {
		dma_addr_t src_dma, dst_dma;
		size_t size = nbytes - (nbytes % DES3_EDE_MIN_BLOCK_SIZE);

		src_dma = dma_map_page(device->dev, walk.src.phys.page,
				walk.src.phys.offset, size, DMA_TO_DEVICE);
		dst_dma = dma_map_page(device->dev, walk.dst.phys.page,
				walk.dst.phys.offset, size, DMA_FROM_DEVICE);

		/* Performs actual DMA transfers and waits for completion */
		mcrypto_3des_dmacrypt(device, ctx, src_dma, dst_dma, size);

		dma_unmap_page(device->dev, dst_dma, size, DMA_FROM_DEVICE);
		dma_unmap_page(device->dev, src_dma, size, DMA_TO_DEVICE);

		err = blkcipher_walk_done(desc, &walk, nbytes - size);
	}
	return err;
}


-- 
Thanks,
Sergey


Re: [QUESTION] blkcipher_walk_phys and memory

2009-10-20 Thread Sebastian Andrzej Siewior
* Сергей Миронов | 2009-10-20 18:28:34 [+0400]:

> Dear all, I have a couple of questions about crypto internals, please
> help me to understand some concepts. My driver implements a
> CRYPTO_ALG_TYPE_BLKCIPHER algorithm and has to perform encryption via
> DMA transfers.
In the end you might prefer to use CRYPTO_ALG_TYPE_ABLKCIPHER. The
difference is that you could enqueue multiple transfers on the HW and
receive an interrupt once it is done.

> I decided to use blkcipher_walk_phys() and blkcipher_walk_done()
> functions. These functions take a scatterlist and return one or more
> struct page's and offsets. Then I pass those to dma_map_page() which
> returns a dma_addr_t to send to my device's DMA engine.
> Everything seems to work (on ARM) but I have several questions:
> 
> 1. Is it generally correct to do this? (see *_encrypt() handler below)
See comment below.

> 2. The article at http://linux-mm.org/DeviceDriverMmap says "You may be
> tempted to call virt_to_page(addr) to get a struct page pointer for a
> kmalloced address, but this is a violation of the abstraction: kmalloc
> does not return pages, it returns another type of memory object."
> And blkcipher_walk_init() internally calls virt_to_page() on a pointer
It does not:
|static inline void blkcipher_walk_init(struct blkcipher_walk *walk,
|                                       struct scatterlist *dst,
|                                       struct scatterlist *src,
|                                       unsigned int nbytes)
|{
|        walk->in.sg = src;
|        walk->out.sg = dst;
|        walk->total = nbytes;
|}

> allocated from an unknown location (for example, in tcrypt.c there is a
> statically allocated buffer, but it is just one case of many). Is it
> correct and how to think about it?
virt_to_page() is safe as long as you don't have HIGHMEM support and the
memory does not come from vmalloc(). The blkcipher uses scatterwalk_map()
for src/dst, which in turn ends up in kmap(), which ensures that
virt_to_page() works on that page. In case it was a HIGHMEM page, it will
be copied into the kernel address range and copied back on unmap. And here
is some overhead: your HW should be able to access the HIGHMEM page
directly.

> static int mcrypto_3des_ecb_encrypt(struct blkcipher_desc *desc,
> 	struct scatterlist *dst, struct scatterlist *src, unsigned int nbytes)
> {
> 	struct mcrypto_device *device = g_mcrypto_device;
> 	struct mcrypto_3des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> 	struct blkcipher_walk walk;
> 	int err;
> 
> 	blkcipher_walk_init(&walk, dst, src, nbytes);
> 	err = blkcipher_walk_phys(desc, &walk);
> 	ctx->mode = mcrypto_encrypt;
> 
> 	while ((nbytes = walk.nbytes)) {
> 		dma_addr_t src_dma, dst_dma;
> 		size_t size = nbytes - (nbytes % DES3_EDE_MIN_BLOCK_SIZE);
> 
> 		src_dma = dma_map_page(device->dev, walk.src.phys.page,
> 				walk.src.phys.offset, size, DMA_TO_DEVICE);
> 		dst_dma = dma_map_page(device->dev, walk.dst.phys.page,
> 				walk.dst.phys.offset, size, DMA_FROM_DEVICE);

It might happen that walk.src.phys.page == walk.dst.phys.page (in-place
encryption). In that case you should use DMA_BIDIRECTIONAL.


> 		/* Performs actual DMA transfers and waits for completion */
> 		mcrypto_3des_dmacrypt(device, ctx, src_dma, dst_dma, size);
> 
> 		dma_unmap_page(device->dev, dst_dma, size, DMA_FROM_DEVICE);
> 		dma_unmap_page(device->dev, src_dma, size, DMA_TO_DEVICE);
> 
> 		err = blkcipher_walk_done(desc, &walk, nbytes - size);
> 	}
> 	return err;
> }

Sebastian