Re: [PATCH 3/7] staging: ccree: add support for older HW revisions

2017-06-23 Thread kbuild test robot
Hi Gilad,

[auto build test WARNING on staging/staging-testing]
[also build test WARNING on next-20170623]
[cannot apply to v4.12-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:
https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-bug-fixes-and-TODO-items-for-4-13/20170623-134445
config: x86_64-randconfig-b0-06241039 (attached as .config)
compiler: gcc-4.4 (Debian 4.4.7-8) 4.4.7
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   drivers/staging/ccree/ssi_sram_mgr.c: In function 'ssi_sram_mgr_init':
>> drivers/staging/ccree/ssi_sram_mgr.c:76: warning: format '%x' expects type 'unsigned int', but argument 3 has type 'dma_addr_t'

vim +76 drivers/staging/ccree/ssi_sram_mgr.c

60  /* Allocate "this" context */
61  drvdata->sram_mgr_handle = kzalloc(
62  sizeof(struct ssi_sram_mgr_ctx), GFP_KERNEL);
63  if (!drvdata->sram_mgr_handle) {
64  SSI_LOG_ERR("Not enough memory to allocate SRAM_MGR ctx (%zu)\n",
65  sizeof(struct ssi_sram_mgr_ctx));
66  rc = -ENOMEM;
67  goto out;
68  }
69  smgr_ctx = drvdata->sram_mgr_handle;
70  
71  if (drvdata->hw_rev < CC_HW_REV_712) {
72  /* Pool starts after ROM bytes */
73  start = (dma_addr_t)CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF,
74  HOST_SEP_SRAM_THRESHOLD));
75  if ((start & 0x3) != 0) {
  > 76  SSI_LOG_ERR("Invalid SRAM offset 0x%x\n", start);
77  rc = -ENODEV;
78  goto out;
79  }
80  }
81  
82  smgr_ctx->sram_free_offset = start;
83  return 0;
84  
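
The warning arises because dma_addr_t can be 64-bit even on a 32-bit build, so
the '%x' format and the argument disagree. A typical fix (a sketch only, not
the actual ccree patch) is either the dedicated %pad specifier, which takes a
pointer to the dma_addr_t, or an explicit cast to a matching integer type:

	/* preferred: let printk format the dma_addr_t itself */
	SSI_LOG_ERR("Invalid SRAM offset %pad\n", &start);

	/* or: widen explicitly and use a matching format */
	SSI_LOG_ERR("Invalid SRAM offset 0x%llx\n", (unsigned long long)start);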

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




RE: [bug] sha1-avx2 and read beyond

2017-06-23 Thread Albrekht, Ilya
Hello all,

I'm sorry for the late reply (I was out of the office for a month).

It's been a while since we touched this code. We are going to do our best to
support it. I'll be back in the office early next week and will figure out the
fix ASAP.

Best Regards,
Ilya Albrekht

-Original Message-
From: Tim Chen [mailto:tim.c.c...@linux.intel.com] 
Sent: Friday, June 23, 2017 9:39 AM
To: Jan Stancek; Herbert Xu; megha@linux.intel.com
Cc: linux-crypto@vger.kernel.org; Albrekht, Ilya; Locktyukhin, Maxim; Zohar, 
Ronen; mo...@linux.intel.com; mini...@googlemail.com; h...@linux.intel.com; 
ma...@denx.de
Subject: Re: [bug] sha1-avx2 and read beyond

On 06/23/2017 01:48 AM, Jan Stancek wrote:
> 
> 
> - Original Message -
>> On Wed, May 24, 2017 at 08:46:57AM -0400, Jan Stancek wrote:
>>>
>>>
>>> - Original Message -
 Hi,

 I'm seeing rare crashes during NFS cthon with krb5 auth. After some 
 digging I arrived at potential problem with sha1-avx2.
>>>
>>> Adding more sha1_avx2 experts to CC.
>>>

 Problem appears to be that sha1_transform_avx2() reads beyond 
 number of blocks you pass, if it is an odd number. It appears to 
 try read one block more. This creates a problem if it falls beyond 
 a page and there's nothing there.
>>>
>>> As noted in my reply, worst case appears to be read ahead of up to 3 
>>> SHA1 blocks beyond end of data:
>>>   http://marc.info/?l=linux-crypto-vger&m=149373371023377
>>>
>>>  +--+-+-+-+
>>>  | 2*SHA1_BLOCK_SIZE  | 2*SHA1_BLOCK_SIZE |  
>>> +--+-+-+-+
>>> ^ page boundary
>>> ^ data end
>>>
>>> It is still reproducible with 4.12-rc2.
>>
>> Can someone from Intel please look into this? Otherwise we'll have to 
>> disable sha-avx2.
> 
> So I take it my workaround patch [1] is not acceptable in short-term 
> as well?
> 
> [1] http://marc.info/?l=linux-crypto-vger&m=149373371023377
> 
> Regards,
> Jan
> 

Megha,

Can you take a look at this issue?

Thanks.

Tim
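
For readers new to the thread, the failure geometry Jan describes, data that
ends at a page boundary with nothing mapped behind it, can be sketched in user
space like this (illustrative only; it is not the NFS/krb5 reproducer and does
not call the kernel assembly):

	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		size_t nblocks = 3;	/* odd number of 64-byte SHA-1 blocks */
		unsigned char *map, *data;

		/* Two pages; the second is made inaccessible so that any read
		 * past the end of the first page faults immediately. */
		map = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED)
			return 1;
		mprotect(map + page, page, PROT_NONE);

		/* Place the message so its last block ends exactly at the
		 * page boundary, as can happen with krb5/NFS buffers. */
		data = map + page - nblocks * 64;
		memset(data, 0xab, nblocks * 64);

		/* A transform that touches even one byte beyond nblocks * 64
		 * (as sha1_transform_avx2() reportedly does for odd block
		 * counts) would read data[nblocks * 64] here and take a
		 * SIGSEGV. */
		return data[nblocks * 64 - 1];	/* last valid byte */
	}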


Re: [PATCH 0/3] Introduce AMD Secure Processor device

2017-06-23 Thread Brijesh Singh



On 06/22/2017 08:25 AM, Pavel Machek wrote:

On Thu 2017-06-22 06:42:01, Brijesh Singh wrote:

CCP device (drivers/crypto/ccp/ccp.ko) is part of AMD Secure Processor,
which is not dedicated solely to crypto. The AMD Secure Processor includes
CCP and PSP (Platform Secure Processor) devices.

This patch series adds a framework that allows functional components of the
AMD Secure Processor to be initialized and handled appropriately. The series
does not make any logic modifications to CCP - it refactors the code to
integrate CCP into the AMD Secure Processor framework.


Ok, so this is just preparation. When finished, what services will it provide
to Linux userland?


Yes, this is in preparation for adding PSP [1] and SEV (Secure Encrypted
Virtualization) [2] support. When finished, SEV will provide:

a) an in-kernel API to communicate with the SEV FW inside the AMD Secure Processor
b) a userspace ioctl to manage the platform keys/certificates

I have posted the PSP and SEV patches as part of the SEV RFC; see below.

[1] http://marc.info/?l=linux-mm&m=148846780431232&w=2
[2] http://marc.info/?l=linux-mm&m=148847075032602&w=2

-Brijesh


Re: [PATCH v2 3/7] staging: ccree: add support for older HW revisions

2017-06-23 Thread Greg Kroah-Hartman
On Thu, Jun 22, 2017 at 04:36:57PM +0300, Gilad Ben-Yossef wrote:
> Add support for the older CryptoCell 710 and 630P hardware revisions.

No, I do not want to add new features to staging drivers wherever
possible.  I want you to spend your time fixing up the code to be good
enough to get it out of staging; then you can add any new hardware
support you want to it without having to go back and clean up anything
else.

That is one of the requirements of code in staging, sorry.

thanks,

greg k-h


Re: [PATCH v2 5/7] staging: ccree: add clock management support

2017-06-23 Thread Greg Kroah-Hartman
On Thu, Jun 22, 2017 at 04:36:59PM +0300, Gilad Ben-Yossef wrote:
> Some SoCs which implement CryptoCell have a dedicated clock
> tied to it; some do not. Implement clock support, if it exists,
> based on device tree data, and tie power management to it.
> 
> Signed-off-by: Gilad Ben-Yossef 
> ---
>  drivers/staging/ccree/Makefile |  2 +-
>  drivers/staging/ccree/ssi_driver.c | 40 +
>  drivers/staging/ccree/ssi_driver.h |  4 +++
>  drivers/staging/ccree/ssi_pm.c | 13 +
>  drivers/staging/ccree/ssi_pm_ext.c | 60 --
>  drivers/staging/ccree/ssi_pm_ext.h | 33 -
>  6 files changed, 47 insertions(+), 105 deletions(-)
>  delete mode 100644 drivers/staging/ccree/ssi_pm_ext.c
>  delete mode 100644 drivers/staging/ccree/ssi_pm_ext.h

Ok, adding new features that make the code smaller is ok, this is nice :)


Re: [PATCH v2 2/7] staging: ccree: register setkey for none hash macs

2017-06-23 Thread Greg Kroah-Hartman
On Thu, Jun 22, 2017 at 04:36:56PM +0300, Gilad Ben-Yossef wrote:
> Fix a bug where the transformation init code did
> not register a setkey method for none hash based MACs.

"none hash based MACs"?  Is that the correct language, I don't
understand it, sorry, can you expand on it a bit in your v3 series?

> Fixes commit 50cfbbb7e627 ("staging: ccree: add ahash support").

This line should be written as:
Fixes: 50cfbbb7e627 ("staging: ccree: add ahash support").

thanks,

greg k-h


Re: [PATCH 0/3] Introduce AMD Secure Processor device

2017-06-23 Thread Pavel Machek
On Thu 2017-06-22 06:42:01, Brijesh Singh wrote:
> CCP device (drivers/crypto/ccp/ccp.ko) is part of AMD Secure Processor,
> which is not dedicated solely to crypto. The AMD Secure Processor includes
> CCP and PSP (Platform Secure Processor) devices.
> 
> This patch series adds a framework that allows functional components of the
> AMD Secure Processor to be initialized and handled appropriately. The series
> does not make any logic modifications to CCP - it refactors the code to
> integrate CCP into the AMD Secure Processor framework.

Ok, so this is just preparation. When finished, what services will it provide
to Linux userland?



Re: Regarding Porting of hardware cryptography to Linux kernel

2017-06-23 Thread Marek Vasut
On 06/23/2017 07:16 AM, sagar khadgi wrote:
> Hi Marek,

Hi,

> Thanks for replying.
> 
> Regarding Hardware:
> 
> I am using a Xilinx Zynq FPGA with an Athena core which has AES, RSA, DSA,
> ECDSA, NRBG, etc. features.
> I am trying to integrate it with the Linux kernel. Can you please tell me if
> there is any document/example project/link which I can refer to?

Yes, see the link I posted in my previous email. Moreover, you can look
into the drivers in the Linux kernel and inspire yourself there.
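
For orientation, registering a hardware cipher with the kernel crypto API
boils down to filling in an alg structure and registering it from the driver's
probe/init path. A minimal, purely illustrative skeleton (all names are
hypothetical and the callbacks are left empty; a real driver would program the
accelerator and complete the request) looks roughly like this:

	#include <crypto/aes.h>
	#include <crypto/internal/skcipher.h>
	#include <linux/crypto.h>
	#include <linux/module.h>

	static int myhw_setkey(struct crypto_skcipher *tfm, const u8 *key,
			       unsigned int keylen)
	{
		return 0;	/* load the key into the engine */
	}

	static int myhw_encrypt(struct skcipher_request *req)
	{
		return 0;	/* kick off the hardware operation */
	}

	static int myhw_decrypt(struct skcipher_request *req)
	{
		return 0;
	}

	static struct skcipher_alg myhw_cbc_aes = {
		.base = {
			.cra_name	 = "cbc(aes)",
			.cra_driver_name = "cbc-aes-myhw",
			.cra_priority	 = 300,
			.cra_flags	 = CRYPTO_ALG_ASYNC,
			.cra_blocksize	 = AES_BLOCK_SIZE,
			.cra_module	 = THIS_MODULE,
		},
		.min_keysize	= AES_MIN_KEY_SIZE,
		.max_keysize	= AES_MAX_KEY_SIZE,
		.ivsize		= AES_BLOCK_SIZE,
		.setkey		= myhw_setkey,
		.encrypt	= myhw_encrypt,
		.decrypt	= myhw_decrypt,
	};

	static int __init myhw_crypto_init(void)
	{
		return crypto_register_skcipher(&myhw_cbc_aes);
	}
	module_init(myhw_crypto_init);

User space (including OpenSSL, via its AF_ALG/afalg engine) can then reach such
an algorithm through the AF_ALG socket interface rather than a driver-specific
API.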

> Thanks and regards,
> Sagar
> 
> 
> 
> On Thu, Jun 22, 2017 at 5:27 PM, Marek Vasut  > wrote:
> 
> On 06/22/2017 01:50 PM, sagar khadgi wrote:
> > Hi Marek,
> 
> Hi,
> 
> > I have one microcontroller which has AES, HMAC, SHA, DES, RSA, DSA,
> > ECDSA cryptographic services. I want to port it to the Linux kernel so that
> > I can access them through OpenSSL from user space.
> >
> > I am new to Linux. Can you please provide me any document or link for
> > porting the hardware cryptography to the Linux kernel?
> 
> Try https://www.kernel.org/doc/html/v4.11/crypto/index.html
> 
> 
> unless you provide further details about your hardware, it's hard to
> help you more.
> 
> > Thanking you in advance.
> >
> > Regards,
> > Sagar
> 
> 
> --
> Best regards,
> Marek Vasut
> 
> 


-- 
Best regards,
Marek Vasut


Re: [PATCH v6 0/2] IV Generation algorithms for dm-crypt

2017-06-23 Thread Eric Biggers
On Fri, Jun 23, 2017 at 04:13:41PM +0800, Herbert Xu wrote:
> Binoy Jayan  wrote:
> > ===
> > dm-crypt optimization for larger block sizes
> > ===
> > 
> > Currently, the iv generation algorithms are implemented in dm-crypt.c. The
> > goal is to move these algorithms from the dm layer to the kernel crypto
> > layer by implementing them as template ciphers so they can be used in
> > relation with algorithms like aes, and with multiple modes like cbc, ecb,
> > etc. As part of this patchset, the iv-generation code is moved from the dm
> > layer to the crypto layer and the dm layer is adapted to send a whole 'bio'
> > (as defined in the block layer) at a time. Each bio contains the in-memory
> > representation of physically contiguous disk blocks. Since the bio itself
> > may not be contiguous in main memory, the dm layer sets up a chained
> > scatterlist of these blocks, split into physically contiguous segments in
> > memory, so that DMA can be performed.
> 
> There is currently a patch-set for fscrypt to add essiv support.  It
> would be interesting to know whether your implementation of essiv
> can also be used in that patchset.  That would confirm that we're on
> the right track.
> 

You can find the fscrypt patch at https://patchwork.kernel.org/patch/9795327/

Note that it's encrypting 4096-byte blocks, not 512-byte.  Also, it's using
AES-256 for the ESSIV tfm (since it uses a SHA-256 hash) but AES-128 for the
"real" encryption.  It's possible this is a mistake and it should be AES-128 for
both.  (If it is, it needs to be fixed before it's released in 4.13.)

Eric
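
For readers unfamiliar with ESSIV: the per-block IV is the block number
encrypted under a key obtained by hashing the data key, which is why a SHA-256
hash pushes the ESSIV cipher to a 256-bit key even when the data key is
AES-128, the point Eric raises above. A rough sketch of the generation step,
modelled on dm-crypt's implementation (not the fscrypt patch itself):

	/* essiv_tfm was keyed once at setup time with salt = SHA-256(data_key):
	 *	crypto_cipher_setkey(essiv_tfm, salt, SHA256_DIGEST_SIZE);
	 * A 32-byte salt is what makes the ESSIV cipher AES-256 here. */
	static void essiv_generate_iv(struct crypto_cipher *essiv_tfm,
				      u8 *iv, unsigned int ivsize, u64 block)
	{
		memset(iv, 0, ivsize);
		*(__le64 *)iv = cpu_to_le64(block);	/* block number as IV seed */
		crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
	}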


Re: [PATCH 7/7] crypto: caam: cleanup CONFIG_64BIT ifdefs when using io{read|write}64

2017-06-23 Thread Logan Gunthorpe
Thanks Horia.

I'm inclined to just use your patch verbatim. I can set you as author,
but no matter how I do it, I'll need your Signed-off-by.

Logan

On 23/06/17 12:51 AM, Horia Geantă wrote:
> On 6/22/2017 7:49 PM, Logan Gunthorpe wrote:
>> Now that ioread64 and iowrite64 are always available we don't
>> need the ugly ifdefs to change their implementation when they
>> are not.
>>
> Thanks Logan.
> 
> Note however this is not equivalent - it changes the behaviour, since
> CAAM engine on i.MX6S/SL/D/Q platforms is broken in terms of 64-bit
> register endianness - see CONFIG_CRYPTO_DEV_FSL_CAAM_IMX usage in code
> you are removing.
> 
> [Yes, current code has its problems, as it does not differentiate b/w
> i.MX platforms with and without the (unofficial) erratum, but this
> should be fixed separately.]
> 
> Below is the change that would keep current logic - still forcing i.MX
> to write CAAM 64-bit registers in BE even if the engine is LE (yes, diff
> is doing a poor job).
> 
> Horia
> 
> diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
> index 84d2f838a063..b893ebb24e65 100644
> --- a/drivers/crypto/caam/regs.h
> +++ b/drivers/crypto/caam/regs.h
> @@ -134,50 +134,25 @@ static inline void clrsetbits_32(void __iomem
> *reg, u32 clear, u32 set)
>   *base + 0x0000 : least-significant 32 bits
>   *base + 0x0004 : most-significant 32 bits
>   */
> -#ifdef CONFIG_64BIT
>  static inline void wr_reg64(void __iomem *reg, u64 data)
>  {
> +#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
> if (caam_little_end)
> iowrite64(data, reg);
> else
> -   iowrite64be(data, reg);
> -}
> -
> -static inline u64 rd_reg64(void __iomem *reg)
> -{
> -   if (caam_little_end)
> -   return ioread64(reg);
> -   else
> -   return ioread64be(reg);
> -}
> -
> -#else /* CONFIG_64BIT */
> -static inline void wr_reg64(void __iomem *reg, u64 data)
> -{
> -#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
> -   if (caam_little_end) {
> -   wr_reg32((u32 __iomem *)(reg) + 1, data >> 32);
> -   wr_reg32((u32 __iomem *)(reg), data);
> -   } else
>  #endif
> -   {
> -   wr_reg32((u32 __iomem *)(reg), data >> 32);
> -   wr_reg32((u32 __iomem *)(reg) + 1, data);
> -   }
> +   iowrite64be(data, reg);
>  }
> 
>  static inline u64 rd_reg64(void __iomem *reg)
>  {
>  #ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
> if (caam_little_end)
> -   return ((u64)rd_reg32((u32 __iomem *)(reg) + 1) << 32 |
> -   (u64)rd_reg32((u32 __iomem *)(reg)));
> +   return ioread64(reg);
> else
>  #endif
> -   return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
> -   (u64)rd_reg32((u32 __iomem *)(reg) + 1));
> +   return ioread64be(reg);
>  }
> -#endif /* CONFIG_64BIT  */
> 
>  #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
>  #ifdef CONFIG_SOC_IMX7D
> 
> 
>> Signed-off-by: Logan Gunthorpe 
>> Cc: "Horia Geantă" 
>> Cc: Dan Douglass 
>> Cc: Herbert Xu 
>> Cc: "David S. Miller" 
>> ---
>>  drivers/crypto/caam/regs.h | 29 -
>>  1 file changed, 29 deletions(-)
>>
>> diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
>> index 84d2f838a063..26fc19dd0c39 100644
>> --- a/drivers/crypto/caam/regs.h
>> +++ b/drivers/crypto/caam/regs.h
>> @@ -134,7 +134,6 @@ static inline void clrsetbits_32(void __iomem *reg, u32 
>> clear, u32 set)
>>   *base + 0x0000 : least-significant 32 bits
>>   *base + 0x0004 : most-significant 32 bits
>>   */
>> -#ifdef CONFIG_64BIT
>>  static inline void wr_reg64(void __iomem *reg, u64 data)
>>  {
>>  if (caam_little_end)
>> @@ -151,34 +150,6 @@ static inline u64 rd_reg64(void __iomem *reg)
>>  return ioread64be(reg);
>>  }
>>  
>> -#else /* CONFIG_64BIT */
>> -static inline void wr_reg64(void __iomem *reg, u64 data)
>> -{
>> -#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
>> -if (caam_little_end) {
>> -wr_reg32((u32 __iomem *)(reg) + 1, data >> 32);
>> -wr_reg32((u32 __iomem *)(reg), data);
>> -} else
>> -#endif
>> -{
>> -wr_reg32((u32 __iomem *)(reg), data >> 32);
>> -wr_reg32((u32 __iomem *)(reg) + 1, data);
>> -}
>> -}
>> -
>> -static inline u64 rd_reg64(void __iomem *reg)
>> -{
>> -#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
>> -if (caam_little_end)
>> -return ((u64)rd_reg32((u32 __iomem *)(reg) + 1) << 32 |
>> -(u64)rd_reg32((u32 __iomem *)(reg)));
>> -else
>> -#endif
>> -return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
>> -(u64)rd_reg32((u32 __iomem *)(reg) + 1));
>> -}
>> -#endif /* CONFIG_64BIT  */
>> -
>>  #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
>>  #ifdef CONFIG_SOC_IMX7D
>>  #define 

Re: [PATCH v1] crypto: brcm - software fallback for cryptlen zero

2017-06-23 Thread Raveendra Padasalagi
Need to consider some more scenarios.
So NAKing this patch. Will send out a revised version.

Regards,
Raveendra


On Fri, Jun 23, 2017 at 2:24 PM, Raveendra Padasalagi
 wrote:
> Zero-length payload requests are not handled in the
> Broadcom SPU2 engine, so this patch adds a conditional
> check to fall back to the software implementation for AES-GCM
> and AES-CCM algorithms.
>
> Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver")
> Signed-off-by: Raveendra Padasalagi 
> Reviewed-by: Ray Jui 
> Reviewed-by: Scott Branden 
> Cc: sta...@vger.kernel.org
> ---
>
> Changes in v1:
>  - Added Cc tag in the Signed-off area to send the patch to stable kernel
>
>  drivers/crypto/bcm/cipher.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
> index cc0d5b9..6c80863 100644
> --- a/drivers/crypto/bcm/cipher.c
> +++ b/drivers/crypto/bcm/cipher.c
> @@ -2625,7 +2625,7 @@ static int aead_need_fallback(struct aead_request *req)
>  */
> if (((ctx->cipher.mode == CIPHER_MODE_GCM) ||
>  (ctx->cipher.mode == CIPHER_MODE_CCM)) &&
> -   (req->assoclen == 0)) {
> +   ((req->assoclen == 0) || (req->cryptlen == 0))) {
> if ((rctx->is_encrypt && (req->cryptlen == 0)) ||
> (!rctx->is_encrypt && (req->cryptlen == ctx->digestsize))) {
> flow_log("AES GCM/CCM needs fallback for 0 len req\n");
> --
> 1.9.1
>


Re: [bug] sha1-avx2 and read beyond

2017-06-23 Thread Tim Chen
On 06/23/2017 01:48 AM, Jan Stancek wrote:
> 
> 
> - Original Message -
>> On Wed, May 24, 2017 at 08:46:57AM -0400, Jan Stancek wrote:
>>>
>>>
>>> - Original Message -
 Hi,

 I'm seeing rare crashes during NFS cthon with krb5 auth. After
 some digging I arrived at potential problem with sha1-avx2.
>>>
>>> Adding more sha1_avx2 experts to CC.
>>>

 Problem appears to be that sha1_transform_avx2() reads beyond
 number of blocks you pass, if it is an odd number. It appears
 to try read one block more. This creates a problem if it falls
 beyond a page and there's nothing there.
>>>
>>> As noted in my reply, worst case appears to be read ahead
>>> of up to 3 SHA1 blocks beyond end of data:
>>>   http://marc.info/?l=linux-crypto-vger&m=149373371023377
>>>
>>>  +--+-+-+-+
>>>  | 2*SHA1_BLOCK_SIZE  | 2*SHA1_BLOCK_SIZE |
>>>  +--+-+-+-+
>>> ^ page boundary
>>> ^ data end
>>>
>>> It is still reproducible with 4.12-rc2.
>>
>> Can someone from Intel please look into this? Otherwise we'll have
>> to disable sha-avx2.
> 
> So I take it my workaround patch [1] is not acceptable in
> short-term as well?
> 
> [1] http://marc.info/?l=linux-crypto-vger&m=149373371023377
> 
> Regards,
> Jan
> 

Megha,

Can you take a look at this issue?

Thanks.

Tim


[PATCH v2 1/3] crypto: ccp - Use devres interface to allocate PCI/iomap and cleanup

2017-06-23 Thread Brijesh Singh
Update the pci and platform files to use the devres interface to allocate the
PCI and iomap resources. Also add helper functions to consolidate duplicated
module init, exit and power management code.

Signed-off-by: Brijesh Singh 
---
 drivers/crypto/ccp/ccp-dev-v3.c   |   8 +++
 drivers/crypto/ccp/ccp-dev.c  |  61 
 drivers/crypto/ccp/ccp-dev.h  |   6 ++
 drivers/crypto/ccp/ccp-pci.c  | 114 +-
 drivers/crypto/ccp/ccp-platform.c |  56 ++-
 5 files changed, 107 insertions(+), 138 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 367c2e3..1cae5a3 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -586,6 +586,14 @@ static const struct ccp_actions ccp3_actions = {
.irqhandler = ccp_irq_handler,
 };
 
+const struct ccp_vdata ccpv3_platform = {
+   .version = CCP_VERSION(3, 0),
+   .setup = NULL,
.perform = &ccp3_actions,
+   .bar = 2,
+   .offset = 0,
+};
+
 const struct ccp_vdata ccpv3 = {
.version = CCP_VERSION(3, 0),
.setup = NULL,
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 2506b50..ce35e43 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -538,8 +538,69 @@ bool ccp_queues_suspended(struct ccp_device *ccp)
 
return ccp->cmd_q_count == suspended;
 }
+
+int ccp_dev_suspend(struct ccp_device *ccp, pm_message_t state)
+{
+   unsigned long flags;
+   unsigned int i;
+
+   spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+   ccp->suspending = 1;
+
+   /* Wake all the queue kthreads to prepare for suspend */
+   for (i = 0; i < ccp->cmd_q_count; i++)
+   wake_up_process(ccp->cmd_q[i].kthread);
+
+   spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+   /* Wait for all queue kthreads to say they're done */
+   while (!ccp_queues_suspended(ccp))
+   wait_event_interruptible(ccp->suspend_queue,
+ccp_queues_suspended(ccp));
+
+   return 0;
+}
+
+int ccp_dev_resume(struct ccp_device *ccp)
+{
+   unsigned long flags;
+   unsigned int i;
+
+   spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+   ccp->suspending = 0;
+
+   /* Wake up all the kthreads */
+   for (i = 0; i < ccp->cmd_q_count; i++) {
+   ccp->cmd_q[i].suspended = 0;
+   wake_up_process(ccp->cmd_q[i].kthread);
+   }
+
+   spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+   return 0;
+}
 #endif
 
+int ccp_dev_init(struct ccp_device *ccp)
+{
+   if (ccp->vdata->setup)
+   ccp->vdata->setup(ccp);
+
+   ccp->io_regs = ccp->io_map + ccp->vdata->offset;
+
+   return ccp->vdata->perform->init(ccp);
+}
+
+void ccp_dev_destroy(struct ccp_device *ccp)
+{
+   if (!ccp)
+   return;
+
+   ccp->vdata->perform->destroy(ccp);
+}
+
 static int __init ccp_mod_init(void)
 {
 #ifdef CONFIG_X86
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index a70154a..df2e76e 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -652,6 +652,11 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp);
 void ccp5_debugfs_setup(struct ccp_device *ccp);
 void ccp5_debugfs_destroy(void);
 
+int ccp_dev_init(struct ccp_device *ccp);
+void ccp_dev_destroy(struct ccp_device *ccp);
+int ccp_dev_suspend(struct ccp_device *ccp, pm_message_t state);
+int ccp_dev_resume(struct ccp_device *ccp);
+
 /* Structure for computation functions that are device-specific */
 struct ccp_actions {
int (*aes)(struct ccp_op *);
@@ -679,6 +684,7 @@ struct ccp_vdata {
const unsigned int offset;
 };
 
+extern const struct ccp_vdata ccpv3_platform;
 extern const struct ccp_vdata ccpv3;
 extern const struct ccp_vdata ccpv5a;
 extern const struct ccp_vdata ccpv5b;
diff --git a/drivers/crypto/ccp/ccp-pci.c b/drivers/crypto/ccp/ccp-pci.c
index e880d4cf4..490ad0a 100644
--- a/drivers/crypto/ccp/ccp-pci.c
+++ b/drivers/crypto/ccp/ccp-pci.c
@@ -150,28 +150,13 @@ static void ccp_free_irqs(struct ccp_device *ccp)
ccp->irq = 0;
 }
 
-static int ccp_find_mmio_area(struct ccp_device *ccp)
-{
-   struct device *dev = ccp->dev;
-   struct pci_dev *pdev = to_pci_dev(dev);
-   resource_size_t io_len;
-   unsigned long io_flags;
-
-   io_flags = pci_resource_flags(pdev, ccp->vdata->bar);
-   io_len = pci_resource_len(pdev, ccp->vdata->bar);
-   if ((io_flags & IORESOURCE_MEM) &&
-   (io_len >= (ccp->vdata->offset + 0x800)))
-   return ccp->vdata->bar;
-
-   return -EIO;
-}
-
 static int ccp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
struct ccp_device *ccp;
struct ccp_pci *ccp_pci;
struct device *dev = &pdev->dev;
-   unsigned int bar;
+   void __iomem * const *iomap_table;
+   int bar_mask;
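
(The archived diff is cut off above. For context, the devres-managed pattern
this series moves toward looks roughly like the following fragment; it is a
sketch with assumed names, not the actual patch:)

	static int ccp_pci_probe_sketch(struct pci_dev *pdev,
					const struct pci_device_id *id)
	{
		void __iomem * const *iomap_table;
		int bar_mask, ret;

		/* pcim_* resources are released automatically on driver
		 * detach, so explicit error-path and remove() cleanup of the
		 * PCI regions can go away. */
		ret = pcim_enable_device(pdev);
		if (ret)
			return ret;

		bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
		ret = pcim_iomap_regions(pdev, bar_mask, "ccp");
		if (ret)
			return ret;

		iomap_table = pcim_iomap_table(pdev);
		if (!iomap_table[2])	/* the BAR named in vdata->bar */
			return -ENOMEM;

		/* hand iomap_table[bar] to the common ccp init code ... */
		return 0;
	}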
 

[PATCH v2 3/3] crypto: ccp - Abstract interrupt registration

2017-06-23 Thread Brijesh Singh
The CCP and PSP devices part of AMD Secure Procesor may share the same
interrupt. Hence we expand the SP device to register a common interrupt
handler and provide functions to CCP and PSP devices to register their
interrupt callback which will be invoked upon interrupt.

Signed-off-by: Brijesh Singh 
---
 drivers/crypto/ccp/ccp-dev-v3.c   |   6 +--
 drivers/crypto/ccp/ccp-dev-v5.c   |   7 ++-
 drivers/crypto/ccp/ccp-dev.c  |   3 +-
 drivers/crypto/ccp/ccp-dev.h  |   2 -
 drivers/crypto/ccp/ccp-pci.c  | 103 +++-
 drivers/crypto/ccp/ccp-platform.c |  57 ++--
 drivers/crypto/ccp/sp-dev.c   | 107 ++
 drivers/crypto/ccp/sp-dev.h   |  17 +-
 8 files changed, 187 insertions(+), 115 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 57179034..695fde8 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -453,7 +453,7 @@ static int ccp_init(struct ccp_device *ccp)
iowrite32(ccp->qim, ccp->io_regs + IRQ_STATUS_REG);
 
/* Request an irq */
-   ret = ccp->get_irq(ccp);
+   ret = sp_request_ccp_irq(ccp->sp, ccp_irq_handler, ccp->name, ccp);
if (ret) {
dev_err(dev, "unable to allocate an IRQ\n");
goto e_pool;
@@ -510,7 +510,7 @@ static int ccp_init(struct ccp_device *ccp)
if (ccp->cmd_q[i].kthread)
kthread_stop(ccp->cmd_q[i].kthread);
 
-   ccp->free_irq(ccp);
+   sp_free_ccp_irq(ccp->sp, ccp);
 
 e_pool:
for (i = 0; i < ccp->cmd_q_count; i++)
@@ -549,7 +549,7 @@ static void ccp_destroy(struct ccp_device *ccp)
if (ccp->cmd_q[i].kthread)
kthread_stop(ccp->cmd_q[i].kthread);
 
-   ccp->free_irq(ccp);
+   sp_free_ccp_irq(ccp->sp, ccp);
 
for (i = 0; i < ccp->cmd_q_count; i++)
dma_pool_destroy(ccp->cmd_q[i].dma_pool);
diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index 8ed2b37..b0391f0 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -880,7 +880,7 @@ static int ccp5_init(struct ccp_device *ccp)
 
dev_dbg(dev, "Requesting an IRQ...\n");
/* Request an irq */
-   ret = ccp->get_irq(ccp);
+   ret = sp_request_ccp_irq(ccp->sp, ccp5_irq_handler, ccp->name, ccp);
if (ret) {
dev_err(dev, "unable to allocate an IRQ\n");
goto e_pool;
@@ -986,7 +986,7 @@ static int ccp5_init(struct ccp_device *ccp)
kthread_stop(ccp->cmd_q[i].kthread);
 
 e_irq:
-   ccp->free_irq(ccp);
+   sp_free_ccp_irq(ccp->sp, ccp);
 
 e_pool:
for (i = 0; i < ccp->cmd_q_count; i++)
@@ -1036,7 +1036,7 @@ static void ccp5_destroy(struct ccp_device *ccp)
if (ccp->cmd_q[i].kthread)
kthread_stop(ccp->cmd_q[i].kthread);
 
-   ccp->free_irq(ccp);
+   sp_free_ccp_irq(ccp->sp, ccp);
 
for (i = 0; i < ccp->cmd_q_count; i++) {
cmd_q = &ccp->cmd_q[i];
@@ -1105,7 +1105,6 @@ static const struct ccp_actions ccp5_actions = {
.init = ccp5_init,
.destroy = ccp5_destroy,
.get_free_slots = ccp5_get_free_slots,
-   .irqhandler = ccp5_irq_handler,
 };
 
 const struct ccp_vdata ccpv5a = {
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 8a1674a..7c751bf 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -599,8 +599,7 @@ int ccp_dev_init(struct sp_device *sp)
goto e_err;
}
 
-   ccp->get_irq = sp->get_irq;
-   ccp->free_irq = sp->free_irq;
+   ccp->use_tasklet = sp->use_tasklet;
 
ccp->io_regs = sp->io_map + ccp->vdata->offset;
if (ccp->vdata->setup)
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index ca44821..193f309 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -351,8 +351,6 @@ struct ccp_device {
/* Bus specific device information
 */
void *dev_specific;
-   int (*get_irq)(struct ccp_device *ccp);
-   void (*free_irq)(struct ccp_device *ccp);
unsigned int qim;
unsigned int irq;
bool use_tasklet;
diff --git a/drivers/crypto/ccp/ccp-pci.c b/drivers/crypto/ccp/ccp-pci.c
index 7eab3c6..f6b9858 100644
--- a/drivers/crypto/ccp/ccp-pci.c
+++ b/drivers/crypto/ccp/ccp-pci.c
@@ -28,67 +28,37 @@
 
 #define MSIX_VECTORS   2
 
-struct ccp_msix {
-   u32 vector;
-   char name[16];
-};
-
 struct ccp_pci {
int msix_count;
-   struct ccp_msix msix[MSIX_VECTORS];
+   struct msix_entry msix_entry[MSIX_VECTORS];
 };
 
-static int ccp_get_msix_irqs(struct ccp_device *ccp)
+static int ccp_get_msix_irqs(struct sp_device *sp)
 {
-   struct sp_device *sp = ccp->sp;
struct 
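
(The diff is truncated in the archive. The idea it implements is that the SP
core owns the interrupt line and fans it out to whichever sub-devices have
registered a callback; roughly the following, where the field and function
names are assumptions and the real code lives in sp-dev.c:)

	static irqreturn_t sp_irq_handler(int irq, void *data)
	{
		struct sp_device *sp = data;

		/* Dispatch to the sub-device handlers registered via
		 * sp_request_ccp_irq()/sp_request_psp_irq(); both sub-devices
		 * may share the same physical interrupt line. */
		if (sp->ccp_irq_handler)
			sp->ccp_irq_handler(irq, sp->ccp_irq_data);
		if (sp->psp_irq_handler)
			sp->psp_irq_handler(irq, sp->psp_irq_data);

		return IRQ_HANDLED;
	}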

[PATCH v2 0/3] Introduce AMD Secure Processor device

2017-06-23 Thread Brijesh Singh
CCP device (drivers/crypto/ccp/ccp.ko) is part of AMD Secure Processor,
which is not dedicated solely to crypto. The AMD Secure Processor includes
CCP and PSP (Platform Secure Processor) devices.

This patch series adds a framework that allows functional components of the
AMD Secure Processor to be initialized and handled appropriately. The series
does not make any logic modifications to CCP - it refactors the code to
integrate CCP into the AMD Secure Processor framework.

---

Changes since v1:
 - remove unused function [sp_get_device()]

Brijesh Singh (3):
  crypto: ccp - Use devres interface to allocate PCI/iomap and cleanup
  crypto: ccp - Introduce the AMD Secure Processor device
  crypto: ccp - Abstract interrupt registration

 drivers/crypto/Kconfig|  10 +-
 drivers/crypto/ccp/Kconfig|  43 +++---
 drivers/crypto/ccp/Makefile   |   6 +-
 drivers/crypto/ccp/ccp-dev-v3.c   |  17 ++-
 drivers/crypto/ccp/ccp-dev-v5.c   |  12 +-
 drivers/crypto/ccp/ccp-dev.c  | 124 ++--
 drivers/crypto/ccp/ccp-dev.h  |  19 +--
 drivers/crypto/ccp/ccp-pci.c  | 264 ---
 drivers/crypto/ccp/ccp-platform.c | 165 --
 drivers/crypto/ccp/sp-dev.c   | 287 ++
 drivers/crypto/ccp/sp-dev.h   | 133 ++
 include/linux/ccp.h   |   3 +-
 12 files changed, 712 insertions(+), 371 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h

-- 
2.9.4



[PATCH v2 2/3] crypto: ccp - Introduce the AMD Secure Processor device

2017-06-23 Thread Brijesh Singh
The CCP device is part of the AMD Secure Processor. In order to expand
the usage of the AMD Secure Processor, create a framework that allows
functional components of the AMD Secure Processor to be initialized and
handled appropriately.

Signed-off-by: Brijesh Singh 
---
 drivers/crypto/Kconfig|  10 +--
 drivers/crypto/ccp/Kconfig|  43 +
 drivers/crypto/ccp/Makefile   |   6 +-
 drivers/crypto/ccp/ccp-dev-v3.c   |   5 +-
 drivers/crypto/ccp/ccp-dev-v5.c   |   5 +-
 drivers/crypto/ccp/ccp-dev.c  | 106 +-
 drivers/crypto/ccp/ccp-dev.h  |  21 +
 drivers/crypto/ccp/ccp-pci.c  |  81 +++--
 drivers/crypto/ccp/ccp-platform.c |  70 ---
 drivers/crypto/ccp/sp-dev.c   | 180 ++
 drivers/crypto/ccp/sp-dev.h   | 120 +
 include/linux/ccp.h   |   3 +-
 12 files changed, 475 insertions(+), 175 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 0528a62..418f991 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -512,14 +512,14 @@ config CRYPTO_DEV_ATMEL_SHA
  To compile this driver as a module, choose M here: the module
  will be called atmel-sha.
 
-config CRYPTO_DEV_CCP
-   bool "Support for AMD Cryptographic Coprocessor"
+config CRYPTO_DEV_SP
+   bool "Support for AMD Secure Processor"
depends on ((X86 && PCI) || (ARM64 && (OF_ADDRESS || ACPI))) && HAS_IOMEM
help
- The AMD Cryptographic Coprocessor provides hardware offload support
- for encryption, hashing and related operations.
+ The AMD Secure Processor provides hardware offload support for memory
+ encryption in virtualization and cryptographic hashing and related operations.
 
-if CRYPTO_DEV_CCP
+if CRYPTO_DEV_SP
source "drivers/crypto/ccp/Kconfig"
 endif
 
diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 2238f77..bc08f03 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -1,26 +1,37 @@
-config CRYPTO_DEV_CCP_DD
-   tristate "Cryptographic Coprocessor device driver"
-   depends on CRYPTO_DEV_CCP
-   default m
-   select HW_RANDOM
-   select DMA_ENGINE
-   select DMADEVICES
-   select CRYPTO_SHA1
-   select CRYPTO_SHA256
-   help
- Provides the interface to use the AMD Cryptographic Coprocessor
- which can be used to offload encryption operations such as SHA,
- AES and more. If you choose 'M' here, this module will be called
- ccp.
-
 config CRYPTO_DEV_CCP_CRYPTO
tristate "Encryption and hashing offload support"
-   depends on CRYPTO_DEV_CCP_DD
+   depends on CRYPTO_DEV_SP_DD
default m
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_AUTHENC
+   select CRYPTO_DEV_CCP
help
  Support for using the cryptographic API with the AMD Cryptographic
  Coprocessor. This module supports offload of SHA and AES algorithms.
  If you choose 'M' here, this module will be called ccp_crypto.
+
+config CRYPTO_DEV_SP_DD
+   tristate "Secure Processor device driver"
+   depends on CRYPTO_DEV_SP
+   default m
+   help
+ Provides the interface to use the AMD Secure Processor. The
+ AMD Secure Processor support the Platform Security Processor (PSP)
+ and Cryptographic Coprocessor (CCP). If you choose 'M' here, this
+ module will be called ccp.
+
+if CRYPTO_DEV_SP_DD
+config CRYPTO_DEV_CCP
+   bool "Cryptographic Coprocessor interface"
+   default y
+   select HW_RANDOM
+   select DMA_ENGINE
+   select DMADEVICES
+   select CRYPTO_SHA1
+   select CRYPTO_SHA256
+   help
+ Provides the interface to use the AMD Cryptographic Coprocessor
+ which can be used to offload encryption operations such as SHA,
+ AES and more.
+endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 59493fd..ea42888 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,9 +1,9 @@
-obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o \
+obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
+ccp-objs  := sp-dev.o ccp-platform.o
+ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-ops.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
-   ccp-platform.o \
ccp-dmaengine.o \
ccp-debugfs.o
 ccp-$(CONFIG_PCI) += ccp-pci.o
diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 1cae5a3..57179034 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -359,8 +359,7 @@ static void ccp_irq_bh(unsigned long data)
 
 static irqreturn_t ccp_irq_handler(int irq, 

[PATCH] crypto: virtio - Refactor virtio_crypto driver for new virtio crypto services

2017-06-23 Thread Xin Zeng
In the current virtio crypto device driver, some common data structures and
implementations that should be usable by other virtio crypto algorithms
(e.g. asymmetric crypto algorithms) carry symmetric-crypto-specific
implementations.
This patch refactors these pieces of code so that they can be reused by
other virtio crypto algorithms.

Acked-by: Gonglei 
Signed-off-by: Xin Zeng 
---
 drivers/crypto/virtio/virtio_crypto_algs.c   | 109 +--
 drivers/crypto/virtio/virtio_crypto_common.h |  22 +-
 drivers/crypto/virtio/virtio_crypto_core.c   |  37 ++---
 3 files changed, 98 insertions(+), 70 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c 
b/drivers/crypto/virtio/virtio_crypto_algs.c
index 49defda..5035b0d 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -27,12 +27,68 @@
 #include 
 #include "virtio_crypto_common.h"
 
+
+struct virtio_crypto_ablkcipher_ctx {
+   struct virtio_crypto *vcrypto;
+   struct crypto_tfm *tfm;
+
+   struct virtio_crypto_sym_session_info enc_sess_info;
+   struct virtio_crypto_sym_session_info dec_sess_info;
+};
+
+struct virtio_crypto_sym_request {
+   struct virtio_crypto_request base;
+
+   /* Cipher or aead */
+   uint32_t type;
+   struct virtio_crypto_ablkcipher_ctx *ablkcipher_ctx;
+   struct ablkcipher_request *ablkcipher_req;
+   uint8_t *iv;
+   /* Encryption? */
+   bool encrypt;
+};
+
 /*
  * The algs_lock protects the below global virtio_crypto_active_devs
  * and crypto algorithms registion.
  */
 static DEFINE_MUTEX(algs_lock);
 static unsigned int virtio_crypto_active_devs;
+static void virtio_crypto_ablkcipher_finalize_req(
+   struct virtio_crypto_sym_request *vc_sym_req,
+   struct ablkcipher_request *req,
+   int err);
+
+static void virtio_crypto_dataq_sym_callback
+   (struct virtio_crypto_request *vc_req, int len)
+{
+   struct virtio_crypto_sym_request *vc_sym_req =
+   container_of(vc_req, struct virtio_crypto_sym_request, base);
+   struct ablkcipher_request *ablk_req;
+   int error;
+
+   /* Finish the encrypt or decrypt process */
+   if (vc_sym_req->type == VIRTIO_CRYPTO_SYM_OP_CIPHER) {
+   switch (vc_req->status) {
+   case VIRTIO_CRYPTO_OK:
+   error = 0;
+   break;
+   case VIRTIO_CRYPTO_INVSESS:
+   case VIRTIO_CRYPTO_ERR:
+   error = -EINVAL;
+   break;
+   case VIRTIO_CRYPTO_BADMSG:
+   error = -EBADMSG;
+   break;
+   default:
+   error = -EIO;
+   break;
+   }
+   ablk_req = vc_sym_req->ablkcipher_req;
+   virtio_crypto_ablkcipher_finalize_req(vc_sym_req,
+   ablk_req, error);
+   }
+}
 
 static u64 virtio_crypto_alg_sg_nents_length(struct scatterlist *sg)
 {
@@ -286,13 +342,14 @@ static int virtio_crypto_ablkcipher_setkey(struct 
crypto_ablkcipher *tfm,
 }
 
 static int
-__virtio_crypto_ablkcipher_do_req(struct virtio_crypto_request *vc_req,
+__virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
struct ablkcipher_request *req,
struct data_queue *data_vq)
 {
struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+   struct virtio_crypto_ablkcipher_ctx *ctx = vc_sym_req->ablkcipher_ctx;
+   struct virtio_crypto_request *vc_req = &vc_sym_req->base;
unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
-   struct virtio_crypto_ablkcipher_ctx *ctx = vc_req->ablkcipher_ctx;
struct virtio_crypto *vcrypto = ctx->vcrypto;
struct virtio_crypto_op_data_req *req_data;
int src_nents, dst_nents;
@@ -326,9 +383,9 @@ __virtio_crypto_ablkcipher_do_req(struct 
virtio_crypto_request *vc_req,
}
 
vc_req->req_data = req_data;
-   vc_req->type = VIRTIO_CRYPTO_SYM_OP_CIPHER;
+   vc_sym_req->type = VIRTIO_CRYPTO_SYM_OP_CIPHER;
/* Head of operation */
-   if (vc_req->encrypt) {
+   if (vc_sym_req->encrypt) {
req_data->header.session_id =
cpu_to_le64(ctx->enc_sess_info.session_id);
req_data->header.opcode =
@@ -383,7 +440,7 @@ __virtio_crypto_ablkcipher_do_req(struct 
virtio_crypto_request *vc_req,
memcpy(iv, req->info, ivsize);
sg_init_one(&iv_sg, iv, ivsize);
sgs[num_out++] = &iv_sg;
-   vc_req->iv = iv;
+   vc_sym_req->iv = iv;
 
/* Source data */
for (i = 0; i < src_nents; i++)
@@ -421,15 +478,18 @@ static int virtio_crypto_ablkcipher_encrypt(struct 
ablkcipher_request *req)
 {
struct crypto_ablkcipher *atfm = 

Re: [PATCH] Documentation/bindings: crypto: remove the dma-mask property

2017-06-23 Thread Arnd Bergmann
On Fri, Jun 23, 2017 at 4:52 PM, Antoine Tenart
 wrote:
> The dma-mask property is broken and was removed in the device trees
> having a safexcel-eip197 node and in the safexcel cryptographic
> driver. This patch removes the dma-mask property from the documentation
> as well.
>
> Signed-off-by: Antoine Tenart 

Acked-by: Arnd Bergmann 


[PATCH] Documentation/bindings: crypto: remove the dma-mask property

2017-06-23 Thread Antoine Tenart
The dma-mask property is broken and was removed in the device trees
having a safexcel-eip197 node and in the safexcel cryptographic
driver. This patch removes the dma-mask property from the documentation
as well.

Signed-off-by: Antoine Tenart 
---
 Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt | 2 --
 1 file changed, 2 deletions(-)

diff --git 
a/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt 
b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
index f69773f4252b..941bb6a6fb13 100644
--- a/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
+++ b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
@@ -8,7 +8,6 @@ Required properties:
 
 Optional properties:
 - clocks: Reference to the crypto engine clock.
-- dma-mask: The address mask limitation. Defaults to 64.
 
 Example:
 
@@ -24,6 +23,5 @@ Example:
interrupt-names = "mem", "ring0", "ring1", "ring2", "ring3",
  "eip";
clocks = <&cpm_syscon0 1 26>;
-   dma-mask = <0xff 0xffffffff>;
status = "disabled";
};
-- 
2.9.4



Re: [PATCH] crypto: inside-secure - do not parse the dma mask from dt

2017-06-23 Thread Arnd Bergmann
On Fri, Jun 23, 2017 at 4:05 PM, Antoine Tenart
 wrote:
> Remove the dma mask parsing from dt as this should not be encoded into
> the engine device tree node. Keep the fallback value for now, which
> should work for the boards already supported upstream.
>
> Signed-off-by: Antoine Tenart 

Acked-by: Arnd Bergmann 

> ---
>
> Hi Herbert,
>
> As pointed out by Arnd (in Cc), parsing the dma mask from the dt node of
> the engine is broken. This property will be removed from the device
> trees having an inside-secure safexcel engine node. While the
> inside-secure driver won't fail because of this (as it will fall back to a
> 64-bit mask), the code handling the dma-mask property is dead. This
> patch removes it.

Do we also need a patch to update the DT binding?

   Arnd


[PATCH] crypto: chcr: Avoid algo allocation in softirq.

2017-06-23 Thread Harsh Jain
This patch fixes calling "crypto_alloc_cipher" in bottom halves by
pre-allocating the AES cipher required to update the tweak value for XTS.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 23 +++
 drivers/crypto/chelsio/chcr_crypto.h |  1 +
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index aa4e5b8..508cbc7 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -899,26 +899,20 @@ static int chcr_update_tweak(struct ablkcipher_request 
*req, u8 *iv)
u8 *key;
unsigned int keylen;
 
-   cipher = crypto_alloc_cipher("aes-generic", 0, 0);
+   cipher = ablkctx->aes_generic;
memcpy(iv, req->info, AES_BLOCK_SIZE);
 
-   if (IS_ERR(cipher)) {
-   ret = -ENOMEM;
-   goto out;
-   }
keylen = ablkctx->enckey_len / 2;
key = ablkctx->key + keylen;
ret = crypto_cipher_setkey(cipher, key, keylen);
if (ret)
-   goto out1;
+   goto out;
 
crypto_cipher_encrypt_one(cipher, iv, iv);
for (i = 0; i < (reqctx->processed / AES_BLOCK_SIZE); i++)
gf128mul_x_ble((le128 *)iv, (le128 *)iv);
 
crypto_cipher_decrypt_one(cipher, iv, iv);
-out1:
-   crypto_free_cipher(cipher);
 out:
return ret;
 }
@@ -1262,6 +1256,17 @@ static int chcr_cra_init(struct crypto_tfm *tfm)
pr_err("failed to allocate fallback for %s\n", alg->cra_name);
return PTR_ERR(ablkctx->sw_cipher);
}
+
+   if (get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_XTS) {
+   /* To update tweak*/
+   ablkctx->aes_generic = crypto_alloc_cipher("aes-generic", 0, 0);
+   if (IS_ERR(ablkctx->aes_generic)) {
+   pr_err("failed to allocate aes cipher for tweak\n");
+   return PTR_ERR(ablkctx->aes_generic);
+   }
+   } else
+   ablkctx->aes_generic = NULL;
+
tfm->crt_ablkcipher.reqsize =  sizeof(struct chcr_blkcipher_req_ctx);
return chcr_device_init(crypto_tfm_ctx(tfm));
 }
@@ -1292,6 +1297,8 @@ static void chcr_cra_exit(struct crypto_tfm *tfm)
struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
 
crypto_free_skcipher(ablkctx->sw_cipher);
+   if (ablkctx->aes_generic)
+   crypto_free_cipher(ablkctx->aes_generic);
 }
 
 static int get_alg_config(struct algo_param *params,
diff --git a/drivers/crypto/chelsio/chcr_crypto.h 
b/drivers/crypto/chelsio/chcr_crypto.h
index a4f95b0..30af1ee 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -155,6 +155,7 @@
 
 struct ablk_ctx {
struct crypto_skcipher *sw_cipher;
+   struct crypto_cipher *aes_generic;
__be32 key_ctx_hdr;
unsigned int enckey_len;
unsigned char ciph_mode;
-- 
2.1.4



[PATCH] crypto: inside-secure - do not parse the dma mask from dt

2017-06-23 Thread Antoine Tenart
Remove the dma mask parsing from dt as this should not be encoded into
the engine device tree node. Keep the fallback value for now, which
should work for the boards already supported upstream.

Signed-off-by: Antoine Tenart 
---

Hi Herbert,

As pointed out by Arnd (in Cc), parsing the dma mask from the dt node of
the engine is broken. This property will be removed from the device
trees having an inside-secure safexcel engine node. While the
inside-secure driver won't fail because of this (as it will fall back to a
64-bit mask), the code handling the dma-mask property is dead. This
patch removes it.

Thanks!
Antoine

 drivers/crypto/inside-secure/safexcel.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/crypto/inside-secure/safexcel.c 
b/drivers/crypto/inside-secure/safexcel.c
index e7f87ac12685..1fabd4aee81b 100644
--- a/drivers/crypto/inside-secure/safexcel.c
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -773,7 +773,6 @@ static int safexcel_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct resource *res;
struct safexcel_crypto_priv *priv;
-   u64 dma_mask;
int i, ret;
 
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@@ -802,9 +801,7 @@ static int safexcel_probe(struct platform_device *pdev)
return -EPROBE_DEFER;
}
 
-   if (of_property_read_u64(dev->of_node, "dma-mask", &dma_mask))
-   dma_mask = DMA_BIT_MASK(64);
-   ret = dma_set_mask_and_coherent(dev, dma_mask);
+   ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (ret)
goto err_clk;
 
-- 
2.9.4



[RFC PATCH linux-next] crypto: cvm_encrypt() can be static

2017-06-23 Thread kbuild test robot

Signed-off-by: Fengguang Wu 
---
 cptvf_algs.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c 
b/drivers/crypto/cavium/cpt/cptvf_algs.c
index 443c362..4303674 100644
--- a/drivers/crypto/cavium/cpt/cptvf_algs.c
+++ b/drivers/crypto/cavium/cpt/cptvf_algs.c
@@ -222,12 +222,12 @@ static inline int cvm_enc_dec(struct ablkcipher_request 
*req, u32 enc)
return -EINPROGRESS;
 }
 
-int cvm_encrypt(struct ablkcipher_request *req)
+static int cvm_encrypt(struct ablkcipher_request *req)
 {
return cvm_enc_dec(req, true);
 }
 
-int cvm_decrypt(struct ablkcipher_request *req)
+static int cvm_decrypt(struct ablkcipher_request *req)
 {
return cvm_enc_dec(req, false);
 }


[linux-next:master 7715/9581] drivers/crypto/cavium/cpt/cptvf_algs.c:225:5: sparse: symbol 'cvm_encrypt' was not declared. Should it be static?

2017-06-23 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git 
master
head:   a73468728fd8f34ccbd7c60f0808024ae491f4d6
commit: e2eb769ed0bdc06cb523f475db411ce3a5f1c465 [7715/9581] crypto: cavium - 
Remove the individual encrypt/decrypt function for each algorithm
reproduce:
# apt-get install sparse
git checkout e2eb769ed0bdc06cb523f475db411ce3a5f1c465
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21: sparse: incorrect type in 
assignment (different base types)
   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21:expected unsigned long 
long [unsigned] [long] [long long] [usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21:got restricted __be64 
[usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25: sparse: incorrect type in 
assignment (different base types)
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25:expected unsigned long 
long [unsigned] [long] [long long] [usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25:got restricted __be64 
[usertype] 
>> drivers/crypto/cavium/cpt/cptvf_algs.c:225:5: sparse: symbol 'cvm_encrypt' 
>> was not declared. Should it be static?
   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21: sparse: incorrect type in 
assignment (different base types)
   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21:expected unsigned long 
long [unsigned] [long] [long long] [usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:135:21:got restricted __be64 
[usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25: sparse: incorrect type in 
assignment (different base types)
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25:expected unsigned long 
long [unsigned] [long] [long long] [usertype] 
   drivers/crypto/cavium/cpt/cptvf_algs.c:137:25:got restricted __be64 
[usertype] 
>> drivers/crypto/cavium/cpt/cptvf_algs.c:230:5: sparse: symbol 'cvm_decrypt' 
>> was not declared. Should it be static?
   drivers/crypto/cavium/cpt/cptvf_algs.c:235:5: sparse: symbol 
'cvm_xts_setkey' was not declared. Should it be static?
   drivers/crypto/cavium/cpt/cptvf_algs.c:321:5: sparse: symbol 
'cvm_enc_dec_init' was not declared. Should it be static?

Please review and possibly fold the followup patch.
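
For reference, the "restricted __be64" warnings above are the classic pattern
of storing a cpu_to_be64() result in a plain integer field; the usual fix is
to annotate the field so the types match. An illustrative fragment (not the
actual cptvf structures):

	struct cpt_dma_desc {		/* hypothetical layout, for illustration */
		__be64 dptr;		/* the device consumes this big-endian */
	};

	static void cpt_fill_desc(struct cpt_dma_desc *d, u64 addr)
	{
		/* __be64 = cpu_to_be64(u64): sparse-clean on both sides */
		d->dptr = cpu_to_be64(addr);
	}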

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation


[PATCH][crypto-next] crypto: cavium/nitrox - Change in firmware path.

2017-06-23 Thread Srikanth Jampala
Moved the firmware to "cavium" subdirectory as suggested by
Kyle McMartin.

Signed-off-by: Srikanth Jampala 
---
 drivers/crypto/cavium/nitrox/nitrox_main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c 
b/drivers/crypto/cavium/nitrox/nitrox_main.c
index ae44a46..9ccefb9 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_main.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_main.c
@@ -18,8 +18,9 @@
 #define SE_GROUP 0
 
 #define DRIVER_VERSION "1.0"
+#define FW_DIR "cavium/"
 /* SE microcode */
-#define SE_FW  "cnn55xx_se.fw"
+#define SE_FW  FW_DIR "cnn55xx_se.fw"
 
 static const char nitrox_driver_name[] = "CNN55XX";
 
-- 
2.9.4



Re: [PATCH 1/1] cavium: Add firmware for CNN55XX crypto driver.

2017-06-23 Thread srikanth jampala
Sure Kyle, I will work on this.

Thanks.

On Friday 23 June 2017 12:39 AM, Kyle McMartin wrote:
> On Fri, Jun 16, 2017 at 07:52:26PM +0530, Srikanth Jampala wrote:
>> This patchset adds the firmware for the CNN55XX crypto driver,
>> which supports symmetric crypto operations.
>>
>> The version of the firmware is v07.
>>
>> Signed-off-by: Srikanth Jampala 
>> ---
>>  WHENCE|   9 +
>>  cnn55xx_se.fw | Bin 0 -> 27698 bytes
> 
> any chance i could convince you to put this in a cavium/ subdirectory?
> 
> --kyle
> 


Re: [PATCH v10 1/2] crypto: skcipher AF_ALG - overhaul memory management

2017-06-23 Thread Stephan Müller
On Friday, 23 June 2017 at 08:10:48 CEST, Herbert Xu wrote:

Hi Herbert,

> On Wed, Jun 21, 2017 at 10:03:02PM +0200, Stephan Müller wrote:
> > +   /* convert iovecs of output buffers into RX SGL */
> > +   while (len < ctx->used && msg_data_left(msg)) {
> 
> How are we supposed to reach the wait path when ctx->used == 0?

Right. 

May I ask whether that wait is correct to begin with? The recvmsg is protected 
by a lock_sock. Thus, if the code is waiting, the lock is still held. So, how 
can data be inserted into the socket by sendmsg/sendpage while recvmsg is 
waiting? Don't we have a deadlock here?
> 
> > +   /*
> > +* This error covers -EIOCBQUEUED which implies that we can
> > +* only handle one AIO request. If the caller wants to have
> > +* multiple AIO requests in parallel, he must make multiple
> > +* separate AIO calls.
> > +*/
> > +   if (err < 0) {
> > +   if (err == -EIOCBQUEUED)
> > +   ret = err;
> > +   goto out;
> > 
> > }
> > 
> > +   if (!err)
> > +   goto out;
> 
> You can combine the two now as err <= 0.

Fixed, thank you.
> 
> Thanks,



Ciao
Stephan


[PATCH v1] crypto: brcm - Fix SHA3-512 algorithm failure

2017-06-23 Thread Raveendra Padasalagi
In the Broadcom SPU driver, due to a missing break statement
in spu2_hash_xlate() while mapping the SPU2 equivalent
SHA3-512 value, -EINVAL is chosen, leading to
failure of the SHA3-512 algorithm. This patch fixes that.

Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver")
Signed-off-by: Raveendra Padasalagi 
Reviewed-by: Ray Jui 
Reviewed-by: Scott Branden 
Cc: sta...@vger.kernel.org
---

Changes in v1:
 - Added Cc and fixes tag in the Signed-off area to send the patch
   to stable kernel

 drivers/crypto/bcm/spu2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/bcm/spu2.c b/drivers/crypto/bcm/spu2.c
index ef04c97..bf7ac62 100644
--- a/drivers/crypto/bcm/spu2.c
+++ b/drivers/crypto/bcm/spu2.c
@@ -302,6 +302,7 @@ static int spu2_hash_mode_xlate(enum hash_mode hash_mode,
break;
case HASH_ALG_SHA3_512:
*spu2_type = SPU2_HASH_TYPE_SHA3_512;
+   break;
case HASH_ALG_LAST:
default:
err = -EINVAL;
-- 
1.9.1



[PATCH v1] crypto: brcm - software fallback for cryptlen zero

2017-06-23 Thread Raveendra Padasalagi
Zero-length payload requests are not handled in the
Broadcom SPU2 engine, so this patch adds a conditional
check to fall back to the software implementation for AES-GCM
and AES-CCM algorithms.

Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver")
Signed-off-by: Raveendra Padasalagi 
Reviewed-by: Ray Jui 
Reviewed-by: Scott Branden 
Cc: sta...@vger.kernel.org
---

Changes in v1:
 - Added Cc tag in the Signed-off area to send the patch to stable kernel

 drivers/crypto/bcm/cipher.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
index cc0d5b9..6c80863 100644
--- a/drivers/crypto/bcm/cipher.c
+++ b/drivers/crypto/bcm/cipher.c
@@ -2625,7 +2625,7 @@ static int aead_need_fallback(struct aead_request *req)
 */
if (((ctx->cipher.mode == CIPHER_MODE_GCM) ||
 (ctx->cipher.mode == CIPHER_MODE_CCM)) &&
-   (req->assoclen == 0)) {
+   ((req->assoclen == 0) || (req->cryptlen == 0))) {
if ((rctx->is_encrypt && (req->cryptlen == 0)) ||
(!rctx->is_encrypt && (req->cryptlen == ctx->digestsize))) {
flow_log("AES GCM/CCM needs fallback for 0 len req\n");
-- 
1.9.1



Re: [bug] sha1-avx2 and read beyond

2017-06-23 Thread Herbert Xu
On Fri, Jun 23, 2017 at 04:48:51AM -0400, Jan Stancek wrote:
>
> So I take it my workaround patch [1] is not acceptable in
> short-term as well?
> 
> [1] http://marc.info/?l=linux-crypto-vger&m=149373371023377

As we don't have a proper fix we may not be aware of the complete
scope of the problem (e.g., the overrun may go beyond 3 blocks).
As this is code that is exposed to remote entities, it would be
safest to disable it until we get a proper fix.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: dm-crypt IV generation (summary)

2017-06-23 Thread Herbert Xu
On Thu, May 18, 2017 at 01:40:38PM +0200, Ondrej Mosnacek wrote:
>
> > Actually I think this one can probably be easily handled in the crypto
> > layer.  All we need is to add a multikey template that sits on top
> > of an underlying IV generator.  The multikey instance can then accept
> > a key length that is a multiple of the underlying key length.
> 
> I thought of that, too, but unfortunately with TCW the key structure is:
> 
> | KEY 1 | KEY 2 | ... | KEY n | IV SEED (size = IV size) | WHITENING (size = 16 bytes) |
> 
> So part of the key needs to be processed before it is split into multiple 
> keys.
> 
> Also with the LMK mode, there is a similar optional key appendix,
> which is of the same size as the other subkeys.

The format of the key isn't an issue, because we're writing this
from scratch.  We can change the format in any way we want; e.g.,
we could include the value n in the key stream if we wanted.

Yes dm-crypt would need to massage the key before passing it over
to the crypto layer, but that should be pretty easy.
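
A rough sketch of the "massaging" mentioned here, for the TCW layout Ondrej
quotes (| KEY 1 .. KEY n | IV SEED | WHITENING |): dm-crypt would peel the IV
seed and whitening off the tail and hand only the concatenated subkeys to a
multikey template. All names below are hypothetical:

	static int tcw_split_key(const u8 *key, unsigned int keylen,
				 unsigned int ivsize,
				 u8 *iv_seed, u8 *whitening,
				 const u8 **subkeys, unsigned int *subkeys_len)
	{
		unsigned int tail = ivsize + 16;	/* IV seed + 16-byte whitening */

		if (keylen <= tail)
			return -EINVAL;

		*subkeys = key;				/* n concatenated cipher keys */
		*subkeys_len = keylen - tail;
		memcpy(iv_seed, key + *subkeys_len, ivsize);
		memcpy(whitening, key + *subkeys_len + ivsize, 16);
		return 0;
	}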

> My point is that it doesn't make much sense to have a crypto API alg
> that calls get_random_bytes() as part of its implementation. IMHO,
> that might tempt HW drivers to replace it with some crappy
> alternatives, which wouldn't be good... Also, how would we test such
> alg with testmgr?

You are right.  There is no way we can judge the quality of a
hardware IV generator, just as there is no way to judge the quality
of hardware RNG without looking at its internal structure.

But that doesn't mean that we shouldn't support them if they exist.

In any case, this scenario already exists today with IPsec where
we have multiple hardware implementations that generate the IV.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [bug] sha1-avx2 and read beyond

2017-06-23 Thread Jan Stancek


- Original Message -
> On Wed, May 24, 2017 at 08:46:57AM -0400, Jan Stancek wrote:
> > 
> > 
> > - Original Message -
> > > Hi,
> > > 
> > > I'm seeing rare crashes during NFS cthon with krb5 auth. After
> > > some digging I arrived at potential problem with sha1-avx2.
> > 
> > Adding more sha1_avx2 experts to CC.
> > 
> > > 
> > > Problem appears to be that sha1_transform_avx2() reads beyond
> > > number of blocks you pass, if it is an odd number. It appears
> > > to try read one block more. This creates a problem if it falls
> > > beyond a page and there's nothing there.
> > 
> > As noted in my reply, worst case appears to be read ahead
> > of up to 3 SHA1 blocks beyond end of data:
> >   http://marc.info/?l=linux-crypto-vger&m=149373371023377
> > 
> >  +--+-+-+-+
> >  | 2*SHA1_BLOCK_SIZE  | 2*SHA1_BLOCK_SIZE |
> >  +--+-+-+-+
> > ^ page boundary
> > ^ data end
> > 
> > It is still reproducible with 4.12-rc2.
> 
> Can someone from Intel please look into this? Otherwise we'll have
> to disable sha-avx2.

So I take it my workaround patch [1] is not acceptable in
short-term as well?

[1] http://marc.info/?l=linux-crypto-vger&m=149373371023377

Regards,
Jan


Re: [bug] sha1-avx2 and read beyond

2017-06-23 Thread Herbert Xu
On Wed, May 24, 2017 at 08:46:57AM -0400, Jan Stancek wrote:
> 
> 
> - Original Message -
> > Hi,
> > 
> > I'm seeing rare crashes during NFS cthon with krb5 auth. After
> > some digging I arrived at potential problem with sha1-avx2.
> 
> Adding more sha1_avx2 experts to CC.
> 
> > 
> > Problem appears to be that sha1_transform_avx2() reads beyond
> > number of blocks you pass, if it is an odd number. It appears
> > to try read one block more. This creates a problem if it falls
> > beyond a page and there's nothing there.
> 
> As noted in my reply, worst case appears to be read ahead
> of up to 3 SHA1 blocks beyond end of data:
>   http://marc.info/?l=linux-crypto-vger=149373371023377
> 
>  +--+-+-+-+
>  | 2*SHA1_BLOCK_SIZE  | 2*SHA1_BLOCK_SIZE |
>  +--+-+-+-+
> ^ page boundary
> ^ data end
> 
> It is still reproducible with 4.12-rc2.

Can someone from Intel please look into this? Otherwise we'll have
to disable sha-avx2.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: encrypt_done called from interrupt context on rk3288 crypto driver

2017-06-23 Thread Herbert Xu
On Thu, May 25, 2017 at 10:38:13PM +0300, Emil Karlson wrote:
> Greetings
> 
> It seems to me that the rk3288 crypto driver calls encrypt_done from
> interrupt context, which causes the runtime tests to fail.

Zain, can you please take a look at this?

It is illegal to call the completion function from hardirq context.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
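For context, the usual way to satisfy that rule is to have the hardirq handler
only acknowledge the hardware and defer the completion call to a tasklet, so the
callback runs in softirq context. The structure and field names below are made
up for illustration and are not the actual rk3288 driver code.

static irqreturn_t rk_crypto_irq(int irq, void *dev_id)
{
	struct rk_crypto_dev *dev = dev_id;	/* hypothetical driver context */

	/* ... read and clear the interrupt status in hardware ... */
	tasklet_schedule(&dev->done_task);
	return IRQ_HANDLED;
}

static void rk_crypto_done_task(unsigned long data)
{
	struct rk_crypto_dev *dev = (struct rk_crypto_dev *)data;

	/* softirq context: now it is legal to complete the request */
	dev->async_req->complete(dev->async_req, dev->err);
}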


Re: [PATCH] crypto: brcm - software fallback for cryptlen zero

2017-06-23 Thread Anup Patel
On Fri, Jun 23, 2017 at 1:52 PM, Raveendra Padasalagi
 wrote:
> Zero-length payload requests are not handled in the
> Broadcom SPU2 engine, so this patch adds a conditional
> check to fall back to the software implementation for the
> AES-GCM and AES-CCM algorithms.
>
> Signed-off-by: Raveendra Padasalagi 
> Reviewed-by: Ray Jui 
> Reviewed-by: Scott Branden 
> ---
>  drivers/crypto/bcm/cipher.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
> index cc0d5b9..6c80863 100644
> --- a/drivers/crypto/bcm/cipher.c
> +++ b/drivers/crypto/bcm/cipher.c
> @@ -2625,7 +2625,7 @@ static int aead_need_fallback(struct aead_request *req)
>  */
> if (((ctx->cipher.mode == CIPHER_MODE_GCM) ||
>  (ctx->cipher.mode == CIPHER_MODE_CCM)) &&
> -   (req->assoclen == 0)) {
> +   ((req->assoclen == 0) || (req->cryptlen == 0))) {
> if ((rctx->is_encrypt && (req->cryptlen == 0)) ||
> (!rctx->is_encrypt && (req->cryptlen == ctx->digestsize))) {
> flow_log("AES GCM/CCM needs fallback for 0 len req\n");
> --
> 1.9.1
>

This should go in linux-stable too.

Please CC Linux stable and include "Fixes:".

Regards,
Anup
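For reference, the requested tags would sit in the commit message roughly as
below; the hash is a placeholder for whichever commit introduced
aead_need_fallback(), not a real reference.

Fixes: 123456789abc ("crypto: brcm - Add Broadcom SPU driver")
Cc: stable@vger.kernel.org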


Re: [PATCH] crypto: brcm - Fix SHA3-512 algorithm failure

2017-06-23 Thread Anup Patel
On Fri, Jun 23, 2017 at 12:15 PM, Raveendra Padasalagi
 wrote:
> In the Broadcom SPU driver, a missing break statement
> in spu2_hash_xlate() while mapping the SPU2 equivalent of the
> SHA3-512 value causes -EINVAL to be chosen, leading to
> failure of the SHA3-512 algorithm. This patch fixes that.
>
> Signed-off-by: Raveendra Padasalagi 
> Reviewed-by: Ray Jui 
> Reviewed-by: Scott Branden 
> ---
>  drivers/crypto/bcm/spu2.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/crypto/bcm/spu2.c b/drivers/crypto/bcm/spu2.c
> index ef04c97..bf7ac62 100644
> --- a/drivers/crypto/bcm/spu2.c
> +++ b/drivers/crypto/bcm/spu2.c
> @@ -302,6 +302,7 @@ static int spu2_hash_mode_xlate(enum hash_mode hash_mode,
> break;
> case HASH_ALG_SHA3_512:
> *spu2_type = SPU2_HASH_TYPE_SHA3_512;
> +   break;
> case HASH_ALG_LAST:
> default:
> err = -EINVAL;
> --
> 1.9.1
>

This should go in linux-stable too.

Please CC Linux stable and include "Fixes:".

Regards,
Anup


Re: [Patch V5 1/7] crypto: Multi-buffer encryption infrastructure support

2017-06-23 Thread Herbert Xu
On Thu, Jun 08, 2017 at 12:52:54PM -0700, Megha Dey wrote:
>
> I will move this code to mcryptd.c.
> 
> About the naming scheme, could you give me an example where the internal
> and external algorithm have the same name? I tried searching but did not
> find any.
> 
> When the outer and inner algorithm have the same name, I see a crash
> when testing using tcrypt. This is because the wrong algorithm (with a
> higher priority) is being picked up in __crypto_alg_lookup.
> 
> Inner alg:
> Currently:
> alg name:__cbc(aes), driver name:__cbc-aes-aesni-mb
> 
> expected:
> alg name:cbc(aes), driver name: cbc-aes-aesni-mb
> 
> Outer alg:
> Currently:
> alg name:cbc(aes), driver name:cbc-aes-aesni-mb
> 
> expected:
> alg name:cbc(aes), driver name:mcryptd-cbc-aes-aesni-mb

This all looks right.  So I'm not sure why you're getting the crash.
We're relying on the INTERNAL flag to ensure the internal algorithm
is not picked up except when we strictly ask for it.
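For anyone following the INTERNAL-flag discussion, a minimal sketch of how the
lookup is meant to behave (the helper below is illustrative, not code from the
series): the inner aesni-mb algorithm is registered with CRYPTO_ALG_INTERNAL,
and only an allocation that passes CRYPTO_ALG_INTERNAL in both type and mask,
as mcryptd does, can pick it up; a plain allocation masks it out and gets the
wrapped instance instead.

#include <crypto/skcipher.h>

/* illustrative helper: same cra_name, different visibility */
static struct crypto_skcipher *get_cbc_aes(bool want_internal)
{
	u32 flag = want_internal ? CRYPTO_ALG_INTERNAL : 0;

	/*
	 * want_internal == true : only algorithms flagged CRYPTO_ALG_INTERNAL
	 *                         (e.g. cbc-aes-aesni-mb) can match.
	 * want_internal == false: internal algorithms are filtered out, so the
	 *                         mcryptd-wrapped instance wins on priority.
	 */
	return crypto_alloc_skcipher("cbc(aes)", flag, flag);
}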

In fact I see something fishy in your testmgr code (the last patch
in the series, I think).  It's setting the INTERNAL bit when
allocating tfms, which does not look right.

The only one that should be setting this is mcryptd.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: brcm - software fallback for cryptlen zero

2017-06-23 Thread Raveendra Padasalagi
Zero-length payload requests are not handled in the
Broadcom SPU2 engine, so this patch adds a conditional
check to fall back to the software implementation for the
AES-GCM and AES-CCM algorithms.

Signed-off-by: Raveendra Padasalagi 
Reviewed-by: Ray Jui 
Reviewed-by: Scott Branden 
---
 drivers/crypto/bcm/cipher.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
index cc0d5b9..6c80863 100644
--- a/drivers/crypto/bcm/cipher.c
+++ b/drivers/crypto/bcm/cipher.c
@@ -2625,7 +2625,7 @@ static int aead_need_fallback(struct aead_request *req)
 */
if (((ctx->cipher.mode == CIPHER_MODE_GCM) ||
 (ctx->cipher.mode == CIPHER_MODE_CCM)) &&
-   (req->assoclen == 0)) {
+   ((req->assoclen == 0) || (req->cryptlen == 0))) {
if ((rctx->is_encrypt && (req->cryptlen == 0)) ||
(!rctx->is_encrypt && (req->cryptlen == ctx->digestsize))) {
flow_log("AES GCM/CCM needs fallback for 0 len req\n");
-- 
1.9.1



Re: [PATCH v6 0/2] IV Generation algorithms for dm-crypt

2017-06-23 Thread Herbert Xu
Binoy Jayan  wrote:
> ===
> dm-crypt optimization for larger block sizes
> ===
> 
> Currently, the iv generation algorithms are implemented in dm-crypt.c. The goal
> is to move these algorithms from the dm layer to the kernel crypto layer by
> implementing them as template ciphers so they can be used in relation with
> algorithms like aes, and with multiple modes like cbc, ecb etc. As part of this
> patchset, the iv-generation code is moved from the dm layer to the crypto layer,
> and the dm layer is adapted to send a whole 'bio' (as defined in the block layer)
> at a time. Each bio contains the in-memory representation of physically
> contiguous disk blocks. Since the bio itself may not be contiguous in main
> memory, the dm layer sets up a chained scatterlist of these blocks split into
> physically contiguous segments in memory so that DMA can be performed.

There is currently a patch-set for fscrypt to add essiv support.  It
would be interesting to know whether your implementation of essiv
can also be used in that patchset.  That would confirm that we're on
the right track.
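For readers who have not seen it, ESSIV itself is easy to state; the two lines
below are the generic definition of the scheme (the notation is ours, not taken
from either patchset):

    salt       = Hash(volume_key)              /* e.g. sha256 */
    IV(sector) = E_salt(le64(sector_number))   /* sector number, zero-padded to the block size */

so the IV for a given sector cannot be predicted without knowing the key, which
is exactly what both dm-crypt and fscrypt need from the generator.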

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 3/7] staging: ccree: add support for older HW revisions

2017-06-23 Thread kbuild test robot
Hi Gilad,

[auto build test WARNING on staging/staging-testing]
[also build test WARNING on next-20170622]
[cannot apply to v4.12-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-bug-fixes-and-TODO-items-for-4-13/20170623-134445
config: sparc64-allmodconfig (attached as .config)
compiler: sparc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=sparc64 

All warnings (new ones prefixed by >>):

   In file included from drivers/staging/ccree/ssi_sram_mgr.c:17:0:
   drivers/staging/ccree/ssi_sram_mgr.c: In function 'ssi_sram_mgr_init':
   include/linux/kern_levels.h:4:18: warning: format '%x' expects argument of 
type 'unsigned int', but argument 3 has type 'dma_addr_t {aka long long 
unsigned int}' [-Wformat=]
#define KERN_SOH "\001"  /* ASCII Start Of Header */
 ^
   drivers/staging/ccree/ssi_driver.h:97:9: note: in definition of macro 
'SSI_LOG'
 printk(level "ccree::%s: " format, __func__, ##__VA_ARGS__)
^
   include/linux/kern_levels.h:10:18: note: in expansion of macro 'KERN_SOH'
#define KERN_ERR KERN_SOH "3" /* error conditions */
 ^~~~
>> drivers/staging/ccree/ssi_driver.h:98:42: note: in expansion of macro 
>> 'KERN_ERR'
#define SSI_LOG_ERR(format, ...) SSI_LOG(KERN_ERR, format, ##__VA_ARGS__)
 ^~~~
>> drivers/staging/ccree/ssi_sram_mgr.c:76:4: note: in expansion of macro 
>> 'SSI_LOG_ERR'
   SSI_LOG_ERR("Invalid SRAM offset 0x%x\n", start);
   ^~~

vim +/KERN_ERR +98 drivers/staging/ccree/ssi_driver.h

abefd674 Gilad Ben-Yossef 2017-04-23   91  /* AXI_ID is not actually the AXI ID 
of the transaction but the value of AXI_ID
250a00a7 Derek Robson 2017-05-30   92   * field in the HW descriptor. The 
DMA engine +8 that value.
250a00a7 Derek Robson 2017-05-30   93   */
abefd674 Gilad Ben-Yossef 2017-04-23   94  
abefd674 Gilad Ben-Yossef 2017-04-23   95  /* Logging macros */
abefd674 Gilad Ben-Yossef 2017-04-23   96  #define SSI_LOG(level, format, ...) \
891144d7 Gilad Ben-Yossef 2017-06-22  @97   printk(level "ccree::%s: " 
format, __func__, ##__VA_ARGS__)
abefd674 Gilad Ben-Yossef 2017-04-23  @98  #define SSI_LOG_ERR(format, ...) 
SSI_LOG(KERN_ERR, format, ##__VA_ARGS__)
abefd674 Gilad Ben-Yossef 2017-04-23   99  #define SSI_LOG_WARNING(format, ...) 
SSI_LOG(KERN_WARNING, format, ##__VA_ARGS__)
abefd674 Gilad Ben-Yossef 2017-04-23  100  #define SSI_LOG_NOTICE(format, ...) 
SSI_LOG(KERN_NOTICE, format, ##__VA_ARGS__)
abefd674 Gilad Ben-Yossef 2017-04-23  101  #define SSI_LOG_INFO(format, ...) 
SSI_LOG(KERN_INFO, format, ##__VA_ARGS__)

:: The code at line 98 was first introduced by commit
:: abefd6741d540fc624e73a2a3bdef2397bcbd064 staging: ccree: introduce 
CryptoCell HW driver

:: TO: Gilad Ben-Yossef <gi...@benyossef.com>
:: CC: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation
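The usual fix for this class of warning is to print dma_addr_t values with the
%pad specifier (which takes a pointer to the value) rather than %x; applied to
the flagged line it would look roughly like this:

SSI_LOG_ERR("Invalid SRAM offset %pad\n", &start);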




Re: [RFC PATCH] gcm - fix setkey cache coherence issues

2017-06-23 Thread Herbert Xu
On Fri, Jun 23, 2017 at 07:33:20AM +, Radu Solea wrote:
>
> Normally I would agree with you if this were a weird requirement coming
> from one piece of hardware or one driver. In this case I think it's different. This is
> not a limitation coming from one driver or one particular hardware
> variety. It applies to all platforms that lack hw cache coherence and
> have a large enough cacheline.
>  
> A couple of lines below the allocation, the hash is linked into a
> scatterlist, a data structure with remarkably high chances of ending up
> in a DMA endpoint, yet we choose to ignore all other DMA requirements?

What I'm saying is that you cannot rely on crypto API users to do
this for you.  Sure we can fix this one spot in gcm.c.  But any
other user of caam anywhere in the kernel can do exactly the same
thing.

You cannot expect them to know to allocate IVs at cacheline boundaries.
So if you have this requirement (which the generic C version certainly
does not), then you'll need to deal with it in the driver.

Of course if every DMA driver needed to do the same thing, then it's
something the crypto API should take care of, e.g., like we do for
alignmask.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
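In concrete terms, the two driver-side options are either to advertise a
cra_alignmask so the API realigns inputs before they reach the driver, or to
bounce small items such as the IV into kmalloc'ed memory before DMA-mapping
them, since kmalloc honours ARCH_DMA_MINALIGN. The helper below is only a
sketch of the second option and is not caam code.

/* illustrative only: copy a caller-supplied IV into DMA-safe memory */
static void *caam_dma_safe_copy(const void *src, size_t len, gfp_t flags)
{
	return kmemdup(src, len, flags);	/* kfree() after the DMA completes */
}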


Re: [RFC PATCH] gcm - fix setkey cache coherence issues

2017-06-23 Thread Radu Solea
On Vi, 2017-06-23 at 14:31 +0800, Herbert Xu wrote:
> 
> The crypto API cannot rely on users providing aligned buffers.  So
> if your driver has an alignment requirement, it either has to use
> the existing crypto API alignmask setting which can cope with some
> unaligned inputs, e.g., the IV if you use the skcipher walk
> mechanism,
> or you must copy unaligned data yourself before performing DMA on
> them.
> 
> Cheers,

Normally I would agree with you if this were a weird requirement coming
from one piece of hardware or one driver. In this case I think it's different. This is
not a limitation coming from one driver or one particular hardware
variety. It applies to all platforms that lack hw cache coherence and
have a large enough cacheline.
 
A couple of lines below the allocation, the hash is linked into a
scatterlist, a data structure with remarkably high chances of ending up
in a DMA endpoint, yet we choose to ignore all other DMA requirements?

Cheers,
Radu.

Re: [PATCH 7/7] crypto: caam: cleanup CONFIG_64BIT ifdefs when using io{read|write}64

2017-06-23 Thread Horia Geantă
On 6/22/2017 7:49 PM, Logan Gunthorpe wrote:
> Now that ioread64 and iowrite64 are always available we don't
> need the ugly ifdefs to change their implementation when they
> are not.
> 
Thanks Logan.

Note however this is not equivalent - it changes the behaviour, since
the CAAM engine on i.MX6S/SL/D/Q platforms is broken in terms of 64-bit
register endianness - see the CONFIG_CRYPTO_DEV_FSL_CAAM_IMX usage in the
code you are removing.

[Yes, current code has its problems, as it does not differentiate b/w
i.MX platforms with and without the (unofficial) erratum, but this
should be fixed separately.]

Below is the change that would keep current logic - still forcing i.MX
to write CAAM 64-bit registers in BE even if the engine is LE (yes, diff
is doing a poor job).

Horia

diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
index 84d2f838a063..b893ebb24e65 100644
--- a/drivers/crypto/caam/regs.h
+++ b/drivers/crypto/caam/regs.h
@@ -134,50 +134,25 @@ static inline void clrsetbits_32(void __iomem
*reg, u32 clear, u32 set)
  *base + 0x : least-significant 32 bits
  *base + 0x0004 : most-significant 32 bits
  */
-#ifdef CONFIG_64BIT
 static inline void wr_reg64(void __iomem *reg, u64 data)
 {
+#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
if (caam_little_end)
iowrite64(data, reg);
else
-   iowrite64be(data, reg);
-}
-
-static inline u64 rd_reg64(void __iomem *reg)
-{
-   if (caam_little_end)
-   return ioread64(reg);
-   else
-   return ioread64be(reg);
-}
-
-#else /* CONFIG_64BIT */
-static inline void wr_reg64(void __iomem *reg, u64 data)
-{
-#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
-   if (caam_little_end) {
-   wr_reg32((u32 __iomem *)(reg) + 1, data >> 32);
-   wr_reg32((u32 __iomem *)(reg), data);
-   } else
 #endif
-   {
-   wr_reg32((u32 __iomem *)(reg), data >> 32);
-   wr_reg32((u32 __iomem *)(reg) + 1, data);
-   }
+   iowrite64be(data, reg);
 }

 static inline u64 rd_reg64(void __iomem *reg)
 {
 #ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
if (caam_little_end)
-   return ((u64)rd_reg32((u32 __iomem *)(reg) + 1) << 32 |
-   (u64)rd_reg32((u32 __iomem *)(reg)));
+   return ioread64(reg);
else
 #endif
-   return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
-   (u64)rd_reg32((u32 __iomem *)(reg) + 1));
+   return ioread64be(reg);
 }
-#endif /* CONFIG_64BIT  */

 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 #ifdef CONFIG_SOC_IMX7D


> Signed-off-by: Logan Gunthorpe 
> Cc: "Horia Geantă" 
> Cc: Dan Douglass 
> Cc: Herbert Xu 
> Cc: "David S. Miller" 
> ---
>  drivers/crypto/caam/regs.h | 29 -
>  1 file changed, 29 deletions(-)
> 
> diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
> index 84d2f838a063..26fc19dd0c39 100644
> --- a/drivers/crypto/caam/regs.h
> +++ b/drivers/crypto/caam/regs.h
> @@ -134,7 +134,6 @@ static inline void clrsetbits_32(void __iomem *reg, u32 
> clear, u32 set)
>   *base + 0x : least-significant 32 bits
>   *base + 0x0004 : most-significant 32 bits
>   */
> -#ifdef CONFIG_64BIT
>  static inline void wr_reg64(void __iomem *reg, u64 data)
>  {
>   if (caam_little_end)
> @@ -151,34 +150,6 @@ static inline u64 rd_reg64(void __iomem *reg)
>   return ioread64be(reg);
>  }
>  
> -#else /* CONFIG_64BIT */
> -static inline void wr_reg64(void __iomem *reg, u64 data)
> -{
> -#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
> - if (caam_little_end) {
> - wr_reg32((u32 __iomem *)(reg) + 1, data >> 32);
> - wr_reg32((u32 __iomem *)(reg), data);
> - } else
> -#endif
> - {
> - wr_reg32((u32 __iomem *)(reg), data >> 32);
> - wr_reg32((u32 __iomem *)(reg) + 1, data);
> - }
> -}
> -
> -static inline u64 rd_reg64(void __iomem *reg)
> -{
> -#ifndef CONFIG_CRYPTO_DEV_FSL_CAAM_IMX
> - if (caam_little_end)
> - return ((u64)rd_reg32((u32 __iomem *)(reg) + 1) << 32 |
> - (u64)rd_reg32((u32 __iomem *)(reg)));
> - else
> -#endif
> - return ((u64)rd_reg32((u32 __iomem *)(reg)) << 32 |
> - (u64)rd_reg32((u32 __iomem *)(reg) + 1));
> -}
> -#endif /* CONFIG_64BIT  */
> -
>  #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
>  #ifdef CONFIG_SOC_IMX7D
>  #define cpu_to_caam_dma(value) \
> 


Re: [PATCH v2 2/2] crypto: engine - Permit to enqueue skcipher request

2017-06-23 Thread Herbert Xu
On Mon, Jun 19, 2017 at 09:55:24AM +0200, Corentin Labbe wrote:
>
> Since there are two different users of "crypto engine + ablkcipher", it will
> not be easy to convert them in one series. (I could do it, but I simply could
> not test it for OMAP (lack of hw)).
> And any new user who wants to use crypto engine+skcipher (like me with the
> sun8i-ce driver) is simply stuck.

You're right.  We'll need to do this in a backwards-compatible way.  In fact
we already do something similar in skcipher.c itself.  Simply look at the
cra_type field, and if it matches blkcipher/ablkcipher/givcipher then it's
a legacy ablkcipher; otherwise it's an skcipher.
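A sketch of that check, modelled on what crypto_skcipher_init_tfm() does in
skcipher.c; this helper does not exist in crypto_engine and is only illustrative:

static bool crypto_tfm_is_legacy_ablkcipher(struct crypto_tfm *tfm)
{
	const struct crypto_type *type = tfm->__crt_alg->cra_type;

	return type == &crypto_blkcipher_type ||
	       type == &crypto_ablkcipher_type ||
	       type == &crypto_givcipher_type;
}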

Also the way crypto_engine looks at the request type in the data-path is
suboptimal.  This should really be built into the cra_type object.  For
example, we can have cra_type->engine->prepare_request which would just
do the right thing.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: brcm - Fix SHA3-512 algorithm failure

2017-06-23 Thread Raveendra Padasalagi
In the Broadcom SPU driver, a missing break statement
in spu2_hash_xlate() while mapping the SPU2 equivalent of the
SHA3-512 value causes -EINVAL to be chosen, leading to
failure of the SHA3-512 algorithm. This patch fixes that.

Signed-off-by: Raveendra Padasalagi 
Reviewed-by: Ray Jui 
Reviewed-by: Scott Branden 
---
 drivers/crypto/bcm/spu2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/bcm/spu2.c b/drivers/crypto/bcm/spu2.c
index ef04c97..bf7ac62 100644
--- a/drivers/crypto/bcm/spu2.c
+++ b/drivers/crypto/bcm/spu2.c
@@ -302,6 +302,7 @@ static int spu2_hash_mode_xlate(enum hash_mode hash_mode,
break;
case HASH_ALG_SHA3_512:
*spu2_type = SPU2_HASH_TYPE_SHA3_512;
+   break;
case HASH_ALG_LAST:
default:
err = -EINVAL;
-- 
1.9.1



Re: [RFC PATCH] gcm - fix setkey cache coherence issues

2017-06-23 Thread Herbert Xu
On Thu, Jun 22, 2017 at 01:56:40PM +, Radu Solea wrote:
> There are two ways of fixing this AFAIK: the first is adding
> cacheline_aligned so those fields don't fall into the same cacheline.
> The second is to kzalloc hash and iv separately. kmalloc should honor
> ARCH_DMA_MINALIGN which would make this issue go away. 

Thanks for the explanation.  I see the problem now.

The crypto API cannot rely on users providing aligned buffers.  So
if your driver has an alignment requirement, it either has to use
the existing crypto API alignmask setting which can cope with some
unaligned inputs, e.g., the IV if you use the skcipher walk mechanism,
or you must copy unaligned data yourself before performing DMA on them.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v10 1/2] crypto: skcipher AF_ALG - overhaul memory management

2017-06-23 Thread Herbert Xu
On Wed, Jun 21, 2017 at 10:03:02PM +0200, Stephan Müller wrote:
>
> + /* convert iovecs of output buffers into RX SGL */
> + while (len < ctx->used && msg_data_left(msg)) {

How are we supposed to reach the wait path when ctx->used == 0?

> + /*
> +  * This error covers -EIOCBQUEUED which implies that we can
> +  * only handle one AIO request. If the caller wants to have
> +  * multiple AIO requests in parallel, he must make multiple
> +  * separate AIO calls.
> +  */
> + if (err < 0) {
> + if (err == -EIOCBQUEUED)
> + ret = err;
> + goto out;
>   }
> + if (!err)
> + goto out;

You can combine the two now as err <= 0.
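i.e. the two branches above folded into a single test, roughly (sketch only, the
surrounding context is as in the quoted patch):

	if (err <= 0) {
		if (err == -EIOCBQUEUED)
			ret = err;
		goto out;
	}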

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt