Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Logan Gunthorpe


On 27/04/17 05:20 PM, Jason Gunthorpe wrote:
> It seems the most robust: test for iomem, and jump to a slow path
> copy, otherwise inline the kmap and memcpy
> 
> Every place doing memcpy from sgl will need that pattern to be
> correct.

Ok, sounds like a good place to start to me. I'll see what I can do for
a v3 of this set. Though, I probably won't send anything until after the
merge window.

>>> sg_miter will still fail when the sg contains __iomem, however I would
>>> expect that the sg_copy will work with iomem, by using the __iomem
>>> memcpy variant.
>>
>> Yes, that's true. Any sg_miters that ever see iomem will need to be
>> converted to support it. This isn't much different than the other
>> kmap(sg_page()) users I was converting that will also fail if they see
>> iomem. Though, I suspect an sg_miter user would be easier to convert to
>> iomem than a random kmap user.
> 
> How? sg_miter seems like the next nightmare down this path, what is
> sg_miter_next supposed to do when something hits an iomem sgl?

My proposal is roughly included in the draft I sent upthread. We add an
sg_miter flag indicating the iteratee supports iomem and, if miter finds
iomem (with the support flag set), it sets ioaddr, which is __iomem. The
iteratee then just needs to NULL-check addr and ioaddr and perform the
appropriate action.
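
Something along these lines (rough sketch only; the SG_MITER_SUPPORTS_IOMEM
flag and the ioaddr field are names I'm making up here, not final API, and
buf/sgl/nents are whatever the caller already has):

        struct sg_mapping_iter miter;

        sg_miter_start(&miter, sgl, nents,
                       SG_MITER_FROM_SG | SG_MITER_SUPPORTS_IOMEM);

        while (sg_miter_next(&miter)) {
                if (miter.addr)                 /* regular kernel mapping */
                        memcpy(buf, miter.addr, miter.length);
                else if (miter.ioaddr)          /* proposed __iomem case */
                        memcpy_fromio(buf, miter.ioaddr, miter.length);
                buf += miter.length;
        }

        sg_miter_stop(&miter);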

Logan



Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Jason Gunthorpe
On Thu, Apr 27, 2017 at 05:03:45PM -0600, Logan Gunthorpe wrote:
> 
> 
> On 27/04/17 04:11 PM, Jason Gunthorpe wrote:
> > On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote:
> > Well, that is in the current form, with more users it would make sense
> > to optimize for the single page case, eg by providing the existing
> > call, providing a faster single-page-only variant of the copy, perhaps
> > even one that is inlined.
> 
> Ok, does it make sense then to have an sg_copy_page_to_buffer (or some
> such... I'm having trouble thinking of a sane name that isn't too long).
> That just does k(un)map_atomic and memcpy? I could try that if it makes
> sense to people.

It seems the most robust: test for iomem, and jump to a slow path
copy, otherwise inline the kmap and memcpy

Every place doing memcpy from sgl will need that pattern to be
correct.
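
Ie, roughly this shape (sketch only; sg_is_iomem() and the iomem copy
helper are placeholders for whatever we end up calling them):

        if (unlikely(sg_is_iomem(sg))) {
                /* out-of-line slow path doing the __iomem-safe copy */
                sg_copy_iomem_to_buffer(sg, buf, len);
        } else {
                void *va = kmap_atomic(sg_page(sg));

                memcpy(buf, va + sg->offset, len);
                kunmap_atomic(va);
        }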

> > sg_miter will still fail when the sg contains __iomem, however I would
> > expect that the sg_copy will work with iomem, by using the __iomem
> > memcpy variant.
> 
> Yes, that's true. Any sg_miters that ever see iomem will need to be
> converted to support it. This isn't much different than the other
> kmap(sg_page()) users I was converting that will also fail if they see
> iomem. Though, I suspect an sg_miter user would be easier to convert to
> iomem than a random kmap user.

How? sg_miter seems like the next nightmare down this path, what is
sg_miter_next supposed to do when something hits an iomem sgl?

miter.addr is supposed to be a kernel pointer that must not be
__iomem..

Jason


Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Logan Gunthorpe


On 27/04/17 04:11 PM, Jason Gunthorpe wrote:
> On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote:
> Well, that is in the current form, with more users it would make sense
> to optimize for the single page case, eg by providing the existing
> call, providing a faster single-page-only variant of the copy, perhaps
> even one that is inlined.

Ok, does it make sense then to have an sg_copy_page_to_buffer (or some
such... I'm having trouble thinking of a sane name that isn't too long).
That just does k(un)map_atomic and memcpy? I could try that if it makes
sense to people.
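
Roughly this (name and exact form to be bikeshedded; no iomem handling yet,
and it assumes the entry doesn't cross a page boundary):

        static size_t sg_copy_page_to_buffer(struct scatterlist *sg,
                                             void *buf, size_t buflen)
        {
                size_t len = min_t(size_t, buflen, sg->length);
                void *va = kmap_atomic(sg_page(sg));

                memcpy(buf, va + sg->offset, len);
                kunmap_atomic(va);

                return len;
        }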

>> Switching the for_each_sg to sg_miter is probably the nicer solution as
>> it takes care of the mapping and the offset/length accounting for you
>> and will have similar performance.
> 
> sg_miter will still fail when the sg contains __iomem, however I would
> expect that the sg_copy will work with iomem, by using the __iomem
> memcpy variant.

Yes, that's true. Any sg_miters that ever see iomem will need to be
converted to support it. This isn't much different than the other
kmap(sg_page()) users I was converting that will also fail if they see
iomem. Though, I suspect an sg_miter user would be easier to convert to
iomem than a random kmap user.

Logan


Re: [PATCH 6/6] ima: Support appended signatures for appraisal

2017-04-27 Thread Mehmet Kayaalp

> On Apr 27, 2017, at 5:41 PM, Thiago Jung Bauermann 
>  wrote:
> 
> Am Mittwoch, 26. April 2017, 18:18:34 BRT schrieb Mehmet Kayaalp:
>>> On Apr 20, 2017, at 7:41 PM, Thiago Jung Bauermann
>>>  wrote:
>>> 
>>> This patch introduces the appended_imasig keyword to the IMA policy syntax
>>> to specify that a given hook should expect the file to have the IMA
>>> signature appended to it. Here is how it can be used in a rule:
>>> 
>>> appraise func=KEXEC_KERNEL_CHECK appraise_type=appended_imasig
>>> appraise func=KEXEC_KERNEL_CHECK appraise_type=appended_imasig|imasig
>>> 
>>> In the second form, IMA will accept either an appended signature or a
>>> signature stored in the extended attribute. In that case, it will first
>>> check whether there is an appended signature, and if not it will read it
>>> from the extended attribute.
>>> 
>>> The format of the appended signature is the same used for signed kernel
>>> modules. This means that the file can be signed with the scripts/sign-file
>> 
>>> tool, with a command line such as this:
>> I would suggest naming the appraise_type as modsig (or some variant) to
>> clarify that the format is defined by how module signatures are handled.
>> Maybe we'd like to define a different appended/inline signature format for
>> IMA in the future.
> 
> I like the suggestion. Would that mean that we will keep referring to it as 
> "module signature format", and thus nothing changes in patch 5?

I think so. If we want IMA to own the format, we might want to go further than
just changing the word "module" in the marker. We might consider having more
flexibility and some additional fields, for example multiple signatures,
certificate chains, ASCII/binary encodings, etc. We could maybe add a
different type for CMS Signed-Data.
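
For illustration, with a rename like that the rules from the patch
description would simply become (keyword name still up for discussion):

        appraise func=KEXEC_KERNEL_CHECK appraise_type=modsig
        appraise func=KEXEC_KERNEL_CHECK appraise_type=modsig|imasig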

Mehmet






Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Jason Gunthorpe
On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote:
> On 27/04/17 02:53 PM, Jason Gunthorpe wrote:
> > blkfront is one of the drivers I looked at, and it appears to only be
> > memcpying with the bvec_data pointer, so I wonder why it does not use
> > sg_copy_X_buffer instead..
> 
> But you'd potentially end up calling sg_copy_to_buffer multiple times
> per page within the sg (given that gnttab_foreach_grant_in_range might
> call blkif_copy_from_grant/blkif_setup_rw_req_grant multiple times).
> Even calling sg_copy_to_buffer once per page seems rather inefficient as
> it uses sg_miter internally.

Well, that is in the current form, with more users it would make sense
to optimize for the single page case, eg by providing the existing
call, providing a faster single-page-only variant of the copy, perhaps
even one that is inlined.

> Switching the for_each_sg to sg_miter is probably the nicer solution as
> it takes care of the mapping and the offset/length accounting for you
> and will have similar performance.

sg_miter will still fail when the sg contains __iomem, however I would
expect that the sg_copy will work with iomem, by using the __iomem
memcpy variant.

So, sg_copy should always be preferred in this new world with mixed
__iomem since it is the only primitive that can transparently handle
it.
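
Ie, the call site stays the same no matter what backs the sgl (illustrative,
assuming sg_copy_* internally grows an __iomem-aware memcpy):

        /* works for system RAM entries and, with that change, iomem ones too */
        copied = sg_copy_to_buffer(sgl, nents, buf, buflen);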

Jason


Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Logan Gunthorpe


On 27/04/17 02:53 PM, Jason Gunthorpe wrote:
> blkfront is one of the drivers I looked at, and it appears to only be
> memcpying with the bvec_data pointer, so I wonder why it does not use
> sg_copy_X_buffer instead..

Yes, sort of...

But you'd potentially end up calling sg_copy_to_buffer multiple times
per page within the sg (given that gnttab_foreach_grant_in_range might
call blkif_copy_from_grant/blkif_setup_rw_req_grant multiple times).
Even calling sg_copy_to_buffer once per page seems rather inefficient as
it uses sg_miter internally.

Switching the for_each_sg to sg_miter is probably the nicer solution as
it takes care of the mapping and the offset/length accounting for you
and will have similar performance.

But, yes, if performance is not an issue, switching it to
sg_copy_to_buffer would be a less invasive change than sg_miter. The
same might be said about a lot of these cases.

Unfortunately, changing from kmap_atomic (which is a null operation in a
lot of cases) to sg_copy_X_buffer is a pretty big performance hit.

Logan


Re: [PATCH 6/6] ima: Support appended signatures for appraisal

2017-04-27 Thread Thiago Jung Bauermann
Am Mittwoch, 26. April 2017, 18:18:34 BRT schrieb Mehmet Kayaalp:
> > On Apr 20, 2017, at 7:41 PM, Thiago Jung Bauermann
> >  wrote:
> > 
> > This patch introduces the appended_imasig keyword to the IMA policy syntax
> > to specify that a given hook should expect the file to have the IMA
> > signature appended to it. Here is how it can be used in a rule:
> > 
> > appraise func=KEXEC_KERNEL_CHECK appraise_type=appended_imasig
> > appraise func=KEXEC_KERNEL_CHECK appraise_type=appended_imasig|imasig
> > 
> > In the second form, IMA will accept either an appended signature or a
> > signature stored in the extended attribute. In that case, it will first
> > check whether there is an appended signature, and if not it will read it
> > from the extended attribute.
> > 
> > The format of the appended signature is the same used for signed kernel
> > modules. This means that the file can be signed with the scripts/sign-file
> 
> > tool, with a command line such as this:
> I would suggest naming the appraise_type as modsig (or some variant) to
> clarify that the format is defined by how module signatures are handled.
> Maybe we'd like to define a different appended/inline signature format for
> IMA in the future.

I like the suggestion. Would that mean that we will keep referring to it as 
"module signature format", and thus nothing changes in patch 5?

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Jason Gunthorpe
On Thu, Apr 27, 2017 at 02:19:24PM -0600, Logan Gunthorpe wrote:
> 
> 
> On 26/04/17 01:37 AM, Roger Pau Monné wrote:
> > On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
> >> Straightforward conversion to the new helper, except due to the lack
> >> of error path, we have to use SG_MAP_MUST_NOT_FAIL which may BUG_ON in
> >> certain cases in the future.
> >>
> >> Signed-off-by: Logan Gunthorpe 
> >> Cc: Boris Ostrovsky 
> >> Cc: Juergen Gross 
> >> Cc: Konrad Rzeszutek Wilk 
> >> Cc: "Roger Pau Monné" 
> >>  drivers/block/xen-blkfront.c | 20 +++-
> >>  1 file changed, 11 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >> index 3945963..ed62175 100644
> >> +++ b/drivers/block/xen-blkfront.c
> >> @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, 
> >> struct blkfront_ring_info *ri
> >>BUG_ON(sg->offset + sg->length > PAGE_SIZE);
> >>  
> >>if (setup.need_copy) {
> >> -  setup.bvec_off = sg->offset;
> >> -  setup.bvec_data = kmap_atomic(sg_page(sg));
> >> +  setup.bvec_off = 0;
> >> +  setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC |
> >> +   SG_MAP_MUST_NOT_FAIL);
> > 
> > I assume that sg_map already adds sg->offset to the address?
> 
> Correct.
> 
> > Also wondering whether we can get rid of bvec_off and just increment 
> > bvec_data,
> > adding Julien who IIRC added this code.
> 
> bvec_off is used to keep track of the offset within the current mapping
> so it's not a great idea given that you'd want to kunmap_atomic the
> original address and not something with an offset. It would be nice if
> this could be converted to use the sg_miter interface but that's a much
> more invasive change that would require someone who knows this code and
> can properly test it. I'd be very grateful if someone actually took that on.

blkfront is one of the drivers I looked at, and it appears to only be
memcpying with the bvec_data pointer, so I wonder why it does not use
sg_copy_X_buffer instead..

Jason


Re: [PATCH v2 15/21] xen-blkfront: Make use of the new sg_map helper function

2017-04-27 Thread Logan Gunthorpe


On 26/04/17 01:37 AM, Roger Pau Monné wrote:
> On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
>> Straightforward conversion to the new helper, except due to the lack
>> of error path, we have to use SG_MAP_MUST_NOT_FAIL which may BUG_ON in
>> certain cases in the future.
>>
>> Signed-off-by: Logan Gunthorpe 
>> Cc: Boris Ostrovsky 
>> Cc: Juergen Gross 
>> Cc: Konrad Rzeszutek Wilk 
>> Cc: "Roger Pau Monné" 
>> ---
>>  drivers/block/xen-blkfront.c | 20 +++-
>>  1 file changed, 11 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index 3945963..ed62175 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, 
>> struct blkfront_ring_info *ri
>>  BUG_ON(sg->offset + sg->length > PAGE_SIZE);
>>  
>>  if (setup.need_copy) {
>> -setup.bvec_off = sg->offset;
>> -setup.bvec_data = kmap_atomic(sg_page(sg));
>> +setup.bvec_off = 0;
>> +setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC |
>> + SG_MAP_MUST_NOT_FAIL);
> 
> I assume that sg_map already adds sg->offset to the address?

Correct.

> Also wondering whether we can get rid of bvec_off and just increment 
> bvec_data,
> adding Julien who IIRC added this code.

bvec_off is used to keep track of the offset within the current mapping
so it's not a great idea given that you'd want to kunmap_atomic the
original address and not something with an offset. It would be nice if
this could be converted to use the sg_miter interface but that's a much
more invasive change that would require someone who knows this code and
can properly test it. I'd be very grateful if someone actually took that on.
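
To illustrate the issue (simplified sketch, using the helpers from this
patch; the same applies to plain kmap_atomic today):

        void *va = sg_map(sg, 0, SG_KMAP_ATOMIC);

        memcpy(dst, va + bvec_off, len);   /* the offset is applied at use... */
        bvec_off += len;
        ...
        sg_unmap(sg, va, 0, SG_KMAP_ATOMIC);  /* ...but unmapping needs the
                                                 original, un-offset address */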

Logan



Re: [PATCH v2 01/21] scatterlist: Introduce sg_map helper functions

2017-04-27 Thread Logan Gunthorpe

On 26/04/17 01:44 AM, Christoph Hellwig wrote:
> I think we'll at least need a draft of those to make sense of these
> patches.  Otherwise they just look very clumsy.

Ok, what follows is a draft patch attempting to show where I'm thinking
of going with this. Obviously it will not compile because it assumes
the users throughout the kernel are a bit different than they are today.
Notably, there is no sg_page anymore.

There's also likely a ton of issues and arguments to have over a bunch
of the specifics below and I'd expect the concept to evolve more
as cleanup occurs. This itself is an evolution of the draft I posted
replying to you in my last RFC thread.

Also, before any of this is truly useful to us, pfn_t would have to
infect a few other places in the kernel.

Thanks,

Logan


diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index fad170b..85ef928 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -6,13 +6,14 @@
 #include 
 #include 
 #include 
+#include 
 #include 

 struct scatterlist {
 #ifdef CONFIG_DEBUG_SG
unsigned long   sg_magic;
 #endif
-   unsigned long   page_link;
+   pfn_t   pfn;
unsigned intoffset;
unsigned intlength;
dma_addr_t  dma_address;
@@ -60,15 +61,68 @@ struct sg_table {

 #define SG_MAGIC   0x87654321

-/*
- * We overload the LSB of the page pointer to indicate whether it's
- * a valid sg entry, or whether it points to the start of a new
scatterlist.
- * Those low bits are there for everyone! (thanks mason :-)
- */
-#define sg_is_chain(sg)((sg)->page_link & 0x01)
-#define sg_is_last(sg) ((sg)->page_link & 0x02)
-#define sg_chain_ptr(sg)   \
-   ((struct scatterlist *) ((sg)->page_link & ~0x03))
+static inline bool sg_is_chain(struct scatterlist *sg)
+{
+   return sg->pfn.val & PFN_SG_CHAIN;
+}
+
+static inline bool sg_is_last(struct scatterlist *sg)
+{
+   return sg->pfn.val & PFN_SG_LAST;
+}
+
+static inline struct scatterlist *sg_chain_ptr(struct scatterlist *sg)
+{
+   unsigned long sgl = pfn_t_to_pfn(sg->pfn);
+   return (struct scatterlist *)(sgl << PAGE_SHIFT);
+}
+
+static inline bool sg_is_iomem(struct scatterlist *sg)
+{
+   return pfn_t_is_iomem(sg->pfn);
+}
+
+/**
+ * sg_assign_pfn - Assign a given pfn_t to an SG entry
+ * @sg:SG entry
+ * @pfn:   The pfn
+ *
+ * Description:
+ *   Assign a pfn to sg entry. Also see sg_set_pfn(), the most commonly used
+ *   variant.
+ *
+ **/
+static inline void sg_assign_pfn(struct scatterlist *sg, pfn_t pfn)
+{
+#ifdef CONFIG_DEBUG_SG
+   BUG_ON(sg->sg_magic != SG_MAGIC);
+   BUG_ON(sg_is_chain(sg));
+   BUG_ON(pfn.val & (PFN_SG_CHAIN | PFN_SG_LAST));
+#endif
+
+   sg->pfn = pfn;
+}
+
+/**
+ * sg_set_pfn - Set sg entry to point at given pfn
+ * @sg: SG entry
+ * @pfn:The page
+ * @len:Length of data
+ * @offset: Offset into page
+ *
+ * Description:
+ *   Use this function to set an sg entry pointing at a pfn, never assign
+ *   the page directly. We encode sg table information in the lower bits
+ *   of the page pointer. See sg_pfn_t for looking up the pfn_t belonging
+ *   to an sg entry.
+ **/
+static inline void sg_set_pfn(struct scatterlist *sg, pfn_t pfn,
+ unsigned int len, unsigned int offset)
+{
+   sg_assign_pfn(sg, pfn);
+   sg->offset = offset;
+   sg->length = len;
+}

 /**
  * sg_assign_page - Assign a given page to an SG entry
@@ -82,18 +136,13 @@ struct sg_table {
  **/
 static inline void sg_assign_page(struct scatterlist *sg, struct page
*page)
 {
-   unsigned long page_link = sg->page_link & 0x3;
+   if (!page) {
+   pfn_t null_pfn = {0};
+   sg_assign_pfn(sg, null_pfn);
+   return;
+   }

-   /*
-* In order for the low bit stealing approach to work, pages
-* must be aligned at a 32-bit boundary as a minimum.
-*/
-   BUG_ON((unsigned long) page & 0x03);
-#ifdef CONFIG_DEBUG_SG
-   BUG_ON(sg->sg_magic != SG_MAGIC);
-   BUG_ON(sg_is_chain(sg));
-#endif
-   sg->page_link = page_link | (unsigned long) page;
+   sg_assign_pfn(sg, page_to_pfn_t(page));
 }

 /**
@@ -106,8 +155,7 @@ static inline void sg_assign_page(struct scatterlist
*sg, struct page *page)
  * Description:
  *   Use this function to set an sg entry pointing at a page, never assign
  *   the page directly. We encode sg table information in the lower bits
- *   of the page pointer. See sg_page() for looking up the page belonging
- *   to an sg entry.
+ *   of the page pointer.
  *
  **/
 static inline void sg_set_page(struct scatterlist *sg, struct page *page,
@@ -118,13 +166,53 @@ static inline void sg_set_page(struct scatterlist
*sg, struct page *page,
sg->length = len;
 }

-static inline struct page *sg_page(struct scatterlist *sg)
+/**
+ * sg_pfn_t - Return the 

Re: ecdh: generation and retention of ecc privkey in kernel/hardware

2017-04-27 Thread Marcel Holtmann
Hi Tudor,

> I'm working with a crypto accelerator that is capable of generating and
> retaining ecc private keys in hardware and further use them for ecdh.
> The private keys can not be read from the device. This is good because
> the less software has access to secrets, the better.
> 
> Generation and retention of ecc private keys are also helpful in a user
> space to kernel ecdh offload. The privkey can be generated in kernel and 
> never revealed to user space.
> 
> I propose to extend the ecc software support to allow the generation of
> private keys. ECDH software implementation and drivers will permit the
> users to provide NULL keys. In this case, the kernel (or the device, if
> possible) will generate the ecc private key and further use it for ecdh.
> 
> What's your feeling on this?

can we represent these keys via keyctl as part of the kernel keyring? I think
when it comes to asymmetric crypto and its keys, we need to have them
represented in the kernel keyring. Recently we added keyctl features for sign,
verify, encrypt and decrypt operations.

The crypto subsystem model is broken when it comes to keys in hardware, since
it enforces the concept of always being able to fall back on a software
implementation of the algorithm.

Regards

Marcel



Re: [PATCH 5/7] IB/hfi1: use pcie_flr instead of duplicating it

2017-04-27 Thread Bjorn Helgaas
On Thu, Apr 27, 2017 at 08:47:58AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 25, 2017 at 02:39:55PM -0500, Bjorn Helgaas wrote:
> > This still leaves these:
> > 
> >   [PATCH 4/7] ixgbe: use pcie_flr instead of duplicating it
> >   [PATCH 6/7] crypto: qat: use pcie_flr instead of duplicating it
> >   [PATCH 7/7] liquidio: use pcie_flr instead of duplicating it
> > 
> > I haven't seen any response to 4 and 6.  Felix reported an unused
> > variable in 7.  Let me know if you'd like me to do anything with
> > these.
> 
> Now that Jeff ACKed 4 it might be worth to add it to the pci tree last
> minute.  I'll resend liquidio and qat to the respective maintainers for
> the next merge window.

I applied 4 with Jeff's ack to pci/virtualization for v4.12, thanks!


Re: [PATCH v2 01/21] scatterlist: Introduce sg_map helper functions

2017-04-27 Thread Logan Gunthorpe


On 27/04/17 09:27 AM, Jason Gunthorpe wrote:
> On Thu, Apr 27, 2017 at 08:53:38AM +0200, Christoph Hellwig wrote:
> How about first switching as many call sites as possible to use
> sg_copy_X_buffer instead of kmap?

Yeah, I could look at doing that first.

One problem is we might get more Naks like Herbert Xu's from people who
might be concerned with the performance implications.

These are definitely a bit more invasive changes than thin wrappers
around kmap calls.

> A random audit of Logan's series suggests this is actually a fairly
> common thing.

It's not _that_ common, but a significant fraction of the call sites do it.
One of my patches actually did this to two places that seemed to be
reimplementing the sg_copy_X_buffer logic.

Thanks,

Logan


[PATCH] crypto: talitos: Extend max key length for SHA384/512-HMAC

2017-04-27 Thread Martin Hicks

The max keysize for both of these is 128, not 96.  Before, with keysizes
over 96, the memcpy in ahash_setkey() would overwrite memory beyond the
key field.

Signed-off-by: Martin Hicks 
---
 drivers/crypto/talitos.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 0bba6a1..97dc85e 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -816,7 +816,7 @@ static void talitos_unregister_rng(struct device *dev)
  * HMAC_SNOOP_NO_AFEA (HSNA) instead of type IPSEC_ESP
  */
 #define TALITOS_CRA_PRIORITY_AEAD_HSNA (TALITOS_CRA_PRIORITY - 1)
-#define TALITOS_MAX_KEY_SIZE   96
+#define TALITOS_MAX_KEY_SIZE   SHA512_BLOCK_SIZE /* SHA512 has the 
largest keysize input */
 #define TALITOS_MAX_IV_LENGTH  16 /* max of AES_BLOCK_SIZE, 
DES3_EDE_BLOCK_SIZE */
 
 struct talitos_ctx {
-- 
1.7.10.4


-- 
Martin Hicks P.Eng.|  m...@bork.org
Bork Consulting Inc.   |  +1 (613) 266-2296


Re: [PATCH v2 07/21] crypto: shash, caam: Make use of the new sg_map helper function

2017-04-27 Thread Logan Gunthorpe


On 26/04/17 09:56 PM, Herbert Xu wrote:
> On Tue, Apr 25, 2017 at 12:20:54PM -0600, Logan Gunthorpe wrote:
>> Very straightforward conversion to the new function in the caam driver
>> and shash library.
>>
>> Signed-off-by: Logan Gunthorpe 
>> Cc: Herbert Xu 
>> Cc: "David S. Miller" 
>> ---
>>  crypto/shash.c| 9 ++---
>>  drivers/crypto/caam/caamalg.c | 8 +++-
>>  2 files changed, 9 insertions(+), 8 deletions(-)
>>
>> diff --git a/crypto/shash.c b/crypto/shash.c
>> index 5e31c8d..5914881 100644
>> --- a/crypto/shash.c
>> +++ b/crypto/shash.c
>> @@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, 
>> struct shash_desc *desc)
>>  if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) {
>>  void *data;
>>  
>> -data = kmap_atomic(sg_page(sg));
>> -err = crypto_shash_digest(desc, data + offset, nbytes,
>> +data = sg_map(sg, 0, SG_KMAP_ATOMIC);
>> +if (IS_ERR(data))
>> +return PTR_ERR(data);
>> +
>> +err = crypto_shash_digest(desc, data, nbytes,
>>req->result);
>> -kunmap_atomic(data);
>> +sg_unmap(sg, data, 0, SG_KMAP_ATOMIC);
>>  crypto_yield(desc->flags);
>>  } else
>>  err = crypto_shash_init(desc) ?:
> 
> Nack.  This is an optimisation for the special case of a single
> SG list entry.  In fact in the common case the kmap_atomic should
> disappear altogether in the no-highmem case.  So replacing it
> with sg_map is not acceptable.

What you seem to have missed is that sg_map is just a thin wrapper
around kmap_atomic. Perhaps with a future check for a mappable page.
This change should have zero impact on performance.
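
In the non-iomem case it boils down to little more than this (simplified,
ignoring the flags handling):

        static inline void *sg_map(struct scatterlist *sg, size_t offset,
                                   int flags)
        {
                /* same mapping kmap_atomic users open-code today,
                   with the sg offset already applied */
                return kmap_atomic(sg_page(sg)) + sg->offset + offset;
        }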

Logan


Re: [PATCH v2 01/21] scatterlist: Introduce sg_map helper functions

2017-04-27 Thread Logan Gunthorpe


On 27/04/17 12:53 AM, Christoph Hellwig wrote:
> I think you'll need to follow the existing kmap semantics and never
> fail the iomem version either.  Otherwise you'll have a special case
> that's almost never used that has a different error path.
>
> Again, wrong way.  Suddenly making things fail for your special case
> that normally doesn't fail is a recipe for bugs.

I don't disagree, but don't these restrictions make the problem impossible to
solve? If there is iomem behind a page in an SGL and someone tries to
map it, we either have to fail or we break the iomem safety that was your
original concern.

Logan



Re: [PATCH v2 01/21] scatterlist: Introduce sg_map helper functions

2017-04-27 Thread Jason Gunthorpe
On Thu, Apr 27, 2017 at 08:53:38AM +0200, Christoph Hellwig wrote:

> > The main difficulty we
> > have now is that neither of those functions are expected to fail and we
> > need them to be able to in cases where the page doesn't map to system
> > RAM. This patch series is trying to address it for users of scatterlist.
> > I'm certainly open to other suggestions.
> 
> I think you'll need to follow the existing kmap semantics and never
> fail the iomem version either.  Otherwise you'll have a special case
> that's almost never used that has a different error path.

How about first switching as many call sites as possible to use
sg_copy_X_buffer instead of kmap?

A random audit of Logan's series suggests this is actually a fairly
common thing.

eg drivers/mmc/host/sdhci.c is only doing this:

buffer = sdhci_kmap_atomic(sg, &flags);
memcpy(buffer, align, size);
sdhci_kunmap_atomic(buffer, &flags);

drivers/scsi/mvsas/mv_sas.c is this:

+   to = sg_map(sg_resp, 0, SG_KMAP_ATOMIC);
+   memcpy(to,
+  slot->response + sizeof(struct mvs_err_info),
+  sg_dma_len(sg_resp));
+   sg_unmap(sg_resp, to, 0, SG_KMAP_ATOMIC);

etc.

Lots of other places seem similar, if not sometimes a little bit more
convoluted..
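
The mvsas one above could presumably collapse to just this (untested,
assuming the response really is a single sg entry here):

        sg_copy_from_buffer(sg_resp, 1,
                            slot->response + sizeof(struct mvs_err_info),
                            sg_dma_len(sg_resp));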

Switching all the trivial cases to use copy might bring more clarity
as to what is actually required for the remaining few users? If there
are only a few then it may no longer matter if the API is not idyllic.

Jason


Re: [PATCH 3/5] KEYS: DH: don't feed uninitialized result memory into KDF

2017-04-27 Thread David Howells
Eric Biggers  wrote:

> > > By the way: do we really need this in the kernel at all, given that it's
> > > just doing some math on data which userspace has access to?
> > 
> > It is the question about how we want the keys subsystem to operate. The DH
> > shared secret shall not be used as a key. But the DH operation is part of
> > the key subsystem. If there is never a case where the result of the DH
> > operation is used in the kernel, then the KDF can be removed and my
> > patches could be reverted. However, in this case, the entire DH business
> > could be questioned as this can easily be done in user space as well.
> > 
> 
> Well, who exactly is asking for Diffie-Hellman in the kernel at all?  If it
> can be done in userspace then it should be done there.  Having it in the
> kernel means having yet another API that's callable by unprivileged users
> and needs to be audited for security vulnerabilities.  Just because the
> kernel can support doing hashes or has an arbitrary-precision arithmetic
> library or whatever doesn't mean it's the right place to do random crypto
> stuff.

I understood that there is the possibility of offloading this to hardware.

David


ecdh: generation and retention of ecc privkey in kernel/hardware

2017-04-27 Thread Tudor Ambarus

Hi, Herbert,

I'm working with a crypto accelerator that is capable of generating and
retaining ecc private keys in hardware and further use them for ecdh.
The private keys can not be read from the device. This is good because
the less software has access to secrets, the better.

Generation and retention of ecc private keys are also helpful in a user
space to kernel ecdh offload. The privkey can be generated in kernel and 
never revealed to user space.


I propose to extend the ecc software support to allow the generation of
private keys. ECDH software implementation and drivers will permit the
users to provide NULL keys. In this case, the kernel (or the device, if
possible) will generate the ecc private key and further use it for ecdh.
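
From the caller's point of view it could look something like this (rough
sketch; the empty key meaning "generate one for me" is the behaviour I'm
proposing here, not something the API does today):

        /* tfm: a previously allocated "ecdh" crypto_kpp tfm */
        struct ecdh p = {
                .curve_id = ECC_CURVE_NIST_P256,
                .key      = NULL,   /* proposed: kernel/device generates it */
                .key_size = 0,
        };
        unsigned int len = crypto_ecdh_key_len(&p);
        char *buf = kmalloc(len, GFP_KERNEL);

        /* error handling omitted */
        crypto_ecdh_encode_key(buf, len, &p);
        crypto_kpp_set_secret(tfm, buf, len);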

What's your feeling on this?

Thanks,
ta






[PATCH v4 0/3] arm64: marvell: add cryptographic engine support for 7k/8k

2017-04-27 Thread Antoine Tenart
Hi all,

This series adds support for the Inside Secure SafeXcel EIP197
cryptographic engine which can be found on Marvell Armada 7k and 8k
boards. A new cryptographic engine driver is added, as well as the
relevant device tree definition for the Armada 7040 DB and 8040 DB
boards.

This driver needs two firmwares to work correctly. These firmwares are
usually used for more advanced operations than the ones supported (as of
now), but we still need them to pass the data to the internal
cryptographic engine.

This series was tested in various ways on both the Armada 7040 DB and
the Armada 8040 DB: using the crypto framework self tests, using tcrypt
and while performing various transfers with iperf on top of IPsec.

This series is based on top of v4.11-rc1, and is available on a branch
(which also contains the PPv2 support for 7k/8k, to ease the process of
testing IPsec): https://github.com/atenart/linux  v4.11-rc1/7k8k-crypto
I can rebase if needed. Patches adding device tree nodes and modifying
the arm64 defconfig have been applied to the mvebu tree already.

I'd like to thank Ofer Heifetz and Igal Liberman, who helped me with
this driver by adding functionality, optimizing functions and testing
(a lot).

Thanks,
Antoine

Since v3:
  - Updated to use the skcipher API instead of the ablkcipher one.
  - Added explicit zeroing of the keys.
  - Fixed the DMA mask.
  - Removed a few remaining phys_addr_t.

Since v2:
  - Used sha*_zero_message_hash definitions instead of custom ones.
  - Used memzero_explicit() instead of memset() to avoid leaks.
  - Misc cosmetic updates.

Since v1:
  - Fixed two compilation issues reported by Kbuild.
  - Removed all dma_to_phys() calls and used the dma addresses instead.
  - Added a call to dma_set_mask_and_coherent() before calling any DMA
API helper.
  - Removed some DMA free functions to avoid double-freeing.
  - Do not rely on sg_nents_for_len() to get the number of sg anymore.
  - Added a dedicated kmalloc'ed cache to use for dma_map_single().

Antoine Tenart (3):
  Documentation/bindings: Document the SafeXcel cryptographic engine
driver
  crypto: inside-secure: add SafeXcel EIP197 crypto engine driver
  MAINTAINERS: add a maintainer for the Inside Secure crypto driver

 .../bindings/crypto/inside-secure-safexcel.txt |   27 +
 MAINTAINERS|6 +
 drivers/crypto/Kconfig |   17 +
 drivers/crypto/Makefile|1 +
 drivers/crypto/inside-secure/Makefile  |2 +
 drivers/crypto/inside-secure/safexcel.c|  927 +
 drivers/crypto/inside-secure/safexcel.h|  572 +++
 drivers/crypto/inside-secure/safexcel_cipher.c |  556 +++
 drivers/crypto/inside-secure/safexcel_hash.c   | 1045 
 drivers/crypto/inside-secure/safexcel_ring.c   |  157 +++
 10 files changed, 3310 insertions(+)
 create mode 100644 
Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
 create mode 100644 drivers/crypto/inside-secure/Makefile
 create mode 100644 drivers/crypto/inside-secure/safexcel.c
 create mode 100644 drivers/crypto/inside-secure/safexcel.h
 create mode 100644 drivers/crypto/inside-secure/safexcel_cipher.c
 create mode 100644 drivers/crypto/inside-secure/safexcel_hash.c
 create mode 100644 drivers/crypto/inside-secure/safexcel_ring.c

-- 
2.11.0



[PATCH v4 1/3] Documentation/bindings: Document the SafeXcel cryptographic engine driver

2017-04-27 Thread Antoine Tenart
The Inside Secure SafeXcel cryptographic engine is found on some Marvell
SoCs (7k/8k). Document the bindings used by its driver.

Signed-off-by: Antoine Tenart 
---
 .../bindings/crypto/inside-secure-safexcel.txt | 27 ++
 1 file changed, 27 insertions(+)
 create mode 100644 
Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt

diff --git 
a/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt 
b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
new file mode 100644
index ..ff56b9384fcc
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
@@ -0,0 +1,27 @@
+Inside Secure SafeXcel cryptographic engine
+
+Required properties:
+- compatible: Should be "inside-secure,safexcel-eip197".
+- reg: Base physical address of the engine and length of memory mapped region.
+- interrupts: Interrupt numbers for the rings and engine.
+- interrupt-names: Should be "ring0", "ring1", "ring2", "ring3", "eip", "mem".
+
+Optional properties:
+- clocks: Reference to the crypto engine clock.
+
+Example:
+
+   crypto: crypto@80 {
+   compatible = "inside-secure,safexcel-eip197";
+   reg = <0x80 0x20>;
+   interrupts = ,
+,
+,
+,
+,
+;
+   interrupt-names = "mem", "ring0", "ring1", "ring2", "ring3",
+ "eip";
+   clocks = <_syscon0 1 26>;
+   status = "disabled";
+   };
-- 
2.11.0



[PATCH v4 2/3] crypto: inside-secure: add SafeXcel EIP197 crypto engine driver

2017-04-27 Thread Antoine Tenart
Add support for the Inside Secure SafeXcel EIP197 cryptographic engine,
which can be found on Marvell Armada 7k and 8k boards. This driver
currently implements the ecb(aes), cbc(aes), sha1, sha224, sha256 and
hmac(sha1) algorithms.

Two firmwares are needed for this engine to work. They are mostly used
for more advanced operations than the ones supported (as of now), but we
still need them to pass the data to the internal cryptographic engine.

Signed-off-by: Antoine Tenart 
---
 drivers/crypto/Kconfig |   17 +
 drivers/crypto/Makefile|1 +
 drivers/crypto/inside-secure/Makefile  |2 +
 drivers/crypto/inside-secure/safexcel.c|  927 +
 drivers/crypto/inside-secure/safexcel.h|  572 +
 drivers/crypto/inside-secure/safexcel_cipher.c |  556 +
 drivers/crypto/inside-secure/safexcel_hash.c   | 1045 
 drivers/crypto/inside-secure/safexcel_ring.c   |  157 
 8 files changed, 3277 insertions(+)
 create mode 100644 drivers/crypto/inside-secure/Makefile
 create mode 100644 drivers/crypto/inside-secure/safexcel.c
 create mode 100644 drivers/crypto/inside-secure/safexcel.h
 create mode 100644 drivers/crypto/inside-secure/safexcel_cipher.c
 create mode 100644 drivers/crypto/inside-secure/safexcel_hash.c
 create mode 100644 drivers/crypto/inside-secure/safexcel_ring.c

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 473d31288ad8..d12a40450858 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -619,4 +619,21 @@ config CRYPTO_DEV_BCM_SPU
  Secure Processing Unit (SPU). The SPU driver registers ablkcipher,
  ahash, and aead algorithms with the kernel cryptographic API.
 
+config CRYPTO_DEV_SAFEXCEL
+   tristate "Inside Secure's SafeXcel cryptographic engine driver"
+   depends on HAS_DMA && OF
+   depends on (ARM64 && ARCH_MVEBU) || (COMPILE_TEST && 64BIT)
+   select CRYPTO_AES
+   select CRYPTO_BLKCIPHER
+   select CRYPTO_HASH
+   select CRYPTO_HMAC
+   select CRYPTO_SHA1
+   select CRYPTO_SHA256
+   select CRYPTO_SHA512
+   help
+ This driver interfaces with the SafeXcel EIP-197 cryptographic engine
+ designed by Inside Secure. Select this if you want to use CBC/ECB
+ chain mode, AES cipher mode and SHA1/SHA224/SHA256/SHA512 hash
+ algorithms.
+
 endif # CRYPTO_HW
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 739609471169..7ed3e7940f76 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -36,3 +36,4 @@ obj-$(CONFIG_CRYPTO_DEV_UX500) += ux500/
 obj-$(CONFIG_CRYPTO_DEV_VIRTIO) += virtio/
 obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
 obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/
+obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += inside-secure/
diff --git a/drivers/crypto/inside-secure/Makefile 
b/drivers/crypto/inside-secure/Makefile
new file mode 100644
index ..302f07dde98c
--- /dev/null
+++ b/drivers/crypto/inside-secure/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += crypto_safexcel.o
+crypto_safexcel-objs := safexcel.o safexcel_ring.o safexcel_cipher.o 
safexcel_hash.o
diff --git a/drivers/crypto/inside-secure/safexcel.c 
b/drivers/crypto/inside-secure/safexcel.c
new file mode 100644
index ..bcd49b96be4b
--- /dev/null
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -0,0 +1,927 @@
+/*
+ * Copyright (C) 2017 Marvell
+ *
+ * Antoine Tenart 
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+
+#include "safexcel.h"
+
+static u32 max_rings = EIP197_MAX_RINGS;
+module_param(max_rings, uint, 0644);
+MODULE_PARM_DESC(max_rings, "Maximum number of rings to use.");
+
+static void eip197_trc_cache_init(struct safexcel_crypto_priv *priv)
+{
+   u32 val, htable_offset;
+   int i;
+
+   /* Enable the record cache memory access */
+   val = readl(priv->base + EIP197_CS_RAM_CTRL);
+   val &= ~EIP197_TRC_ENABLE_MASK;
+   val |= EIP197_TRC_ENABLE_0;
+   writel(val, priv->base + EIP197_CS_RAM_CTRL);
+
+   /* Clear all ECC errors */
+   writel(0, priv->base + EIP197_TRC_ECCCTRL);
+
+   /*
+* Make sure the cache memory is accessible by taking record cache into
+* reset.
+*/
+   val = readl(priv->base + EIP197_TRC_PARAMS);
+   val |= EIP197_TRC_PARAMS_SW_RESET;
+   val &= ~EIP197_TRC_PARAMS_DATA_ACCESS;
+   writel(val, priv->base + EIP197_TRC_PARAMS);
+
+   /* Clear all records */
+   for (i = 0; i < EIP197_CS_RC_MAX; i++) {
+   u32 val, offset = 

[PATCH v4 3/3] MAINTAINERS: add a maintainer for the Inside Secure crypto driver

2017-04-27 Thread Antoine Tenart
A new cryptographic engine driver was added in
drivers/crypto/inside-secure. Add myself as a maintainer for this
driver.

Signed-off-by: Antoine Tenart 
---
 MAINTAINERS | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c265a5fe4848..7240b9bca638 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6487,6 +6487,12 @@ F:   Documentation/input/multi-touch-protocol.txt
 F: drivers/input/input-mt.c
 K: \b(ABS|SYN)_MT_
 
+INSIDE SECURE CRYPTO DRIVER
+M: Antoine Tenart 
+F: drivers/crypto/inside-secure/
+S: Maintained
+L: linux-crypto@vger.kernel.org
+
 INTEL ASoC BDW/HSW DRIVERS
 M: Jie Yang 
 L: alsa-de...@alsa-project.org (moderated for non-subscribers)
-- 
2.11.0



Re: [PATCH 4/8] random: remove unused branch in hot code path

2017-04-27 Thread Geert Uytterhoeven
Hi Ted,

(replying to an old thread)

On Wed, Jan 18, 2017 at 5:35 AM, Theodore Ts'o  wrote:
> On Tue, Dec 27, 2016 at 11:40:23PM +0100, Stephan Müller wrote:
>> The variable ip is defined to be a __u64 which is always 8 bytes on any
>> architecture. Thus, the check for sizeof(ip) > 4 will always be true.
>>
>> As the check happens in a hot code path, remove the branch.
>
> The fact that it's a hot code path means the compiler will optimize it
> out, so the fact that it's on the hot code path is irrelevant.  The
> main issue is that on platforms with a 32-bit IP's, ip >> 32 will
> always be zero.  It might be that we can just do this via
>
> #if BITS_PER_LONG == 32
>   ...
> #else
>   ...
> #endif
>
> I'm not sure that works for all platforms, though.  More research is
> needed...

Is the intention for the test "sizeof(ip) > 4" to distinguish between
32-bit and 64-bit platforms?
As ip is __u64, while regs is a pointer, shouldn't the test be
"sizeof(regs) > 4"?

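For reference, the code in question is roughly this (paraphrased from memory
of drivers/char/random.c at the time, so treat as illustrative only):

        __u64 ip = instruction_pointer(regs);
        ...
        fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 :
                get_reg(fast_pool, regs);

Since ip is __u64, sizeof(ip) is 8 on every architecture, so the second
branch is never taken -- even on 32-bit, where ip >> 32 is always zero.
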
Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


[PATCH] crypto: stm32 - fix building as a module

2017-04-27 Thread Arnd Bergmann
The names in the MODULE_DEVICE_TABLE and the actual array don't match:

drivers/crypto/stm32/stm32_crc32.c:309:21: error: 'sti_dt_ids' undeclared here 
(not in a function); did you mean 'stm32_dt_ids'?

This changes the reference that was evidently copied incorrectly from
another driver.

Fixes: b51dbe90912a ("crypto: stm32 - Support for STM32 CRC32 crypto module")
Signed-off-by: Arnd Bergmann 
---
 drivers/crypto/stm32/stm32_crc32.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/stm32/stm32_crc32.c 
b/drivers/crypto/stm32/stm32_crc32.c
index 765282262d44..ec83b1e6bfe8 100644
--- a/drivers/crypto/stm32/stm32_crc32.c
+++ b/drivers/crypto/stm32/stm32_crc32.c
@@ -306,7 +306,7 @@ static const struct of_device_id stm32_dt_ids[] = {
{ .compatible = "st,stm32f7-crc", },
{},
 };
-MODULE_DEVICE_TABLE(of, sti_dt_ids);
+MODULE_DEVICE_TABLE(of, stm32_dt_ids);
 
 static struct platform_driver stm32_crc_driver = {
.probe  = stm32_crc_probe,
-- 
2.9.0



Re: [PATCH v2 01/21] scatterlist: Introduce sg_map helper functions

2017-04-27 Thread Christoph Hellwig
On Wed, Apr 26, 2017 at 12:11:33PM -0600, Logan Gunthorpe wrote:
> Ok, well for starters I think you are mistaken about kmap being able to
> fail. I'm having a hard time finding many users of that function that
> bother to check for an error when calling it.

A quick audit of the arch code shows you're right - kmap can't fail
anywhere anymore.

> The main difficulty we
> have now is that neither of those functions are expected to fail and we
> need them to be able to in cases where the page doesn't map to system
> RAM. This patch series is trying to address it for users of scatterlist.
> I'm certainly open to other suggestions.

I think you'll need to follow the existing kmap semantics and never
fail the iomem version either.  Otherwise you'll have a special case
that's almost never used that has a different error path.

> There are a fair number of cases in the kernel that do something like:
> 
> if (something)
>         x = kmap(page);
> else
>         x = kmap_atomic(page);
> ...
> if (something)
>         kunmap(page)
> else
>         kunmap_atomic(x)
> 
> Which just seems cumbersome to me.

Passing a different flag based on something isn't really much better.

> In any case, if you can accept an sg_kmap and sg_kmap_atomic api just
> say so and I'll make the change. But I'll still need a flags variable
> for SG_MAP_MUST_NOT_FAIL to support legacy cases that have no fail path
> and both of those functions will need to be pretty nearly replicas of
> each other.

Again, wrong way.  Suddenly making things fail for your special case
that normally doesn't fail is a recipe for bugs.


Re: [PATCH 5/7] IB/hfi1: use pcie_flr instead of duplicating it

2017-04-27 Thread Christoph Hellwig
On Tue, Apr 25, 2017 at 02:39:55PM -0500, Bjorn Helgaas wrote:
> This still leaves these:
> 
>   [PATCH 4/7] ixgbe: use pcie_flr instead of duplicating it
>   [PATCH 6/7] crypto: qat: use pcie_flr instead of duplicating it
>   [PATCH 7/7] liquidio: use pcie_flr instead of duplicating it
> 
> I haven't seen any response to 4 and 6.  Felix reported an unused
> variable in 7.  Let me know if you'd like me to do anything with
> these.

Now that Jeff ACKed 4 it might be worth to add it to the pci tree last
minute.  I'll resend liquidio and qat to the respective maintainers for
the next merge window.