Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-11 Thread Andrew Zaborowski
Hi,

On 11 August 2017 at 02:48, Mat Martineau wrote:
> The last round of reviews for AF_ALG akcipher left off at an impasse around
> a year ago: the consensus was that hardware key support was needed, but that
> requirement was in conflict with the "always have a software fallback" rule
> for the crypto subsystem. For example, a private key securely generated by
> and stored in a TPM could not be copied out for use by a software algorithm.
> Has anything come about to resolve this impasse?
>
> There were some patches around to add keyring support by associating a key
> ID with an akcipher socket, but that approach ran in to a mismatch between
> the proposed keyring API for the verify operation and the semantics of
> AF_ALG verify.
>
> AF_ALG is best suited for crypto use cases where a socket is set up once and
> there are lots of reads and writes to justify the setup cost. With
> asymmetric crypto, the setup cost is high when you might only use the socket
> for a brief time to do one verify or encrypt operation.

Would that time be shorter when going through the keyctl API?

In any case there will be situations, similar to the lightweight TLS
implementation use case, where this isn't a factor.

>
> Given the efficiency and hardware key issues, AF_ALG seems to be mismatched
> with asymmetric crypto.

The hardware key support would obviously be a benefit, but I believe
it's orthogonal to this.  That issue is not specific to akcipher
either; there will be hardware-only symmetric keys that can't be used
with the current ALG_IF.

The ALG_IF API provides slightly lower-level access to the algorithms
listed in /proc/crypto than the keyctl API, and I see no reason why
some of those algorithms should not be available through it.

Best regards


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Andrew Zaborowski
Hi Stephan,

On 16 June 2016 at 17:38, Stephan Mueller  wrote:
>> This isn't an issue with AF_ALG, I should have changed the subject
>> line perhaps.  In this case it's an inconsistency between some
>> implementations and the documentation (header comment).  It affects
>> users accessing the cipher through AF_ALG but also directly.
>
> As I want to send a new version of the algif_akcipher shortly now (hoping for
> an inclusion into 4.8), is there anything you see that I should prepare for
> regarding this issue? I.e. do you foresee a potential fix that would change the
> API or ABI of algif_akcipher?

No, as far as I understand algif_akcipher will do the right thing now
if the algorithm does the right thing.  It's only the two RSA drivers
that would need to align with the software RSA in what buffer length
they accept.

Best regards
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-16 Thread Andrew Zaborowski
Hi Stephan,

On 16 June 2016 at 10:05, Stephan Mueller <smuel...@chronox.de> wrote:
> Am Dienstag, 14. Juni 2016, 09:42:34 schrieb Andrew Zaborowski:
>
> Hi Andrew,
>
>> >
>> > I think we have agreed on dropping the length enforcement at the interface
>> > level.
>>
>> Separately from this there's a problem with the user being unable to
>> know if the algorithm is going to fail because of destination buffer
>> size != key size (including kernel users).  For RSA, the qat
>> implementation will fail while the software implementation won't.  For
>> pkcs1pad(...) there's currently just one implementation but the user
>> can't assume that.
>
> If I understand your issue correctly, my initial code requiring the caller to
> provide sufficient memory would have covered the issue, right?

This isn't an issue with AF_ALG, I should have changed the subject
line perhaps.  In this case it's an inconsistency between some
implementations and the documentation (header comment).  It affects
users accessing the cipher through AF_ALG but also directly.

> If so, we seem
> to have implementations which can handle shorter buffer sizes and some which
> do not. Should a caller really try to figure the right buffer size out? Why
> not requiring a mandatory buffer size and be done with it? I.e. what is the
> gain to allow shorter buffer sizes (as pointed out by Mat)?

The gain is that client code doesn't need an intermediate layer with
an additional buffer and a memcpy to get a sensible API.  If the code
wants to decrypt a 32-byte DigestInfo structure with a given key, or a
reference to a key, it makes no sense, logically or in terms of
performance, to make it provide a key-sized buffer.

In the case of the userspace interface I think it's also rare for a
recv() or read() on Linux to require a buffer larger than it's going
to use; correct me if I'm wrong.  (I.e. to fail when given a 32-byte
buffer even though it would return only 32 bytes of data.)  Turning
your question around: is there a gain from requiring larger buffers?

Best regards


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-14 Thread Andrew Zaborowski
Hi Stephan,

On 14 June 2016 at 07:12, Stephan Mueller <smuel...@chronox.de> wrote:
> Am Dienstag, 14. Juni 2016, 00:16:11 schrieb Andrew Zaborowski:
>> On 8 June 2016 at 21:14, Mat Martineau <mathew.j.martin...@linux.intel.com> wrote:
>> > On Wed, 8 Jun 2016, Stephan Mueller wrote:
>> >> What is your concern?
>> >
>> > Userspace must allocate larger buffers than it knows are necessary for
>> > expected results.
>> >
>> > It looks like the software rsa implementation handles shorter output
>> > buffers ok (mpi_write_to_sgl will return EOVERFLOW if the buffer is
>> > too small), however I see at least one hardware rsa driver that requires
>> > the output buffer to be the maximum size. But this inconsistency might be
>> > best addressed within the software cipher or drivers rather than in
>> > recvmsg.
>> Should the hardware drivers fix this instead?  I've looked at the qat
>> and caam drivers, they both require the destination buffer size to be
>> the key size and in both cases there would be no penalty for dropping
>> this requirement as far as I see.  Both do a memmove if the result
>> ends up being shorter than key size.  In case the caller knows it is
>> expecting a specific output size, the driver will have to use a self
>> allocated buffer + a memcpy in those same cases where it would later
>> use memmove instead.  Alternatively the sg passed to dma_map_sg can be
>> prepended with a dummy segment the right size to save the memcpy.
>>
>> akcipher.h only says:
>> @dst_len: Size of the output buffer. It needs to be at least as big as
>> the expected result depending on the operation
>>
>> Note that for random input data the memmove will be done about 1 in
>> 256 times but with PKCS#1 padding the signature always has a leading
>> zero.
>>
>> Requiring buffers bigger than needed makes the added work of dropping
>> the zero bytes from the sglist and potentially re-adding them in the
>> client difficult to justify.  RSA doing this sets a precedent for a
>> future pkcs1pad (or other algorithm) implementation to do the same
>> thing and a portable client having to always know the key size and use
>> key-sized buffers.
>
> I think we have agreed on dropping the length enforcement at the interface
> level.

Separately from this there's a problem with the user being unable to
know if the algorithm is going to fail because of destination buffer
size != key size (including kernel users).  For RSA, the qat
implementation will fail while the software implementation won't.  For
pkcs1pad(...) there's currently just one implementation but the user
can't assume that.

Best regards


Re: [PATCH 4/8] akcipher: Move the RSA DER encoding to the crypto layer

2016-02-23 Thread Andrew Zaborowski
Hi David,

On 23 February 2016 at 11:55, David Howells <dhowe...@redhat.com> wrote:
> Andrew Zaborowski <balr...@googlemail.com> wrote:
>
>> AIUI Tadeusz is proposing adding the hashing as a new feature.  Note
>> though that the hash parameter won't make sense for the encrypt,
>> decrypt or verify operations.
>
> The hash parameter is necessary for the verify operation.  From my
> perspective, I want a verify operation that takes the signature, the message
> hash and the hash name and gives me back an error code.

From the certificates' point of view, yes, but the akcipher API only
has the four operations, each of which has one input buffer and one
output buffer.

Without overhauling akcipher you could modify pkcs1pad so that sign
takes the hash as input, adds the DER struct in front of it to build
the signature, and the verify operation could at most check that the
DER string matches the hash type and return the hash.  But I think
RFC 2437 suggests that you compare the signatures rather than the
hashes.
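For concreteness, "the DER struct in front of the hash" is a fixed per-hash constant; e.g. the well-known 19-byte DigestInfo prefix for SHA-256 from PKCS#1 v1.5. A sign operation taking a bare hash would only need to prepend it before padding (a sketch of mine, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The fixed DER DigestInfo prefix for SHA-256 as listed in PKCS#1;
 * the full DigestInfo is this constant followed by the 32-byte hash. */
static const unsigned char sha256_di_prefix[] = {
	0x30, 0x31, 0x30, 0x0d, 0x06, 0x09, 0x60, 0x86, 0x48, 0x01,
	0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20,
};

/* Sketch: build DigestInfo = prefix || hash; returns the total length. */
static size_t build_digest_info(unsigned char *out,
				const unsigned char *hash, size_t hash_len)
{
	memcpy(out, sha256_di_prefix, sizeof(sha256_di_prefix));
	memcpy(out + sizeof(sha256_di_prefix), hash, hash_len);

	return sizeof(sha256_di_prefix) + hash_len;
}
```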

Cheers


Re: [PATCH 4/8] akcipher: Move the RSA DER encoding to the crypto layer

2016-02-22 Thread Andrew Zaborowski
Hi,

On 22 February 2016 at 23:28, David Howells  wrote:
> Tadeusz Struk  wrote:
>
>> I wonder if this should be merged with the crypto/rsa-pkcs1pad.c template
>> that we already have. Looks like the two do the same padding now.

I think that'd be a good thing to do.

>> Should we merge then and pass the hash param as a separate template param,
>> e.g the public_key would allocate "pkcs1pad(rsa, sha1)"?
>
> Ummm...  Possibly.  Is that how it's used?

Currently it only does the padding and doesn't care about the hash.
The input is expected to be the entire DigestInfo struct.

AIUI Tadeusz is proposing adding the hashing as a new feature.  Note
though that the hash parameter won't make sense for the encrypt,
decrypt or verify operations.

Also note that TLS 1.0 uses the padding to sign data that is not a
DigestInfo structure and even for 1.2 there are situations where
you'll be hashing the data yourself over some time and then you'll
want the algorithm to only do the padding and RSA signing.
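To make the distinction concrete: the padding is the same either way; only the payload T differs. A simplified sketch of the EMSA-PKCS1-v1_5 signing layout (my simplification, not the kernel code), where T is normally a DER DigestInfo but, for TLS 1.0, the raw 36-byte MD5||SHA-1 concatenation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of EMSA-PKCS1-v1_5 for signing (simplified, not the kernel code):
 *   EM = 0x00 || 0x01 || PS || 0x00 || T
 * where PS is at least eight 0xff bytes, k is the key size in bytes and
 * T is the payload being signed.  Returns 0 on success, -1 if the key
 * is too small for the payload. */
static int emsa_pkcs1_v15(unsigned char *em, size_t k,
			  const unsigned char *t, size_t t_len)
{
	size_t ps_len;

	if (t_len + 11 > k)
		return -1;
	ps_len = k - t_len - 3;

	em[0] = 0x00;
	em[1] = 0x01;
	memset(em + 2, 0xff, ps_len);
	em[2 + ps_len] = 0x00;
	memcpy(em + 3 + ps_len, t, t_len);

	return 0;
}
```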

Cheers


[PATCH] crypto: rsa-padding - don't allocate buffer on stack

2015-12-11 Thread Andrew Zaborowski
Avoid the s390 compile "warning: 'pkcs1pad_encrypt_sign_complete'
uses dynamic stack allocation" reported by kbuild test robot.  Don't
use a flat zero-filled buffer, instead zero the contents of the SGL.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/rsa-pkcs1pad.c | 27 +++
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index accc67d..50f5c97 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -110,21 +110,32 @@ static int pkcs1pad_encrypt_sign_complete(struct 
akcipher_request *req, int err)
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
struct pkcs1pad_request *req_ctx = akcipher_request_ctx(req);
-   uint8_t zeros[ctx->key_size - req_ctx->child_req.dst_len];
+   size_t pad_len = ctx->key_size - req_ctx->child_req.dst_len;
+   size_t chunk_len, pad_left;
+   struct sg_mapping_iter miter;
 
if (!err) {
-   if (req_ctx->child_req.dst_len < ctx->key_size) {
-   memset(zeros, 0, sizeof(zeros));
-   sg_copy_from_buffer(req->dst,
-   sg_nents_for_len(req->dst,
-   sizeof(zeros)),
-   zeros, sizeof(zeros));
+   if (pad_len) {
+   sg_miter_start(&miter, req->dst,
+   sg_nents_for_len(req->dst, pad_len),
+   SG_MITER_ATOMIC | SG_MITER_TO_SG);
+
+   pad_left = pad_len;
+   while (pad_left) {
+   sg_miter_next(&miter);
+
+   chunk_len = min(miter.length, pad_left);
+   memset(miter.addr, 0, chunk_len);
+   pad_left -= chunk_len;
+   }
+
+   sg_miter_stop(&miter);
}
 
sg_pcopy_from_buffer(req->dst,
sg_nents_for_len(req->dst, ctx->key_size),
req_ctx->out_buf, req_ctx->child_req.dst_len,
-   sizeof(zeros));
+   pad_len);
}
req->dst_len = ctx->key_size;
 
-- 
2.1.4
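For readers without the kernel headers at hand, the sg_miter loop in the patch above amounts to this chunked-zeroing pattern over a segmented destination (userspace sketch; `struct seg` is a made-up stand-in for a scatterlist segment):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for one scatterlist segment. */
struct seg {
	unsigned char *addr;
	size_t len;
};

/* Userspace sketch of the patch's sg_miter loop: zero the first pad_len
 * bytes of a destination split across segments, with no intermediate
 * key-sized bounce buffer on the stack. */
static void zero_prefix(struct seg *segs, size_t nsegs, size_t pad_len)
{
	size_t i, chunk;

	for (i = 0; i < nsegs && pad_len; i++) {
		chunk = segs[i].len < pad_len ? segs[i].len : pad_len;
		memset(segs[i].addr, 0, chunk);
		pad_len -= chunk;
	}
}
```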



[PATCH v7 3/4] crypto: akcipher: add akcipher declarations needed by templates.

2015-12-05 Thread Andrew Zaborowski
Add a struct akcipher_instance and struct akcipher_spawn similar to
how AEAD declares them and the macros for converting to/from
crypto_instance/crypto_spawn.  Also add register functions to
avoid exposing crypto_akcipher_type.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: no changes since v1
v3: drop the new crypto_akcipher_type methods and
add struct akcipher_instance
v4: avoid exposing crypto_akcipher_type after all, add struct akcipher_spawn
and utilities
v5: add akcipher_instance.free
v6: only support akcipher_instance.free, not crypto_template.free,
add further akcipher.h macros
v7: remove duplicate crypto_spawn_akcipher added in v6
---
 crypto/akcipher.c  | 34 -
 include/crypto/internal/akcipher.h | 78 ++
 2 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..def301e 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
 #ifdef CONFIG_NET
@@ -75,9 +76,17 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+static void crypto_akcipher_free_instance(struct crypto_instance *inst)
+{
+   struct akcipher_instance *akcipher = akcipher_instance(inst);
+
+   akcipher->free(akcipher);
+}
+
 static const struct crypto_type crypto_akcipher_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_akcipher_init_tfm,
+   .free = crypto_akcipher_free_instance,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
 #endif
@@ -88,6 +97,14 @@ static const struct crypto_type crypto_akcipher_type = {
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
 
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+u32 type, u32 mask)
+{
+   spawn->base.frontend = &crypto_akcipher_type;
+   return crypto_grab_spawn(&spawn->base, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_akcipher);
+
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
 {
@@ -95,13 +112,20 @@ struct crypto_akcipher *crypto_alloc_akcipher(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);
 
-int crypto_register_akcipher(struct akcipher_alg *alg)
+static void akcipher_prepare_alg(struct akcipher_alg *alg)
 {
struct crypto_alg *base = &alg->base;
 
base->cra_type = &crypto_akcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
+}
+
+int crypto_register_akcipher(struct akcipher_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   akcipher_prepare_alg(alg);
return crypto_register_alg(base);
 }
 EXPORT_SYMBOL_GPL(crypto_register_akcipher);
@@ -112,5 +136,13 @@ void crypto_unregister_akcipher(struct akcipher_alg *alg)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_akcipher);
 
+int akcipher_register_instance(struct crypto_template *tmpl,
+  struct akcipher_instance *inst)
+{
+   akcipher_prepare_alg(&inst->alg);
+   return crypto_register_instance(tmpl, akcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(akcipher_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Generic public key cipher type");
diff --git a/include/crypto/internal/akcipher.h 
b/include/crypto/internal/akcipher.h
index 9a2bda1..479a007 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -13,6 +13,22 @@
 #ifndef _CRYPTO_AKCIPHER_INT_H
 #define _CRYPTO_AKCIPHER_INT_H
 #include 
+#include 
+
+struct akcipher_instance {
+   void (*free)(struct akcipher_instance *inst);
+   union {
+   struct {
+   char head[offsetof(struct akcipher_alg, base)];
+   struct crypto_instance base;
+   } s;
+   struct akcipher_alg alg;
+   };
+};
+
+struct crypto_akcipher_spawn {
+   struct crypto_spawn base;
+};
 
 /*
  * Transform internal helpers.
@@ -38,6 +54,56 @@ static inline const char *akcipher_alg_name(struct 
crypto_akcipher *tfm)
return crypto_akcipher_tfm(tfm)->__crt_alg->cra_name;
 }
 
+static inline struct crypto_instance *akcipher_crypto_instance(
+   struct akcipher_instance *inst)
+{
+   return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline struct akcipher_instance *akcipher_instance(
+   struct crypto_instance *inst)
+{
+   return container_of(&inst->alg, struct akcipher_instance, alg.base);
+}
+
+static inline struct akcipher_instance *akcipher_alg_instance(
+   struct crypto_akcipher *akcipher)
+{
+   return akcipher_instance(crypto_tfm_alg_instance(&akcipher->base));
+}
+
+static inline void *akcipher_

[PATCH v7 4/4] crypto: RSA padding algorithm

2015-12-05 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP) so it will be needed for
either certificate work in the kernel or the userspace, and I also hear
that it is likely implemented by hardware RSA in which case hardware
implementations of the whole of pkcs1pad(rsa) can be provided.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: rename rsa-padding.c to rsa-pkcs1pad.c,
use a memset instead of a loop,
add a key size check in pkcs1pad_sign,
add a general comment about pkcs1pad_verify
v3: rewrite the initialisation to avoid an obsolete and less flexible
mechanism, now following the aead template initialisation.
v4: follow the aead template initialisation exactly.
v5: use the instance .free, set no template .free.
v6: don't use crypto_alg.cra_init / cra_exit
v7: check for CRYPTO_TFM_REQ_MAY_SLEEP, remove removal of
crypto_spawn_akcipher
---
 crypto/Makefile   |   1 +
 crypto/rsa-pkcs1pad.c | 617 ++
 crypto/rsa.c  |  16 +-
 include/crypto/internal/rsa.h |   2 +
 4 files changed, 635 insertions(+), 1 deletion(-)
 create mode 100644 crypto/rsa-pkcs1pad.c

diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..2acdbbd 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -40,6 +40,7 @@ rsa_generic-y := rsapubkey-asn1.o
 rsa_generic-y += rsaprivkey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
new file mode 100644
index 000..accc67d
--- /dev/null
+++ b/crypto/rsa-pkcs1pad.c
@@ -0,0 +1,617 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+struct pkcs1pad_request {
+   struct akcipher_request child_req;
+
+   struct scatterlist in_sg[3], out_sg[2];
+   uint8_t *in_buf, *out_buf;
+};
+
+static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_pub_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_priv_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_get_max_size(struct crypto_akcipher *tfm)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+   /*
+* The maximum destination buffer size for the encrypt/sign operations
+* will be the same as for RSA, even though it's smaller for
+* decrypt/verify.
+*/
+
+   return ctx->key_size ?: -EINVAL;
+}
+
+static void pkcs1pad_sg_set_buf(struct scatterlist *sg, void *buf, size_t len,
+   struct scatterlist *next)
+{
+   int nsegs = next ? 1 : 0;
+
+   if (offset_in_page(buf) + len <= PAGE_SIZE) {
+   nsegs += 1;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg, buf, len);
+   } else {
+   nsegs += 2;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg + 0, buf, PAGE_SIZE - offset_in_page(buf));
+   sg_set_buf(sg + 1, buf + PAGE_SIZE - offset_in_page(buf),
+   offset_in_page(buf) + len - PAGE_SIZE);
+   }
+
+   if (next)
+   sg_chain(sg, nsegs, next);
+}
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *r

[PATCH v6 3/4] crypto: akcipher: add akcipher declarations needed by templates.

2015-11-29 Thread Andrew Zaborowski
Add a struct akcipher_instance and struct akcipher_spawn similar to
how AEAD declares them and the macros for converting to/from
crypto_instance/crypto_spawn.  Also add register functions to
avoid exposing crypto_akcipher_type.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: no changes since v1
v3: drop the new crypto_akcipher_type methods and
add struct akcipher_instance
v4: avoid exposing crypto_akcipher_type after all, add struct akcipher_spawn
and utilities
v5: add akcipher_instance.free
v6: only support akcipher_instance.free, not crypto_template.free,
add further akcipher.h macros
---
 crypto/akcipher.c  | 34 ++-
 include/crypto/internal/akcipher.h | 84 ++
 2 files changed, 117 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..def301e 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
 #ifdef CONFIG_NET
@@ -75,9 +76,17 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+static void crypto_akcipher_free_instance(struct crypto_instance *inst)
+{
+   struct akcipher_instance *akcipher = akcipher_instance(inst);
+
+   akcipher->free(akcipher);
+}
+
 static const struct crypto_type crypto_akcipher_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_akcipher_init_tfm,
+   .free = crypto_akcipher_free_instance,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
 #endif
@@ -88,6 +97,14 @@ static const struct crypto_type crypto_akcipher_type = {
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
 
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+u32 type, u32 mask)
+{
+   spawn->base.frontend = &crypto_akcipher_type;
+   return crypto_grab_spawn(&spawn->base, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_akcipher);
+
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
 {
@@ -95,13 +112,20 @@ struct crypto_akcipher *crypto_alloc_akcipher(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);
 
-int crypto_register_akcipher(struct akcipher_alg *alg)
+static void akcipher_prepare_alg(struct akcipher_alg *alg)
 {
struct crypto_alg *base = &alg->base;
 
base->cra_type = &crypto_akcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
+}
+
+int crypto_register_akcipher(struct akcipher_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   akcipher_prepare_alg(alg);
return crypto_register_alg(base);
 }
 EXPORT_SYMBOL_GPL(crypto_register_akcipher);
@@ -112,5 +136,13 @@ void crypto_unregister_akcipher(struct akcipher_alg *alg)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_akcipher);
 
+int akcipher_register_instance(struct crypto_template *tmpl,
+  struct akcipher_instance *inst)
+{
+   akcipher_prepare_alg(&inst->alg);
+   return crypto_register_instance(tmpl, akcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(akcipher_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Generic public key cipher type");
diff --git a/include/crypto/internal/akcipher.h 
b/include/crypto/internal/akcipher.h
index 9a2bda1..8f90c99 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -13,6 +13,22 @@
 #ifndef _CRYPTO_AKCIPHER_INT_H
 #define _CRYPTO_AKCIPHER_INT_H
 #include 
+#include 
+
+struct akcipher_instance {
+   void (*free)(struct akcipher_instance *inst);
+   union {
+   struct {
+   char head[offsetof(struct akcipher_alg, base)];
+   struct crypto_instance base;
+   } s;
+   struct akcipher_alg alg;
+   };
+};
+
+struct crypto_akcipher_spawn {
+   struct crypto_spawn base;
+};
 
 /*
  * Transform internal helpers.
@@ -38,6 +54,62 @@ static inline const char *akcipher_alg_name(struct 
crypto_akcipher *tfm)
return crypto_akcipher_tfm(tfm)->__crt_alg->cra_name;
 }
 
+static inline struct crypto_instance *akcipher_crypto_instance(
+   struct akcipher_instance *inst)
+{
+   return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline struct akcipher_instance *akcipher_instance(
+   struct crypto_instance *inst)
+{
+   return container_of(&inst->alg, struct akcipher_instance, alg.base);
+}
+
+static inline struct akcipher_instance *akcipher_alg_instance(
+   struct crypto_akcipher *akcipher)
+{
+   return akcipher_instance(crypto_tfm_alg_instance(&akcipher->base));
+}
+
+static inline void *akcipher_instance_ctx(struct akcipher_instance *inst)
+{
+   re

[PATCH v5 3/4] crypto: akcipher: add akcipher declarations needed by templates.

2015-11-26 Thread Andrew Zaborowski
Add a struct akcipher_instance and struct akcipher_spawn similar to
how AEAD declares them and the macros for converting to/from
crypto_instance/crypto_spawn.  Also add register functions to
avoid exposing crypto_akcipher_type.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: no changes since v1
v3: drop the new crypto_akcipher_type methods and
add struct akcipher_instance
v4: avoid exposing crypto_akcipher_type after all, add struct akcipher_spawn
and utilities
v5: add akcipher_instance.free
---
 crypto/akcipher.c  | 39 -
 include/crypto/internal/akcipher.h | 60 ++
 2 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..eef68e8 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
 #ifdef CONFIG_NET
@@ -75,9 +76,22 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+static void crypto_akcipher_free_instance(struct crypto_instance *inst)
+{
+   struct akcipher_instance *akcipher = akcipher_instance(inst);
+
+   if (!akcipher->free) {
+   inst->tmpl->free(inst);
+   return;
+   }
+
+   akcipher->free(akcipher);
+}
+
 static const struct crypto_type crypto_akcipher_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_akcipher_init_tfm,
+   .free = crypto_akcipher_free_instance,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
 #endif
@@ -88,6 +102,14 @@ static const struct crypto_type crypto_akcipher_type = {
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
 
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+u32 type, u32 mask)
+{
+   spawn->base.frontend = &crypto_akcipher_type;
+   return crypto_grab_spawn(&spawn->base, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_akcipher);
+
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
 {
@@ -95,13 +117,20 @@ struct crypto_akcipher *crypto_alloc_akcipher(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);
 
-int crypto_register_akcipher(struct akcipher_alg *alg)
+static void akcipher_prepare_alg(struct akcipher_alg *alg)
 {
struct crypto_alg *base = &alg->base;
 
base->cra_type = &crypto_akcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
+}
+
+int crypto_register_akcipher(struct akcipher_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   akcipher_prepare_alg(alg);
return crypto_register_alg(base);
 }
 EXPORT_SYMBOL_GPL(crypto_register_akcipher);
@@ -112,5 +141,13 @@ void crypto_unregister_akcipher(struct akcipher_alg *alg)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_akcipher);
 
+int akcipher_register_instance(struct crypto_template *tmpl,
+  struct akcipher_instance *inst)
+{
+   akcipher_prepare_alg(&inst->alg);
+   return crypto_register_instance(tmpl, akcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(akcipher_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Generic public key cipher type");
diff --git a/include/crypto/internal/akcipher.h 
b/include/crypto/internal/akcipher.h
index 9a2bda1..9e4103d 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -13,6 +13,22 @@
 #ifndef _CRYPTO_AKCIPHER_INT_H
 #define _CRYPTO_AKCIPHER_INT_H
 #include 
+#include 
+
+struct akcipher_instance {
+   void (*free)(struct akcipher_instance *inst);
+   union {
+   struct {
+   char head[offsetof(struct akcipher_alg, base)];
+   struct crypto_instance base;
+   } s;
+   struct akcipher_alg alg;
+   };
+};
+
+struct crypto_akcipher_spawn {
+   struct crypto_spawn base;
+};
 
 /*
  * Transform internal helpers.
@@ -38,6 +54,38 @@ static inline const char *akcipher_alg_name(struct 
crypto_akcipher *tfm)
return crypto_akcipher_tfm(tfm)->__crt_alg->cra_name;
 }
 
+static inline struct crypto_instance *akcipher_crypto_instance(
+   struct akcipher_instance *inst)
+{
+   return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline struct akcipher_instance *akcipher_instance(
+   struct crypto_instance *inst)
+{
+   return container_of(&inst->alg, struct akcipher_instance, alg.base);
+}
+
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+   u32 type, u32 mask);
+
+static inline void crypto_drop_akcipher(struct crypto_akcipher_spawn *spawn)
+{
+   crypto_drop_spawn(&spawn->base);

[PATCH v4 3/4] crypto: akcipher: add akcipher declarations needed by templates.

2015-11-25 Thread Andrew Zaborowski
Add a struct akcipher_instance and struct akcipher_spawn similar to
how AEAD declares them and the macros for converting to/from
crypto_instance/crypto_spawn.  Also add register functions to
avoid exposing crypto_akcipher_type.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: no changes since v1
v3: drop the new crypto_akcipher_type methods and
add struct akcipher_instance
v4: avoid exposing crypto_akcipher_type after all, add struct akcipher_spawn
and utilities
---
 crypto/akcipher.c  | 26 -
 include/crypto/internal/akcipher.h | 59 ++
 2 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..47169fb 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -21,6 +21,7 @@
 #include <linux/cryptouser.h>
 #include <net/netlink.h>
 #include <crypto/akcipher.h>
+#include <crypto/internal/akcipher.h>
 #include "internal.h"
 
 #ifdef CONFIG_NET
@@ -88,6 +89,14 @@ static const struct crypto_type crypto_akcipher_type = {
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
 
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+u32 type, u32 mask)
+{
+   spawn->base.frontend = &crypto_akcipher_type;
+   return crypto_grab_spawn(&spawn->base, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_akcipher);
+
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
 {
@@ -95,13 +104,20 @@ struct crypto_akcipher *crypto_alloc_akcipher(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_akcipher);
 
-int crypto_register_akcipher(struct akcipher_alg *alg)
+static void akcipher_prepare_alg(struct akcipher_alg *alg)
 {
struct crypto_alg *base = &alg->base;
 
base->cra_type = &crypto_akcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_AKCIPHER;
+}
+
+int crypto_register_akcipher(struct akcipher_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   akcipher_prepare_alg(alg);
return crypto_register_alg(base);
 }
 EXPORT_SYMBOL_GPL(crypto_register_akcipher);
@@ -112,5 +128,13 @@ void crypto_unregister_akcipher(struct akcipher_alg *alg)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_akcipher);
 
+int akcipher_register_instance(struct crypto_template *tmpl,
+  struct akcipher_instance *inst)
+{
+   akcipher_prepare_alg(&inst->alg);
+   return crypto_register_instance(tmpl, akcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(akcipher_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Generic public key cipher type");
diff --git a/include/crypto/internal/akcipher.h 
b/include/crypto/internal/akcipher.h
index 9a2bda1..e38d079 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -13,6 +13,21 @@
 #ifndef _CRYPTO_AKCIPHER_INT_H
 #define _CRYPTO_AKCIPHER_INT_H
 #include <crypto/akcipher.h>
+#include <crypto/algapi.h>
+
+struct akcipher_instance {
+   union {
+   struct {
+   char head[offsetof(struct akcipher_alg, base)];
+   struct crypto_instance base;
+   } s;
+   struct akcipher_alg alg;
+   };
+};
+
+struct crypto_akcipher_spawn {
+   struct crypto_spawn base;
+};
 
 /*
  * Transform internal helpers.
@@ -38,6 +53,38 @@ static inline const char *akcipher_alg_name(struct 
crypto_akcipher *tfm)
return crypto_akcipher_tfm(tfm)->__crt_alg->cra_name;
 }
 
+static inline struct crypto_instance *akcipher_crypto_instance(
+   struct akcipher_instance *inst)
+{
+   return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline struct akcipher_instance *akcipher_instance(
+   struct crypto_instance *inst)
+{
+   return container_of(&inst->alg, struct akcipher_instance, alg.base);
+}
+
+int crypto_grab_akcipher(struct crypto_akcipher_spawn *spawn, const char *name,
+   u32 type, u32 mask);
+
+static inline void crypto_drop_akcipher(struct crypto_akcipher_spawn *spawn)
+{
+   crypto_drop_spawn(&spawn->base);
+}
+
+static inline struct akcipher_alg *crypto_spawn_akcipher_alg(
+   struct crypto_akcipher_spawn *spawn)
+{
+   return container_of(spawn->base.alg, struct akcipher_alg, base);
+}
+
+static inline struct crypto_akcipher *crypto_spawn_akcipher(
+   struct crypto_akcipher_spawn *spawn)
+{
+   return crypto_spawn_tfm2(&spawn->base);
+}
+
 /**
  * crypto_register_akcipher() -- Register public key algorithm
  *
@@ -57,4 +104,16 @@ int crypto_register_akcipher(struct akcipher_alg *alg);
  * @alg:   algorithm definition
  */
 void crypto_unregister_akcipher(struct akcipher_alg *alg);
+
+/**
+ * akcipher_register_instance() -- Register public key template instance
+ *
+ * Function registers an implementation of an asymmetric key algorithm

[PATCH v4 2/4] crypto: rsa: only require output buffers as big as needed.

2015-11-25 Thread Andrew Zaborowski
The RSA operations explicitly left-align the integers being written,
skipping any leading zero bytes, but still require the output buffers to
include just enough space for the integer + the leading zero bytes.
Since the size of integer + the leading zero bytes (i.e. the key modulus
size) can now be obtained more easily through crypto_akcipher_maxsize
change the operations to only require as big a buffer as actually needed
if the caller has that information.  The semantics for request->dst_len
don't change.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
No changes since v1
---
 crypto/rsa.c | 24 
 1 file changed, 24 deletions(-)

diff --git a/crypto/rsa.c b/crypto/rsa.c
index 1093e04..58aad69 100644
--- a/crypto/rsa.c
+++ b/crypto/rsa.c
@@ -91,12 +91,6 @@ static int rsa_enc(struct akcipher_request *req)
goto err_free_c;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_c;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -136,12 +130,6 @@ static int rsa_dec(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
c = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!c)
@@ -180,12 +168,6 @@ static int rsa_sign(struct akcipher_request *req)
goto err_free_s;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_s;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -225,12 +207,6 @@ static int rsa_verify(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
s = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!s) {
-- 
2.1.4

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v4 1/4] lib/mpi: only require buffers as big as needed for the integer

2015-11-25 Thread Andrew Zaborowski
Since mpi_write_to_sgl and mpi_read_buffer explicitly left-align the
integers being written it makes no sense to require a buffer big enough for
the number + the leading zero bytes which are not written.  The error
returned also doesn't convey any information.  So instead require only the
size needed and return -EOVERFLOW to signal when buffer too short.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
No changes since v1
---
 lib/mpi/mpicoder.c | 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
index c7e0a70..074d2df 100644
--- a/lib/mpi/mpicoder.c
+++ b/lib/mpi/mpicoder.c
@@ -135,7 +135,9 @@ EXPORT_SYMBOL_GPL(mpi_read_from_buffer);
  * @buf:   bufer to which the output will be written to. Needs to be at
  * leaset mpi_get_size(a) long.
  * @buf_len:   size of the buf.
- * @nbytes:receives the actual length of the data written.
+ * @nbytes:receives the actual length of the data written on success and
+ * the data to-be-written on -EOVERFLOW in case buf_len was too
+ * small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -148,7 +150,7 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, lzeros = 0;
 
-   if (buf_len < n || !buf || !nbytes)
+   if (!buf || !nbytes)
return -EINVAL;
 
if (sign)
@@ -163,6 +165,11 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
break;
}
 
+   if (buf_len < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
p = buf;
*nbytes = n - lzeros;
 
@@ -332,7 +339,8 @@ EXPORT_SYMBOL_GPL(mpi_set_buffer);
  * @nbytes:in/out param - it has the be set to the maximum number of
  * bytes that can be written to sgl. This has to be at least
  * the size of the integer a. On return it receives the actual
- * length of the data written.
+ * length of the data written on success or the data that would
+ * be written if buffer was too small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -345,7 +353,7 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, x, y = 0, lzeros = 0, buf_len;
 
-   if (!nbytes || *nbytes < n)
+   if (!nbytes)
return -EINVAL;
 
if (sign)
@@ -360,6 +368,11 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
break;
}
 
+   if (*nbytes < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
*nbytes = n - lzeros;
buf_len = sgl->length;
p2 = sg_virt(sgl);
-- 
2.1.4



[PATCH v4 4/4] crypto: RSA padding algorithm

2015-11-25 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP) so it will be needed for
either certificate work in the kernel or the userspace, and I also hear
that it is likely implemented by hardware RSA in which case hardware
implementations of the whole of pkcs1pad(rsa) can be provided.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: rename rsa-padding.c to rsa-pkcs1pad.c,
use a memset instead of a loop,
add a key size check in pkcs1pad_sign,
add a general comment about pkcs1pad_verify
v3: rewrite the initialisation to avoid an obsolete and less flexible
mechanism, now following the aead template initialisation.
v4: follow the aead template initialisation exactly.
---
 crypto/Makefile   |   1 +
 crypto/rsa-pkcs1pad.c | 605 ++
 crypto/rsa.c  |  16 +-
 include/crypto/internal/rsa.h |   2 +
 4 files changed, 623 insertions(+), 1 deletion(-)
 create mode 100644 crypto/rsa-pkcs1pad.c

diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..2acdbbd 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -40,6 +40,7 @@ rsa_generic-y := rsapubkey-asn1.o
 rsa_generic-y += rsaprivkey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
new file mode 100644
index 000..2e03a27
--- /dev/null
+++ b/crypto/rsa-pkcs1pad.c
@@ -0,0 +1,605 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+struct pkcs1pad_request {
+   struct akcipher_request child_req;
+
+   struct scatterlist in_sg[3], out_sg[2];
+   uint8_t *in_buf, *out_buf;
+};
+
+static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_pub_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_priv_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_get_max_size(struct crypto_akcipher *tfm)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+   /*
+* The maximum destination buffer size for the encrypt/sign operations
+* will be the same as for RSA, even though it's smaller for
+* decrypt/verify.
+*/
+
+   return ctx->key_size ?: -EINVAL;
+}
+
+static void pkcs1pad_sg_set_buf(struct scatterlist *sg, void *buf, size_t len,
+   struct scatterlist *next)
+{
+   int nsegs = next ? 1 : 0;
+
+   if (offset_in_page(buf) + len <= PAGE_SIZE) {
+   nsegs += 1;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg, buf, len);
+   } else {
+   nsegs += 2;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg + 0, buf, PAGE_SIZE - offset_in_page(buf));
+   sg_set_buf(sg + 1, buf + PAGE_SIZE - offset_in_page(buf),
+   offset_in_page(buf) + len - PAGE_SIZE);
+   }
+
+   if (next)
+   sg_chain(sg, nsegs, next);
+}
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int 
err)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct pkcs1pad_request *req_ctx = 

[PATCH] crypto: Docs blurb about templates.

2015-11-23 Thread Andrew Zaborowski
Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
These are some notes about the template structs that can take some
head-scratching to figure out from the code.  Please check that this is
the current intended use.
---
 Documentation/crypto/api-intro.txt | 40 ++
 1 file changed, 40 insertions(+)

diff --git a/Documentation/crypto/api-intro.txt 
b/Documentation/crypto/api-intro.txt
index 8b49302..39b5caa 100644
--- a/Documentation/crypto/api-intro.txt
+++ b/Documentation/crypto/api-intro.txt
@@ -117,6 +117,46 @@ Also check the TODO list at the web site listed below to 
see what people
 might already be working on.
 
 
+TEMPLATE ALGORITHMS
+
+Templates dynamically create algorithms based on algorithm names passed
+as parameters.  In most cases they modify how another algorithm works by
+wrapping around an instance of the other algorithm and operating on its
+inputs, outputs, and/or the keys.  They can call the child transform's
+operations in an arbitrary order.  The template can convert one algorithm
+type to another and may also combine multiple instances of one or
+multiple algorithms.
+
+The following additional types are used with templates:
+
+* struct crypto_template
+  Describes the template and has methods to create actual algorithms as
+  crypto_instance structures.  These are not instances of algorithms
+  (transforms), instances of the template are algorithms.  The template
+  does not appear in /proc/crypto but the algorithms do.  The struct
+  crypto_template does not statically determine the resulting crypto
+  types.
+
+* struct crypto_instance
+  Represents an instance of a template.  Its first member is the
+  "struct crypto_alg alg" which is a dynamically created algorithm that
+  behaves like any other.  The structure also points back to the template
+  used.  The crypto type-specific methods and other algorithm context is
+  prepended to struct crypto_instance in a way that it's also prepended
+  to the .alg member.  The children algorithm(s) used by the template
+  instance are pointed to by the crypto_spawn structure(s) normally
+  appended after the crypto_instance.
+
+  Actual transforms are created when the context is allocated and .init_tfm
+  is called same as with non-template algorithms, but the .init_tfm
+  function will need to trigger creation of child transform(s) from the
+  crypto_spawn structure(s).
+
+* struct crypto_spawn
+  Links a template algorithm (crypto_instance) and a reference to one child
+  algorithm.
+
+
 BUGS
 
 Send bug reports to:
-- 
2.1.4



[PATCH v3 1/4] lib/mpi: only require buffers as big as needed for the integer

2015-11-19 Thread Andrew Zaborowski
Since mpi_write_to_sgl and mpi_read_buffer explicitly left-align the
integers being written it makes no sense to require a buffer big enough for
the number + the leading zero bytes which are not written.  The error
returned also doesn't convey any information.  So instead require only the
size needed and return -EOVERFLOW to signal when buffer too short.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
No changes since v1
---
 lib/mpi/mpicoder.c | 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
index c7e0a70..074d2df 100644
--- a/lib/mpi/mpicoder.c
+++ b/lib/mpi/mpicoder.c
@@ -135,7 +135,9 @@ EXPORT_SYMBOL_GPL(mpi_read_from_buffer);
  * @buf:   bufer to which the output will be written to. Needs to be at
  * leaset mpi_get_size(a) long.
  * @buf_len:   size of the buf.
- * @nbytes:receives the actual length of the data written.
+ * @nbytes:receives the actual length of the data written on success and
+ * the data to-be-written on -EOVERFLOW in case buf_len was too
+ * small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -148,7 +150,7 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, lzeros = 0;
 
-   if (buf_len < n || !buf || !nbytes)
+   if (!buf || !nbytes)
return -EINVAL;
 
if (sign)
@@ -163,6 +165,11 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
break;
}
 
+   if (buf_len < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
p = buf;
*nbytes = n - lzeros;
 
@@ -332,7 +339,8 @@ EXPORT_SYMBOL_GPL(mpi_set_buffer);
  * @nbytes:in/out param - it has the be set to the maximum number of
  * bytes that can be written to sgl. This has to be at least
  * the size of the integer a. On return it receives the actual
- * length of the data written.
+ * length of the data written on success or the data that would
+ * be written if buffer was too small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -345,7 +353,7 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, x, y = 0, lzeros = 0, buf_len;
 
-   if (!nbytes || *nbytes < n)
+   if (!nbytes)
return -EINVAL;
 
if (sign)
@@ -360,6 +368,11 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
break;
}
 
+   if (*nbytes < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
*nbytes = n - lzeros;
buf_len = sgl->length;
p2 = sg_virt(sgl);
-- 
2.1.4



[PATCH 4/4] crypto: RSA padding algorithm

2015-11-19 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP) so it will be needed for
either certificate work in the kernel or the userspace, and I also hear
that it is likely implemented by hardware RSA in which case hardware
implementations of the whole of pkcs1pad(rsa) can be provided.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
v2: rename rsa-padding.c to rsa-pkcs1pad.c,
use a memset instead of a loop,
add a key size check in pkcs1pad_sign,
add a general comment about pkcs1pad_verify
v3: rewrite the initialisation to avoid an obsolete and less flexible
mechanism, now following the aead template initialisation.
---
 crypto/Makefile   |   1 +
 crypto/rsa-pkcs1pad.c | 604 ++
 crypto/rsa.c  |  16 +-
 include/crypto/internal/rsa.h |   2 +
 4 files changed, 622 insertions(+), 1 deletion(-)
 create mode 100644 crypto/rsa-pkcs1pad.c

diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..2acdbbd 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -40,6 +40,7 @@ rsa_generic-y := rsapubkey-asn1.o
 rsa_generic-y += rsaprivkey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
new file mode 100644
index 000..8ee22a2
--- /dev/null
+++ b/crypto/rsa-pkcs1pad.c
@@ -0,0 +1,604 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+struct pkcs1pad_request {
+   struct akcipher_request child_req;
+
+   struct scatterlist in_sg[3], out_sg[2];
+   uint8_t *in_buf, *out_buf;
+};
+
+static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_pub_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_priv_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_get_max_size(struct crypto_akcipher *tfm)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+   /*
+* The maximum destination buffer size for the encrypt/sign operations
+* will be the same as for RSA, even though it's smaller for
+* decrypt/verify.
+*/
+
+   return ctx->key_size ?: -EINVAL;
+}
+
+static void pkcs1pad_sg_set_buf(struct scatterlist *sg, void *buf, size_t len,
+   struct scatterlist *next)
+{
+   int nsegs = next ? 1 : 0;
+
+   if (offset_in_page(buf) + len <= PAGE_SIZE) {
+   nsegs += 1;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg, buf, len);
+   } else {
+   nsegs += 2;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg + 0, buf, PAGE_SIZE - offset_in_page(buf));
+   sg_set_buf(sg + 1, buf + PAGE_SIZE - offset_in_page(buf),
+   offset_in_page(buf) + len - PAGE_SIZE);
+   }
+
+   if (next)
+   sg_chain(sg, nsegs, next);
+}
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int 
err)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct pkcs1pad_request *req_ctx = akcipher_request_ctx(req);
+   uint

[PATCH v3 2/4] crypto: rsa: only require output buffers as big as needed.

2015-11-19 Thread Andrew Zaborowski
The RSA operations explicitly left-align the integers being written,
skipping any leading zero bytes, but still require the output buffers to
include just enough space for the integer + the leading zero bytes.
Since the size of integer + the leading zero bytes (i.e. the key modulus
size) can now be obtained more easily through crypto_akcipher_maxsize
change the operations to only require as big a buffer as actually needed
if the caller has that information.  The semantics for request->dst_len
don't change.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
No changes since v1
---
 crypto/rsa.c | 24 
 1 file changed, 24 deletions(-)

diff --git a/crypto/rsa.c b/crypto/rsa.c
index 1093e04..58aad69 100644
--- a/crypto/rsa.c
+++ b/crypto/rsa.c
@@ -91,12 +91,6 @@ static int rsa_enc(struct akcipher_request *req)
goto err_free_c;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_c;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -136,12 +130,6 @@ static int rsa_dec(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
c = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!c)
@@ -180,12 +168,6 @@ static int rsa_sign(struct akcipher_request *req)
goto err_free_s;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_s;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -225,12 +207,6 @@ static int rsa_verify(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
s = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!s) {
-- 
2.1.4



[PATCH 3/4] crypto: akcipher: add crypto_akcipher_type methods needed by templates.

2015-11-13 Thread Andrew Zaborowski
Add two dummy methods that are required by the crypto API internals:
.ctxsize and .init
(just because the framework calls them without checking if they were
provided).  They're only required by the complicated code path needed to
instantiate a template algorithm.  Also expose crypto_akcipher_type like
other crypto types are exposed to be used from outside modules.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/akcipher.c   | 16 +++-
 include/crypto/algapi.h |  1 +
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..6ef7f99 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -53,6 +53,11 @@ static void crypto_akcipher_show(struct seq_file *m, struct 
crypto_alg *alg)
seq_puts(m, "type : akcipher\n");
 }
 
+static int crypto_akcipher_init(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+   return 0;
+}
+
 static void crypto_akcipher_exit_tfm(struct crypto_tfm *tfm)
 {
struct crypto_akcipher *akcipher = __crypto_akcipher_tfm(tfm);
@@ -75,8 +80,16 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
-static const struct crypto_type crypto_akcipher_type = {
+static unsigned int crypto_akcipher_ctxsize(struct crypto_alg *alg, u32 type,
+   u32 mask)
+{
+   return alg->cra_ctxsize;
+}
+
+const struct crypto_type crypto_akcipher_type = {
+   .ctxsize = crypto_akcipher_ctxsize,
.extsize = crypto_alg_extsize,
+   .init = crypto_akcipher_init,
.init_tfm = crypto_akcipher_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
@@ -87,6 +100,7 @@ static const struct crypto_type crypto_akcipher_type = {
.type = CRYPTO_ALG_TYPE_AKCIPHER,
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
+EXPORT_SYMBOL_GPL(crypto_akcipher_type);
 
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index c9fe145..1089f20 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -130,6 +130,7 @@ struct ablkcipher_walk {
 
 extern const struct crypto_type crypto_ablkcipher_type;
 extern const struct crypto_type crypto_blkcipher_type;
+extern const struct crypto_type crypto_akcipher_type;
 
 void crypto_mod_put(struct crypto_alg *alg);
 
-- 
2.1.4



[PATCH 2/4] crypto: rsa: only require output buffers as big as needed.

2015-11-13 Thread Andrew Zaborowski
The RSA operations explicitly left-align the integers being written,
skipping any leading zero bytes, but still require the output buffers to
include just enough space for the integer + the leading zero bytes.
Since the size of integer + the leading zero bytes (i.e. the key modulus
size) can now be obtained more easily through crypto_akcipher_maxsize
change the operations to only require as big a buffer as actually needed
if the caller has that information.  The semantics for request->dst_len
don't change.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/rsa.c | 24 
 1 file changed, 24 deletions(-)

diff --git a/crypto/rsa.c b/crypto/rsa.c
index 1093e04..58aad69 100644
--- a/crypto/rsa.c
+++ b/crypto/rsa.c
@@ -91,12 +91,6 @@ static int rsa_enc(struct akcipher_request *req)
goto err_free_c;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_c;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -136,12 +130,6 @@ static int rsa_dec(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
c = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!c)
@@ -180,12 +168,6 @@ static int rsa_sign(struct akcipher_request *req)
goto err_free_s;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_s;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -225,12 +207,6 @@ static int rsa_verify(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
s = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!s) {
-- 
2.1.4



[PATCH 4/4] crypto: RSA padding algorithm

2015-11-13 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP) so it will be needed for
either certificate work in the kernel or the userspace, and also I hear
that it is likely implemented by hardware RSA in which case an
implementation of the whole of pkcs1pad(rsa) can be provided.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/Makefile   |   1 +
 crypto/rsa-pkcs1pad.c | 593 ++
 crypto/rsa.c  |  16 +-
 include/crypto/internal/rsa.h |   2 +
 4 files changed, 611 insertions(+), 1 deletion(-)
 create mode 100644 crypto/rsa-pkcs1pad.c

diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..2acdbbd 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -40,6 +40,7 @@ rsa_generic-y := rsapubkey-asn1.o
 rsa_generic-y += rsaprivkey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
new file mode 100644
index 000..d8e0c71
--- /dev/null
+++ b/crypto/rsa-pkcs1pad.c
@@ -0,0 +1,593 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+struct pkcs1pad_request {
+   struct akcipher_request child_req;
+
+   struct scatterlist in_sg[3], out_sg[2];
+   uint8_t *in_buf, *out_buf;
+};
+
+static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_pub_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_priv_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_get_max_size(struct crypto_akcipher *tfm)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+   /*
+* The maximum destination buffer size for the encrypt/sign operations
+* will be the same as for RSA, even though it's smaller for
+* decrypt/verify.
+*/
+
+   return ctx->key_size ?: -EINVAL;
+}
+
+static void pkcs1pad_sg_set_buf(struct scatterlist *sg, void *buf, size_t len,
+   struct scatterlist *next)
+{
+   int nsegs = next ? 1 : 0;
+
+   if (offset_in_page(buf) + len <= PAGE_SIZE) {
+   nsegs += 1;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg, buf, len);
+   } else {
+   nsegs += 2;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg + 0, buf, PAGE_SIZE - offset_in_page(buf));
+   sg_set_buf(sg + 1, buf + PAGE_SIZE - offset_in_page(buf),
+   offset_in_page(buf) + len - PAGE_SIZE);
+   }
+
+   if (next)
+   sg_chain(sg, nsegs, next);
+}
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int 
err)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct pkcs1pad_request *req_ctx = akcipher_request_ctx(req);
+   uint8_t zeros[ctx->key_size - req_ctx->child_req.dst_len];
+
+   if (!err) {
+   if (req_ctx->child_req.dst_len < ctx->key_size) {
+   memset(zeros, 0, sizeof(zeros));
+   sg_copy_from_buffer(req->dst,
+   sg_nents_for_len(
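The page-boundary arithmetic in `pkcs1pad_sg_set_buf` above can be sketched in user space; `sg_segments` and `MY_PAGE_SIZE` are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <stddef.h>

#define MY_PAGE_SIZE 4096u	/* stand-in for the kernel's PAGE_SIZE */

/*
 * Mirrors the segment math in pkcs1pad_sg_set_buf(): a kmalloc'ed
 * buffer of up to two pages is virtually contiguous, but needs two
 * scatterlist entries when offset + len crosses a page boundary.
 * Returns the number of segments and fills in their lengths.
 */
static int sg_segments(size_t offset_in_page, size_t len, size_t seg[2])
{
	if (offset_in_page + len <= MY_PAGE_SIZE) {
		seg[0] = len;
		return 1;
	}
	seg[0] = MY_PAGE_SIZE - offset_in_page;	/* rest of first page */
	seg[1] = len - seg[0];			/* spill into second page */
	return 2;
}
```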

[PATCH 1/4] lib/mpi: only require buffers as big as needed for the integer

2015-11-13 Thread Andrew Zaborowski
Since mpi_write_to_sgl and mpi_read_buffer explicitly left-align the
integers being written, it makes no sense to require a buffer big enough
for the number plus the leading zero bytes, which are not written.  The
error returned also doesn't convey any information.  So instead require
only the size needed and return -EOVERFLOW to signal when the buffer is
too short.
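The changed contract can be modelled in user space. `mpi_to_buf` below is a hypothetical stand-in for `mpi_read_buffer` (a fixed-width integer instead of an MPI) that shows only the new size/-EOVERFLOW semantics:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model of the new mpi_read_buffer() contract: only the significant
 * bytes (leading zeros stripped) must fit in the buffer, and on
 * overflow *nbytes reports the size actually needed.  Returns 0 on
 * success, -1 standing in for -EOVERFLOW.
 */
static int mpi_to_buf(uint64_t value, uint8_t *buf, size_t buf_len,
		      size_t *nbytes)
{
	size_t n = 1, i;
	uint64_t v = value;

	while (v >>= 8)
		n++;			/* count significant bytes */

	*nbytes = n;			/* needed (or written) size */
	if (buf_len < n)
		return -1;		/* buffer too short: -EOVERFLOW */

	for (i = 0; i < n; i++)		/* big-endian, left-aligned */
		buf[i] = (uint8_t)(value >> (8 * (n - 1 - i)));
	return 0;
}
```

A caller that knows the integer's true width no longer has to allocate a full modulus-sized buffer, and on failure it learns exactly how much space to retry with.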

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 lib/mpi/mpicoder.c | 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
index c7e0a70..074d2df 100644
--- a/lib/mpi/mpicoder.c
+++ b/lib/mpi/mpicoder.c
@@ -135,7 +135,9 @@ EXPORT_SYMBOL_GPL(mpi_read_from_buffer);
  * @buf:   buffer to which the output will be written to. Needs to be at
  * least mpi_get_size(a) long.
  * @buf_len:   size of the buf.
- * @nbytes:receives the actual length of the data written.
+ * @nbytes:receives the actual length of the data written on success and
+ * the data to-be-written on -EOVERFLOW in case buf_len was too
+ * small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -148,7 +150,7 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, lzeros = 0;
 
-   if (buf_len < n || !buf || !nbytes)
+   if (!buf || !nbytes)
return -EINVAL;
 
if (sign)
@@ -163,6 +165,11 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
break;
}
 
+   if (buf_len < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
p = buf;
*nbytes = n - lzeros;
 
@@ -332,7 +339,8 @@ EXPORT_SYMBOL_GPL(mpi_set_buffer);
  * @nbytes:in/out param - it has to be set to the maximum number of
  * bytes that can be written to sgl. This has to be at least
  * the size of the integer a. On return it receives the actual
- * length of the data written.
+ * length of the data written on success or the data that would
+ * be written if buffer was too small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -345,7 +353,7 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, x, y = 0, lzeros = 0, buf_len;
 
-   if (!nbytes || *nbytes < n)
+   if (!nbytes)
return -EINVAL;
 
if (sign)
@@ -360,6 +368,11 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
break;
}
 
+   if (*nbytes < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
*nbytes = n - lzeros;
buf_len = sgl->length;
p2 = sg_virt(sgl);
-- 
2.1.4



[PATCH 3/4] crypto: akcipher: add crypto_akcipher_type methods needed by templates.

2015-11-10 Thread Andrew Zaborowski
Add two dummy methods that are required by the crypto API internals:
.ctxsize and .init (the framework calls them without checking whether
they were provided).  They're only required by the more complicated code
path needed to instantiate a template algorithm.  Also expose
crypto_akcipher_type, as other crypto types are exposed, so it can be
used from outside modules.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/akcipher.c   | 16 +++-
 include/crypto/algapi.h |  1 +
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 120ec04..6ef7f99 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -53,6 +53,11 @@ static void crypto_akcipher_show(struct seq_file *m, struct 
crypto_alg *alg)
seq_puts(m, "type : akcipher\n");
 }
 
+static int crypto_akcipher_init(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+   return 0;
+}
+
 static void crypto_akcipher_exit_tfm(struct crypto_tfm *tfm)
 {
struct crypto_akcipher *akcipher = __crypto_akcipher_tfm(tfm);
@@ -75,8 +80,16 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
-static const struct crypto_type crypto_akcipher_type = {
+static unsigned int crypto_akcipher_ctxsize(struct crypto_alg *alg, u32 type,
+   u32 mask)
+{
+   return alg->cra_ctxsize;
+}
+
+const struct crypto_type crypto_akcipher_type = {
+   .ctxsize = crypto_akcipher_ctxsize,
.extsize = crypto_alg_extsize,
+   .init = crypto_akcipher_init,
.init_tfm = crypto_akcipher_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
@@ -87,6 +100,7 @@ static const struct crypto_type crypto_akcipher_type = {
.type = CRYPTO_ALG_TYPE_AKCIPHER,
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
+EXPORT_SYMBOL_GPL(crypto_akcipher_type);
 
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index c9fe145..1089f20 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -130,6 +130,7 @@ struct ablkcipher_walk {
 
 extern const struct crypto_type crypto_ablkcipher_type;
 extern const struct crypto_type crypto_blkcipher_type;
+extern const struct crypto_type crypto_akcipher_type;
 
 void crypto_mod_put(struct crypto_alg *alg);
 
-- 
2.1.4



[PATCH 4/4] crypto: RSA padding algorithm

2015-11-10 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP), so it will be needed for
certificate work either in the kernel or in userspace, and also I hear
that it is likely implemented by hardware RSA, in which case an
implementation of the whole "pkcs1pad(rsa)" can be provided.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/Makefile   |   1 +
 crypto/rsa-padding.c  | 586 ++
 crypto/rsa.c  |  16 +-
 include/crypto/internal/rsa.h |   2 +
 4 files changed, 604 insertions(+), 1 deletion(-)
 create mode 100644 crypto/rsa-padding.c

diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..46fe0b4 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -40,6 +40,7 @@ rsa_generic-y := rsapubkey-asn1.o
 rsa_generic-y += rsaprivkey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-padding.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/rsa-padding.c b/crypto/rsa-padding.c
new file mode 100644
index 000..b9f9f31
--- /dev/null
+++ b/crypto/rsa-padding.c
@@ -0,0 +1,586 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+struct pkcs1pad_request {
+   struct akcipher_request child_req;
+
+   struct scatterlist in_sg[3], out_sg[2];
+   uint8_t *in_buf, *out_buf;
+};
+
+static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_pub_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   int err, size;
+
+   err = crypto_akcipher_set_priv_key(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   size = crypto_akcipher_maxsize(ctx->child);
+
+   ctx->key_size = size > 0 ? size : 0;
+   if (size <= 0)
+   err = size;
+   }
+
+   return err;
+}
+
+static int pkcs1pad_get_max_size(struct crypto_akcipher *tfm)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+   /*
+* The maximum destination buffer size for the encrypt/sign operations
+* will be the same as for RSA, even though it's smaller for
+* decrypt/verify.
+*/
+
+   return ctx->key_size ?: -EINVAL;
+}
+
+static void pkcs1pad_sg_set_buf(struct scatterlist *sg, void *buf, size_t len,
+   struct scatterlist *next)
+{
+   int nsegs = next ? 1 : 0;
+
+   if (offset_in_page(buf) + len <= PAGE_SIZE) {
+   nsegs += 1;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg, buf, len);
+   } else {
+   nsegs += 2;
+   sg_init_table(sg, nsegs);
+   sg_set_buf(sg + 0, buf, PAGE_SIZE - offset_in_page(buf));
+   sg_set_buf(sg + 1, buf + PAGE_SIZE - offset_in_page(buf),
+   offset_in_page(buf) + len - PAGE_SIZE);
+   }
+
+   if (next)
+   sg_chain(sg, nsegs, next);
+}
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int 
err)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct pkcs1pad_request *req_ctx = akcipher_request_ctx(req);
+   uint8_t zeros[ctx->key_size - req_ctx->child_req.dst_len];
+
+   if (!err) {
+   if (req_ctx->child_req.dst_len < ctx->key_size) {
+   memset(zeros, 0, sizeof(zeros));
+   sg_copy_from_buffer(req->dst,
+   sg_nent

[PATCH 2/4] crypto: rsa: only require output buffers as big as needed.

2015-11-10 Thread Andrew Zaborowski
The RSA operations explicitly left-align the integers being written,
skipping any leading zero bytes, but still require the output buffers to
include just enough space for the integer plus the leading zero bytes.
Since the size of the integer plus the leading zero bytes (i.e. the key
modulus size) can now be obtained more easily through
crypto_akcipher_maxsize, change the operations to only require as big a
buffer as actually needed if the caller has that information.  The
semantics of request->dst_len don't change.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/rsa.c | 24 
 1 file changed, 24 deletions(-)

diff --git a/crypto/rsa.c b/crypto/rsa.c
index 1093e04..58aad69 100644
--- a/crypto/rsa.c
+++ b/crypto/rsa.c
@@ -91,12 +91,6 @@ static int rsa_enc(struct akcipher_request *req)
goto err_free_c;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_c;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -136,12 +130,6 @@ static int rsa_dec(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
c = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!c)
@@ -180,12 +168,6 @@ static int rsa_sign(struct akcipher_request *req)
goto err_free_s;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_s;
-   }
-
ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
@@ -225,12 +207,6 @@ static int rsa_verify(struct akcipher_request *req)
goto err_free_m;
}
 
-   if (req->dst_len < mpi_get_size(pkey->n)) {
-   req->dst_len = mpi_get_size(pkey->n);
-   ret = -EOVERFLOW;
-   goto err_free_m;
-   }
-
ret = -ENOMEM;
s = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!s) {
-- 
2.1.4



[PATCH 1/4] lib/mpi: only require buffers as big as needed for the integer

2015-11-10 Thread Andrew Zaborowski
Since mpi_write_to_sgl and mpi_read_buffer explicitly left-align the
integers being written, it makes no sense to require a buffer big enough
for the number plus the leading zero bytes, which are not written.  The
error returned also doesn't convey any information.  So instead require
only the size needed and return -EOVERFLOW to signal when the buffer is
too short.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 lib/mpi/mpicoder.c | 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
index c7e0a70..074d2df 100644
--- a/lib/mpi/mpicoder.c
+++ b/lib/mpi/mpicoder.c
@@ -135,7 +135,9 @@ EXPORT_SYMBOL_GPL(mpi_read_from_buffer);
  * @buf:   buffer to which the output will be written to. Needs to be at
  * least mpi_get_size(a) long.
  * @buf_len:   size of the buf.
- * @nbytes:receives the actual length of the data written.
+ * @nbytes:receives the actual length of the data written on success and
+ * the data to-be-written on -EOVERFLOW in case buf_len was too
+ * small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -148,7 +150,7 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, lzeros = 0;
 
-   if (buf_len < n || !buf || !nbytes)
+   if (!buf || !nbytes)
return -EINVAL;
 
if (sign)
@@ -163,6 +165,11 @@ int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, 
unsigned *nbytes,
break;
}
 
+   if (buf_len < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
p = buf;
*nbytes = n - lzeros;
 
@@ -332,7 +339,8 @@ EXPORT_SYMBOL_GPL(mpi_set_buffer);
  * @nbytes:in/out param - it has to be set to the maximum number of
  * bytes that can be written to sgl. This has to be at least
  * the size of the integer a. On return it receives the actual
- * length of the data written.
+ * length of the data written on success or the data that would
+ * be written if buffer was too small.
  * @sign:  if not NULL, it will be set to the sign of a.
  *
  * Return: 0 on success or error code in case of error
@@ -345,7 +353,7 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
unsigned int n = mpi_get_size(a);
int i, x, y = 0, lzeros = 0, buf_len;
 
-   if (!nbytes || *nbytes < n)
+   if (!nbytes)
return -EINVAL;
 
if (sign)
@@ -360,6 +368,11 @@ int mpi_write_to_sgl(MPI a, struct scatterlist *sgl, 
unsigned *nbytes,
break;
}
 
+   if (*nbytes < n - lzeros) {
+   *nbytes = n - lzeros;
+   return -EOVERFLOW;
+   }
+
*nbytes = n - lzeros;
buf_len = sgl->length;
p2 = sg_virt(sgl);
-- 
2.1.4



[RFC PATCH] crypto: RSA padding transform

2015-09-05 Thread Andrew Zaborowski
This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa-generic)".  The reason for adding this is that RSA is almost
never used without this padding (or OAEP), so it will be needed for
certificate work either in the kernel or in userspace, and also I hear
that it is likely implemented by hardware RSA.

From the generic code I could not figure out why the crypto_type.init and
.ctxsize functions are needed for a template but not for a standalone
algorithm, but I see the word "compat" in their implementations for
shash or blkcipher.  If they are to be added for akcipher it should
probably be a separate patch.

Signed-off-by: Andrew Zaborowski <andrew.zaborow...@intel.com>
---
 crypto/Makefile   |   1 +
 crypto/akcipher.c |  16 +-
 crypto/rsa-padding.c  | 438 ++
 crypto/rsa.c  |  16 +-
 include/crypto/akcipher.h |   4 +-
 include/crypto/algapi.h   |   1 +
 include/crypto/internal/rsa.h |   2 +
 7 files changed, 474 insertions(+), 4 deletions(-)
 create mode 100644 crypto/rsa-padding.c

diff --git a/crypto/Makefile b/crypto/Makefile
index a16a7e7..b548a27 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -36,6 +36,7 @@ clean-files += rsakey-asn1.c rsakey-asn1.h
 rsa_generic-y := rsakey-asn1.o
 rsa_generic-y += rsa.o
 rsa_generic-y += rsa_helper.o
+rsa_generic-y += rsa-padding.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 cryptomgr-y := algboss.o testmgr.o
diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 528ae6a..7f82ee8 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -54,6 +54,11 @@ static void crypto_akcipher_show(struct seq_file *m, struct 
crypto_alg *alg)
seq_puts(m, "type : akcipher\n");
 }
 
+static int crypto_akcipher_init(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+   return 0;
+}
+
 static void crypto_akcipher_exit_tfm(struct crypto_tfm *tfm)
 {
struct crypto_akcipher *akcipher = __crypto_akcipher_tfm(tfm);
@@ -76,8 +81,16 @@ static int crypto_akcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
-static const struct crypto_type crypto_akcipher_type = {
+static unsigned int crypto_akcipher_ctxsize(struct crypto_alg *alg, u32 type,
+   u32 mask)
+{
+   return alg->cra_ctxsize;
+}
+
+const struct crypto_type crypto_akcipher_type = {
+   .ctxsize = crypto_akcipher_ctxsize,
.extsize = crypto_alg_extsize,
+   .init = crypto_akcipher_init,
.init_tfm = crypto_akcipher_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_akcipher_show,
@@ -88,6 +101,7 @@ static const struct crypto_type crypto_akcipher_type = {
.type = CRYPTO_ALG_TYPE_AKCIPHER,
.tfmsize = offsetof(struct crypto_akcipher, base),
 };
+EXPORT_SYMBOL_GPL(crypto_akcipher_type);
 
 struct crypto_akcipher *crypto_alloc_akcipher(const char *alg_name, u32 type,
  u32 mask)
diff --git a/crypto/rsa-padding.c b/crypto/rsa-padding.c
new file mode 100644
index 000..106ce62
--- /dev/null
+++ b/crypto/rsa-padding.c
@@ -0,0 +1,438 @@
+/*
+ * RSA padding templates.
+ *
+ * Copyright (c) 2015  Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct pkcs1pad_ctx {
+   struct crypto_akcipher *child;
+
+   unsigned int key_size;
+};
+
+static int pkcs1pad_setkey(struct crypto_akcipher *tfm, const void *key,
+   unsigned int keylen)
+{
+   struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct akcipher_request *req;
+   int err;
+
+   err = crypto_akcipher_setkey(ctx->child, key, keylen);
+
+   if (!err) {
+   /* Find out new modulus size from rsa implementation */
+   req = akcipher_request_alloc(ctx->child, GFP_KERNEL);
+   if (IS_ERR(req))
+   return PTR_ERR(req);
+
+   akcipher_request_set_crypt(req, NULL, NULL, 0, 0);
+   if (crypto_akcipher_encrypt(req) != -EOVERFLOW)
+   err = -EINVAL;
+
+   ctx->key_size = req->dst_len;
+   akcipher_request_free(req);
+   }
+
+   return err;
+}
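The key-size probe above relies on a zero-length encrypt failing with -EOVERFLOW while the required size is written back into dst_len. A toy user-space model of that pattern follows (all names are illustrative, not kernel API; 75 is assumed as Linux's EOVERFLOW errno value):

```c
#include <assert.h>
#include <stddef.h>

#define TOY_EOVERFLOW 75	/* assumption: Linux's EOVERFLOW errno */

struct toy_akcipher {
	size_t key_size;	/* modulus size established by setkey */
};

/*
 * Toy encrypt: like the RSA akcipher, a destination shorter than the
 * modulus fails with -EOVERFLOW and reports the needed size back.
 */
static int toy_encrypt(struct toy_akcipher *tfm, size_t *dst_len)
{
	if (*dst_len < tfm->key_size) {
		*dst_len = tfm->key_size;
		return -TOY_EOVERFLOW;
	}
	return 0;
}

/* The probe pattern from pkcs1pad_setkey(): a zero-length request is
 * guaranteed to overflow, so dst_len comes back as the modulus size. */
static int probe_key_size(struct toy_akcipher *tfm, size_t *key_size)
{
	size_t dst_len = 0;

	if (toy_encrypt(tfm, &dst_len) != -TOY_EOVERFLOW)
		return -1;	/* unexpected result */
	*key_size = dst_len;
	return 0;
}
```

The later revisions of this patch replace the dummy-request probe with crypto_akcipher_maxsize(), which yields the same value without issuing a request.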
+
+static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int 
err)
+{
+   struct akcipher_request *child_req = akcipher_request_ctx(req);
+
+   kfree(child_req->src);
+   req->dst_len = child_req->dst_len;
+   return err;
+}
+
+static void pkcs1pad_encrypt_sign_complete_cb(
+   struct cr