Re: [RFC PATCH v3] crypto: Add IV generation algorithms

2017-01-18 Thread Binoy Jayan
Hi Gilad,

On 18 January 2017 at 20:51, Gilad Ben-Yossef  wrote:
> I have some review comments and a bug report -

Thank you very much for testing this on ARM and for the comments.

> I'm pretty sure this needs to be
>
>  n2 = bio_segments(ctx->bio_out);

Yes you are right, that was a typo :)

>> +
>> +   rinfo.is_write = bio_data_dir(ctx->bio_in) == WRITE;
>
> Please consider wrapping the above boolean expression in parenthesis.

Well, I can do that to enhance the clarity.

>> +   rinfo.iv_sector = ctx->cc_sector;
>> +   rinfo.nents = nents;
>> +   rinfo.iv = iv;
>> +
>> +   skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
>
> Also, where do the scatterlists src2 and dst2 that you use with
> sg_set_page() get sg_init_table() called on them?
> I couldn't figure it out...

Thank you for pointing this out. I missed adding sg_init_table(src2, 1)
and sg_init_table(dst2, 1); sg_set_page is used in geniv_iter_block.
This is probably the reason for the panic on the ARM platform. However, it
ran fine under qemu-x86. Maybe I should set up an ARM platform
for testing too.

Regards,
Binoy
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH 0/6] Add bulk skcipher requests to crypto API and dm-crypt

2017-01-18 Thread Binoy Jayan
Hi Milan,

On 13 January 2017 at 17:31, Ondrej Mosnáček  wrote:
> 2017-01-13 11:41 GMT+01:00 Herbert Xu :
>> On Thu, Jan 12, 2017 at 01:59:52PM +0100, Ondrej Mosnacek wrote:
>>> the goal of this patchset is to allow those skcipher API users that need to
>>> process batches of small messages (especially dm-crypt) to do so 
>>> efficiently.
>>
>> Please explain why this can't be done with the existing framework
>> using IV generators similar to the ones used for IPsec.
>
> As I already mentioned in another thread, there are basically two reasons:
>
> 1) Milan would like to add authenticated encryption support to
> dm-crypt (see [1]) and as part of this change, a new random IV mode
> would be introduced. This mode generates a random IV for each sector
> write, includes it in the authenticated data and stores it in the
> sector's metadata (in a separate part of the disk). In this case
> dm-crypt will need to have control over the IV generation (or at least
> be able to somehow retrieve it after the crypto operation... but
> passing RNG responsibility to drivers doesn't seem to be a good idea
> anyway).
>
> 2) With this API, drivers wouldn't have to provide implementations for
> specific IV generation modes, and just implement bulk requests for the
> common modes/algorithms (XTS, CBC, ...) while still getting
> performance benefit.

I just sent out v3 of the dm-crypt changes I was working on. I
came across your patches for authenticated encryption support.
Although I haven't looked at them entirely, I was wondering how it all
could be put together, including the points Ondrej mentioned. I will
look at them more. Please keep me in cc when you send out the next
revision, if that is possible.

Thanks,
Binoy


Re: [PATCH 3/8] random: trigger random_ready callback upon crng_init == 1

2017-01-18 Thread Stephan Müller
On Tuesday, 17 January 2017 at 23:12:50 CET, Theodore Ts'o wrote:

Hi Theodore,

> On Tue, Dec 27, 2016 at 11:39:57PM +0100, Stephan Müller wrote:
> > The random_ready callback mechanism is intended to replicate the
> > getrandom system call behavior to in-kernel users. As the getrandom
> > system call unblocks with crng_init == 1, trigger the random_ready
> > wakeup call at the same time.
> 
> It was deliberate that random_ready would only get triggered with
> crng_init==2.
> 
> In general I'm assuming kernel callers really want real randomness (as
> opposed to using prandom), where as there's a lot of b.s. userspace
> users of kernel randomness (for things that really don't require
> cryptographic randomness, e.g., for salting Python dictionaries,
> systemd/udev using /dev/urandom for non-cryptographic, non-security
> applications etc.)

Users of getrandom want to ensure that they get random data from a DRNG that 
is seeded, just as in-kernel users may want if they choose the callback 
approach.

I do not understand why there should be different treatment of in-kernel vs 
user space callers in that respect.

(And yes, I do not want to open a discussion on whether crng_init==1 can be 
considered a sufficiently seeded DRNG, as such a discussion will lead 
nowhere.)

Ciao
Stephan


[PATCH] crypto: tcrypt - Add mode to test specified algs

2017-01-18 Thread Rabin Vincent
From: Rabin Vincent 

tcrypt offers a bunch of mode= values to test various (groups of)
algorithms, but there is no way provided to test a subset of the
algorithms.  This adds a new mode=2000 which interprets alg= as a
colon-separated list of algorithms to test with alg_test().  Colon is
used since the names may contain commas.

This is useful during driver development and also for regression testing
to avoid the errors that are otherwise generated when attempting to test
non-enabled algorithms.

 # insmod tcrypt.ko dyndbg mode=2000 alg="cbc(aes):ecb(aes):hmac(sha256):sha256:xts(aes)"
 [  649.418569] tcrypt: testing cbc(aes)
 [  649.420809] tcrypt: testing ecb(aes)
 [  649.422627] tcrypt: testing hmac(sha256)
 [  649.424861] tcrypt: testing sha256
 [  649.426368] tcrypt: testing xts(aes)
 [  649.430014] tcrypt: all tests passed

Signed-off-by: Rabin Vincent 
---
 crypto/tcrypt.c | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 9a11f3c..fe5adf6 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1021,7 +1021,7 @@ static inline int tcrypt_test(const char *alg)
return ret;
 }
 
-static int do_test(const char *alg, u32 type, u32 mask, int m)
+static int do_test(char *alg, u32 type, u32 mask, int m)
 {
int i;
int ret = 0;
@@ -2042,6 +2042,17 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
case 1000:
test_available();
break;
+
+   case 2000:
+   while (alg) {
+   char *tmp = strsep(&alg, ":");
+
+   if (!tmp || !*tmp)
+   break;
+
+   ret += tcrypt_test(tmp);
+   }
+   break;
}
 
return ret;
-- 
2.1.4



Re: [PATCH 7/8] random: remove noop function call to xfer_secondary_pool

2017-01-18 Thread Theodore Ts'o
On Tue, Dec 27, 2016 at 11:41:46PM +0100, Stephan Müller wrote:
> Since the introduction of the ChaCha20 DRNG, extract_entropy is only
> invoked with the input_pool. For this entropy pool, xfer_secondary_pool
> is a no-op and can therefore be safely removed.
> 
> Signed-off-by: Stephan Mueller 

Instead of doing some minor deletions of single lines, what I want to
do is to look at a more comprehensive refactoring of the code.  The
fact that we have extract_entropy() only being used for the input
pool, and extract_entropy_user() only being used for the non-blocking
pool, is not obvious from the function names and the arguments that
these functions take.

Either the functions should be kept general (so someone else using
them in the future won't get confused about how they work), or they
should be made more specific.  But light modifications like these
have the danger of causing confusion and bugs in the future.

 - Ted


Re: [RFC PATCH v3] crypto: Add IV generation algorithms

2017-01-18 Thread Gilad Ben-Yossef
Hi Binoy,


On Wed, Jan 18, 2017 at 11:40 AM, Binoy Jayan  wrote:
> Currently, the iv generation algorithms are implemented in dm-crypt.c.
> The goal is to move these algorithms from the dm layer to the kernel
> crypto layer by implementing them as template ciphers so they can be
> implemented in hardware for performance. As part of this patchset, the
> iv-generation code is moved from the dm layer to the crypto layer, and
> the dm layer is adapted to send a whole 'bio' (as defined in the block layer)
> at a time. Each bio contains an in memory representation of physically
> contiguous disk blocks. The dm layer sets up a chained scatterlist of
> these blocks split into physically contiguous segments in memory so that
> DMA can be performed. Also, the key management code is moved from the dm layer
> to the crypto layer since the key selection for encrypting neighboring
> sectors depends on the keycount.
>
> Synchronous crypto requests to encrypt/decrypt a sector are processed
> sequentially. Asynchronous requests, if processed in parallel, are freed
> in the async callback. The dm layer allocates space for the IV. The hardware
> implementations can choose to make use of this space to generate their IVs
> sequentially or allocate it on their own.
> Interface to the crypto layer - include/crypto/geniv.h
>
> Signed-off-by: Binoy Jayan 
> ---

I have some review comments and a bug report -



>   */
> -static int crypt_convert(struct crypt_config *cc,
> -struct convert_context *ctx)
> +
> +static int crypt_convert_bio(struct crypt_config *cc,
> +struct convert_context *ctx)
>  {
> +   unsigned int cryptlen, n1, n2, nents, i = 0, bytes = 0;
> +   struct skcipher_request *req;
> +   struct dm_crypt_request *dmreq;
> +   struct geniv_req_info rinfo;
> +   struct bio_vec bv_in, bv_out;
> int r;
> +   u8 *iv;
>
> atomic_set(&ctx->cc_pending, 1);
> +   crypt_alloc_req(cc, ctx);
> +
> +   req = ctx->req;
> +   dmreq = dmreq_of_req(cc, req);
> +   iv = iv_of_dmreq(cc, dmreq);
>
> -   while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
> +   n1 = bio_segments(ctx->bio_in);
> +   n2 = bio_segments(ctx->bio_in);


I'm pretty sure this needs to be

 n2 = bio_segments(ctx->bio_out);

> +   nents = n1 > n2 ? n1 : n2;
> +   nents = nents > MAX_SG_LIST ? MAX_SG_LIST : nents;
> +   cryptlen = ctx->iter_in.bi_size;
>
> -   crypt_alloc_req(cc, ctx);
> +   DMDEBUG("dm-crypt:%s: segments:[in=%u, out=%u] bi_size=%u\n",
> +   bio_data_dir(ctx->bio_in) == WRITE ? "write" : "read",
> +   n1, n2, cryptlen);
>


>
> -   /* There was an error while processing the request. */
> -   default:
> -   atomic_dec(&ctx->cc_pending);
> -   return r;
> -   }
> +   sg_set_page(&dmreq->sg_in[i], bv_in.bv_page, bv_in.bv_len,
> +   bv_in.bv_offset);
> +   sg_set_page(&dmreq->sg_out[i], bv_out.bv_page, bv_out.bv_len,
> +   bv_out.bv_offset);
> +
> +   bio_advance_iter(ctx->bio_in, &ctx->iter_in, bv_in.bv_len);
> +   bio_advance_iter(ctx->bio_out, &ctx->iter_out, bv_out.bv_len);
> +
> +   bytes += bv_in.bv_len;
> +   i++;
> }
>
> -   return 0;
> +   DMDEBUG("dm-crypt: Processed %u of %u bytes\n", bytes, cryptlen);
> +
> +   rinfo.is_write = bio_data_dir(ctx->bio_in) == WRITE;

Please consider wrapping the above boolean expression in parenthesis.


> +   rinfo.iv_sector = ctx->cc_sector;
> +   rinfo.nents = nents;
> +   rinfo.iv = iv;
> +
> +   skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,

Also, where do the scatterlists src2 and dst2 that you use with
sg_set_page() get sg_init_table() called on them?
I couldn't figure it out...

Last but not least, when performing the following sequence on Arm64
(on latest Qemu Virt platform) -

1. cryptsetup luksFormat fs3.img
2. cryptsetup open --type luks fs3.img croot
3. mke2fs /dev/mapper/croot


[ fs3.img is a 16MB file for loopback ]

The attached kernel panic happens. The same does not occur without the patch.

Let me know if you need any additional information to recreate it.

I've tried to debug it a little but did not come up with anything
useful aside from the above review notes.

Thanks!


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru
# 
# 
# mk
mkdir mkdosfs   mke2fsmkfifomknod mkpasswd  mkswapmktemp
# mke2fs /dev/mapper/croot 
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3584 inodes, 14336 blocks
716 blocks (5%) reserved for the super user

Re: [PATCH 00/13] crypto: copy AAD during encrypt for AEAD ciphers

2017-01-18 Thread Cyrille Pitchen
Hi Stephan,

this series of patches sounds like a good idea. I haven't tested it with
the Atmel AES hardware yet but I have many dummy questions:

Looking at some driver patches in the series, it seems you only add a call
to crypto_aead_copy_ad() but I don't see any removal of the previous
driver-specific implementation that performs that copy.

I take the Atmel gcm(aes) driver case aside since, looking at the current
code, it seems that I've forgotten to copy the assoc data from req->src
into the req->dst scatterlist. Then I assume I was just lucky: when I tested
with the test manager or IPSec connections, I always fell into the case where
req->src == req->dst. I thought the test manager also covered the case
req->src != req->dst but I don't remember very well and I haven't checked
recently, sorry!

Then I now look at the default crypto/gcm.c driver and, if I understand
this driver correctly, it doesn't copy the associated data from req->src
into req->dst when req->src != req->dst...
What does it mean? Did I misunderstand the gcm.c driver or is it a bug? Is
that copy of the associated data needed after all?
Also, looking at the crypto/authenc.c driver, this one does copy the
associated data.
So now I understand why I didn't make the copy for gcm(aes) but did it for
authenc(hmac(shaX),cbc(aes)) in atmel-aes.c: when I add support for new
crypto algorithms in the Atmel drivers, I always take the crypto/*.c
drivers as reference.

So finally, what shall we do? copy or not copy? That is my question!

One last question: why do we copy those associated data only for encrypting
requests but not for decrypting requests?

The associated data might still be needed in the req->dst scatter list even
when it only refers to plain data so no other crypto operation is needed
after. However, let's take the example of an IPSec connection with ESP: the
first 8 bytes of the ESP header (4-byte SPI + 4-byte sequence number) are
used as associated data. They must be authenticated but cannot be
ciphered, as we need the plain SPI value to attach an IP packet to the
relevant IPSec session and hence to know the crypto algorithms to be used to
process the network packet.
However, once the received IPSec packet has been decrypted and
authenticated, the sequence number part of the ESP header might still be
needed in req->dst if, for some reason, req->src is no longer available
when performing the anti-replay check.
Maybe the issue simply never occurs because req->src is always == req->dst
or maybe because the anti-replay check is always performed before the
crypto stuff. I dunno!

So why not copy the associated data when processing decrypt
requests too?

Sorry for those newbie questions! I am trying to improve my understanding and
knowledge of the crypto subsystem and its interaction with the network
subsystem without digging too much into the source code :p

Best regards,

Cyrille

On 10/01/2017 at 02:36, Stephan Müller wrote:
> Hi,
> 
> to all driver maintainers: the patches I added are compile tested, but
> I do not have the hardware to verify the code. May I ask the respective
> hardware maintainers to verify that the code is appropriate and works
> as intended? Thanks a lot.
> 
> Herbert, this is my proposal for our discussion around copying the
> AAD for algif_aead. Instead of adding the code to algif_aead and wait
> until it transpires to all cipher implementations, I thought it would
> be more helpful to fix all cipher implementations.
> 
> To do so, the AAD copy function found in authenc is extracted to a global
> service function. Furthermore, the generic AEAD TFM initialization code
> now allocates the null cipher too. This allows us now to only invoke
> the AAD copy function in the various implementations without any additional
> allocation logic.
> 
> The code for x86 and the generic code was tested with libkcapi.
> 
> The code for the drivers is compile tested for drivers applicable to
> x86 only. All others are neither compile tested nor functionally tested.
> 
> Stephan Mueller (13):
>   crypto: service function to copy AAD from src to dst
>   crypto: gcm_generic - copy AAD during encryption
>   crypto: ccm_generic - copy AAD during encryption
>   crypto: rfc4106-gcm-aesni - copy AAD during encryption
>   crypto: ccm-aes-ce - copy AAD during encryption
>   crypto: talitos - copy AAD during encryption
>   crypto: picoxcell - copy AAD during encryption
>   crypto: ixp4xx - copy AAD during encryption
>   crypto: atmel - copy AAD during encryption
>   crypto: caam - copy AAD during encryption
>   crypto: chelsio - copy AAD during encryption
>   crypto: nx - copy AAD during encryption
>   crypto: qat - copy AAD during encryption
> 
>  arch/arm64/crypto/aes-ce-ccm-glue.c  |  4 
>  arch/x86/crypto/aesni-intel_glue.c   |  5 +
>  crypto/Kconfig   |  4 ++--
>  crypto/aead.c| 36 ++--
>  crypto/authenc.c | 36 

[PATCH] crypto: tcrypt - Add debug prints

2017-01-18 Thread Rabin Vincent
From: Rabin Vincent 

tcrypt is very tight-lipped when it succeeds, but a bit more feedback
would be useful when developing or debugging crypto drivers, especially
since even a successful run ends with the module failing to insert. Add
a couple of debug prints, which can be enabled with dynamic debug:

Before:

 # insmod tcrypt.ko mode=10
 insmod: can't insert 'tcrypt.ko': Resource temporarily unavailable

After:

 # insmod tcrypt.ko mode=10 dyndbg
 tcrypt: testing ecb(aes)
 tcrypt: testing cbc(aes)
 tcrypt: testing lrw(aes)
 tcrypt: testing xts(aes)
 tcrypt: testing ctr(aes)
 tcrypt: testing rfc3686(ctr(aes))
 tcrypt: all tests passed
 insmod: can't insert 'tcrypt.ko': Resource temporarily unavailable

Signed-off-by: Rabin Vincent 
---
 crypto/tcrypt.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index ae22f05..9a11f3c 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -22,6 +22,8 @@
  *
  */
 
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include 
 #include 
 #include 
@@ -1010,6 +1012,8 @@ static inline int tcrypt_test(const char *alg)
 {
int ret;
 
+   pr_debug("testing %s\n", alg);
+
ret = alg_test(alg, alg, 0, 0);
/* non-fips algs return -EINVAL in fips mode */
if (fips_enabled && ret == -EINVAL)
@@ -2059,6 +2063,8 @@ static int __init tcrypt_mod_init(void)
if (err) {
printk(KERN_ERR "tcrypt: one or more tests failed!\n");
goto err_free_tv;
+   } else {
+   pr_debug("all tests passed\n");
}
 
/* We intentionaly return -EAGAIN to prevent keeping the module,
-- 
2.1.4



[RFC PATCH v3] crypto: Add IV generation algorithms

2017-01-18 Thread Binoy Jayan
Currently, the iv generation algorithms are implemented in dm-crypt.c.
The goal is to move these algorithms from the dm layer to the kernel
crypto layer by implementing them as template ciphers so they can be
implemented in hardware for performance. As part of this patchset, the
iv-generation code is moved from the dm layer to the crypto layer, and
the dm layer is adapted to send a whole 'bio' (as defined in the block layer)
at a time. Each bio contains an in memory representation of physically
contiguous disk blocks. The dm layer sets up a chained scatterlist of
these blocks split into physically contiguous segments in memory so that
DMA can be performed. Also, the key management code is moved from the dm layer
to the crypto layer since the key selection for encrypting neighboring
sectors depends on the keycount.

Synchronous crypto requests to encrypt/decrypt a sector are processed
sequentially. Asynchronous requests, if processed in parallel, are freed
in the async callback. The dm layer allocates space for the IV. The hardware
implementations can choose to make use of this space to generate their IVs
sequentially or allocate it on their own.
Interface to the crypto layer - include/crypto/geniv.h

Signed-off-by: Binoy Jayan 
---
 drivers/md/dm-crypt.c  | 1891 ++--
 include/crypto/geniv.h |   47 ++
 2 files changed, 1399 insertions(+), 539 deletions(-)
 create mode 100644 include/crypto/geniv.h

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 7c6c572..7275b0f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -32,170 +32,113 @@
 #include 
 #include 
 #include 
-
 #include 
-
-#define DM_MSG_PREFIX "crypt"
-
-/*
- * context holding the current state of a multi-part conversion
- */
-struct convert_context {
-   struct completion restart;
-   struct bio *bio_in;
-   struct bio *bio_out;
-   struct bvec_iter iter_in;
-   struct bvec_iter iter_out;
-   sector_t cc_sector;
-   atomic_t cc_pending;
-   struct skcipher_request *req;
+#include 
+#include 
+#include 
+#include 
+
+#define DM_MSG_PREFIX  "crypt"
+#define MAX_SG_LIST	(BIO_MAX_PAGES * 8)
+#define MIN_IOS		64
+#define LMK_SEED_SIZE  64 /* hash + 0 */
+#define TCW_WHITENING_SIZE 16
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+   struct skcipher_request req CRYPTO_MINALIGN_ATTR;
+   struct scatterlist src;
+   struct scatterlist dst;
+   int n;
+   struct geniv_req_ctx *rctx;
 };
 
-/*
- * per bio private data
- */
-struct dm_crypt_io {
-   struct crypt_config *cc;
-   struct bio *base_bio;
-   struct work_struct work;
-
-   struct convert_context ctx;
-
-   atomic_t io_pending;
-   int error;
-   sector_t sector;
-
-   struct rb_node rb_node;
-} CRYPTO_MINALIGN_ATTR;
-
-struct dm_crypt_request {
-   struct convert_context *ctx;
-   struct scatterlist sg_in;
-   struct scatterlist sg_out;
+struct geniv_req_ctx {
+   struct geniv_subreq *subreq;
+   bool is_write;
sector_t iv_sector;
+   unsigned int nents;
+   u8 *iv;
+   struct completion restart;
+   atomic_t req_pending;
+   struct skcipher_request *req;
 };
 
-struct crypt_config;
-
 struct crypt_iv_operations {
-   int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
-  const char *opts);
-   void (*dtr)(struct crypt_config *cc);
-   int (*init)(struct crypt_config *cc);
-   int (*wipe)(struct crypt_config *cc);
-   int (*generator)(struct crypt_config *cc, u8 *iv,
-struct dm_crypt_request *dmreq);
-   int (*post)(struct crypt_config *cc, u8 *iv,
-   struct dm_crypt_request *dmreq);
+   int (*ctr)(struct geniv_ctx *ctx);
+   void (*dtr)(struct geniv_ctx *ctx);
+   int (*init)(struct geniv_ctx *ctx);
+   int (*wipe)(struct geniv_ctx *ctx);
+   int (*generator)(struct geniv_ctx *ctx,
+struct geniv_req_ctx *rctx,
+struct geniv_subreq *subreq);
+   int (*post)(struct geniv_ctx *ctx,
+   struct geniv_req_ctx *rctx,
+   struct geniv_subreq *subreq);
 };
 
-struct iv_essiv_private {
+struct geniv_essiv_private {
struct crypto_ahash *hash_tfm;
u8 *salt;
 };
 
-struct iv_benbi_private {
+struct geniv_benbi_private {
int shift;
 };
 
-#define LMK_SEED_SIZE 64 /* hash + 0 */
-struct iv_lmk_private {
+struct geniv_lmk_private {
struct crypto_shash *hash_tfm;
u8 *seed;
 };
 
-#define TCW_WHITENING_SIZE 16
-struct iv_tcw_private {
+struct geniv_tcw_private {
struct crypto_shash *crc32_tfm;
u8 *iv_seed;
u8 *whitening;
 };
 
-/*
- * Crypt: maps a linear range of a block device
- * and encrypts / decrypts at the 

[RFC PATCH v3] IV Generation algorithms for dm-crypt

2017-01-18 Thread Binoy Jayan
===
GENIV Template cipher
===

Currently, the iv generation algorithms are implemented in dm-crypt.c. The goal
is to move these algorithms from the dm layer to the kernel crypto layer by
implementing them as template ciphers so they can be used in relation with
algorithms like aes, and with multiple modes like cbc, ecb etc. As part of this
patchset, the iv-generation code is moved from the dm layer to the crypto layer
and adapt the dm-layer to send a whole 'bio' (as defined in the block layer)
at a time. Each bio contains the in memory representation of physically
contiguous disk blocks. Since the bio itself may not be contiguous in main
memory, the dm layer sets up a chained scatterlist of these blocks split into
physically contiguous segments in memory so that DMA can be performed.

One challenge in doing so is that the IVs are generated based on a 512-byte
sector number. This in fact limits the block size to 512 bytes. But this should
not be a problem if hardware with IV generation support is used. The geniv
itself splits the segments into sectors so it can choose the IV based on the
sector number, but this could be modelled effectively in hardware by not
splitting up the segments in the bio.

Another challenge faced is that dm-crypt has an option to use multiple keys.
The key selection is done based on the sector number. If the whole bio is
encrypted / decrypted with the same key, the encrypted volumes will not be
compatible with the original dm-crypt [without the changes]. So, the key
selection code is moved to crypto layer so the neighboring sectors are
encrypted with a different key.

The dm layer allocates space for the IV. The hardware drivers can choose to
make use of this space to generate their IVs sequentially, or allocate it on
their own. This allocation can be moved to the crypto layer too; that decision
is postponed until the requirements for integrating Milan's changes are clear.

Interface to the crypto layer - include/crypto/geniv.h

Revisions:

v1: https://patchwork.kernel.org/patch/9439175
v2: https://patchwork.kernel.org/patch/9471923

v2 --> v3
--

1. Moved iv algorithms in dm-crypt.c for control
2. Key management code moved from the dm layer to the crypto layer
   so that cipher instance selection can be made depending on key_index
3. The revision v2 had scatterlist nodes created for every sector in the bio.
   It is modified to create only one scatterlist node to reduce the memory
   footprint. Synchronous requests are processed sequentially. Asynchronous
   requests are processed in parallel and freed in the async callback.
4. Changed allocation for sub-requests to use a mempool

v1 --> v2
--

1. dm-crypt changes to process larger block sizes (one segment in a bio)
2. Incorporated changes w.r.t. comments from Herbert.

Binoy Jayan (1):
  crypto: Add IV generation algorithms

 drivers/md/dm-crypt.c  | 1891 ++--
 include/crypto/geniv.h |   47 ++
 2 files changed, 1399 insertions(+), 539 deletions(-)
 create mode 100644 include/crypto/geniv.h

-- 
Binoy Jayan
